NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS


1 NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS
Miro Rozložník, Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
joint results with Luc Giraud, Julien Langou and Jasper van den Eshof, but also Alicja Smoktunowicz and Jesse Barlow!
Seminarium Pierścienie, macierze i algorytmy numeryczne, Politechnika Warszawska, Warszawa, November 13

2 OUTLINE
1. HISTORICAL REMARKS
2. CLASSICAL AND MODIFIED GRAM-SCHMIDT ORTHOGONALIZATION
3. GRAM-SCHMIDT IN FINITE PRECISION ARITHMETIC: LOSS OF ORTHOGONALITY IN THE CLASSICAL GRAM-SCHMIDT PROCESS
4. NUMERICAL NONSINGULARITY AND THE GRAM-SCHMIDT ALGORITHM WITH REORTHOGONALIZATION

3 GRAM-SCHMIDT PROCESS AS QR ORTHOGONALIZATION
A = (a_1, ..., a_n) ∈ R^{m×n}, m ≥ n, rank(A) = n
orthonormal basis Q of span(A): Q = (q_1, ..., q_n) ∈ R^{m×n}, Q^T Q = I_n
A = QR, R upper triangular (A^T A = R^T R)

4 HISTORICAL REMARKS I
origination of the QR factorization, used for orthogonalization of functions:
J. P. Gram: Über die Entwicklung reeller Funktionen in Reihen mittelst der Methode der kleinsten Quadrate. Journal für die reine und angewandte Mathematik, 94: 41-73.
algorithm of the QR decomposition, but still in terms of functions:
E. Schmidt: Zur Theorie der linearen und nichtlinearen Integralgleichungen. I. Teil: Entwicklung willkürlicher Funktionen nach Systemen vorgeschriebener. Mathematische Annalen, 63.
name of the QR decomposition coined in the paper on the nonsymmetric eigenvalue problem; rumor: the Q in QR was originally an O standing for orthogonal:
J. G. F. Francis: The QR transformation, parts I and II. Computer Journal 4, 1961.

5 HISTORICAL REMARKS II
modified Gram-Schmidt (MGS) interpreted as an elimination method using weighted row sums, not as an orthogonalization technique:
P. S. Laplace: Théorie Analytique des Probabilités. Courcier, Paris, third edition. Reprinted in P. S. Laplace, Œuvres Complètes, Gauthier-Villars, Paris.
classical Gram-Schmidt (CGS) algorithm used to solve linear systems in infinitely many unknowns:
E. Schmidt: Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten. Rend. Circ. Mat. Palermo, Ser. 1, 25 (1908).
first application to a finite-dimensional set of vectors:
G. Kowalewski: Einführung in die Determinantentheorie. Verlag von Veit & Comp., Leipzig.

6 CLASSICAL AND MODIFIED GRAM-SCHMIDT ALGORITHMS

classical (CGS):
for j = 1, ..., n
    u_j = a_j
    for k = 1, ..., j-1
        u_j = u_j - (a_j, q_k) q_k
    end
    q_j = u_j / ||u_j||
end

modified (MGS):
for j = 1, ..., n
    u_j = a_j
    for k = 1, ..., j-1
        u_j = u_j - (u_j, q_k) q_k
    end
    q_j = u_j / ||u_j||
end
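The two orderings differ only in which vector the projection coefficients are taken from: the original column a_j (CGS) or the partially orthogonalized u_j (MGS). A minimal NumPy sketch, not part of the slides (function names are mine):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: coefficients computed against the original a_j."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ A[:, j]   # project the ORIGINAL column
            u -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R

def mgs(A):
    """Modified Gram-Schmidt: coefficients computed against the current u_j."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ u         # project the UPDATED vector
            u -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R
```

In exact arithmetic both loops produce the same Q and R; the difference appears only in floating point, which is the subject of the following slides.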

7 CLASSICAL AND MODIFIED GRAM-SCHMIDT ALGORITHMS
finite precision arithmetic: Q̄ = (q̄_1, ..., q̄_n), Q̄^T Q̄ ≠ I_n
questions: ||I - Q̄^T Q̄||? A ≈ Q̄ R̄, ||A - Q̄ R̄||? R̄?, cond(R̄)?
classical and modified Gram-Schmidt are mathematically equivalent, but they have different numerical properties:
classical Gram-Schmidt can be quite unstable and can quickly lose all semblance of orthogonality

8 ILLUSTRATION, EXAMPLE
Läuchli, 1961; Björck, 1967:

A = ( 1  1  1 )
    ( σ  0  0 )
    ( 0  σ  0 )
    ( 0  0  σ )

||A|| = (n + σ²)^{1/2}, σ_min(A) = σ, κ(A) = σ^{-1} (n + σ²)^{1/2} ≈ σ^{-1} n^{1/2}

assume first that σ² ≤ u, so that fl(1 + σ²) = 1; if no other rounding errors are made, the matrices computed in CGS and MGS have the following form:

9 ILLUSTRATION, EXAMPLE

Q̄_CGS = ( 1    0      0   )      Q̄_MGS = ( 1    0      0   )
        ( σ  -1/√2  -1/√2 )              ( σ  -1/√2  -1/√6 )
        ( 0   1/√2    0   )              ( 0   1/√2  -1/√6 )
        ( 0    0    1/√2  )              ( 0    0    2/√6  )

CGS: (q̄_3, q̄_1) = -σ/√2, (q̄_3, q̄_2) = 1/2
MGS: (q̄_3, q̄_1) = -σ/√6, (q̄_3, q̄_2) = 0

complete loss of orthogonality (≡ loss of linear independence, loss of (numerical) rank) occurs already for
σ² ≈ u (CGS), σ ≈ u (MGS)
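This collapse is easy to reproduce numerically. A sketch, assuming IEEE double precision (u ≈ 1.1e-16) and σ chosen so that σ² ≤ u while σ is still far above u:

```python
import numpy as np

def gram_schmidt(A, modified):
    # modified=False: CGS (coefficients from the original column a_j)
    # modified=True:  MGS (coefficients from the updated vector u_j)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ (u if modified else A[:, j])
            u -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R

sigma = 1e-8                      # sigma^2 <= u, but sigma >> u
A = np.array([[1.0, 1.0, 1.0],
              [sigma, 0.0, 0.0],
              [0.0, sigma, 0.0],
              [0.0, 0.0, sigma]])

def loss(Q):
    n = Q.shape[1]
    return np.linalg.norm(np.eye(n) - Q.T @ Q)

Q_cgs, _ = gram_schmidt(A, modified=False)
Q_mgs, _ = gram_schmidt(A, modified=True)
print("CGS loss of orthogonality:", loss(Q_cgs))   # O(1): completely lost
print("MGS loss of orthogonality:", loss(Q_mgs))   # small, O(u * kappa(A))
```

The printed CGS loss is dominated by the (q̄_3, q̄_2) = 1/2 term from the slide, while MGS stays near u·κ(A).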

10 GRAM-SCHMIDT PROCESS VERSUS ROUNDING ERRORS
modified Gram-Schmidt (MGS): assuming ĉ_1 u κ(A) < 1,
||I - Q̄^T Q̄|| ≤ ĉ_2 u κ(A) / (1 - ĉ_1 u κ(A))
Björck, 1967; Björck, Paige, 1992

classical Gram-Schmidt (CGS)?
||I - Q̄^T Q̄|| ≤ c_2 u κ^{n-1}(A) / (1 - c_1 u κ^{n-1}(A))?
Kiełbasiński, Schwetlick, 1994 (Polish version of the book, 2nd edition)

11 TRIANGULAR FACTOR FROM CLASSICAL GRAM-SCHMIDT VS. CHOLESKY FACTOR OF THE CROSS-PRODUCT MATRIX
exact arithmetic:
r_{i,j} = (a_j, q_i) = (a_j, a_i - Σ_{k=1}^{i-1} r_{k,i} q_k) / r_{i,i} = ((a_j, a_i) - Σ_{k=1}^{i-1} r_{k,i} r_{k,j}) / r_{i,i}

The computation of R in the classical Gram-Schmidt process is closely related to the left-looking Cholesky factorization of the cross-product matrix A^T A = R^T R.

12 the same recurrence in finite precision (e^{(1)}: computed inner product, e^{(2)}: normalization, e^{(3)}: computed projection):

r̄_{i,j} = fl((a_j, q̄_i)) = (a_j, q̄_i) + e^{(1)}_{i,j}

r̄_{i,i} r̄_{i,j} = (a_j, fl(a_i - Σ_{k=1}^{i-1} q̄_k r̄_{k,i})) + r̄_{i,i} [(a_j, e^{(2)}_i) + e^{(1)}_{i,j}]
              = (a_j, a_i - Σ_{k=1}^{i-1} q̄_k r̄_{k,i} + e^{(3)}_i) + r̄_{i,i} [(a_j, e^{(2)}_i) + e^{(1)}_{i,j}]
              = (a_i, a_j) - Σ_{k=1}^{i-1} r̄_{k,i} [r̄_{k,j} - e^{(1)}_{k,j}] + (a_j, e^{(3)}_i) + r̄_{i,i} [(a_j, e^{(2)}_i) + e^{(1)}_{i,j}]

13 CLASSICAL GRAM-SCHMIDT PROCESS: COMPUTED TRIANGULAR FACTOR
Σ_{k=1}^{i} r̄_{k,i} r̄_{k,j} = (a_i, a_j) + e_{i,j}, i < j
[A^T A + E_1]_{i,j} = [R̄^T R̄]_{i,j}, ||E_1|| ≤ c_1 u ||A||²
The CGS process is another way to compute a backward stable Cholesky factor of the cross-product matrix A^T A!
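This backward-stability statement can be checked directly: the computed R̄ reproduces A^T A to working accuracy even when Q̄ is far from orthonormal. A sketch (the SVD-based test matrix construction is my own illustration):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt returning the computed factors Q, R."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ A[:, j]
            u -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R

# ill-conditioned test matrix with kappa(A) = 1e6, built from a prescribed SVD
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = U @ np.diag(np.logspace(0, -6, 6)) @ V.T

Q, R = cgs(A)
backward = np.linalg.norm(A.T @ A - R.T @ R)   # ~ u ||A||^2: backward stable
ortho = np.linalg.norm(np.eye(6) - Q.T @ Q)    # ~ u kappa^2(A): much larger
print(f"||A^T A - R^T R|| = {backward:.1e},  ||I - Q^T Q|| = {ortho:.1e}")
```

Note that the comparison with a conventional Cholesky factor is made in the backward sense: elementwise agreement of R̄ with chol(A^T A) degrades with κ(A^T A), so only the residual A^T A - R̄^T R̄ is meaningful here.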

14 CLASSICAL GRAM-SCHMIDT PROCESS: DIAGONAL ELEMENTS
u_j = (I - Q_{j-1} Q_{j-1}^T) a_j = a_j - Q_{j-1}(Q_{j-1}^T a_j)
||u_j|| = (||a_j||² - ||Q_{j-1}^T a_j||²)^{1/2} = (||a_j|| - ||Q_{j-1}^T a_j||)^{1/2} (||a_j|| + ||Q_{j-1}^T a_j||)^{1/2}
(used when computing q_j = u_j / ||u_j||)
||a_j||² = Σ_{k=1}^{j} r̄²_{k,j} + e_{j,j}, |e_{j,j}| ≤ c_1 u ||a_j||²
J. Barlow, A. Smoktunowicz, Langou

15 CLASSICAL GRAM-SCHMIDT PROCESS: THE LOSS OF ORTHOGONALITY
A^T A + E_1 = R̄^T R̄, A + E_2 = Q̄ R̄
R̄^T (I - Q̄^T Q̄) R̄ = -(E_2)^T A - A^T E_2 - (E_2)^T E_2 + E_1
assuming c_2 u κ(A) < 1:
||I - Q̄^T Q̄|| ≤ c_3 u κ²(A) / (1 - c_2 u κ(A))

16 GRAM-SCHMIDT PROCESS VERSUS ROUNDING ERRORS
modified Gram-Schmidt (MGS): assuming ĉ_1 u κ(A) < 1,
||I - Q̄^T Q̄|| ≤ ĉ_2 u κ(A) / (1 - ĉ_1 u κ(A))
Björck, 1967; Björck, Paige, 1992

classical Gram-Schmidt (CGS): assuming c_2 u κ(A) < 1,
||I - Q̄^T Q̄|| ≤ c_3 u κ²(A) / (1 - c_2 u κ(A))
Giraud, Van den Eshof, Langou, R., 2006; J. Barlow, A. Smoktunowicz, Langou

17 [Figure: loss of orthogonality ||I - Q̄_k^T Q̄_k|| plotted against the logarithm of the condition number κ(A_k), comparing CGS, Cholesky QR, and MGS; cf. Stewart, Matrix Algorithms book, p. 284]
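The experiment behind such a plot is easy to repeat: generate matrices with a prescribed condition number and record ||I - Q̄^T Q̄|| for each variant. A sketch (the matrix generator is my own; only CGS and MGS are shown):

```python
import numpy as np

def gram_schmidt(A, modified):
    # modified=False: CGS, modified=True: MGS
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ (u if modified else A[:, j])
            u -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R

def matrix_with_cond(kappa, m=60, n=10, seed=0):
    """Random m-by-n matrix with geometrically spaced singular values."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0, -np.log10(kappa), n)   # sigma_max / sigma_min = kappa
    return U @ np.diag(s) @ V.T

def loss(Q):
    return np.linalg.norm(np.eye(Q.shape[1]) - Q.T @ Q)

results = {}
for kappa in [1e2, 1e5, 1e8]:
    A = matrix_with_cond(kappa)
    results[kappa] = (loss(gram_schmidt(A, False)[0]),   # CGS: ~ u kappa^2
                      loss(gram_schmidt(A, True)[0]))    # MGS: ~ u kappa
    print(f"kappa={kappa:.0e}: CGS {results[kappa][0]:.1e}, "
          f"MGS {results[kappa][1]:.1e}")
```

The CGS column grows roughly quadratically in κ while the MGS column grows linearly, matching the bounds of slides 10 and 16.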

18 GRAM-SCHMIDT ALGORITHMS WITH COMPLETE REORTHOGONALIZATION

classical (CGS2):
for j = 1, ..., n
    u_j = a_j
    for i = 1, 2
        w = u_j
        for k = 1, ..., j-1
            u_j = u_j - (w, q_k) q_k
        end
    end
    q_j = u_j / ||u_j||
end

modified (MGS2):
for j = 1, ..., n
    u_j = a_j
    for i = 1, 2
        for k = 1, ..., j-1
            u_j = u_j - (u_j, q_k) q_k
        end
    end
    q_j = u_j / ||u_j||
end
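A sketch of CGS2 in NumPy (my formulation: each pass computes its block of coefficients against the vector entering that pass, and both passes are accumulated into R):

```python
import numpy as np

def cgs2(A):
    """Classical Gram-Schmidt with one full reorthogonalization pass."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for _ in range(2):                # i = 1, 2: orthogonalize twice
            c = Q[:, :j].T @ u            # CGS-style block of coefficients
            u = u - Q[:, :j] @ c
            R[:j, j] += c                 # accumulate both passes into R
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]
    return Q, R

# Läuchli matrix on which plain CGS fails completely (see slide 9):
sigma = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [sigma, 0.0, 0.0],
              [0.0, sigma, 0.0],
              [0.0, 0.0, sigma]])
Q, R = cgs2(A)
print(np.linalg.norm(np.eye(3) - Q.T @ Q))   # orthogonality at machine level
```

One reorthogonalization pass ("twice is enough") restores ||I - Q̄^T Q̄|| to O(u), as stated on the next slide.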

19 GRAM-SCHMIDT PROCESS VERSUS ROUNDING ERRORS
Gram-Schmidt (MGS, CGS): assuming c_1 u κ(A) < 1,
||I - Q̄^T Q̄|| ≤ c_2 u κ^{1,2}(A) / (1 - c_1 u κ(A))   (exponent 1 for MGS, 2 for CGS)
Björck, 1967; Björck, Paige, 1992; Giraud, van den Eshof, Langou, R., 2005

Gram-Schmidt with reorthogonalization (CGS2, MGS2): assuming c_3 u κ(A) < 1,
||I - Q̄^T Q̄|| ≤ c_4 u
Hoffmann, 1988; Giraud, van den Eshof, Langou, R., 2005

20 ROUNDING ERROR ANALYSIS OF THE REORTHOGONALIZATION STEP
exact arithmetic: u_j = (I - Q_{j-1} Q_{j-1}^T) a_j, v_j = (I - Q_{j-1} Q_{j-1}^T)² a_j
||u_j|| = r_{j,j} ≥ σ_min(R_j) = σ_min(A_j) ≥ σ_min(A), so ||a_j|| / ||u_j|| ≤ κ(A) and ||u_j|| / ||v_j|| = 1
finite precision: A + E_2 = Q̄ R̄, ||E_2|| ≤ c_2 u ||A||
||a_j|| / ||ū_j|| ≤ κ(A) / (1 - c_1 u κ(A)), ||ū_j|| / ||v̄_j|| ≤ [1 - c_2 u κ(A)]^{-1}

21 FLOPS VERSUS PARALLELISM
classical Gram-Schmidt (CGS): mn² saxpys
classical Gram-Schmidt with reorthogonalization (CGS2): 2mn² saxpys
Householder orthogonalization: 2(mn² - n³/3) saxpys
in a parallel environment and using BLAS2, CGS2 may be faster than (plain) MGS!
Frank, Vuik, 1999; Lehoucq, Salinger

22 THANK YOU FOR YOUR ATTENTION!

23 THE ARNOLDI PROCESS AND THE GMRES METHOD WITH THE CLASSICAL GRAM-SCHMIDT PROCESS
V_n = [v_1, v_2, ..., v_n], [r_0, A V_n] = V_{n+1} [||r_0|| e_1, H_{n+1,n}]
H_{n+1,n} is an upper Hessenberg matrix
the Arnoldi process is a (recursive) column-oriented QR decomposition of the (special) matrix [r_0, A V_n]!
x_n = x_0 + V_n y_n, || ||r_0|| e_1 - H_{n+1,n} y_n || = min_y || ||r_0|| e_1 - H_{n+1,n} y ||
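The recursion can be sketched as follows (MGS variant of Arnoldi plus the small GMRES least squares problem; the test matrix and iteration count below are my own illustration):

```python
import numpy as np

def arnoldi_mgs(A, r0, n):
    """n steps of Arnoldi with MGS: A V_n = V_{n+1} H_{n+1,n}."""
    m = A.shape[0]
    V = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(n):
        w = A @ V[:, j]
        for k in range(j + 1):            # orthogonalize A v_j (MGS loop)
            H[k, j] = V[:, k] @ w
            w -= H[k, j] * V[:, k]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def gmres_step(A, b, x0, n):
    """One restart cycle: y_n = argmin || ||r0|| e_1 - H_{n+1,n} y ||."""
    r0 = b - A @ x0
    V, H = arnoldi_mgs(A, r0, n)
    beta = np.linalg.norm(r0)
    y, *_ = np.linalg.lstsq(H, beta * np.eye(n + 1)[:, 0], rcond=None)
    return x0 + V[:, :n] @ y

rng = np.random.default_rng(2)
m = 30
A = 4.0 * np.eye(m) + rng.standard_normal((m, m))   # well-conditioned test
b = rng.standard_normal(m)
x = gmres_step(A, b, np.zeros(m), 15)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The column-oriented QR view is visible in the code: each step orthogonalizes one new column A v_j against the previously computed basis.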

24 THE GRAM-SCHMIDT PROCESS IN THE ARNOLDI CONTEXT: LOSS OF ORTHOGONALITY
modified Gram-Schmidt (MGS): ||I - V̄_{n+1}^T V̄_{n+1}|| ≤ c_1 u κ([v̄_1, A V̄_n])
classical Gram-Schmidt (CGS): ||I - V̄_{n+1}^T V̄_{n+1}|| ≤ c_2 u κ²([v̄_1, A V̄_n])
Björck, 1967; Björck, Paige, 1992; Giraud, Langou, R., Van den Eshof, 2003

25 CONDITION NUMBER IN ARNOLDI VERSUS RESIDUAL NORM IN GMRES
The loss of orthogonality in Arnoldi is controlled by the convergence of the residual norm in GMRES:
||I - V̄_{n+1}^T V̄_{n+1}|| ≤ c_α u κ^α([v̄_1, A V̄_n]), α = 1, 2
Björck, 1967; Björck, Paige, 1992; Giraud, Langou, R., Van den Eshof, 2003

κ([v̄_1, A V̄_n]) ≤ (||[v̄_1, A V̄_n]|| / (||r̂_n|| / ||r_0||)) [ (1 + ||ŷ_n||²) / (1 - δ_n²) ]^{1/2}
where ||r̂_n|| / ||r_0|| = ||v̄_1 - A V̄_n ŷ_n|| = min_y ||v̄_1 - A V̄_n y||, δ_n = σ_{n+1}([v̄_1, A V̄_n]) / σ_n(A V̄_n) < 1
Paige, Strakoš; Greenbaum, R., Strakoš

26 THE GMRES METHOD WITH THE GRAM-SCHMIDT PROCESS
The total loss of orthogonality (rank-deficiency) in the Arnoldi process with Gram-Schmidt can occur only after GMRES reaches its final accuracy level:
modified Gram-Schmidt (MGS): ||r̂_n|| / (||r_0|| [1 + ||ŷ_n||²]^{1/2}) ≥ c_1 ||[v̄_1, A V̄_n]|| u / (1 - δ_n²)^{1/2}
classical Gram-Schmidt (CGS): ||r̂_n|| / (||r_0|| [1 + ||ŷ_n||²]^{1/2}) ≥ [ c_2 ||[v̄_1, A V̄_n]|| u ]^{1/2} / (1 - δ_n²)^{1/2}

27 [Figure: loss of orthogonality versus iteration number for several implementations of the GMRES method; test matrix ADD1(4960)]

28 [Figure: relative true residual norms versus iteration number for several implementations of the GMRES method; test matrix ADD1(4960)]

29 LEAST SQUARES PROBLEM WITH CLASSICAL GRAM-SCHMIDT
||b - Ax|| = min_u ||b - Au||, r = b - Ax, A^T A x = A^T b
the computed quantities satisfy
r̄ = (I - Q̄ Q̄^T) b + e_1, (R̄ + E_3) x̄ = Q̄^T b + e_2
||e_1||, ||e_2|| ≤ c_0 u ||b||, ||E_3|| ≤ c_0 u ||R̄||

30 LEAST SQUARES PROBLEM WITH CLASSICAL GRAM-SCHMIDT
R̄^T (R̄ + E_3) x̄ = (Q̄ R̄)^T b + R̄^T e_2
(A^T A + E_1 + R̄^T E_3) x̄ = (A + E_2)^T b + R̄^T e_2
(A^T A + E) x̄ = A^T b + e
||E|| ≤ c_4 u ||A||², ||e|| ≤ c_4 u ||A|| ||b||

31 LEAST SQUARES PROBLEM WITH CLASSICAL GRAM-SCHMIDT
||r - r̄|| / ||b|| ≤ c_5 u κ(A)(2κ(A) + 1) / [1 - c_1 u κ²(A)]^{1/2}
||x - x̄|| / ||x|| ≤ c_5 u κ²(A) (2 + ||r|| / (||A|| ||x||)) / (1 - c_1 u κ²(A))
The least squares solution with classical Gram-Schmidt has the same forward error bound as the normal equations method:
R̄ - Q̄^T A = R̄ - R̄^{-T}(A + E_2)^T A = R̄^{-T}[E_1 - (E_2)^T A]
Björck, 1967

32 LEAST SQUARES PROBLEM WITH A BACKWARD STABLE QR FACTORIZATION
||r - r̄|| / ||b|| ≤ c_6 u (2κ(A) + 1)
||x - x̄|| / ||x|| ≤ c_6 u κ(A) [2 + (κ(A) + 1) ||r|| / (||A|| ||x||)] / (1 - c_6 u κ(A))
Householder QR factorization, modified Gram-Schmidt
Wilkinson, Golub, 1966; Björck, 1967
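The contrast between the κ(A) and κ²(A) bounds can be seen numerically. A sketch comparing a backward stable QR (np.linalg.qr, Householder-based) with the normal equations; the test problem is my own, with zero residual so only the condition-number terms remain:

```python
import numpy as np

# ill-conditioned least squares problem with kappa(A) = 1e6 and r = 0
rng = np.random.default_rng(3)
m, n, kappa = 100, 8, 1e6
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -np.log10(kappa), n)) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true                      # consistent right-hand side

Q, R = np.linalg.qr(A)              # backward stable Householder QR
x_qr = np.linalg.solve(R, Q.T @ b)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations: kappa^2 effect

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"QR error: {err(x_qr):.1e}, normal equations error: {err(x_ne):.1e}")
```

The QR error stays near u·κ(A) while the normal equations error grows like u·κ²(A), which is exactly the gap between slides 31 and 32.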


LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

Golub-Kahan iterative bidiagonalization and determining the noise level in the data

Golub-Kahan iterative bidiagonalization and determining the noise level in the data Golub-Kahan iterative bidiagonalization and determining the noise level in the data Iveta Hnětynková,, Martin Plešinger,, Zdeněk Strakoš, * Charles University, Prague ** Academy of Sciences of the Czech

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Lecture 13 Stability of LU Factorization; Cholesky Factorization. Songting Luo. Department of Mathematics Iowa State University

Lecture 13 Stability of LU Factorization; Cholesky Factorization. Songting Luo. Department of Mathematics Iowa State University Lecture 13 Stability of LU Factorization; Cholesky Factorization Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II ongting Luo ( Department of Mathematics Iowa

More information

Lecture 2: Numerical linear algebra

Lecture 2: Numerical linear algebra Lecture 2: Numerical linear algebra QR factorization Eigenvalue decomposition Singular value decomposition Conditioning of a problem Floating point arithmetic and stability of an algorithm Linear algebra

More information

The Full-rank Linear Least Squares Problem

The Full-rank Linear Least Squares Problem Jim Lambers COS 7 Spring Semeseter 1-11 Lecture 3 Notes The Full-rank Linear Least Squares Problem Gien an m n matrix A, with m n, and an m-ector b, we consider the oerdetermined system of equations Ax

More information

Aplikace matematiky. Gene Howard Golub Numerical methods for solving linear least squares problems

Aplikace matematiky. Gene Howard Golub Numerical methods for solving linear least squares problems Aplikace matematiky Gene Howard Golub Numerical methods for solving linear least squares problems Aplikace matematiky, Vol. 10 (1965), No. 3, 213--216 Persistent URL: http://dml.cz/dmlcz/102951 Terms of

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating

LAPACK-Style Codes for Pivoted Cholesky and QR Updating LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,

More information

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden

Index. Copyright (c)2007 The Society for Industrial and Applied Mathematics From: Matrix Methods in Data Mining and Pattern Recgonition By: Lars Elden Index 1-norm, 15 matrix, 17 vector, 15 2-norm, 15, 59 matrix, 17 vector, 15 3-mode array, 91 absolute error, 15 adjacency matrix, 158 Aitken extrapolation, 157 algebra, multi-linear, 91 all-orthogonality,

More information

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 9 Minimizing Residual CG

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Simple iteration procedure

Simple iteration procedure Simple iteration procedure Solve Known approximate solution Preconditionning: Jacobi Gauss-Seidel Lower triangle residue use of pre-conditionner correction residue use of pre-conditionner Convergence Spectral

More information

Lecture 5 Singular value decomposition

Lecture 5 Singular value decomposition Lecture 5 Singular value decomposition Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn

More information

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to

A DISSERTATION. Extensions of the Conjugate Residual Method. by Tomohiro Sogabe. Presented to A DISSERTATION Extensions of the Conjugate Residual Method ( ) by Tomohiro Sogabe Presented to Department of Applied Physics, The University of Tokyo Contents 1 Introduction 1 2 Krylov subspace methods

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

Q T A = R ))) ) = A n 1 R

Q T A = R ))) ) = A n 1 R Q T A = R As with the LU factorization of A we have, after (n 1) steps, Q T A = Q T A 0 = [Q 1 Q 2 Q n 1 ] T A 0 = [Q n 1 Q n 2 Q 1 ]A 0 = (Q n 1 ( (Q 2 (Q 1 A 0 }{{} A 1 ))) ) = A n 1 R Since Q T A =

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

Block Krylov Space Solvers: a Survey

Block Krylov Space Solvers: a Survey Seminar for Applied Mathematics ETH Zurich Nagoya University 8 Dec. 2005 Partly joint work with Thomas Schmelzer, Oxford University Systems with multiple RHSs Given is a nonsingular linear system with

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information