The Lanczos and conjugate gradient algorithms


1 The Lanczos and conjugate gradient algorithms
Gérard MEURANT
October 2008

2 Outline
1. The Lanczos algorithm
2. The Lanczos algorithm in finite precision
3. The nonsymmetric Lanczos algorithm
4. The Golub-Kahan bidiagonalization algorithm
5. The block Lanczos algorithm
6. The conjugate gradient algorithm

3 The Lanczos algorithm

Let $A$ be a real symmetric matrix of order $n$. The Lanczos algorithm constructs an orthogonal basis of the Krylov subspace spanned by the columns of
$K_k = ( v, Av, \dots, A^{k-1} v )$

Gram-Schmidt orthogonalization (Arnoldi):
$v^1 = v$
$h_{i,j} = (Av^j, v^i), \quad i = 1, \dots, j$
$\tilde v^j = Av^j - \sum_{i=1}^{j} h_{i,j} v^i$
$h_{j+1,j} = \|\tilde v^j\|$; if $h_{j+1,j} = 0$ then stop
$v^{j+1} = \tilde v^j / h_{j+1,j}$
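
As a concrete illustration, here is a minimal dense sketch of this Gram-Schmidt construction in Python/numpy (the function name and return layout are illustrative, not from the slides):

```python
import numpy as np

def arnoldi(A, v, k):
    """Build an orthonormal Krylov basis V and the Hessenberg matrix H."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # Gram-Schmidt against v^1, ..., v^j
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:              # invariant subspace found: stop
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

For symmetric $A$ the computed $H_k$ is, up to rounding, tridiagonal, which is the starting point of the next slides.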

4 $AV_k = V_k H_k + h_{k+1,k} v^{k+1} (e^k)^T$
$H_k$ is an upper Hessenberg matrix with elements $h_{i,j}$. Note that $h_{i,j} = 0$ for $j = 1, \dots, i-2$, $i > 2$, and
$H_k = V_k^T A V_k$
If $A$ is symmetric, $H_k$ is symmetric and therefore tridiagonal: $H_k = J_k$. We also have $AV_n = V_n J_n$ if no $\tilde v^j$ is zero before step $n$, since $\tilde v^{n+1} = 0$, $\tilde v^{n+1}$ being a vector orthogonal to a set of $n$ orthogonal vectors in a space of dimension $n$. Otherwise there exists an $m < n$ for which $AV_m = V_m J_m$ and the algorithm has found an invariant subspace of $A$, the eigenvalues of $J_m$ being eigenvalues of $A$.

5 Starting from a vector $v^1 = v / \|v\|$:
$\alpha_1 = (Av^1, v^1), \quad \tilde v^2 = Av^1 - \alpha_1 v^1$
and then, for $k = 2, 3, \dots$
$\eta_{k-1} = \|\tilde v^k\|$
$v^k = \tilde v^k / \eta_{k-1}$
$\alpha_k = (v^k, Av^k) = (v^k)^T A v^k$
$\tilde v^{k+1} = Av^k - \alpha_k v^k - \eta_{k-1} v^{k-1}$
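
The full three-term recurrence as a hedged numpy sketch (names illustrative); it returns the coefficients $\alpha_k$, $\eta_k$ of $J_k$ and the basis $V_k$, with no reorthogonalization:

```python
import numpy as np

def lanczos(A, v, k):
    """Symmetric Lanczos three-term recurrence (no reorthogonalization)."""
    n = A.shape[0]
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    eta = np.zeros(max(k - 1, 0))
    V[:, 0] = v / np.linalg.norm(v)
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w = w - alpha[0] * V[:, 0]                      # \tilde v^2
    for j in range(1, k):
        eta[j - 1] = np.linalg.norm(w)
        if eta[j - 1] == 0.0:                       # invariant subspace found
            return alpha[:j], eta[: j - 1], V[:, :j]
        V[:, j] = w / eta[j - 1]
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j] - eta[j - 1] * V[:, j - 1]
    return alpha, eta, V

# J_k and its Ritz values:
# J = np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)
# ritz = np.linalg.eigvalsh(J)
```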

6 A variant of the Lanczos algorithm has been proposed by Chris Paige to improve the local orthogonality in finite precision computations:
$\alpha_k = (v^k)^T (Av^k - \eta_{k-1} v^{k-1})$
$\tilde v^{k+1} = (Av^k - \eta_{k-1} v^{k-1}) - \alpha_k v^k$
Since we can suppose that $\eta_i \neq 0$, the tridiagonal Jacobi matrix $J_k$ has real and simple eigenvalues which we denote by $\theta_j^{(k)}$. They are known as the Ritz values and are the approximations of the eigenvalues of $A$ given by the Lanczos algorithm.
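
In code, Paige's variant is only a reordering of the loop body of the lanczos() sketch above; the last three lines of the loop become:

```python
# form Av^k - eta_{k-1} v^{k-1} first, then alpha_k
u = A @ V[:, j] - eta[j - 1] * V[:, j - 1]
alpha[j] = V[:, j] @ u
w = u - alpha[j] * V[:, j]                          # \tilde v^{k+1}
```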

7 Theorem. Let $\chi_k(\lambda)$ be the determinant of $J_k - \lambda I$ (which is a monic polynomial); then
$v^k = p_k(A) v^1, \quad p_k(\lambda) = (-1)^{k-1} \frac{\chi_{k-1}(\lambda)}{\eta_1 \cdots \eta_{k-1}}$
The polynomials $p_k$ of degree $k-1$ are called the normalized Lanczos polynomials. They satisfy a scalar three-term recurrence
$\eta_k p_{k+1}(\lambda) = (\lambda - \alpha_k) p_k(\lambda) - \eta_{k-1} p_{k-1}(\lambda), \quad k = 1, 2, \dots$
with initial conditions $p_0 \equiv 0$, $p_1 \equiv 1$.
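
A small sketch of this scalar recurrence (illustrative; alpha and eta as produced by the lanczos() sketch above), evaluating $p_1, p_2, \dots$ at given points:

```python
import numpy as np

def lanczos_polynomials(alpha, eta, x):
    """Evaluate the normalized Lanczos polynomials p_1, ..., p_{len(eta)+1}
    at the points x via the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    p_prev = np.zeros_like(x)                       # p_0 = 0
    p = np.ones_like(x)                             # p_1 = 1
    values = [p]
    for k in range(len(eta)):                       # eta[k] is eta_{k+1}
        p_next = ((x - alpha[k]) * p
                  - (eta[k - 1] * p_prev if k > 0 else 0.0)) / eta[k]
        p_prev, p = p, p_next
        values.append(p)
    return values
```

In exact arithmetic one would have $v^k = p_k(A) v^1$; in floating point this identity degrades, which is the subject of the finite precision slides below.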

8 Theorem. Consider the Lanczos vectors $v^k$. There exists a measure $\alpha$ such that
$(v^k, v^l) = \langle p_k, p_l \rangle = \int_a^b p_k(\lambda) p_l(\lambda)\, d\alpha(\lambda)$
where $a \le \lambda_1 = \lambda_{\min}$ and $b \ge \lambda_n = \lambda_{\max}$, $\lambda_{\min}$ and $\lambda_{\max}$ being the smallest and largest eigenvalues of $A$.

Proof. Let $A = Q \Lambda Q^T$ be the spectral decomposition of $A$. Since the vectors $v^j$ are orthonormal and $p_k(A) = Q p_k(\Lambda) Q^T$, we have
$(v^k, v^l) = (v^1)^T p_k(A)^T p_l(A) v^1 = (v^1)^T Q p_k(\Lambda) Q^T Q p_l(\Lambda) Q^T v^1 = (v^1)^T Q p_k(\Lambda) p_l(\Lambda) Q^T v^1 = \sum_{j=1}^{n} p_k(\lambda_j) p_l(\lambda_j) [\hat v_j]^2$
where $\hat v = Q^T v^1$.

9 The last sum can be written as an integral for a measure $\alpha$ which is piecewise constant:
$\alpha(\lambda) = 0$ if $\lambda < \lambda_1$; $\alpha(\lambda) = \sum_{j=1}^{i} [\hat v_j]^2$ if $\lambda_i \le \lambda < \lambda_{i+1}$; $\alpha(\lambda) = \sum_{j=1}^{n} [\hat v_j]^2$ if $\lambda_n \le \lambda$
The measure $\alpha$ has a finite number of points of increase at the (unknown) eigenvalues of $A$.

10 The Lanczos algorithm can be used to solve linear systems $Ax = c$ when $A$ is symmetric and $c$ is a given vector. Let $x^0$ be a given starting vector and $r^0 = c - Ax^0$ be the corresponding residual. Let $v = v^1 = r^0 / \|r^0\|$ and
$x^k = x^0 + V_k y^k$
We request the residual $r^k = c - Ax^k$ to be orthogonal to the Krylov subspace of dimension $k$:
$V_k^T r^k = V_k^T c - V_k^T A x^0 - V_k^T A V_k y^k = V_k^T r^0 - J_k y^k = 0$
But $r^0 = \|r^0\| v^1$ and $V_k^T r^0 = \|r^0\| e^1$, hence
$J_k y^k = \|r^0\| e^1$
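
A direct translation of this Galerkin condition, reusing the lanczos() sketch above (illustrative only; a practical code would update a factorization of $J_k$ incrementally):

```python
import numpy as np

def lanczos_solve(A, c, x0, k):
    """x^k = x^0 + V_k y^k with J_k y^k = ||r^0|| e^1."""
    r0 = c - A @ x0
    alpha, eta, V = lanczos(A, r0, k)
    J = np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)
    rhs = np.zeros(len(alpha))
    rhs[0] = np.linalg.norm(r0)                     # ||r^0|| e^1
    y = np.linalg.solve(J, rhs)
    return x0 + V @ y
```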

11 The Lanczos algorithm in finite precision arithmetic

It has been well known since Lanczos that the basis vectors $v^k$ may lose their orthogonality. Moreover, multiple copies of the already converged Ritz values appear again and again. Consider an example devised by Z. Strakoš: a diagonal matrix with elements
$\lambda_i = \lambda_1 + \frac{i-1}{n-1} (\lambda_n - \lambda_1)\, \rho^{n-i}, \quad i = 1, \dots, n$
We choose $n = 30$, $\lambda_1 = 0.1$, $\lambda_n = 100$, $\rho = 0.9$.
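
The test matrix is easy to reproduce (sketch; the parameter names are mine):

```python
import numpy as np

def strakos_matrix(n=30, lam1=0.1, lamn=100.0, rho=0.9):
    """Diagonal Strakos test matrix with the eigenvalue distribution above."""
    i = np.arange(1, n + 1)
    lam = lam1 + (i - 1.0) / (n - 1.0) * (lamn - lam1) * rho ** (n - i)
    return np.diag(lam)
```

Running the lanczos() sketch above on this matrix and inspecting $V_k^T V_k$ reproduces the loss of orthogonality illustrated below.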

12 [Figure: $\log_{10}(|\tilde V_{30}^T \tilde V_{30}|)$ for the Strakos30 matrix]

13 In this example the first Ritz value to converge is the largest one, $\lambda_n$. Then $v^k_n = p_k(\lambda_n) v^1_n$ must converge to zero (in exact arithmetic). What happens?

[Figure: Strakos30, $\log_{10}(|v^k_{30}|)$ with (dashed) and without (solid) reorthogonalization, and their difference (dotted)]

14 More iterations

[Figure: Strakos30, $\log_{10}(|v^k_{30}|)$ with (dashed) and without (solid) reorthogonalization]

15 Distances to the largest eigenvalue of A

[Figure: Strakos30, $\log_{10}(|v^k_{30}|)$ and the distances of the largest Ritz values to the largest eigenvalue]

16 This behavior can be studied by looking at perturbed scalar three-term recurrences.

Theorem. Let $j$ be given and let $p_{j,k}$ be the polynomial determined by $p_{j,j-1} \equiv 0$, $p_{j,j} \equiv 1$ and
$\eta_{k+1}\, p_{j,k+1}(\lambda) = (\lambda - \alpha_k)\, p_{j,k}(\lambda) - \eta_k\, p_{j,k-1}(\lambda), \quad k = j, \dots$
Then the computed Lanczos vector is
$\tilde v^{k+1} = p_{1,k+1}(A)\, \tilde v^1 + \sum_{l=1}^{k} p_{l+1,k+1}(A)\, \frac{f^l}{\eta_{l+1}}$

17 Note that the first term $\check v^{k+1} = p_{1,k+1}(A)\, \tilde v^1$ is different from what we have in exact arithmetic, since the coefficients of the polynomial are the ones computed in finite precision.

Proposition. The associated polynomial $p_{j,k}$, $k \ge j$, is given by
$p_{j,k}(\lambda) = (-1)^{k-j} \frac{\chi_{j,k-1}(\lambda)}{\eta_{j+1} \cdots \eta_k}$
where $\chi_{j,k}(\lambda)$ is the determinant of $J_{j,k} - \lambda I$, $J_{j,k}$ being the tridiagonal matrix obtained from the coefficients of the second-order recurrence from step $j$ to step $k$, that is, discarding the first $j-1$ rows and columns of $J_k$.

18 The nonsymmetric Lanczos algorithm

When the matrix $A$ is not symmetric we cannot generally construct a vector $v^{k+1}$ orthogonal to all the previous basis vectors by only using the two previous vectors $v^k$ and $v^{k-1}$. We construct bi-orthogonal sequences using $A^T$: choose two starting vectors $v^1$ and $\tilde v^1$ with $(v^1, \tilde v^1) \neq 0$, normalized such that $(v^1, \tilde v^1) = 1$. We set $v^0 = \tilde v^0 = 0$. Then for $k = 1, 2, \dots$
$z^k = Av^k - \omega_k v^k - \eta_{k-1} v^{k-1}$
$w^k = A^T \tilde v^k - \omega_k \tilde v^k - \tilde\eta_{k-1} \tilde v^{k-1}$
$\omega_k = (\tilde v^k, Av^k), \quad \eta_k \tilde\eta_k = (z^k, w^k)$
$v^{k+1} = \frac{z^k}{\tilde\eta_k}, \quad \tilde v^{k+1} = \frac{w^k}{\eta_k}$
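
A minimal numpy sketch of the bi-orthogonal recurrence (no look-ahead; it already uses the sign choice $\eta_k = \mathrm{sgn}[(z^k, w^k)]\, \tilde\eta_k$ discussed a few slides below):

```python
import numpy as np

def nonsym_lanczos(A, v1, vt1, k):
    """Bi-orthogonal Lanczos: returns the omega_k; raises on breakdown."""
    v = v1 / (vt1 @ v1)                     # normalize so (v^1, vt^1) = 1
    vt = vt1.astype(float)
    v_old = np.zeros_like(v)
    vt_old = np.zeros_like(vt)
    eta_old = etat_old = 0.0
    omega = np.zeros(k)
    for j in range(k):
        omega[j] = vt @ (A @ v)
        z = A @ v - omega[j] * v - eta_old * v_old
        w = A.T @ vt - omega[j] * vt - etat_old * vt_old
        s = z @ w
        if s == 0.0:
            raise RuntimeError("breakdown: (z^k, w^k) = 0")
        etat = np.sqrt(abs(s))              # \tilde\eta_k >= 0
        eta = np.sign(s) * etat             # so that eta_k \tilde\eta_k = (z^k, w^k)
        v_old, vt_old = v, vt
        v, vt = z / etat, w / eta
        eta_old, etat_old = eta, etat
    return omega
```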

19 Then, in matrix form, with
$J_k = \begin{pmatrix} \omega_1 & \eta_1 & & \\ \tilde\eta_1 & \omega_2 & \ddots & \\ & \ddots & \ddots & \eta_{k-1} \\ & & \tilde\eta_{k-1} & \omega_k \end{pmatrix}, \quad V_k = [v^1 \cdots v^k], \quad \tilde V_k = [\tilde v^1 \cdots \tilde v^k]$
we have
$AV_k = V_k J_k + \tilde\eta_k v^{k+1} (e^k)^T$
$A^T \tilde V_k = \tilde V_k J_k^T + \eta_k \tilde v^{k+1} (e^k)^T$

20 Theorem. If the nonsymmetric Lanczos algorithm does not break down, no $\eta_k \tilde\eta_k$ being zero, it yields biorthogonal vectors such that
$(\tilde v^i, v^j) = 0, \quad i \neq j, \quad i, j = 1, 2, \dots$
The vectors $v^1, \dots, v^k$ span $K_k(A, v^1)$ and $\tilde v^1, \dots, \tilde v^k$ span $K_k(A^T, \tilde v^1)$. The two sequences of vectors can be written as
$v^k = p_k(A) v^1, \quad \tilde v^k = \tilde p_k(A^T) \tilde v^1$
where $p_k$ and $\tilde p_k$ are polynomials of degree $k-1$ satisfying
$\tilde\eta_k p_{k+1} = (\lambda - \omega_k) p_k - \eta_{k-1} p_{k-1}$
$\eta_k \tilde p_{k+1} = (\lambda - \omega_k) \tilde p_k - \tilde\eta_{k-1} \tilde p_{k-1}$

21 The algorithm breaks down if at some step we have $(z^k, w^k) = 0$. Either
a) $z^k = 0$ and/or $w^k = 0$. If $z^k = 0$ we can compute the eigenvalues or the solution of the linear system $Ax = c$. If $z^k \neq 0$ and $w^k = 0$, the only way to deal with this situation is to restart the algorithm.
b) The more dramatic situation ("serious breakdown") is when $(z^k, w^k) = 0$ with $z^k \neq 0$ and $w^k \neq 0$. We need to use look-ahead strategies or restart.

22 For our purposes we will use the nonsymmetric Lanczos algorithm with a symmetric matrix! We can choose
$\eta_k = \pm \tilde\eta_k = \pm \sqrt{|(z^k, w^k)|}$
with, for instance, $\tilde\eta_k \ge 0$ and $\eta_k = \mathrm{sgn}[(z^k, w^k)]\, \tilde\eta_k$. Then $\tilde p_k = \pm p_k$.

23 The Golub-Kahan bidiagonalization algorithm

Useful when the matrix is $A^T A$, e.g. for $A^T A x = c$. The first algorithm (LB1) reduces $A$ to upper bidiagonal form. Let $q^0 = c / \|c\|$, $r^0 = A q^0$, $\delta_1 = \|r^0\|$, $p^0 = r^0 / \delta_1$; then for $k = 1, 2, \dots$
$u^k = A^T p^{k-1} - \delta_k q^{k-1}$
$\gamma_k = \|u^k\|, \quad q^k = u^k / \gamma_k$
$r^k = A q^k - \gamma_k p^{k-1}$
$\delta_{k+1} = \|r^k\|, \quad p^k = r^k / \delta_{k+1}$
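
A hedged numpy sketch of LB1 (names illustrative; no safeguards for zero norms):

```python
import numpy as np

def golub_kahan_lb1(A, c, k):
    """Upper bidiagonalization: returns the bases q^0..q^k, p^0..p^k and
    the coefficients delta_1..delta_{k+1}, gamma_1..gamma_k."""
    q = c / np.linalg.norm(c)
    r = A @ q                                   # r^0
    delta = [np.linalg.norm(r)]                 # delta_1
    p = r / delta[0]                            # p^0
    Q, P, gamma = [q], [p], []
    for _ in range(k):
        u = A.T @ p - delta[-1] * q             # u^k
        gamma.append(np.linalg.norm(u))         # gamma_k
        q = u / gamma[-1]                       # q^k
        r = A @ q - gamma[-1] * p               # r^k
        delta.append(np.linalg.norm(r))         # delta_{k+1}
        p = r / delta[-1]                       # p^k
        Q.append(q); P.append(p)
    return np.column_stack(Q), np.column_stack(P), delta, gamma
```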

24 If
$P_k = ( p^0 \cdots p^{k-1} ), \quad Q_k = ( q^0 \cdots q^{k-1} )$
and
$B_k = \begin{pmatrix} \delta_1 & \gamma_1 & & \\ & \delta_2 & \ddots & \\ & & \ddots & \gamma_{k-1} \\ & & & \delta_k \end{pmatrix}$
then $P_k$ and $Q_k$, whose columns are orthonormal, satisfy the equations
$A Q_k = P_k B_k$
$A^T P_k = Q_k B_k^T + \gamma_k q^k (e^k)^T$
and
$A^T A Q_k = Q_k B_k^T B_k + \gamma_k \delta_k q^k (e^k)^T$

25 The second algorithm (LB2) reduces $A$ to lower bidiagonal form. Let $p^0 = c / \|c\|$, $u^0 = A^T p^0$, $\gamma_1 = \|u^0\|$, $q^0 = u^0 / \gamma_1$, $r^1 = A q^0 - \gamma_1 p^0$, $\delta_1 = \|r^1\|$, $p^1 = r^1 / \delta_1$; then for $k = 2, 3, \dots$
$u^{k-1} = A^T p^{k-1} - \delta_{k-1} q^{k-2}$
$\gamma_k = \|u^{k-1}\|, \quad q^{k-1} = u^{k-1} / \gamma_k$
$r^k = A q^{k-1} - \gamma_k p^{k-1}$
$\delta_k = \|r^k\|, \quad p^k = r^k / \delta_k$

26 If
$P_{k+1} = ( p^0 \cdots p^k ), \quad Q_k = ( q^0 \cdots q^{k-1} )$
and
$C_k = \begin{pmatrix} \gamma_1 & & & \\ \delta_1 & \gamma_2 & & \\ & \ddots & \ddots & \\ & & \delta_{k-1} & \gamma_k \\ & & & \delta_k \end{pmatrix}$
a $(k+1)$ by $k$ matrix, then $P_{k+1}$ and $Q_k$, whose columns are orthonormal, satisfy the equations
$A Q_k = P_{k+1} C_k$
$A^T P_{k+1} = Q_k C_k^T + \gamma_{k+1} q^k (e^{k+1})^T$

27 Of course, by eliminating $P_{k+1}$ in these equations we obtain
$A^T A Q_k = Q_k C_k^T C_k + \gamma_{k+1} \delta_k q^k (e^k)^T$
and
$C_k^T C_k = B_k^T B_k = J_k$
$B_k$ is the Cholesky factor of $J_k = C_k^T C_k$.

28 The block Lanczos algorithm

See Golub and Underwood. We consider only $2 \times 2$ blocks. Let $X_0$ be a given $n \times 2$ matrix such that $X_0^T X_0 = I_2$, and let $X_{-1} = 0$ be an $n \times 2$ matrix. Then, for $k = 1, 2, \dots$
$\Omega_k = X_{k-1}^T A X_{k-1}$
$R_k = A X_{k-1} - X_{k-1} \Omega_k - X_{k-2} \Gamma_{k-1}^T$
$X_k \Gamma_k = R_k$
The last step is the QR decomposition of $R_k$, such that $X_k$ is $n \times 2$ with $X_k^T X_k = I_2$. We obtain a block tridiagonal matrix.
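
A minimal numpy sketch (no treatment of the rank-deficient case discussed on the next slide):

```python
import numpy as np

def block_lanczos(A, X0, steps):
    """Block Lanczos with 2x2 blocks; QR of R_k gives X_k and Gamma_k."""
    X_prev = np.zeros_like(X0)                  # X_{-1} = 0
    X = X0
    Gamma_prev = np.zeros((X0.shape[1],) * 2)
    Omegas, Gammas = [], []
    for _ in range(steps):
        Omega = X.T @ A @ X                     # Omega_k
        R = A @ X - X @ Omega - X_prev @ Gamma_prev.T
        X_next, Gamma = np.linalg.qr(R)         # R_k = X_k Gamma_k
        Omegas.append(Omega); Gammas.append(Gamma)
        X_prev, X, Gamma_prev = X, X_next, Gamma
    return Omegas, Gammas
```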

29 The matrix $R_k$ may be rank deficient, and in that case $\Gamma_k$ is singular. One of the columns of $X_k$ can then be chosen arbitrarily. To complete the algorithm, we choose this column to be orthogonal to the previous block vectors $X_j$. The block Lanczos algorithm generates a sequence of matrices such that
$X_j^T X_i = \delta_{ij} I_2$

30 Proposition.
$X_i = \sum_{k=0}^{i} A^k X_0 C_k^{(i)}$
where the $C_k^{(i)}$ are $2 \times 2$ matrices.

Theorem. The matrix-valued polynomials $p_k$ satisfy
$p_k(\lambda) \Gamma_k = \lambda p_{k-1}(\lambda) - p_{k-1}(\lambda) \Omega_k - p_{k-2}(\lambda) \Gamma_{k-1}^T$
$p_{-1}(\lambda) \equiv 0, \quad p_0(\lambda) \equiv I_2$
where $\lambda$ is a scalar and $p_k(\lambda) = \sum_{j=0}^{k} \lambda^j C_j^{(k)}$

31 In matrix form,
$\lambda [p_0(\lambda), \dots, p_{N-1}(\lambda)] = [p_0(\lambda), \dots, p_{N-1}(\lambda)] J_N + [0, \dots, 0, p_N(\lambda) \Gamma_N]$
and, with $P(\lambda) = [p_0(\lambda), \dots, p_{N-1}(\lambda)]^T$,
$J_N P(\lambda) = \lambda P(\lambda) - [0, \dots, 0, p_N(\lambda) \Gamma_N]^T$
where $J_N$ is block tridiagonal.

Theorem. Considering the matrices $X_k$, there exists a matrix measure $\alpha$ such that
$X_i^T X_j = \int_a^b p_i(\lambda)^T d\alpha(\lambda)\, p_j(\lambda) = \delta_{ij} I_2$
where $a \le \lambda_1 = \lambda_{\min}$ and $b \ge \lambda_n = \lambda_{\max}$.

32 Proof.
$\delta_{ij} I_2 = X_i^T X_j = \Big( \sum_{k=0}^{i} (C_k^{(i)})^T X_0^T A^k \Big) \Big( \sum_{l=0}^{j} A^l X_0 C_l^{(j)} \Big)$
$= \sum_{k,l} (C_k^{(i)})^T X_0^T Q \Lambda^{k+l} Q^T X_0 C_l^{(j)} = \sum_{k,l} (C_k^{(i)})^T \hat X \Lambda^{k+l} \hat X^T C_l^{(j)}$
$= \sum_{k,l} (C_k^{(i)})^T \Big( \sum_{m=1}^{n} \lambda_m^{k+l} \hat X_m \hat X_m^T \Big) C_l^{(j)} = \sum_{m=1}^{n} \Big( \sum_k \lambda_m^k (C_k^{(i)})^T \Big) \hat X_m \hat X_m^T \Big( \sum_l \lambda_m^l C_l^{(j)} \Big)$
where the $\hat X_m$ are the columns of $\hat X = X_0^T Q$, which is a $2 \times n$ matrix.

33 Hence
$X_i^T X_j = \sum_{m=1}^{n} p_i(\lambda_m)^T \hat X_m \hat X_m^T p_j(\lambda_m)$
The sum on the right-hand side can be written as an integral for a $2 \times 2$ matrix measure:
$\alpha(\lambda) = 0$ if $\lambda < \lambda_1$; $\alpha(\lambda) = \sum_{j=1}^{i} \hat X_j \hat X_j^T$ if $\lambda_i \le \lambda < \lambda_{i+1}$; $\alpha(\lambda) = \sum_{j=1}^{n} \hat X_j \hat X_j^T$ if $\lambda_n \le \lambda$
Then
$X_i^T X_j = \int_a^b p_i(\lambda)^T d\alpha(\lambda)\, p_j(\lambda)$

34 The conjugate gradient algorithm

The conjugate gradient (CG) algorithm is an iterative method to solve linear systems $Ax = c$ where the matrix $A$ is symmetric positive definite (Hestenes and Stiefel, 1952). It can be obtained from the Lanczos algorithm by using the LU factorization of $J_k$. Starting from a given $x^0$ and $r^0 = c - Ax^0$, for $k = 0, 1, \dots$ until convergence do
$\beta_k = \frac{(r^k, r^k)}{(r^{k-1}, r^{k-1})}, \quad \beta_0 = 0$
$p^k = r^k + \beta_k p^{k-1}$
$\gamma_k = \frac{(r^k, r^k)}{(A p^k, p^k)}$
$x^{k+1} = x^k + \gamma_k p^k$
$r^{k+1} = r^k - \gamma_k A p^k$
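
The standard implementation, as a hedged numpy sketch (the stopping test on the residual norm is mine, not part of the slide):

```python
import numpy as np

def conjugate_gradient(A, c, x0, tol=1e-10, maxiter=1000):
    """Hestenes-Stiefel CG for symmetric positive definite A."""
    x = x0.astype(float)
    r = c - A @ x
    p = r.copy()
    rr = r @ r                                  # (r^k, r^k)
    for _ in range(maxiter):
        if np.sqrt(rr) <= tol:
            break
        Ap = A @ p
        gamma = rr / (p @ Ap)                   # gamma_k
        x += gamma * p                          # x^{k+1}
        r -= gamma * Ap                         # r^{k+1}
        rr_new = r @ r
        beta = rr_new / rr                      # beta_{k+1}
        p = r + beta * p                        # p^{k+1}
        rr = rr_new
    return x
```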

35 In exact arithmetic the residuals $r^k$ are orthogonal and
$v^{k+1} = (-1)^k \frac{r^k}{\|r^k\|}$
Moreover
$\alpha_k = \frac{1}{\gamma_{k-1}} + \frac{\beta_{k-1}}{\gamma_{k-2}}, \quad \beta_0 = 0, \quad \gamma_{-1} = 1$
$\eta_k = \frac{\sqrt{\beta_k}}{\gamma_{k-1}}$
The iterates are given by
$x^{k+1} = x^0 + s_k(A) r^0$
where $s_k$ is a polynomial of degree $k$.

36 Let $\|\epsilon^k\|_A = (A \epsilon^k, \epsilon^k)^{1/2}$ be the $A$-norm of the error $\epsilon^k = x - x^k$.

Theorem. Consider all the iterative methods that can be written as
$x^{k+1} = x^0 + q_k(A) r^0, \quad r^0 = c - A x^0$
where $q_k$ is a polynomial of degree $k$. Of all these methods, CG is the one which minimizes $\|\epsilon^k\|_A$ at each iteration.

37 As a consequence:

Theorem.
$\|\epsilon^{k+1}\|_A^2 \le \max_{1 \le i \le n} (t_{k+1}(\lambda_i))^2\, \|\epsilon^0\|_A^2$
for all polynomials $t_{k+1}$ of degree $k+1$ such that $t_{k+1}(0) = 1$.

Theorem.
$\|\epsilon^k\|_A \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \|\epsilon^0\|_A$
where $\kappa = \lambda_n / \lambda_1$ is the condition number of $A$. This bound is usually overly pessimistic.
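
For the Strakos30 parameters this bound is easy to evaluate (sketch reusing the strakos_matrix() function above; here $\kappa = 1000$):

```python
import numpy as np

A = strakos_matrix()                            # diagonal, SPD
kappa = np.linalg.cond(A)                       # lambda_n / lambda_1 = 1000
q = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
for k in (10, 30, 60):
    print(k, 2.0 * q ** k)                      # bound on ||eps^k||_A / ||eps^0||_A
```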

38 CG convergence in variable precision

[Figure: CG on Strakos30, $\log_{10}(\|r^k\|)$ for standard CG and CG with reorthogonalization]

39 W.E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Appl. Math., v. 9, (1951)
G.H. Golub and C. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, (1996)
G.H. Golub and R. Underwood, The block Lanczos method for computing eigenvalues, in Mathematical Software III, J. Rice Ed., (1977)
M.R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, v. 49, n. 6, (1952)
C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Standards, v. 45, (1950)
C. Lanczos, Solution of systems of linear equations by minimized iterations, J. Res. Nat. Bur. Standards, v. 49, (1952), pp. 33-53

40 G. Meurant, Computer solution of large linear systems, North-Holland, (1999)
G. Meurant, The Lanczos and Conjugate Gradient algorithms, from theory to finite precision computations, SIAM, (2006)
G. Meurant and Z. Strakoš, The Lanczos and conjugate gradient algorithms in finite precision arithmetic, Acta Numerica, (2006)
C.C. Paige, The computation of eigenvalues and eigenvectors of very large sparse matrices, Ph.D. thesis, University of London, (1971)
Z. Strakoš, On the real convergence rate of the conjugate gradient method, Linear Alg. Appl., (1991)
