FEM and sparse linear system solving


Lecture 9, Nov 17, 2017: Krylov space methods
Peter Arbenz, Computer Science Department, ETH Zürich

Survey on lecture

- The finite element method
- Direct solvers for sparse systems
- Iterative solvers for sparse systems
- Stationary iterative methods, preconditioning
- Steepest descent and conjugate gradient methods
- Krylov space methods for nonsymmetric systems: GMRES, MINRES
- Preconditioning
- Nonsymmetric Lanczos iteration based methods: Bi-CG, QMR, CGS, BiCGstab
- Multigrid (preconditioning)

Outline of this lecture

1. Krylov (sub)spaces
2. Orthogonal bases for Krylov spaces
3. GMRES
4. MINRES

Krylov space methods are "matrix-free": only a MatVec function that computes $y = Ax$ is necessary.

Krylov spaces

In steepest descent and CG we have
$$r_k = r_{k-1} - \alpha_{k-1} A p_{k-1}, \qquad p_k = r_k + \beta_{k-1} p_{k-1},$$
where $r_k = b - Ax_k$ is the residual and $p_k$ is the search direction. Since $p_k$ depends on $p_{k-1}$ and $r_k$, we can write
$$r_k = r_0 + \sum_{j=1}^{k} c_j A^j r_0 \quad\Longrightarrow\quad r_k = p_k(A)\, r_0, \quad p_k(0) = 1.$$
Thus, the residuals of these methods can be written as a $k$-th order polynomial in $A$ applied to the initial residual $r_0$.

Krylov spaces (cont.)

Moreover,
$$\sum_{j=1}^{k} c_j A^j r_0 = r_k - r_0 = (b - Ax_k) - (b - Ax_0) = -A(x_k - x_0),$$
such that
$$x_k = x_0 - \sum_{j=0}^{k-1} c_{j+1} A^j r_0.$$

Krylov spaces (cont.)

Definition: For given $A$, the $m$-th Krylov space generated by the vector $r$ is given by
$$\mathcal{K}^m = \mathcal{K}^m(A, r) := \operatorname{span}\{r, Ar, A^2 r, \ldots, A^{m-1} r\}.$$
We can also write
$$\mathcal{K}^m(A, r) = \{\, p(A)\, r \mid p \in P_{m-1} \,\},$$
where $P_d$ denotes the set of polynomials of degree at most $d$.

For the residuals $r_m$ of stationary or CG iterations we have
$$r_m \in r_0 + A\,\mathcal{K}^{m-1}(A, r_0) \subseteq \mathcal{K}^m(A, r_0).$$

Cayley–Hamilton theorem

Theorem: Let $\chi_A(\zeta) := \det(\zeta I - A)$ denote the characteristic polynomial of $A \in \mathbb{R}^{n\times n}$. Then $\chi_A(A) = O$.

Corollary: Let $A$ be nonsingular and
$$\chi_A(\zeta) = \zeta^n + a_{n-1}\zeta^{n-1} + a_{n-2}\zeta^{n-2} + \cdots + a_0,$$
with $a_0 = (-1)^n \det(A) \neq 0$. Then
$$A^{-1} = -\frac{1}{a_0}\left[ A^{n-1} + a_{n-1} A^{n-2} + a_{n-2} A^{n-3} + \cdots + a_1 I \right].$$

Cayley–Hamilton theorem (cont.)

Consequence: If $A$ is nonsingular, then $A^{-1}$ is a polynomial in $A$ of degree at most $n-1$.

If $Ax = b$, then
$$x = (A^{n-1} b + a_{n-1} A^{n-2} b + a_{n-2} A^{n-3} b + \cdots + a_1 b)/(-a_0).$$
As $Ae_0 = r_0$ we have
$$e_0 = (A^{n-1} r_0 + a_{n-1} A^{n-2} r_0 + a_{n-2} A^{n-3} r_0 + \cdots + a_1 r_0)/(-a_0),$$
so $x \in x_0 + \mathcal{K}^n(A, r_0)$. Therefore it makes sense to look for approximate solutions in Krylov spaces.
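A quick numerical sanity check of the corollary (a sketch, not part of the slides): MATLAB's poly(A) returns the coefficients $[1, a_{n-1}, \ldots, a_1, a_0]$ of the characteristic polynomial (computed from the eigenvalues, hence only trustworthy for small, well-behaved matrices), and polyvalm evaluates a polynomial at a matrix argument.

    % Check A^{-1} = -(A^{n-1} + a_{n-1} A^{n-2} + ... + a_1 I)/a_0
    % on a small random test matrix (shifted to be safely nonsingular).
    n = 6;
    A = randn(n) + n*eye(n);
    p = poly(A);                              % p = [1, a_{n-1}, ..., a_1, a_0]
    Ainv = -polyvalm(p(1:end-1), A) / p(end); % matrix polynomial of degree n-1
    norm(Ainv - inv(A))                       % should be at roundoff level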

Minimal polynomial

Definition: For $A \in \mathbb{R}^{n\times n}$, the unique monic polynomial $\mu_A$ of minimal degree $d$ satisfying $\mu_A(A) = 0$ is called the minimal polynomial of $A$.

Theorem: Let $A$ have distinct eigenvalues $\lambda_1, \ldots, \lambda_k$. Then
$$\mu_A(\zeta) = \prod_{i=1}^{k} (\zeta - \lambda_i)^{r_i},$$
where $r_i$ is the order of the largest Jordan block of $A$ for $\lambda_i$.

Note:
- The degree of $\mu_A$ is at most $n$ (Cayley–Hamilton).
- $e_0 \in \mathcal{K}^d$; interesting when $d \ll n$.
- Ideal: few distinct eigenvalues, small Jordan blocks!

Dependence on right-hand side

What does it mean when
$$\mathcal{K}^1 \subsetneq \mathcal{K}^2 \subsetneq \cdots \subsetneq \mathcal{K}^p = \mathcal{K}^{p+1} = \cdots?$$
Then
$$A^p r_0 = \alpha_0 r_0 + \alpha_1 A r_0 + \alpha_2 A^2 r_0 + \cdots + \alpha_{p-1} A^{p-1} r_0.$$
Multiplying by $A^{-1}$ and noting that $A^{-1} r_0 = e_0$ gives
$$A^{p-1} r_0 = \alpha_0 e_0 + \alpha_1 r_0 + \alpha_2 A r_0 + \cdots + \alpha_{p-1} A^{p-2} r_0.$$
Therefore $e_0 \in \mathcal{K}^p(A, r_0)$, i.e., $x \in x_0 + \mathcal{K}^p(A, r_0)$: we get the solution after precisely $p$ steps. Note that $p$ depends on $r_0$ (and thus on $x_0$).

Dependence on right-hand side (cont.)

Let $A$ be diagonalizable with $A = U\Lambda U^{-1}$. This means that there are $n$ linearly independent vectors (eigenvectors) with $Au_i = u_i \lambda_i$, $1 \le i \le n$. If $r_0 = \sum_{i=1}^{p} \alpha_i u_i$, then $A^j r_0 = \sum_{i=1}^{p} \alpha_i \lambda_i^j u_i$.

So, if $r_0 \in \operatorname{span}\{u_1, \ldots, u_p\}$ with $Au_i = u_i\lambda_i$, then $\mathcal{K}^p = \mathcal{K}^{p+1}$ and the solution is found in $p$ iteration steps.

This is an uncommon situation! Usually $p \approx n$, and it is not practical to iterate until the Krylov space is exhausted. We try to get a good approximation of $x$ after $m \ll n$ iteration steps.

What is the best $x_m$ from $\mathcal{K}^m$?

We want to best approximate $x = A^{-1}b$ from $x_0 + \mathcal{K}^m(A, r_0)$. Various answers lead to different methods:

- Ideal but not practical: minimize $\|e_m\|_2 = \|x - x_m\|_2$.
- When $A$ is SPD, we can minimize $\|e_m\|_A = \|r_m\|_{A^{-1}}$: Conjugate Gradients.
- We can minimize $\|r_m\|_2$: for symmetric $A$ the MINimum RESidual method (MINRES), for nonsymmetric $A$ the Generalized Minimum RESidual method (GMRES).
- We can enforce the Galerkin condition $r_m \perp \mathcal{K}^m$, or the Petrov–Galerkin condition $r_m \perp \mathcal{L}^m$, e.g., BiConjugate Gradients and Quasi Minimum Residual.

These are $m$ constraints to compute the $m$ parameters $\alpha_0, \ldots, \alpha_{m-1}$ in
$$x_m = x_0 + \alpha_0 r_0 + \alpha_1 A r_0 + \alpha_2 A^2 r_0 + \cdots + \alpha_{m-1} A^{m-1} r_0.$$

An orthogonal basis for $\mathcal{K}^m$

Problem: The matrix
$$K_m(A, r_0) := \begin{bmatrix} r_0 & Ar_0 & \cdots & A^{m-1} r_0 \end{bmatrix}$$
becomes more and more ill-conditioned as $m$ increases. (Remember vector iteration for computing the largest eigenvalue: the columns $A^j r_0$ all turn toward the dominant eigenvector.)

Solution: We have to find a well-conditioned basis of $\mathcal{K}^m$.
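The growth of the condition number is easy to observe numerically. A minimal MATLAB sketch (the diagonal test matrix and the sizes are arbitrary choices):

    % cond([r0, A r0, ..., A^{m-1} r0]) explodes as m grows, because all
    % columns turn toward the dominant eigenvector (cf. vector iteration).
    n = 100;
    A = diag(linspace(1, 10, n));      % simple SPD test matrix
    K = randn(n, 1);                   % first column: r0
    for m = 2:10
        K = [K, A*K(:, end)];          %#ok<AGROW> append A^{m-1} r0
        fprintf('m = %2d:  cond(K_m) = %.2e\n', m, cond(K));
    end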

Arnoldi & Lanczos algorithms

Task: For $j = 1, 2, \ldots, m$, compute orthonormal bases $\{v_1, \ldots, v_j\}$ for the Krylov spaces
$$\mathcal{K}^j = \operatorname{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{j-1} r_0\}.$$
The algorithms that do this are
- the Lanczos algorithm for $A$ symmetric/Hermitian,
- the Arnoldi algorithm for $A$ nonsymmetric.
In principle, the Lanczos and Arnoldi algorithms implement the Gram–Schmidt orthogonalization procedure.

Difficulty: Because of ill-conditioning, we do not want to explicitly form $r_0, Ar_0, \ldots, A^j r_0$.

Arnoldi & Lanczos algorithms (cont.)

Instead of using $A^j r_0$ we proceed with $Av_j$. (Notice that $Av_i \in \mathcal{K}^{i+1} \subseteq \mathcal{K}^j$ for all $i < j$.) Orthogonalize $Av_j$ against $v_1, \ldots, v_j$ by the Gram–Schmidt procedure:
$$w_j = Av_j - \sum_{i=1}^{j} v_i h_{ij}.$$
$w_j$ points in the desired new direction (unless it is $0$). Therefore,
$$v_{j+1} = w_j / \|w_j\|.$$
Q: What happens if $w_j = 0$?

Arnoldi: Classical and modified Gram–Schmidt

    v_1 = r_0/||r_0||_2
    for j := 1 to m do
        w_j = A v_j
        for i := 1 to j do              % orthogonalize A v_j against v_1,...,v_j
            h_{i,j} := (A v_j)^T v_i    % classical Gram-Schmidt (CGS)
            h_{i,j} := w_j^T v_i        % modified Gram-Schmidt (MGS)
            w_j := w_j - h_{i,j} v_i
        end for
        h_{j+1,j} = ||w_j||_2           % stop if h_{j+1,j} is zero
        v_{j+1} = w_j / h_{j+1,j}
    end for

In practice, MGS is numerically more stable than CGS.
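A runnable MATLAB version of the MGS variant (a sketch; the function name arnoldi_mgs is our own):

    function [V, H] = arnoldi_mgs(A, r0, m)
    % Arnoldi with modified Gram-Schmidt. Returns V with m+1 orthonormal
    % columns spanning K^{m+1}(A, r0) and the (m+1)-by-m Hessenberg matrix
    % H such that A*V(:,1:m) = V*H.
    n = length(r0);
    V = zeros(n, m+1);  H = zeros(m+1, m);
    V(:,1) = r0 / norm(r0);
    for j = 1:m
        w = A * V(:,j);
        for i = 1:j                    % modified Gram-Schmidt loop
            H(i,j) = w' * V(:,i);
            w = w - V(:,i) * H(i,j);
        end
        H(j+1,j) = norm(w);
        if H(j+1,j) == 0, break, end   % happy breakdown: K^j is A-invariant
        V(:,j+1) = w / H(j+1,j);
    end
    end

For any starting vector, norm(A*V(:,1:m) - V*H) and norm(V'*V - eye(m+1)) should then be at roundoff level.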

Remark: Classical vs. modified Gram–Schmidt

Given vectors $a$ and $v_1, \ldots, v_j$ with $v_k^T v_l = 0$ for all $1 \le k \neq l \le j$.

Task: Compute $z$ such that $a = \sum_i \alpha_i v_i + z$ with $z^T v_l = 0$ for all $l$.

Classical Gram–Schmidt:

    z = a;
    for i := 1 to j do
        alpha_i = v_i^T a;
        z = z - v_i alpha_i;
    end for

Modified Gram–Schmidt:

    z = a;
    for i := 1 to j do
        alpha_i = v_i^T z;
        z = z - v_i alpha_i;
    end for

MGS is more stable than CGS. There are other solutions, e.g., Householder reflectors. See Golub & Van Loan: Matrix Computations.
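The difference in stability is visible already on a modest example. A MATLAB sketch that orthonormalizes the ill-conditioned monomial columns of a Vandermonde matrix by one pass of CGS and one pass of MGS (sizes are arbitrary choices):

    n = 50;  j = 20;
    A = vander(linspace(0, 1, n));  A = A(:, end-j+1:end);  % monomial columns
    Qc = zeros(n, j);  Qm = zeros(n, j);
    for k = 1:j
        zc = A(:,k);  zm = A(:,k);
        for i = 1:k-1
            zc = zc - Qc(:,i) * (Qc(:,i)' * A(:,k));  % CGS: coefficients from a
            zm = zm - Qm(:,i) * (Qm(:,i)' * zm);      % MGS: coefficients from z
        end
        Qc(:,k) = zc / norm(zc);  Qm(:,k) = zm / norm(zm);
    end
    fprintf('CGS: ||I - Q''*Q|| = %.1e\n', norm(eye(j) - Qc'*Qc));
    fprintf('MGS: ||I - Q''*Q|| = %.1e\n', norm(eye(j) - Qm'*Qm));

Typically the loss of orthogonality of CGS is orders of magnitude larger than that of MGS.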

Arnoldi relation

Define $V_m := [v_1, \ldots, v_m]$. Then we get the Arnoldi relation
$$A V_m = V_m H_m + w_m e_m^T = V_{m+1} \underline{H}_m.$$
[Slide shows a block picture of this relation: $A$ times $V_m$ equals $V_m$ times $H_m$ plus the rank-one term $w_m e_m^T$.]

Arnoldi relation (cont.)

Here,
$$\underline{H}_m = \begin{bmatrix}
h_{11} & h_{12} & \cdots & h_{1,m} \\
h_{21} & h_{22} & \cdots & h_{2,m} \\
       & h_{3,2} & \cdots & h_{3,m} \\
       &         & \ddots & \vdots \\
       &         &        & h_{m+1,m}
\end{bmatrix} \in \mathbb{R}^{(m+1)\times m}.$$
The square matrix $H_m \in \mathbb{R}^{m\times m}$ is obtained from $\underline{H}_m$ by deleting the last row. Notice that $H_m = V_m^T A V_m$. Therefore, if $A$ is symmetric, $H_m \equiv T_m$ is tridiagonal! The Lanczos relation is
$$A V_m = V_m T_m + w_m e_m^T = V_{m+1} \underline{T}_m.$$
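With the arnoldi_mgs sketch from above, the tridiagonality for symmetric $A$ is easy to check numerically (test matrix and sizes are arbitrary choices):

    n = 200;  m = 10;
    A = sprandsym(n, 0.1) + n*speye(n);   % sparse symmetric test matrix
    [V, H] = arnoldi_mgs(A, randn(n, 1), m);
    norm(triu(H, 2))                      % entries above the first superdiagonal ~ 0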

GMRES

The GMRES (Generalized Minimal RESidual) algorithm computes the $x_m \in x_0 + \mathcal{K}^m(A, r_0)$ that leads to the smallest residual, exploiting the Arnoldi relation. The MINRES algorithm does the same for symmetric matrices using the Lanczos relation.

Goal: Cheaply minimize
$$\|r_m\|_2 = \|b - Ax_m\|_2, \qquad x_m \in x_0 + \mathcal{K}^m(A, r_0),$$
using the Arnoldi relation $A V_m = V_{m+1} \underline{H}_m$ and $\mathcal{R}(V_m) = \mathcal{K}^m(A, r_0)$.

A Hessenberg least-squares problem

The Arnoldi basis expansion $x_m = x_0 + V_m y$ gives, with $\beta := \|r_0\|_2$:
$$\begin{aligned}
\min_y \|r_m\|_2 &= \min_y \|b - A(x_0 + V_m y)\|_2 \\
&= \min_y \|r_0 - A V_m y\|_2 \\
&= \min_y \|r_0 - V_{m+1} \underline{H}_m y\|_2 \\
&= \min_y \|V_{m+1}(\beta e_1 - \underline{H}_m y)\|_2 \\
&= \min_y \|\beta e_1 - \underline{H}_m y\|_2.
\end{aligned}$$
This is a Hessenberg least-squares problem of dimension $(m+1) \times m$. The QR factorization of $\underline{H}_m$ can be cheaply computed using Givens rotations.

QR factorization of $\underline{H}_m$ using $m$ Givens row rotations

[Slide shows a $5\times 4$ example: successive Givens rotations acting on rows $(1,2), (2,3), (3,4), (4,5)$ annihilate the subdiagonal entries $h_{21}, h_{32}, h_{43}, h_{54}$ one after the other, turning the Hessenberg matrix into an upper triangular $R_m$ on top of a zero row:]
$$G_m \cdots G_2 G_1 \underline{H}_m = \begin{bmatrix} R_m \\ 0 \end{bmatrix}.$$
Apply the same rotations to the right-hand side and solve the triangular $m \times m$ system: with $z := G_m \cdots G_2 G_1 e_1$,
$$\min_y \|r_m\|_2 = \min_y \|\beta e_1 - \underline{H}_m y\|_2 = \min_y \|\beta z - R_m y\|_2.$$
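A MATLAB sketch of this step (the helper name hessenberg_ls is our own): it triangularizes Hbar with $m$ Givens rotations, applies the same rotations to $\beta e_1$, and solves the triangular system; the absolute value of the last rotated right-hand-side entry is exactly the minimal residual norm.

    function [y, resnorm] = hessenberg_ls(Hbar, beta)
    % Solve min_y ||beta*e1 - Hbar*y||_2 for an (m+1)-by-m Hessenberg Hbar.
    [~, m] = size(Hbar);
    g = [beta; zeros(m, 1)];             % right-hand side beta*e1
    R = Hbar;
    for k = 1:m                          % annihilate R(k+1,k)
        t = hypot(R(k,k), R(k+1,k));
        c = R(k,k)/t;  s = R(k+1,k)/t;
        G = [c s; -s c];                 % rotation acting on rows k, k+1
        R(k:k+1, k:m) = G * R(k:k+1, k:m);
        g(k:k+1)      = G * g(k:k+1);
    end
    y = R(1:m, 1:m) \ g(1:m);            % back substitution
    resnorm = abs(g(m+1));               % = min ||beta*e1 - Hbar*y||_2
    end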

Residual results

Monotonicity: $\|r_{m+1}\|_2 \le \|r_m\|_2$, because $\mathcal{K}^m \subseteq \mathcal{K}^{m+1}$.

Residual polynomial: Since $r_m = b - Ax_m \in r_0 + A\mathcal{K}^m$, we have $r_m = p_m(A)\, r_0$, where $p_m$ is a polynomial of degree $m$ with $p_m(0) = 1$. Denote the set of all such polynomials by $P_m$.

Theorem: Let $A = Q\Lambda Q^{-1}$ be diagonalizable. Then at step $m$ of the GMRES iteration, the residual $r_m$ satisfies
$$\frac{\|r_m\|_2}{\|r_0\|_2} \le \inf_{p_m \in P_m} \|p_m(A)\|_2 \le \kappa(Q) \inf_{p_m \in P_m} \max_{\lambda \in \sigma(A)} |p_m(\lambda)|.$$

Residual results (cont.)

We have seen a similar result for the CG algorithm. There, $\kappa(Q) = 1$ and $\sigma(A) \subset [\alpha, \beta]$ with $\alpha > 0$.

In more general situations, the distribution of $A$'s eigenvalues is more complicated. One may replace $\sigma(A)$ by a simpler-shaped set $D \supseteq \sigma(A)$ to simplify the analysis. E.g., $D$ may be an ellipse, for which more general Chebyshev polynomials have been developed. But $0$ must never be contained in $D$!

Sample matrices & convergence

Remember the minimal polynomial $\mu_A$; assume that $A$ is diagonalizable, i.e., all Jordan blocks have size one.

- Ideally: few distinct eigenvalues.
- Heuristically: the eigenvalues should best be squeezed into a small ball.
- Perfect: $A$ is a (multiple of the) identity: a one-point spectrum.

This is a visual interpretation of the goal of preconditioning. We compare the convergence of GMRES for two matrices with quite different spectra. Pictures from Trefethen & Bau, Numerical Linear Algebra, SIAM 1997, Lecture 35.

Matrix 1: spectrum

Matrix $A$ is obtained in MATLAB by

    m = 200; A = 2*eye(m) + 0.5*randn(m)/sqrt(m);

$A$ has its spectrum in a ball with radius $1/2$ centered at $2$. Let us choose $p_n(z) = (1 - z/2)^n$. On the spectrum, $|1 - z/2| \le 1/4$ and therefore $|p_n(z)| \le (1/4)^n$. The convergence in this case is extraordinarily steady at a rate $4^{-n}$, see next slide.
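A sketch that reproduces this experiment with MATLAB's built-in gmres (right-hand side and random seed are arbitrary choices):

    m = 200;  rng(0);
    A = 2*eye(m) + 0.5*randn(m)/sqrt(m);
    b = randn(m, 1);
    [~, ~, ~, ~, resvec] = gmres(A, b, [], 1e-14, 30);    % unrestarted GMRES
    semilogy(0:length(resvec)-1, resvec/resvec(1), 'o-'); hold on
    semilogy(0:30, 4.^-(0:30), '--');                     % predicted rate 4^{-n}
    legend('GMRES residual', '4^{-n}');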

Matrix 1: GMRES convergence

[Figure: GMRES residual norms for matrix 1, decaying steadily at the predicted rate $4^{-n}$; from Trefethen & Bau, Lecture 35.]

Matrix 2: spectrum

Now $A' = A + D$, where $A$ is matrix 1 and $D$ is the diagonal matrix with entries
$$d_k = (-2 + 2\sin\theta_k) + i\cos\theta_k, \qquad \theta_k = \frac{k\pi}{m+1}, \quad 0 \le k < m.$$
$A'$ has a spectrum that surrounds the origin: the eigenvalues lie in a semicircular cloud that bends around it. The convergence rate is much worse than in example 1. The condition number of $Q$ is not large; the slow convergence is due to the location of the eigenvalues.
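The same experiment for matrix 2 (a sketch; self-contained, regenerating matrix 1 first):

    m = 200;  rng(0);
    A = 2*eye(m) + 0.5*randn(m)/sqrt(m);      % matrix 1
    theta = (0:m-1)' * pi / (m+1);
    D = diag(-2 + 2*sin(theta) + 1i*cos(theta));
    A2 = A + D;                               % matrix 2: spectrum bends around 0
    b = randn(m, 1);
    [~, ~, ~, ~, resvec] = gmres(A2, b, [], 1e-14, 60);
    semilogy(0:length(resvec)-1, resvec/resvec(1), 's-'); % much slower decay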

Matrix 2: GMRES convergence

[Figure: GMRES residual norms for matrix 2, converging much more slowly than for matrix 1; from Trefethen & Bau, Lecture 35.]

Practical issues

- The GMRES algorithm with $m$ steps needs memory space for $m + O(1)$ vectors of length $n$, plus the matrices $A$ and $M$. (CG needed just 4 vectors.) Slow convergence may thus entail expensive memory requirements.
- Recipe: limit $m$. Execute $m$ steps of GMRES, then restart with $x_m$ as the initial approximation of the next GMRES cycle.
- GMRES does not care about symmetries: there is no gain in choosing a symmetric preconditioner $M$ when solving $M^{-1}Ax = M^{-1}b$.
- Incomplete LU factorizations (ILU) are popular preconditioners.
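In MATLAB, both ingredients, restarting and ILU preconditioning, are available out of the box. A usage sketch (the test matrix and the tolerances are placeholders):

    A = gallery('wathen', 20, 20);            % sparse FEM-like test matrix
    b = ones(size(A, 1), 1);
    [L, U] = ilu(A, struct('type', 'ilutp', 'droptol', 1e-3));  % M = L*U
    restart = 30;                             % GMRES(30)
    [x, flag, relres, iter] = gmres(A, b, restart, 1e-8, 200, L, U);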

The GMRES(m) algorithm

    Set an initial guess x_0
    until convergence do
        Compute the initial residual r_0 = b - A x_0
        Solve for the preconditioned residual: M z_0 = r_0
        Normalize the residual: v_1 = z_0/||z_0||
        for k = 1, 2, ..., m do
            Compute the matrix-vector product r_k = A v_k
            Solve M z_k = r_k
            Orthonormalize z_k against all v_j, j = 1,...,k, by the
            modified Gram-Schmidt procedure to obtain v_{k+1}
        end for
        Solve the GMRES minimization problem; copy the solution x_m into x_0
        Check convergence; if satisfied, leave GMRES
    end
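A compact, runnable MATLAB transcription of this algorithm (a sketch: gmres_restarted is our own name, M is a left preconditioner applied by backslash, and the small least-squares problem is solved by backslash instead of explicit Givens rotations):

    function [x, resnorms] = gmres_restarted(A, b, M, m, tol, maxcycle)
    % GMRES(m) with left preconditioning, M^{-1}A x = M^{-1}b.
    x = zeros(size(b));  resnorms = [];
    for cycle = 1:maxcycle
        z = M \ (b - A*x);                 % preconditioned residual
        beta = norm(z);
        resnorms(end+1) = beta;            %#ok<AGROW> preconditioned residual norm
        if beta < tol, return, end
        V = zeros(length(b), m+1);  H = zeros(m+1, m);
        V(:,1) = z / beta;
        for k = 1:m                        % Arnoldi (MGS) on M^{-1}A
            w = M \ (A * V(:,k));
            for i = 1:k
                H(i,k) = w' * V(:,i);
                w = w - V(:,i) * H(i,k);
            end
            H(k+1,k) = norm(w);
            if H(k+1,k) == 0, break, end   % happy breakdown
            V(:,k+1) = w / H(k+1,k);
        end
        y = H(1:k+1, 1:k) \ (beta * eye(k+1, 1));  % min ||beta*e1 - Hbar*y||_2
        x = x + V(:,1:k) * y;              % restart from the new iterate
    end
    end

Called, e.g., as [x, res] = gmres_restarted(A, b, speye(size(A)), 30, 1e-8, 20) for the unpreconditioned case.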

Krylov space methods for symmetric systems

Let $A$ be symmetric indefinite. The CG method does not work, since $A$ does not induce a norm: $x^T A x$ and $x^T A^{-1} x$ can be negative, and it does not make sense to minimize $\|r\|_{A^{-1}}$.

Then the minimal residual (MINRES) algorithm is a feasible solution method. MINRES is essentially GMRES adapted to symmetric systems. Two things are worth noting:

- $Av_k$ does not need to be orthogonalized against $v_j$ for $j < k-1$:
$$(Av_k)^T v_j = v_k^T A^T v_j = v_k^T \underbrace{A v_j}_{\in\,\mathcal{K}^{j+1}} = 0 \quad \text{if } j+1 < k.$$
- We have a short recurrence.

Krylov space methods for symmetric systems (cont.)

$\underline{H}_m =: \underline{T}_m$ is tridiagonal:
$$\underline{T}_m = \begin{bmatrix}
t_{11} & t_{12} &        &           &         \\
t_{21} & t_{22} & t_{23} &           &         \\
       & t_{3,2} & \ddots & \ddots   &         \\
       &        & \ddots & \ddots    & t_{m-1,m} \\
       &        &        & t_{m,m-1} & t_{m,m} \\
       &        &        &           & t_{m+1,m}
\end{bmatrix} = \begin{bmatrix} T_m \\ t_{m+1,m}\, e_m^T \end{bmatrix}.$$
Note that the submatrix $T_m$ is symmetric. Operating with Givens rotations on $\underline{T}_m$ zeros the lower off-diagonal elements at the expense of an additional upper off-diagonal. Of course, the resulting matrix is upper triangular.

The (unpreconditioned) MINRES algorithm

    Choose x_0. Compute r_0 = b - A x_0, beta = ||r_0||. Set v_0 = r_0/beta.
    for k = 1, 2, ... do
        w_k := A v_{k-1}
        if k > 1 then w_k := w_k - v_{k-2} t_{k,k-1}
        t_{k,k} := w_k^T v_{k-1}
        w_k := w_k - v_{k-1} t_{k,k}
        t_{k+1,k} := ||w_k||_2
        v_k := w_k / t_{k+1,k}
        Update the QR factorization of the (k+1)-by-k matrix: T_k = Q_k R_k
        Solve R_k h_k = beta Q_k^T e_1
        x_k := V_k h_k
        r_k := b - A x_k
        Check for convergence
    end for

The updates of $x_k$ and $r_k$ can be done by 3-term recurrences. Preconditioners for $A$ must be SPD in order to use this procedure. (Why?)
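A direct MATLAB transcription of this algorithm (a sketch; for clarity it stores all Lanczos vectors and re-solves the small least-squares problem with backslash each step, instead of updating the QR factorization and using the 3-term recurrences):

    function [x, resnorms] = minres_naive(A, b, maxit)
    % Unpreconditioned MINRES for symmetric A, naive (memory-hungry) form.
    n = length(b);
    x0 = zeros(n, 1);  r0 = b - A*x0;  beta = norm(r0);
    V = zeros(n, maxit+1);  V(:,1) = r0 / beta;
    T = zeros(maxit+1, maxit);  resnorms = zeros(maxit, 1);
    for k = 1:maxit                        % Lanczos 3-term recurrence
        w = A * V(:,k);
        if k > 1, w = w - V(:,k-1) * T(k-1,k); end
        T(k,k) = w' * V(:,k);
        w = w - V(:,k) * T(k,k);
        T(k+1,k) = norm(w);
        h = T(1:k+1, 1:k) \ (beta * eye(k+1, 1));  % min ||beta*e1 - Tbar_k h||_2
        x = x0 + V(:,1:k) * h;
        resnorms(k) = norm(b - A*x);
        if T(k+1,k) == 0, resnorms = resnorms(1:k); return, end  % breakdown
        if k < maxit, T(k,k+1) = T(k+1,k); end     % symmetry of T
        V(:,k+1) = w / T(k+1,k);
    end
    end

MATLAB's built-in minres implements the proper short-recurrence version and also accepts an (SPD) preconditioner.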

Note on CG: CG vs. MINRES

In both methods: $x_m \in x_0 + \mathcal{K}^m(A, r_0)$.

MINRES: Choose $x_m$ such that $\|r_m\|_2$ is minimized:
$$\min_y \|r_m\|_2 = \min_y \|r_0 - A V_m y\|_2 = \min_y \|\beta e_1 - \underline{T}_m y\|_2.$$

CG: Choose $x_m$ such that the Galerkin condition $r_m \perp \mathcal{K}^m$ is satisfied:
$$0 = V_m^T (b - Ax_m) = V_m^T (r_0 - A V_m y_m) = \beta e_1 - T_m y_m, \qquad (\beta = \|r_0\|_2).$$

Main practical issues for Krylov space methods

- Select a reasonable convergence criterion and a reasonable accuracy threshold $\tau$.
- Without a good preconditioner, the method might not converge within a reasonable number of iteration steps.
- We cannot keep increasing the dimension of $\mathcal{K}$: not enough memory! We therefore need to decide how and when to restart.
