Parallel Numerics, WT 2016/2017: Iterative Methods for Sparse Linear Systems of Equations

Contents

1 Introduction
  1.1 Computer Science Aspects
  1.2 Numerical Problems
  1.3 Graphs
  1.4 Loop Manipulations
2 Elementary Linear Algebra Problems
  2.1 BLAS: Basic Linear Algebra Subroutines
  2.2 Matrix-Vector Operations
  2.3 Matrix-Matrix-Product
3 Linear Systems of Equations with Dense Matrices
  3.1 Gaussian Elimination
  3.2 Parallelization
  3.3 QR-Decomposition with Householder Matrices
4 Sparse Matrices
  4.1 General Properties, Storage
  4.2 Sparse Matrices and Graphs
  4.3 Reordering
  4.4 Gaussian Elimination for Sparse Matrices
5 Iterative Methods for Sparse Linear Systems of Equations
  5.1 Stationary Methods
  5.2 Nonstationary Methods
  5.3 Preconditioning
6 Domain Decomposition
  6.1 Overlapping Domain Decomposition
  6.2 Non-overlapping Domain Decomposition
  6.3 Schur Complements

Disadvantages of direct methods (in parallel):
- strongly sequential
- may lead to dense matrices: the sparsity pattern changes, additional entries become necessary
- indirect addressing
- storage and computational effort

Iterative solver:
- choose an initial guess (starting vector) x^{(0)}, e.g., x^{(0)} = 0
- iteration function x^{(k+1)} := Φ(x^{(k)})

Applied to solving a linear system:
- the main part of Φ should be a matrix-vector multiplication Ax (matrix-free!?)
- easy to parallelize, no change in the sparsity pattern of A
- x^{(k)} → x̄ = A^{-1} b for k → ∞

Main problem: fast convergence!

5.1 Stationary Methods: Richardson Iteration

Construct from Ax = b an iteration process:

b = Ax = (A - I + I)x = x - (I - A)x
x = b + (I - A)x = b + Nx        ((artificial) splitting of A, N := I - A)

This leads to the fixed-point equation x = Φ(x) with Φ(x) := b + Nx:

start: x^{(0)};   x^{(k+1)} := Φ(x^{(k)}) = b + N x^{(k)} = b + (I - A) x^{(k)}

Richardson Iteration (cont.)

start: x^{(0)};   x^{(k+1)} := Φ(x^{(k)}) = b + N x^{(k)} = b + (I - A) x^{(k)}

If x^{(k)} is convergent, x^{(k)} → x̄, then

x̄ = Φ(x̄) = b + N x̄ = b + (I - A) x̄   ⇒   A x̄ = b,

and therefore it holds x^{(k)} → x̄ := A^{-1} b.

Residual-based formulation:

x^{(k+1)} = Φ(x^{(k)}) = b + (I - A) x^{(k)} = b + x^{(k)} - A x^{(k)}
          = x^{(k)} + (b - A x^{(k)}) = x^{(k)} + r(x^{(k)}),   with r(x) := b - Ax the residual.
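
As an illustration (not part of the slides), a minimal NumPy sketch of the Richardson iteration in its residual-based form; the matrix and right-hand side are made up for the example:

import numpy as np

def richardson(A, b, max_iter=200, tol=1e-10):
    # x^{(k+1)} = x^{(k)} + r(x^{(k)}),  r(x) = b - A x
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + r
    return x

A = np.array([[1.2, 0.1], [0.1, 0.9]])   # eigenvalues close to 1, so I - A is a contraction
b = np.array([1.0, 2.0])
print(richardson(A, b), np.linalg.solve(A, b))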

Convergence Analysis via Neumann Series

x^{(k)} = b + N x^{(k-1)} = b + N(b + N x^{(k-2)}) = b + Nb + N^2 x^{(k-2)} = ...
        = b + Nb + N^2 b + ... + N^{k-1} b + N^k x^{(0)}
        = ( Σ_{j=0}^{k-1} N^j ) b + N^k x^{(0)}

Special case x^{(0)} = 0:  x^{(k)} = ( Σ_{j=0}^{k-1} N^j ) b, hence

x^{(k)} ∈ span{b, Nb, N^2 b, ..., N^{k-1} b} = span{b, Ab, A^2 b, ..., A^{k-1} b} =: K_k(A, b),

which is called the Krylov space of A and b.

For ||N|| < 1 it holds:

Σ_{j=0}^{k-1} N^j → Σ_{j=0}^{∞} N^j = (I - N)^{-1} = (I - (I - A))^{-1} = A^{-1}

Convergence Analysis via Neumann Series (cont.)

x^{(k)} → ( Σ_{j=0}^{∞} N^j ) b = (I - N)^{-1} b = A^{-1} b = x̄

The Richardson iteration is convergent for ||N|| < 1, i.e. for A ≈ I.

Error analysis for e^{(k)} := x^{(k)} - x̄:

e^{(k+1)} = x^{(k+1)} - x̄ = Φ(x^{(k)}) - Φ(x̄) = (b + N x^{(k)}) - (b + N x̄) = N (x^{(k)} - x̄) = N e^{(k)}

||e^{(k)}|| ≤ ||N|| ||e^{(k-1)}|| ≤ ||N||^2 ||e^{(k-2)}|| ≤ ... ≤ ||N||^k ||e^{(0)}||

||N|| < 1  ⇒  ||N||^k → 0  ⇒  ||e^{(k)}|| → 0  for k → ∞

Convergence if ρ(N) = ρ(I - A) < 1, where ρ is the spectral radius:
ρ(N) = |λ_max| = max_i |λ_i|   (λ_i eigenvalue of N).

The eigenvalues of A all have to lie in the circle around 1 with radius 1.

Splittings of A

Convergence of Richardson only in very special cases! Try to improve the iteration for better convergence.

Write A in the form A := M - N:

b = Ax = (M - N)x = Mx - Nx   ⇒   x = M^{-1} b + M^{-1} N x =: Φ(x)

Φ(x) = M^{-1} b + M^{-1} N x = M^{-1} b + M^{-1} (M - A) x = M^{-1} (b - Ax) + x = x + M^{-1} r(x)

N should be such that Ny can be evaluated efficiently.
M should be such that M^{-1} y can be evaluated efficiently.

x^{(k+1)} = x^{(k)} + M^{-1} r^{(k)}

The iteration with splitting M - N is equivalent to Richardson applied to M^{-1} A x = M^{-1} b.
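
A generic sketch of this splitting iteration x^{(k+1)} = x^{(k)} + M^{-1} r^{(k)} (not from the slides; the caller-supplied routine apply_Minv, which applies M^{-1}, is a hypothetical helper):

import numpy as np

def splitting_iteration(A, b, apply_Minv, max_iter=200, tol=1e-10):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                 # residual r^{(k)}
        if np.linalg.norm(r) < tol:
            break
        x = x + apply_Minv(r)         # preconditioned correction M^{-1} r^{(k)}
    return x

# Richardson is the special case M = I:
# splitting_iteration(A, b, apply_Minv=lambda r: r)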

Convergence

The iteration with splitting A = M - N is convergent if

ρ(M^{-1} N) = ρ(I - M^{-1} A) < 1

For fast convergence it should hold M^{-1} A ≈ I; M^{-1} A should be better conditioned than A itself.

Such a matrix M is called a preconditioner for A. It is also used in other iterative methods to accelerate convergence.

Condition number: κ(A) = ||A^{-1}|| ||A||, λ_max / λ_min, or σ_max / σ_min.

Jacobi (Diagonal) Splitting

Choose A = M - N = D - (L + U) with D = diag(A), L the lower triangular part of A, and U the upper triangular part.

x^{(k+1)} = D^{-1} b + D^{-1} (L + U) x^{(k)} = D^{-1} b + D^{-1} (D - A) x^{(k)} = x^{(k)} + D^{-1} r^{(k)}

Convergent for A ≈ diag(A), e.g. for diagonally dominant matrices:

ρ(D^{-1} N) = ρ(I - D^{-1} A) < 1

Jacobi (Diagonal) Splitting (cont.)

Iteration process written elementwise: x^{(k+1)} = D^{-1} (b - (A - D) x^{(k)})

a_jj x_j^{(k+1)} = b_j - Σ_{m=1}^{j-1} a_{j,m} x_m^{(k)} - Σ_{m=j+1}^{n} a_{j,m} x_m^{(k)}

x_j^{(k+1)} = (1 / a_jj) ( b_j - Σ_{m=1, m≠j}^{n} a_{j,m} x_m^{(k)} )

Damping or relaxation for improving convergence.
Idea: view the iterative method as a correction of the last iterate in a search direction, and introduce a step length for this correction step:

x^{(k+1)} = x^{(k)} + D^{-1} r^{(k)}   →   x^{(k+1)} = x^{(k)} + ω D^{-1} r^{(k)}

with an additional damping parameter ω. Damped Jacobi iteration:

x^{(k+1)}_damped = (ω + 1 - ω) x^{(k)} + ω D^{-1} r^{(k)} = ω x^{(k+1)} + (1 - ω) x^{(k)}

Damped Jacobi Iteration

x^{(k+1)} = x^{(k)} + ω D^{-1} r^{(k)} = x^{(k)} + ω D^{-1} (b - A x^{(k)}) = ...
          = ω D^{-1} b + [ (1 - ω) I + ω D^{-1} (L + U) ] x^{(k)}

is convergent for

ρ( (1 - ω) I + ω D^{-1} (L + U) ) < 1        (the iteration matrix tends to I for ω → 0)

Look for the optimal ω with the best convergence (additional degree of freedom).
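
A minimal NumPy sketch (not from the slides) of the damped Jacobi iteration x^{(k+1)} = x^{(k)} + ω D^{-1} r^{(k)}; ω = 1 gives the plain Jacobi method:

import numpy as np

def damped_jacobi(A, b, omega=1.0, max_iter=500, tol=1e-10):
    d = np.diag(A)                 # D = diag(A), applied entrywise
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r / d
    return x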

Parallelism in the Jacobi Iteration

The Jacobi method is easy to parallelize: only Ax and D^{-1} x are needed. But often too slow convergence!
Improvement: block Jacobi iteration.

Gauss-Seidel Iteration

Always use the newest information available!

Jacobi iteration:

a_jj x_j^{(k+1)} = b_j - Σ_{m=1}^{j-1} a_{j,m} x_m^{(k)} - Σ_{m=j+1}^{n} a_{j,m} x_m^{(k)}

Gauss-Seidel iteration:

a_jj x_j^{(k+1)} = b_j - Σ_{m=1}^{j-1} a_{j,m} x_m^{(k+1)} - Σ_{m=j+1}^{n} a_{j,m} x_m^{(k)}
                         (x_m^{(k+1)} already computed)

Gauss-Seidel Iteration (cont.)

Compare the dependency graphs for general iterative algorithms.
Here: x = f(x) = D^{-1} (b + (D - A)x) = D^{-1} (b + (L + U)x).

Gauss-Seidel corresponds to the splitting A = (D - L) - U = M - N:

x^{(k+1)} = (D - L)^{-1} b + (D - L)^{-1} U x^{(k)}
          = (D - L)^{-1} b + (D - L)^{-1} (D - L - A) x^{(k)}
          = x^{(k)} + (D - L)^{-1} r^{(k)}

Convergence depends on the spectral radius: ρ(I - (D - L)^{-1} A) < 1.
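
A small sketch (not from the slides) of Gauss-Seidel in this splitting form x^{(k+1)} = x^{(k)} + (D - L)^{-1} r^{(k)}; a dense solve stands in for the forward substitution one would use in practice:

import numpy as np

def gauss_seidel(A, b, max_iter=500, tol=1e-10):
    DL = np.tril(A)                       # D - L: lower triangle of A incl. diagonal
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(DL, r)    # forward substitution in a real implementation
    return x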

Parallelism in the Gauss-Seidel Iteration

The linear system in D - L is easy to solve because D - L is lower triangular, but it is strongly sequential!
Use red-black ordering or graph colouring as a compromise between parallelism and convergence.

Successive Over-Relaxation (SOR)

Damping or relaxation of Gauss-Seidel:

x^{(k+1)} = x^{(k)} + ω (D - L)^{-1} r^{(k)} = ω (D - L)^{-1} b + [ (1 - ω) I + ω (D - L)^{-1} U ] x^{(k)}

Convergence depends on the spectral radius of the iteration matrix (1 - ω) I + ω (D - L)^{-1} U.

Parallelization of SOR == parallelization of Gauss-Seidel.
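
For illustration (not part of the slides), the textbook elementwise SOR sweep; ω = 1 recovers Gauss-Seidel, and the small scan shows how the relaxation parameter changes the iteration count on a made-up 1D Poisson model matrix:

import numpy as np

def sor(A, b, omega, max_iter=5000, tol=1e-10):
    n = len(b)
    x = np.zeros_like(b)
    for k in range(max_iter):
        for j in range(n):
            # use already updated entries x[0..j-1] and old entries x[j+1..n-1]
            s = A[j, :j] @ x[:j] + A[j, j+1:] @ x[j+1:]
            x[j] = (1 - omega) * x[j] + omega * (b[j] - s) / A[j, j]
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1
    return x, max_iter

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson model matrix
b = np.ones(n)
for omega in (1.0, 1.5, 1.74, 1.9):
    print(omega, sor(A, b, omega)[1])                    # iteration counts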

Stationary Methods (in General)

Can always be written in two normal forms:

x^{(k+1)} = c + B x^{(k)}        with convergence depending on ρ(B) of the iteration matrix B,

and

x^{(k+1)} = x^{(k)} + F r^{(k)}  with preconditioner F, where B = I - F A.

For x^{(0)} = 0: x^{(k)} ∈ K_k(B, c), the Krylov space with respect to the matrix B and vector c.

Slow convergence (but good smoothing properties! → multigrid)

MATLAB Example

clear; n=100; k=10; omega=1; stationary

For the tridiagonal matrix tridiag(.5, 1, .5):
- Jacobi iteration matrix: norm ≈ cos(π/n)
- Gauss-Seidel iteration matrix: norm ≈ cos(π/n)^2
Both are < 1, so we get convergence, but slowly.

To improve convergence: nonstationary methods (or multigrid).
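
The quoted spectral radii can be checked numerically. A small NumPy sketch (the lecture's MATLAB script "stationary" is not reproduced here, and the exact argument of the cosine depends on the size convention of the matrix):

import numpy as np

n = 100
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)   # tridiag(.5, 1, .5)
J = np.eye(n) - np.linalg.solve(np.diag(np.diag(A)), A)        # Jacobi iteration matrix I - D^{-1} A
G = np.eye(n) - np.linalg.solve(np.tril(A), A)                  # Gauss-Seidel iteration matrix
rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
print(rho(J), np.cos(np.pi / (n + 1)))         # both approximately cos(pi/n), i.e. just below 1
print(rho(G), np.cos(np.pi / (n + 1))**2)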

Chair of Informatics V (SCCS): Efficient Numerical Algorithms, Parallel & HPC

- High-dimensional numerics (sparse grids)
- Fast iterative solvers (multi-level methods, preconditioners, eigenvalue solvers)
- Uncertainty Quantification
- Space-filling curves
- Numerical linear algebra
- Numerical algorithms for image processing
- HW-aware numerical programming

Fields of application in simulation:
- CFD (incl. fluid-structure interaction)
- Plasma physics
- Molecular dynamics
- Quantum chemistry

Further info: www5.in.tum.de
Feel free to come around and ask for thesis topics!

5.2 Nonstationary Methods: Gradient Method

Consider A = A^T > 0 (A SPD).

The function Φ(x) = (1/2) x^T A x - b^T x is an n-dimensional paraboloid R^n → R.
Gradient: ∇Φ(x) = Ax - b.
The position with ∇Φ(x) = 0 is exactly the minimum of the paraboloid.

Instead of solving Ax = b, consider min_x Φ(x).
Local descent direction at x: the directional derivative ∇Φ(x)^T y is minimal for y = -∇Φ(x).

Gradient Method (cont.)

Optimization: start with x^{(0)} and iterate

x^{(k+1)} := x^{(k)} + α_k d^{(k)}

with search direction d^{(k)} and step size α_k.

In view of the previous results, the optimal (local) search direction is d^{(k)} := -∇Φ(x^{(k)}).

To define α_k, minimize along this direction:

min_α g(α) := min_α Φ( x^{(k)} + α (b - A x^{(k)}) )
            = min_α [ (1/2) (x^{(k)} + α d^{(k)})^T A (x^{(k)} + α d^{(k)}) - b^T (x^{(k)} + α d^{(k)}) ]
            = min_α [ (1/2) α^2 d^{(k)T} A d^{(k)} - α d^{(k)T} d^{(k)} + (1/2) x^{(k)T} A x^{(k)} - x^{(k)T} b ]

α_k = d^{(k)T} d^{(k)} / ( d^{(k)T} A d^{(k)} ),    d^{(k)} = -∇Φ(x^{(k)}) = b - A x^{(k)}

Gradient Method (cont. 2)

x^{(k+1)} = x^{(k)} + [ ||b - A x^{(k)}||_2^2 / ( (b - A x^{(k)})^T A (b - A x^{(k)}) ) ] (b - A x^{(k)})

Method of steepest descent.

Disadvantages:
- distorted contour lines
- slow convergence (zig-zag path)
- the local descent direction is not globally optimal
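
A short NumPy sketch (not from the slides) of the method of steepest descent with the optimal step size derived above:

import numpy as np

def steepest_descent(A, b, max_iter=10000, tol=1e-10):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        d = b - A @ x                      # d^{(k)} = -grad Phi(x^{(k)}) = residual
        if np.linalg.norm(d) < tol:
            break
        alpha = (d @ d) / (d @ (A @ d))    # optimal 1-dim. step size
        x = x + alpha * d
    return x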

Analysis of the Gradient Method

Definition of the A-norm: ||x||_A := sqrt( x^T A x )

Consider the error:

||x - x̄||_A^2 = ||x - A^{-1} b||_A^2 = (x^T - b^T A^{-1}) A (x - A^{-1} b)
             = x^T A x - 2 b^T x + b^T A^{-1} b = 2 Φ(x) + b^T A^{-1} b

Therefore, minimizing Φ is equivalent to minimizing the error in the A-norm!

A more detailed analysis reveals:

||x^{(k+1)} - x̄||_A^2 ≤ ( 1 - 1/κ(A) ) ||x^{(k)} - x̄||_A^2

Therefore, for κ(A) >> 1 convergence is very slow!

The Conjugate Gradient Method

Improve the descent direction to be globally optimal:

x^{(k+1)} := x^{(k)} + α_k p^{(k)}

with a search direction that is not the negative gradient, but the projection of the gradient that is A-conjugate to all previous search directions:

p^{(k)} ⊥ A p^{(j)},   i.e.   p^{(k)} ⊥_A p^{(j)},   i.e.   p^{(k)T} A p^{(j)} = 0   for all j < k.

We choose the new search direction as the component of the last residual that is A-conjugate to all previous search directions.
α_k is again obtained by 1-dim. minimization as before (for the chosen p^{(k)}).

The Conjugate Gradient Algorithm

p^{(0)} = r^{(0)} = b - A x^{(0)}
for k = 0, 1, 2, ... do
    α^{(k)} = <r^{(k)}, r^{(k)}> / <p^{(k)}, A p^{(k)}>
    x^{(k+1)} = x^{(k)} + α^{(k)} p^{(k)}
    r^{(k+1)} = r^{(k)} - α^{(k)} A p^{(k)}
    if ||r^{(k+1)}||_2^2 ≤ ε then break
    β^{(k)} = <r^{(k+1)}, r^{(k+1)}> / <r^{(k)}, r^{(k)}>
    p^{(k+1)} = r^{(k+1)} + β^{(k)} p^{(k)}
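
A compact NumPy version of the algorithm above (an illustrative sketch for SPD A, not the lecture's reference code):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(len(b)):            # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if rr_new <= tol:
            break
        p = r + (rr_new / rr) * p      # new A-conjugate search direction
        rr = rr_new
    return x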

Properties of Conjugate Gradients

It holds p^{(j)T} A p^{(k)} = 0 = r^{(j)T} r^{(k)} for j ≠ k, and

span(p^{(1)}, ..., p^{(j)}) = span(r^{(0)}, ..., r^{(j-1)}) = span(r^{(0)}, A r^{(0)}, ..., A^{j-1} r^{(0)}) = K_j(A, r^{(0)})

Especially for x^{(0)} = 0 it holds K_j(A, r^{(0)}) = span(b, Ab, ..., A^{j-1} b).

x^{(k)} is the best approximate solution to Ax = b in the subspace K_k(A, r^{(0)}).
For x^{(0)} = 0: x^{(k)} ∈ K_k(A, b), with error

||x^{(k)} - x̄||_A = min_{x ∈ K_k(A,b)} ||x - x̄||_A

The cheap 1-dim. minimization gives the optimal k-dim. solution for free!

Properties of Conjugate Gradients (cont.)

Consequence: after n steps K_n(A, b) = R^n, and therefore x^{(n)} = A^{-1} b is the solution in exact arithmetic.
Unfortunately, this is not true in floating-point arithmetic. Furthermore, O(n) iteration steps would be too costly:

costs: #iterations × cost of one matrix-vector product

The matrix-vector product is easy in parallel. But how do we get fast convergence and reduce the number of iterations?

Error Estimation (x^{(0)} = 0)

||e^{(k)}||_A = ||x^{(k)} - x̄||_A = min_{x ∈ K_k(A,b)} ||x - x̄||_A
            = min_{α_0,...,α_{k-1}} || Σ_{j=0}^{k-1} α_j (A^j b) - x̄ ||_A
            = min_{P^{(k-1)}} || P^{(k-1)}(A) b - x̄ ||_A
            = min_{P^{(k-1)}} || P^{(k-1)}(A) A x̄ - x̄ ||_A
            = min_{P^{(k-1)}} || ( P^{(k-1)}(A) A - I ) ( x̄ - x^{(0)} ) ||_A
            = min_{Q^{(k)}, Q^{(k)}(0)=1} || Q^{(k)}(A) e^{(0)} ||_A

for polynomials P^{(k-1)}(x) of degree k-1 and Q^{(k)}(x) of degree k with Q^{(k)}(0) = 1.

Error Estimation (cont.)

The matrix A has an orthonormal basis of eigenvectors u_j, j = 1, ..., n, with eigenvalues λ_j:
A u_j = λ_j u_j, and u_j^T u_k = 0 for j ≠ k, u_j^T u_j = 1.

Write the start error in this ONB: e^{(0)} = Σ_{j=1}^{n} ρ_j u_j. Then

||e^{(k)}||_A = min_{Q^{(k)}(0)=1} || Q^{(k)}(A) Σ_{j=1}^{n} ρ_j u_j ||_A
            = min_{Q^{(k)}(0)=1} || Σ_{j=1}^{n} ρ_j Q^{(k)}(λ_j) u_j ||_A
            ≤ min_{Q^{(k)}(0)=1} max_{j=1,...,n} |Q^{(k)}(λ_j)| · || Σ_{j=1}^{n} ρ_j u_j ||_A
            = min_{Q^{(k)}(0)=1} max_{j=1,...,n} |Q^{(k)}(λ_j)| · ||e^{(0)}||_A

Error Estimates

By choosing polynomials with Q^{(k)}(0) = 1, we can derive error estimates for the error after the k-th step. Choose, e.g.,

Q^{(k)}(x) := ( 1 - 2x / (λ_max + λ_min) )^k

Then

||e^{(k)}||_A ≤ max_{j=1,...,n} |Q^{(k)}(λ_j)| ||e^{(0)}||_A
            ≤ | 1 - 2 λ_max / (λ_max + λ_min) |^k ||e^{(0)}||_A
            = ( (λ_max - λ_min) / (λ_max + λ_min) )^k ||e^{(0)}||_A
            = ( (κ(A) - 1) / (κ(A) + 1) )^k ||e^{(0)}||_A

Better Estimates

Choose normalized Chebyshev polynomials T_n(x) = cos(n arccos(x)):

||e^{(k)}||_A ≤ [ 1 / T_k( (κ(A) + 1) / (κ(A) - 1) ) ] ||e^{(0)}||_A
            ≤ 2 ( (sqrt(κ(A)) - 1) / (sqrt(κ(A)) + 1) )^k ||e^{(0)}||_A

For clustered eigenvalues choose a special polynomial. E.g., assume that A has only two eigenvalues λ_1 and λ_2:

Q^{(2)}(x) := (λ_1 - x)(λ_2 - x) / (λ_1 λ_2)

||e^{(2)}||_A ≤ max_{j=1,2} |Q^{(2)}(λ_j)| ||e^{(0)}||_A = 0

Convergence of CG after two steps!

Outliers/Cluster

Assume the matrix has an eigenvalue λ_1 > 1 and all other eigenvalues are contained in an ε-neighborhood of 1: |λ - 1| < ε for λ ≠ λ_1.

Q^{(2)}(x) := (λ_1 - x)(1 - x) / λ_1

||e^{(2)}||_A ≤ max_{|λ-1|<ε} [ |(λ_1 - λ)(1 - λ)| / λ_1 ] ||e^{(0)}||_A ≤ [ (λ_1 + ε) ε / λ_1 ] ||e^{(0)}||_A = O(ε) ||e^{(0)}||_A

Very good approximation by CG after only two steps!
Important: a small number of outliers combined with a cluster.

Conjugate Gradients Summary

To get fast convergence and reduce the number of iterations: find a preconditioner M such that M^{-1} A x = M^{-1} b has clustered eigenvalues.

Conjugate gradients (CG) is (in general) always the method of choice for symmetric positive definite A.
To improve convergence, include preconditioning (PCG).
CG has two important properties: it is optimal and cheap.

Parallel Conjugate Gradients Algorithm

Non-Blocking Collective Operations

Use them to hide communication! They allow overlap of numerical computations and communications.
In MPI-1/MPI-2 this was only possible for point-to-point communication: MPI_Isend and MPI_Irecv.
For collective operations, additional libraries were necessary, e.g. LibNBC (non-blocking collectives).
Non-blocking collectives are included in the new MPI-3 standard.

Example: Pseudocode for a Non-Blocking Reduction

MPI_Request req;
MPI_Status stat;
int sbuf1[SIZE], rbuf1[SIZE], buf2[SIZE];

// compute sbuf1
compute(sbuf1, SIZE);

// start non-blocking allreduce of sbuf1;
// computation and communication overlap
MPI_Iallreduce(sbuf1, rbuf1, SIZE, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

// compute buf2 (independent of sbuf1)
compute(buf2, SIZE);

// synchronization
MPI_Wait(&req, &stat);

// use data in rbuf1; final computation
evaluate(rbuf1, buf2, SIZE);

Iterative Methods for General (Nonsymmetric) A: BiCGSTAB

- Applicable if little memory is at hand and A^T is not available.
- Computational costs per iteration similar to BiCG and CGS.
- An alternative to CGS that avoids the irregular convergence patterns of CGS while maintaining similar convergence speed.
- Less loss of accuracy in the updated residual.

Iterative Methods for General (Nonsymmetric) A: GMRES

- Leads to the smallest residual for a fixed number of iteration steps, but these steps become increasingly expensive.
- To limit the increasing storage requirements and work per iteration step, restarting is necessary. When to do so depends on A and b; it requires skill and experience.
- Requires only matrix-vector products with the coefficient matrix.
- The number of inner products grows linearly with the iteration number (up to the restart point).
- Implementation based on classical Gram-Schmidt: the inner products are independent, so there is only one synchronization point. Using modified Gram-Schmidt: one synchronization point per inner product.

We consider GMRES in the following.

GMRES

Iterative solution method for general A.

Consider a small subspace U_m and determine the optimal approximate solution for Ax = b in U_m. Hence x is of the form x := U_m y:

min_{x ∈ U_m} ||Ax - b||_2 = min_y ||A (U_m y) - b||_2

This can be solved by the normal equations:

(U_m^T A^T A U_m) y = U_m^T A^T b

Try to find a sequence of good subspaces U_1 ⊂ U_2 ⊂ U_3 ⊂ ... such that we can iteratively update the optimal solutions x_1, x_2, x_3, ... → A^{-1} b using mainly matrix-vector products.

GMRES: Subspace

What subspace allows fast convergence and easy computations?

U_m := K_m(A, b) = span(b, Ab, ..., A^{m-1} b)   (Krylov space)

Problem: b, Ab, A^2 b, ... is a bad basis for this subspace! So we need a first step to compute a good basis for U_m:

Start with u_1 := b / ||b||_2 and do, for j = 2 : m,

ũ_j := A u_{j-1} - Σ_{k=1}^{j-1} (u_k^T A u_{j-1}) u_k = A u_{j-1} - Σ_{k=1}^{j-1} h_{k,j-1} u_k

u_j := ũ_j / ||ũ_j||_2 = ũ_j / h_{j,j-1}

which is the standard orthogonalization method (Arnoldi method).
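
A sketch of this orthogonalization in NumPy (illustrative only; it uses the modified Gram-Schmidt variant, whereas the formula above is written in classical Gram-Schmidt form):

import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    U = np.zeros((n, m + 1))               # orthonormal basis u_1, ..., u_{m+1}
    H = np.zeros((m + 1, m))               # Hessenberg matrix of coefficients h_{k,j}
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ U[:, j]
        for k in range(j + 1):             # orthogonalize against all previous basis vectors
            H[k, j] = U[:, k] @ w
            w = w - H[k, j] * U[:, k]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:             # breakdown: the Krylov space is already invariant
            return U[:, :j + 1], H[:j + 1, :j]
        U[:, j + 1] = w / H[j + 1, j]
    return U, H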

Matrix Form of the Arnoldi ONB

U_m = span(b, Ab, ..., A^{m-1} b) = span(u_1, u_2, ..., u_m)   (ONB)

Write this orthogonalization method in matrix form:

A u_{j-1} = Σ_{k=1}^{j-1} h_{k,j-1} u_k + ũ_j = Σ_{k=1}^{j} h_{k,j-1} u_k

A U_m = A (u_1 ... u_m) = (u_1 ... u_{m+1}) H_{m+1,m} = U_{m+1} H_{m+1,m}

with the (m+1) x m upper Hessenberg matrix

H_{m+1,m} =
[ h_{1,1}   h_{1,2}   ...   h_{1,m}   ]
[ h_{2,1}   h_{2,2}   ...   h_{2,m}   ]
[    0      h_{3,2}   ...   h_{3,m}   ]
[   ...       ...     ...     ...     ]
[    0         0      ...   h_{m+1,m} ]

GMRES: Minimization

This leads to the minimization problem

min_{x ∈ U_m} ||Ax - b||_2 = min_y ||A U_m y - b||_2
                          = min_y ||U_{m+1} H_{m+1,m} y - ||b||_2 u_1||_2
                          = min_y ||U_{m+1} ( H_{m+1,m} y - ||b||_2 e_1 )||_2
                          = min_y ||H_{m+1,m} y - ||b||_2 e_1||_2

because U_{m+1} is part of an orthogonal matrix (its columns are orthonormal).
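
Putting the pieces together, a hedged sketch of one GMRES(m) solve in NumPy; it reuses the arnoldi function from the sketch above and solves the small least-squares problem with a dense solver, where a practical implementation would instead update a QR factorization of H incrementally by Givens rotations:

import numpy as np

def gmres_basic(A, b, m):
    U, H = arnoldi(A, b, m)                       # A U_m = U_{m+1} H_{m+1,m}
    rhs = np.zeros(H.shape[0])
    rhs[0] = np.linalg.norm(b)                    # ||b||_2 e_1
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)   # min_y ||H y - ||b||_2 e_1||_2
    return U[:, :H.shape[1]] @ y                  # x = U_m y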
