Linear Solvers. Andrew Hazel
Introduction

Thus far we have talked about the formulation and discretisation of physical problems, and stopped when we reached a discrete linear system of equations. In explicit timestepping methods, we may not need to solve a linear system at all (although for the Navier-Stokes equations one typically solves a pressure-Poisson equation). We solve these discrete systems (often repeatedly) to find the (discrete) solution to the problem of interest: find $x \in \mathbb{R}^n$ such that $A x = b$, where $A$ is a known $n \times n$ matrix and $b \in \mathbb{R}^n$ is a known right-hand-side vector.
Direct methods

Direct methods find solutions of the linear system by determining explicit formulae for the unknowns. The conceptually simplest direct method is Gaussian elimination, which should be familiar: rows are combined to eliminate the entries below the diagonal, one column at a time. Once the matrix is upper triangular, simple back substitution gives the answer (for the 3x3 example worked on the slides, x = 1/2, y = 8/11, z = 15/22). The typical operation count is O(n^3), which grows dramatically as n increases.
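As a sketch of what these two steps look like in code, here is a minimal Octave version of elimination followed by back substitution. The 3x3 matrix is an arbitrary stand-in, not the system from the slides, and no pivoting is done.

% Gaussian elimination without pivoting, then back substitution (sketch).
A = [4 1 0; 1 3 1; 0 1 2]; b = [1; 2; 3];   % arbitrary example system
n = length(b);
for k = 1:n-1                 % eliminate column k below the diagonal
  for i = k+1:n
    m = A(i,k)/A(k,k);        % multiplier (stored in L in an LU factorisation)
    A(i,k:n) = A(i,k:n) - m*A(k,k:n);
    b(i) = b(i) - m*b(k);
  end
end
x = zeros(n,1);
for i = n:-1:1                % back substitution on the upper-triangular system
  x(i) = (b(i) - A(i,i+1:n)*x(i+1:n))/A(i,i);
end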
LU Decomposition

Gaussian elimination is usually implemented via an LU factorisation, A = LU, which keeps the factors used in the elimination. Using the LU decomposition, A x = b becomes L U x = b, and the solution proceeds via two backsolves: solve L y = b, followed by U x = y. Once the LU factorisation is known, the solution for alternative right-hand sides is quick. The operation count is still O(n^3).
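A minimal Octave sketch of this reuse, on an arbitrary example matrix: lu() returns the permuted factors P A = L U, and each new right-hand side then costs only two O(n^2) triangular solves.

% Reusing an LU factorisation for several right-hand sides (sketch).
A = [4 1 0; 1 3 1; 0 1 2];        % arbitrary example matrix
[L, U, P] = lu(A);                % one O(n^3) factorisation, P*A = L*U
b1 = [1; 2; 3]; b2 = [0; 1; 0];
x1 = U \ (L \ (P*b1));            % two triangular solves per right-hand side
x2 = U \ (L \ (P*b2));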
Using structure in the matrix

Finite-element, finite-difference and finite-volume methods typically lead to large, but sparse, matrices. Spectral or boundary-element methods typically lead to small(er) but dense matrices. Direct solvers can take advantage of sparsity structure: computing the LU factors of sparse matrices requires fewer operations (there is no need to eliminate all entries in all rows). SuperLU (xiaoye/superlu/) is a direct LU solver designed to take advantage of sparsity. Typically an analyse phase is run before factorisation and backsubstitution.
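For illustration, a minimal Octave sketch with a sparsely stored tridiagonal (1-D Poisson-like) matrix; the backslash operator then dispatches to a sparse direct solver that exploits the sparsity pattern.

% Sparse storage and a sparse direct solve (sketch).
n = 1000;
e = ones(n,1);
A = spdiags([-e 2*e -e], -1:1, n, n);   % tridiagonal, stored sparsely
b = ones(n,1);
x = A \ b;                              % dispatches to a sparse direct solver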
Frontal solvers

Frontal solvers are a variant of direct solvers designed for element-based methods. The matrix is never completely assembled: variables are eliminated as soon as every contributing element has been visited. Efficiency depends on the order in which the elements are assembled. Frontal solvers are very good for long, thin domains: the domain can be extended at very minimal run-time cost. Multifrontal methods use several fronts at once; MUMPS is a parallel multifrontal solver.
Direct solvers: summary

Usually very robust. Efficient for matrices with particular sparsity structure. Expensive for large matrices: memory is always the ultimate bottleneck, unless information is written to disk, which can be slow! The main problem is the fill-in of non-zero entries in the factorisation phase. SuperLU Dist and MUMPS work well on multi-core processors. Direct methods are very competitive for two-dimensional and axisymmetric problems, and work well for three-dimensional problems until you run out of memory.
Iterative Methods

Iterative methods are based on repeated updates to an initial guess, $x^0$:
$$x^{k+1} = F(x^k).$$
These methods do not necessarily require storage of the matrix, so they are much cheaper in terms of memory usage than direct methods. If the application of F is cheap and the method converges in a few iterations, they can also be much cheaper in computation time. The aim is to find an optimal O(n) solver. Questions: How do we choose F? How quickly does the method converge?
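As a concrete, if crude, example of such an update map F, a damped Richardson iteration needs only matrix-vector products. This is a sketch; the matrix, right-hand side and damping parameter are arbitrary choices.

% Damped Richardson iteration x_{k+1} = x_k + omega*(b - A*x_k)  (sketch).
A = [2 1; 1 3]; b = [5; 7];   % small SPD example
omega = 0.25;                 % must satisfy 0 < omega < 2/lambda_max(A)
x = zeros(2,1);
for k = 1:1000
  r = b - A*x;                % only a matrix-vector product is needed
  if norm(r) < 1e-10
    break;
  end
  x = x + omega*r;            % the update map F applied to x_k
end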
Conjugate Gradient Method

Applies to symmetric, positive-definite matrices A. Identify the solution of the linear system $Ax = b$ with the minimum of
$$\Phi(x) = \tfrac{1}{2} x_i A_{ij} x_j - x_i b_i.$$
Differentiating, the minimum occurs when
$$\tfrac{1}{2}\left(\delta_{ik} A_{ij} x_j + x_i A_{ij} \delta_{jk}\right) - b_i \delta_{ik} = 0 \quad\Longrightarrow\quad \tfrac{1}{2}(A + A^{T})\, x = b.$$
For symmetric A, $A = A^{T}$, so the minimum occurs when $Ax = b$. We want an iteration that brings us close to the minimum as rapidly as possible.
Consider an iterative improvement to the current solution along a line, $x^{k+1} = x^k + \alpha p^k$, where $p^k$ is a search direction. Then
$$\Phi(x^{k+1}) = \Phi(x^k + \alpha p^k) = \tfrac{1}{2}(x_i^k + \alpha p_i^k) A_{ij} (x_j^k + \alpha p_j^k) - (x_i^k + \alpha p_i^k)\, b_i$$
$$= \tfrac{1}{2} x_i^k A_{ij} x_j^k + \tfrac{\alpha}{2}\left(x_i^k A_{ij} p_j^k + p_i^k A_{ij} x_j^k\right) + \tfrac{1}{2}\alpha^2 p_i^k A_{ij} p_j^k - x_i^k b_i - \alpha p_i^k b_i$$
$$= \Phi(x^k) + \alpha\, p_i^k \left(A_{ij} x_j^k - b_i\right) + \tfrac{1}{2}\alpha^2 p_i^k A_{ij} p_j^k,$$
using the symmetry of A to combine the cross terms. This is minimised when
$$\alpha = \frac{p_i^k (b_i - A_{ij} x_j^k)}{p_m^k A_{mn} p_n^k} = \frac{p_i^k r_i^k}{p_m^k A_{mn} p_n^k},$$
where $r^k = b - A x^k$ is the residual. Given $p^k$ we can find the $\alpha$ that minimises the next iterate. Question: How do we choose the set $\{p^k\}$?
The obvious choice for the search direction is the direction in which Φ decreases most rapidly, the steepest gradient:
$$-\frac{\partial \Phi}{\partial x_i} = b_i - A_{ij} x_j^k = r_i^k.$$
It is better to avoid repeats by choosing each search direction to be different from the previous ones. If the search directions are linearly independent, they form a basis of $\mathbb{R}^n$ and convergence is guaranteed in at most n steps. Question: What is the best basis to choose?
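For contrast, here is a minimal Octave sketch of steepest descent, which takes p^k = r^k at every step with the optimal α from above (the 2x2 system is an arbitrary example). Its tendency to repeat search directions on ill-conditioned systems is exactly what the conjugate-gradient construction below avoids.

% Steepest descent: p_k = r_k at every step, with exact line minimisation (sketch).
A = [2 1; 1 3]; b = [5; 7];    % same SPD example as the CG code below
x = zeros(2,1);
for k = 1:100
  r = b - A*x;                 % residual = steepest-descent direction
  if norm(r) < 1e-10
    break;
  end
  alpha = (r'*r)/(r'*(A*r));   % optimal alpha with p = r
  x = x + alpha*r;
end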
Now $x^{k+1} = x^k + \alpha_k p^k = x^0 + \sum_{m=1}^{k} \alpha_m p^m$ and
$$\Phi(x^{k+1}) = \tfrac{1}{2}\Big(x^0 + \sum_{m=1}^{k} \alpha_m p^m\Big)^{T} A \Big(x^0 + \sum_{m=1}^{k} \alpha_m p^m\Big) - \Big(x^0 + \sum_{m=1}^{k} \alpha_m p^m\Big)^{T} b.$$
Choose the search directions to be A-conjugate,
$$p_i^k A_{ij} p_j^m = 0 \quad \text{when } k \neq m$$
($p^k$ and $A p^m$ are orthogonal). Then all cross terms in the vector-matrix-vector product vanish:
$$\Phi(x^{k+1}) = \Phi(x^0) + \sum_{m=1}^{k}\left[\alpha_m\, p^m \cdot \left(A x^0 - b\right) + \tfrac{1}{2}\alpha_m^2\, p^m \cdot A p^m\right].$$
Hence the minimisation in each direction decouples from the others.
We construct $\{p^k\}$ by starting with the steepest gradient, $p^0 = r^0 = b - A x^0$. It can be shown (e.g. Golub & Van Loan) that
$$p^{k+1} = r^{k+1} + \beta_k p^k,$$
and enforcing A-conjugacy gives
$$\beta_k = -\frac{p_i^k A_{ij} r_j^{k+1}}{p_m^k A_{mn} p_n^k} = \frac{r_i^{k+1} r_i^{k+1}}{r_j^k r_j^k}.$$
The last equality is not obvious. It follows by induction using
$$r^k = b - A x^k = b - A(x^{k-1} + \alpha_{k-1} p^{k-1}) = r^{k-1} - \alpha_{k-1} A p^{k-1}.$$
Conjugate Gradient Algorithm

A = [2 1; 1 3]; b = [5 7]'; n = 2;
x = zeros(n,1);
r = b - A*x;          % Initial residuals
p = r;                % Initial search direction
rdotr = r'*r;         % Inner product
% Loop over maximum of n possible steps
for i = 1:n
  Ap = A*p;
  alpha = rdotr/(p'*Ap);
  x += alpha*p;       % Update guess
  r -= alpha*Ap;      % Update residuals
  rdotr_new = r'*r;
  % Convergence criterion based on size of residuals
  if sqrt(rdotr_new) < 1e-10
    break;
  end
  p = r + (rdotr_new/rdotr)*p;  % Update search direction
  rdotr = rdotr_new;
end
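A quick sanity check of the final iterate against a direct solve (a sketch, to append after the loop):

x_direct = A \ b;            % direct solution for comparison
disp(norm(x - x_direct))     % should be at machine-precision level

For this 2x2 symmetric positive-definite system, exact arithmetic guarantees convergence in at most two steps.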
Convergence Analysis

The space spanned by the vectors $\{p^k\}$ is the same as the Krylov subspace $\mathrm{span}\{r^0, A r^0, \ldots, A^k r^0\}$. We can therefore write the k-th iterate in the form
$$x^k = x^0 + P_{k-1}(A)\, r^0,$$
where $P_{k-1}$ is a polynomial of degree $k-1$. Hence the error in the k-th iterate, where $Ax = b$, is
$$e^k = x - x^k = x - x^0 - P_{k-1}(A)\, r^0 = \left[I - A P_{k-1}(A)\right] e^0 = P_k(A)\, e^0,$$
since $r^0 = A e^0$; here $P_k$ is a polynomial of degree k with $P_k(0) = 1$.
The A-norm of the error is minimised over the Krylov subspace (see Elman et al.):
$$\|e^k\|_A = \sqrt{e_i^k A_{ij} e_j^k} = \min_{P_k} \|P_k(A)\, e^0\|_A.$$
Expanding the error in the eigenbasis of A shows that
$$\|e^k\|_A \le \min_{P_k} \max_i |P_k(\lambda_i)|\; \|e^0\|_A.$$
Assuming the eigenvalues lie in the range $\lambda_j \in [\lambda_{\min}(A), \lambda_{\max}(A)]$ and fitting a Chebyshev polynomial to achieve the minimisation gives
$$\|e^k\|_A \le 2\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{k} \|e^0\|_A,$$
where $\kappa = \lambda_{\max}(A)/\lambda_{\min}(A)$ is the condition number of the matrix.
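The κ-dependence can be probed numerically. The sketch below uses Octave's built-in pcg on diagonal SPD matrices of prescribed condition number (an arbitrary test construction) and reports the iteration counts, which grow roughly like sqrt(kappa).

% Probe the sqrt(kappa) convergence bound with Octave's pcg (sketch).
n = 200;
for kappa = [10 100 1000]
  A = diag(linspace(1, kappa, n));        % SPD, condition number = kappa
  b = ones(n,1);
  [x, flag, relres, iter] = pcg(A, b, 1e-6, 1000);
  printf('kappa = %5d: %d iterations\n', kappa, iter);
end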
More general iterative methods

CG cannot be applied to non-symmetric systems. GMRES is an iterative method that constructs approximate solutions within the Krylov subspace that minimise the Euclidean norm of the residual, $\|A x^k - b\|$. The implementation is a little tricky because solving the minimisation problem requires some more linear algebra. A bound on the relative reduction in the size of the residuals at the k-th step can again be found in terms of a k-th degree polynomial:
$$\frac{\|r^k\|}{\|r^0\|} \le \kappa(V) \min_{P_k} \max_i |P_k(\lambda_i)|,$$
where V is the matrix consisting of the eigenvectors of the matrix A. BiCGSTAB is an alternative method that generalises CG by imposing biorthogonality conditions on vectors associated with the Krylov subspaces of A and $A^{T}$.
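Both methods are available as built-ins in Octave/MATLAB; a minimal sketch on an arbitrary non-symmetric tridiagonal test matrix:

% Krylov solvers for non-symmetric systems (sketch; arbitrary test matrix).
n = 100; e = ones(n,1);
A = spdiags([-2*e, 4*e, -0.5*e], -1:1, n, n);   % non-symmetric tridiagonal
b = ones(n,1);
x1 = gmres(A, b, 20, 1e-8, n);                  % restart length 20
x2 = bicgstab(A, b, 1e-8, n);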
Preconditioning

Convergence of iterative methods depends on the spectrum of the matrix, i.e. the location of the eigenvalues. The idea of preconditioning is to modify the spectrum to improve the convergence rate of the method. Solve the system
$$P^{-1} A x = P^{-1} b,$$
where P is the preconditioner. Obviously the preconditioner should be cheap to compute and apply. Ideally we want to ensure that convergence does not depend on the mesh or on any physical parameters of the problem.
Generic Preconditioners

A number of generic possibilities exist:
- Choose P to be the (block) diagonal of the matrix.
- Use an incomplete LU decomposition (ILU).
- Use a lower-order method to approximate the system (spectral methods).
- Domain decomposition preconditioners.
- Multigrid and algebraic multigrid.
A bad solver can be a good preconditioner.
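As an example of the second item, an ILU(0) factorisation can be handed to GMRES as a preconditioner. A minimal Octave sketch on an arbitrary non-symmetric test matrix:

% Incomplete LU as a generic preconditioner for GMRES (sketch).
n = 100; e = ones(n,1);
A = spdiags([-2*e, 4*e, -0.5*e], -1:1, n, n);   % arbitrary test matrix
b = ones(n,1);
setup.type = 'nofill';               % ILU(0): no fill beyond the sparsity of A
[L, U] = ilu(A, setup);
x = gmres(A, b, 20, 1e-8, n, L, U);  % preconditioned solve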
Problem-specific preconditioners

Use the structure of the matrix.
For the Navier-Stokes equations the matrix structure is
$$\begin{pmatrix} F & B^{T} \\ B & 0 \end{pmatrix}.$$
Consider a block-diagonal preconditioner; the spectrum of the preconditioned system follows from the generalised eigenproblem
$$\begin{pmatrix} F & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \lambda \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix}.$$
Choose $D_1 = F$:
$$\begin{pmatrix} F & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \lambda \begin{pmatrix} F & 0 \\ 0 & D_2 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix}.$$
We have one eigenvalue $\lambda = 1$. If $\lambda \neq 1$, the first block row gives
$$u = \frac{1}{\lambda - 1} F^{-1} B^{T} p, \quad\text{so}\quad B F^{-1} B^{T} p = \lambda(\lambda - 1)\, D_2\, p.$$
Choosing $D_2 = B F^{-1} B^{T}$ then gives $\lambda(\lambda - 1) = 1$, i.e. $\lambda = (1 \pm \sqrt{5})/2$. Three eigenvalues... convergence in three steps! (See Elman et al.)
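This prediction is easy to check numerically. The sketch below builds an arbitrary random saddle-point system (with F made symmetric positive definite for simplicity), forms the block-diagonal preconditioner with D_1 = F and D_2 = B F^{-1} B^T, and computes the spectrum of the preconditioned matrix.

% Numerical check of the three-eigenvalue result (sketch; random test system).
n = 10; m = 4;
R = rand(n); F = R'*R + n*eye(n);    % symmetric positive-definite F
B = rand(m, n);                      % full-rank constraint block
A = [F, B'; B, zeros(m)];
P = blkdiag(F, B*(F\B'));            % D1 = F, D2 = B F^{-1} B^T
lam = eig(P\A);
disp(sort(real(lam)))                % clusters at (1-sqrt(5))/2, 1, (1+sqrt(5))/2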
Preconditioning Software

For standard techniques, use packages! TRILINOS and PETSc. These packages also have implementations of GMRES and BiCGSTAB that run in serial and in parallel. Think about the human cost: is it better to spend time developing a brilliant preconditioner, or just to use an off-the-shelf, sub-optimal solver and wait longer?
Iterative solvers: summary

Can give optimal, O(n), solvers. Not as robust unless a good preconditioner is available. Preconditioners can be general-purpose or problem-specific; typically problem-specific preconditioners perform better, but take much longer to develop. Re-solving for different right-hand sides is not much cheaper. Often the only possibility for large (realistic) problems.
Some practical advice on numerics

1. Check the equations. Check all equations by constructing an exact solution that satisfies the boundary conditions, lumping any left-over terms into body-force/source terms. Check that your exact solution gives zero (well, suitably small) initial residuals; if not, check your equations.
2. Solve the linear system directly. Start by using a direct solver and, for nonlinear problems, use Newton's method. Initially compute the associated Jacobian matrix using finite differences,
$$J_{ij} \approx \frac{R_i(u + \epsilon\, e_j) - R_i(u)}{\epsilon}$$
(compute an entire column at a time; see the sketch below). Check that the system converges to your exact solution. Then modify the Jacobian so that it is computed analytically. If convergence is not quadratic, check the residuals and the Jacobian (compare the analytic Jacobian to the finite-difference version).
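A minimal Octave sketch of the column-at-a-time finite-difference Jacobian, assuming a user-supplied residual routine residual_fun (a hypothetical name for your R(u)):

% Finite-difference Jacobian, built one column per perturbation (sketch).
function J = fd_jacobian(residual_fun, u, h)
  R0 = residual_fun(u);          % baseline residuals R_i(u)
  n = numel(u);
  J = zeros(numel(R0), n);
  for j = 1:n
    du = zeros(n, 1);
    du(j) = h;                   % perturb the j-th unknown only
    J(:, j) = (residual_fun(u + du) - R0) / h;   % entire column j at once
  end
end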
3. Investigate iterative solvers. Investigate a small unpreconditioned system: how well does it converge? Try generic preconditioners and examine performance with problem size. Then develop/investigate problem-specific preconditioners.