Linear Solvers. Andrew Hazel


Introduction
Thus far we have talked about the formulation and discretisation of physical problems... and stopped when we got to a discrete linear system of equations. In explicit timestepping methods one may not have to solve a linear system (although for Navier-Stokes one typically solves a pressure-Poisson equation). We solve these discrete systems (often repeatedly) to find the (discrete) solution to the problem of interest: find x ∈ R^n such that A x = b, where A is a known n × n matrix and b ∈ R^n is a known right-hand-side vector.

Direct methods
Direct methods find solutions of the linear system by determining explicit formulæ for the unknowns. The conceptually simplest direct method is Gaussian elimination, which should be familiar. Starting from
[1 4 5; 2 7 6; 3 3 1] [x; y; z] = [1; 0; 0],
eliminating the first column (row 2 - 2 × row 1, row 3 - 3 × row 1) gives
[1 4 5; 0 -1 -4; 0 -9 -14] [x; y; z] = [1; -2; -3],
and eliminating the second column (row 3 - 9 × row 2) gives
[1 4 5; 0 -1 -4; 0 0 22] [x; y; z] = [1; -2; 15].
Once the matrix is upper triangular, simple back substitution gives the answer x = 1/2, y = -8/11, z = 15/22. The typical operation count is O(n^3), which grows dramatically as n increases.
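
As a quick sanity check, here is a minimal Octave sketch (not from the slides) of back substitution applied to the upper-triangular system obtained above; the built-in backslash operator applied to the original matrix gives the same answer.

U = [1 4 5; 0 -1 -4; 0 0 22];       % upper-triangular system from the elimination
c = [1; -2; 15];                    % transformed right-hand side
n = length(c);
x = zeros(n,1);
for i = n:-1:1                      % back substitution, last unknown first
  x(i) = (c(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end
disp(x')                            % 0.5000 -0.7273 0.6818, i.e. 1/2, -8/11, 15/22
% Cross-check: [1 4 5; 2 7 6; 3 3 1] \ [1; 0; 0] returns the same vector.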

LU Decomposition
Gaussian elimination is usually implemented via an LU factorisation, which keeps the factors used in the elimination:
[1 4 5; 2 7 6; 3 3 1] = [1 0 0; 2 1 0; 3 9 1] [1 4 5; 0 -1 -4; 0 0 22].
Using the LU decomposition, A x = b becomes L U x = b, and the solution proceeds via two triangular solves: solve L y = b followed by U x = y. Once the LU factorisation is known, the solution for alternative right-hand sides is quick. The operation count is still O(n^3).
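
The two-triangular-solve structure is easy to see with Octave's built-in lu(), sketched below for the same small matrix (lu() uses row pivoting, so the computed factors need not match those printed above exactly).

A = [1 4 5; 2 7 6; 3 3 1];
b = [1; 0; 0];
[L, U, P] = lu(A);                  % P*A = L*U with partial pivoting
y = L \ (P*b);                      % forward solve for y
x = U \ y;                          % back solve for x
% A second right-hand side reuses the same factors:
x2 = U \ (L \ (P*[0; 1; 0]));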

Using structure in the matrix
Finite-element, finite-difference and finite-volume methods typically lead to large but sparse matrices. Spectral or boundary-element methods typically lead to small(er) but dense matrices. Direct solvers can take advantage of the sparsity structure: computing the LU factors of a sparse matrix requires fewer operations (there is no need to eliminate all entries in all rows). SuperLU (http://crd-legacy.lbl.gov/~xiaoye/superlu/) is a direct LU solver designed to take advantage of sparsity. Typically an analyse phase is run before the factorisation and back-substitution.
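
To make the sparsity point concrete, here is a minimal Octave sketch assuming a simple 1-D Poisson (tridiagonal) matrix; for sparse input the backslash operator dispatches to a sparse direct solver, so only the nonzero structure is stored and factorised.

n = 1000;
e = ones(n,1);
A = spdiags([-e 2*e -e], -1:1, n, n);   % sparse tridiagonal (1-D Poisson) matrix
b = ones(n,1);
x = A \ b;                              % sparse direct solve
fprintf('n = %d, nnz(A) = %d, residual = %g\n', n, nnz(A), norm(b - A*x));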

Frontal solvers
Frontal solvers are a variant of direct solvers designed for element-based methods. The matrix is never completely assembled: variables are eliminated as soon as every contributing element has been visited. Efficiency depends on the order in which the elements are assembled. Frontal solvers are very good for long, thin domains... one can extend the domain at very minimal run-time cost. Multifrontal methods use several fronts at once; MUMPS (http://graal.ens-lyon.fr/mumps/) is a parallel multifrontal solver.

Direct solvers: summary
Usually very robust. Efficient for matrices with particular sparsity structures. Expensive for large matrices: memory is always the ultimate bottleneck, unless information is written to disk, which can be slow! The main problem is the fill-in of non-zero entries in the factorisation phase. SuperLU Dist and MUMPS work well on multi-core processors. Direct methods are very competitive for two-dimensional and axisymmetric problems, and work well for three-dimensional problems until you run out of memory.

Iterative Methods
Iterative methods are based on repeated updates to an initial guess x^0:
x^{k+1} = F(x^k).
These methods do not necessarily require storage of the matrix, so they are much cheaper in terms of memory usage than direct methods. If applying F is cheap and the method converges in a few iterations, they can also be much cheaper in computation time. The aim is to find an optimal O(n) solver. Questions: How do we choose F? How quickly does the method converge?
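
To make the fixed-point map F concrete, here is one of the simplest choices (Richardson iteration, not covered on the slides): x^{k+1} = x^k + ω(b - A x^k), sketched in Octave for the small SPD system used later; the relaxation parameter ω is an illustrative choice.

A = [2 1; 1 3]; b = [5; 7];
omega = 0.4;                        % illustrative relaxation parameter (needs 0 < omega < 2/lambda_max)
x = zeros(2,1);
for k = 1:100
  r = b - A*x;                      % residual drives the update
  if norm(r) < 1e-10
    break;
  end
  x = x + omega*r;                  % x_{k+1} = F(x_k)
end
disp(x')                            % converges to A\b = [1.6; 1.8]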

Conjugate Gradient Method
Applies to symmetric, positive-definite matrices A. Identify the solution of the linear system A x = b with the minimum of
Φ(x) = (1/2) x_i A_ij x_j - x_i b_i.
The minimum occurs when ∂Φ/∂x_k = 0, i.e.
(1/2) δ_ik A_ij x_j + (1/2) x_i A_ij δ_jk - b_i δ_ik = 0, which gives (1/2)(A + A^T) x = b.
A is symmetric, A = A^T, so the minimum occurs when A x = b. We want an iteration that brings us close to the minimum as rapidly as possible.

Conjugate Gradient Method
Consider an iterative improvement to the current solution along a line, x^{k+1} = x^k + α p^k, where p^k is a search direction. Then
Φ(x^{k+1}) = Φ(x^k + α p^k) = (1/2)(x_i^k + α p_i^k) A_ij (x_j^k + α p_j^k) - (x_i^k + α p_i^k) b_i
  = (1/2) x_i^k A_ij x_j^k + (1/2) α (x_i^k A_ij p_j^k + p_i^k A_ij x_j^k) + (1/2) α^2 p_i^k A_ij p_j^k - x_i^k b_i - α p_i^k b_i
  = Φ(x^k) + α p_i^k (A_ij x_j^k - b_i) + (1/2) α^2 p_i^k A_ij p_j^k.
This is minimised when
α = p_i^k (b_i - A_ij x_j^k) / (p_m^k A_mn p_n^k) = p_i^k r_i^k / (p_m^k A_mn p_n^k),
where r^k = b - A x^k is the residual. Given p^k we can therefore find the α that minimises the next iterate. Question: How do we choose the set {p^k}?

Conjugate Gradient Method
The obvious choice for the search direction is the direction in which Φ decreases most rapidly, the steepest descent direction:
-∂Φ/∂x_i = b_i - A_ij x_j^k = r_i^k.
It is better to avoid repeats by choosing each search direction to be different from the previous ones. If the search directions are linearly independent they form a basis of R^n, and convergence is guaranteed in at most n steps. Question: What is the best basis to choose?

Conjugate Gradient Method
Now x^{k+1} = x^k + α_k p^k = x^0 + Σ_{m=1}^{k} α_m p^m, and
Φ(x^{k+1}) = (1/2) (x^0 + Σ_{m=1}^{k} α_m p^m)^T A (x^0 + Σ_{m=1}^{k} α_m p^m) - (x^0 + Σ_{m=1}^{k} α_m p^m)^T b.
Choose the search directions to be A-conjugate,
p_i^k A_ij p_j^m = 0 when k ≠ m  (p^k and A p^m are orthogonal),
so that all cross terms in the vector-matrix-vector product vanish:
Φ(x^{k+1}) = Φ(x^0) + Σ_{m=1}^{k} [ α_m p^m · (A x^0 - b) + (1/2) α_m^2 p^m · A p^m ].
Hence the minimisation in each direction decouples from the others.

Conjugate Gradient Method
We construct {p^k} by starting with the steepest-descent direction p^0 = r^0 = b - A x^0. It can be shown (e.g. Golub & Van Loan) that
p^{k+1} = r^{k+1} + β_k p^k,
and enforcing A-conjugacy gives
β_k = - p_i^k A_ij r_j^{k+1} / (p_m^k A_mn p_n^k) = r_i^{k+1} r_i^{k+1} / (r_j^k r_j^k).
The last equality is not obvious. It follows by induction using
r^k = b - A x^k = b - A (x^{k-1} + α_{k-1} p^{k-1}) = r^{k-1} - α_{k-1} A p^{k-1}.

Conjugate Gradient Algorithm
A = [2 1; 1 3]; b = [5 7]'; n = 2;
x = zeros(n,1);
r = b - A*x;                    % Initial residual
p = r;                          % Initial search direction
rdotr = r'*r;                   % Inner product
% Loop over a maximum of n possible steps
for i = 1:n
  Ap = A*p;
  alpha = rdotr/(p'*Ap);
  x = x + alpha*p;              % Update guess
  r = r - alpha*Ap;             % Update residual
  rdotr_new = r'*r;
  % Convergence criterion based on the size of the residual
  if sqrt(rdotr_new) < 1e-10
    break;
  end
  p = r + (rdotr_new/rdotr)*p;  % Update search direction
  rdotr = rdotr_new;
end

Convergence Analysis
The space spanned by the vectors {p^k} is the same as the Krylov subspace span{r^0, A r^0, ..., A^k r^0}. We can therefore write the k-th iterate in the form
x^k = x^0 + P_{k-1}(A) r^0,
where P_{k-1} is a polynomial of degree k-1. Hence the error in the k-th iterate, where A x = b, is
e^k = x - x^k = x - x^0 - P_{k-1}(A) r^0 = [I - A P_{k-1}(A)] e^0 = P_k(A) e^0,
using r^0 = A e^0; here P_k is a polynomial of degree k with P_k(0) = 1.

Convergence Analysis
The A-norm of the error is minimised over the Krylov subspace (see Elman et al.):
||e^k||_A = (e_i^k A_ij e_j^k)^{1/2} = min_{P_k} ||P_k(A) e^0||_A.
Expanding the error in the eigenbasis of A shows that
||e^k||_A ≤ min_{P_k} max_i |P_k(λ_i)| ||e^0||_A.
Assuming the eigenvalues lie in the range λ_j ∈ [λ_min(A), λ_max(A)], and fitting a Chebyshev polynomial to achieve the minimisation, gives
||e^k||_A ≤ 2 ( (√κ - 1)/(√κ + 1) )^k ||e^0||_A,
where κ = λ_max(A)/λ_min(A) is the condition number of the matrix.
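
A quick back-of-the-envelope use of this bound (not from the slides): given a condition number κ, the smallest k for which 2((√κ - 1)/(√κ + 1))^k falls below a target tolerance estimates the number of CG iterations required.

kappa = 1e4;                                  % assumed condition number
tol   = 1e-6;                                 % target reduction in the A-norm of the error
rho   = (sqrt(kappa) - 1)/(sqrt(kappa) + 1);  % contraction factor per iteration
k     = ceil(log(tol/2)/log(rho));            % smallest k with 2*rho^k <= tol
fprintf('kappa = %g suggests roughly %d CG iterations\n', kappa, k);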

More general iterative methods
CG cannot be applied to non-symmetric systems. GMRES is an iterative method that constructs approximate solutions within the Krylov subspace that minimise the Euclidean norm of the residual, ||A x^k - b||. The implementation is a little tricky because solving the minimisation problem requires some more linear algebra. A bound on the relative reduction in the size of the residuals at the k-th step can again be found in terms of a degree-k polynomial,
||r^k|| / ||r^0|| ≤ κ(V) min_{P_k} max_i |P_k(λ_i)|,
where V is the matrix whose columns are the eigenvectors of the matrix A (assumed diagonalisable). BiCGSTAB is an alternative method that generalises CG by imposing biorthogonality conditions on vectors associated with the Krylov subspaces of A and A^T.
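
Both methods are available as built-ins in Octave/MATLAB; the sketch below, assuming a small nonsymmetric tridiagonal test matrix, shows the standard call signatures gmres(A,b,restart,tol,maxit) and bicgstab(A,b,tol,maxit).

n = 500;
e = ones(n,1);
A = spdiags([-e 2*e -0.5*e], -1:1, n, n);     % nonsymmetric sparse test matrix
b = ones(n,1);
[x1, flag1] = gmres(A, b, 20, 1e-8, 200);     % restarted GMRES(20)
[x2, flag2] = bicgstab(A, b, 1e-8, 200);      % BiCGSTAB
fprintf('gmres flag = %d, bicgstab flag = %d\n', flag1, flag2);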

Preconditioning
The convergence of iterative methods depends on the spectrum of the matrix, that is, on the location of the eigenvalues. The idea of preconditioning is to modify the spectrum to improve the convergence rate of the method: solve the system
P^{-1} A x = P^{-1} b,
where P is the preconditioner. Obviously the preconditioner should be cheap to compute and apply. Ideally we want to ensure that convergence does not depend on the mesh or on any physical parameters of the problem.

Generic Preconditioners
A number of generic possibilities exist:
Choose P to be the (block) diagonal of the matrix.
Use an incomplete LU decomposition (ILU).
Use a lower-order method to approximate the system (spectral).
Domain decomposition preconditioners.
Multigrid and algebraic multigrid.
A bad solver can be a good preconditioner.
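
As a concrete illustration of the incomplete-factorisation option, here is a minimal Octave sketch assuming a 2-D Poisson model problem; because that matrix is symmetric positive definite, the sketch uses the symmetric analogue of ILU (incomplete Cholesky, ichol) together with the built-in conjugate gradient solver pcg. The grid size and tolerances are illustrative choices.

n = 40;                                     % grid points per direction (illustrative)
e = ones(n,1);
T = spdiags([-e 2*e -e], -1:1, n, n);
A = kron(speye(n), T) + kron(T, speye(n));  % sparse 2-D Laplacian, SPD
b = ones(n^2, 1);
L = ichol(A);                               % no-fill incomplete Cholesky factor
[x0, f0, r0, it0] = pcg(A, b, 1e-8, 500);         % unpreconditioned CG
[x1, f1, r1, it1] = pcg(A, b, 1e-8, 500, L, L');  % preconditioned with P = L*L'
fprintf('CG iterations: %d unpreconditioned, %d with IC(0)\n', it0, it1);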

Problem-specific preconditioners
Use the structure of the matrix.

Problem-specific preconditioners
For the Navier-Stokes equations the matrix structure is
[F B^T; B 0].
Consider a block-diagonal preconditioner P = [D_1 0; 0 D_2]. The spectrum of the preconditioned system follows from the generalised eigenproblem
[F B^T; B 0] [u; p] = λ [D_1 0; 0 D_2] [u; p].
Choose D_1 = F:
[F B^T; B 0] [u; p] = λ [F 0; 0 D_2] [u; p].
We have one eigenvalue λ = 1. If λ ≠ 1, then u = (1/(λ - 1)) F^{-1} B^T p, so
B F^{-1} B^T p = λ(λ - 1) D_2 p.
Choosing D_2 = B F^{-1} B^T gives λ(λ - 1) = 1, i.e. λ = (1 ± √5)/2. Only three distinct eigenvalues... convergence in three steps! (see Elman et al.)
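
The three-eigenvalue result is easy to verify numerically; below is a small Octave sketch (not from the slides) using an arbitrary SPD block F and a full-rank B, checking that the preconditioned matrix has eigenvalues only at 1 and (1 ± √5)/2. The names and block sizes are local, illustrative choices.

nu = 6; np = 3;                             % illustrative velocity and pressure block sizes
M = rand(nu);
F = M*M' + nu*eye(nu);                      % symmetric positive-definite velocity block
B = rand(np, nu);                           % full-rank divergence-like block
K = [F B'; B zeros(np)];                    % saddle-point matrix
P = blkdiag(F, B*(F\B'));                   % block-diagonal preconditioner: D1 = F, D2 = B*F^-1*B'
lam = sort(real(eig(P\K)));                 % eigenvalues of the preconditioned matrix
disp(lam')                                  % clusters at (1-sqrt(5))/2 = -0.618, 1, and (1+sqrt(5))/2 = 1.618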

Preconditioning Software
For standard techniques, use packages! TRILINOS (http://trilinos.sandia.gov/) and PETSc (http://www.mcs.anl.gov/petsc/). These packages also have implementations of GMRES and BiCGSTAB that run in serial and in parallel. Think about the human cost: is it better to spend time developing a brilliant preconditioner, or just to use an off-the-shelf, sub-optimal solver and wait longer?

Iterative solvers: summary
Can give optimal, O(n), solvers. Not as robust unless a good preconditioner is available. Preconditioners can be general-purpose or problem-specific. Typically problem-specific preconditioners perform better, but take much longer to develop. Re-solving for different right-hand sides is not much cheaper. Often the only possibility for large (realistic) problems.

Some practical advice on numerics
Check the equations: check all equations by constructing an exact solution that satisfies the boundary conditions and lump any left-over terms into body-force/source terms. Check that your exact solution gives zero (well, suitably small) initial residuals; if not, check your equations.

Some practical advice on numerics
Check the equations. Solve the linear system directly: start by using a direct solver and, for nonlinear problems, use Newton's method. Initially compute the associated Jacobian matrix using finite differences,
J_ij ≈ [R_i(u + ε e_j) - R_i(u)] / ε
(compute an entire column at a time). Check that the system converges to your exact solution. Then modify the Jacobian so that it is computed analytically. If convergence is not quadratic, check the residuals and the Jacobian (compare the analytic Jacobian to the finite-difference version).
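
A minimal Octave sketch of the finite-difference Jacobian check described above, assuming an illustrative two-equation residual R(u); the step size ε and the test point are arbitrary choices, not from the slides.

R = @(u) [u(1)^2 + u(2) - 3; u(1)*u(2) - 2];   % illustrative nonlinear residual
u = [1; 1];                                    % test point
eps_fd = 1e-8;                                 % finite-difference step
n = length(u);
Ru = R(u);
J = zeros(n);
for j = 1:n
  ej = zeros(n,1); ej(j) = 1;                  % unit vector e_j
  J(:,j) = (R(u + eps_fd*ej) - Ru)/eps_fd;     % one whole column at a time
end
J_exact = [2*u(1) 1; u(2) u(1)];               % analytic Jacobian for comparison
fprintf('max difference = %g\n', max(abs(J(:) - J_exact(:))));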

Some practical advice on numerics
Check the equations. Solve the linear system directly. Investigate iterative solvers: investigate a small unpreconditioned system first. How well does it converge? Try generic preconditioners and examine performance with problem size. Develop/investigate problem-specific preconditioners.