Lecture 18 Classical Iterative Methods


MIT 18.335J / 6.337J Introduction to Numerical Methods
Per-Olof Persson
November 14, 2006

Iterative Methods for Linear Systems

Direct methods for solving Ax = b, e.g. Gaussian elimination, compute an exact solution after a finite number of steps (in exact arithmetic). Iterative algorithms produce a sequence of approximations x^{(1)}, x^{(2)}, ... which hopefully converges to the solution, and

- may require less memory than direct methods
- may be faster than direct methods
- may handle special structures (such as sparsity) in a simpler way

[Figure: residual r = b - Ax (log scale) versus iteration number for a direct and an iterative method.]

Two Classes of Iterative Methods

Stationary methods (or classical iterative methods) find a splitting A = M - K and iterate

    x^{(k+1)} = M^{-1}(Kx^{(k)} + b) = Rx^{(k)} + c

Examples: Jacobi, Gauss-Seidel, Successive Overrelaxation (SOR), and Symmetric Successive Overrelaxation (SSOR).

Krylov subspace methods use only multiplication by A (and possibly by A^T) and find solutions in the Krylov subspace

    span{b, Ab, A^2 b, ..., A^{k-1} b}

Examples: Conjugate Gradient (CG), Generalized Minimal Residual (GMRES), BiConjugate Gradient (BiCG), etc.

The Model Poisson Problem

Test problem for linear solvers: discretize Poisson's equation in 2-D,

    -(∂²u/∂x² + ∂²u/∂y²) = 1,

on a square grid using centered finite difference approximations:

    4u_{i,j} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1} = h²

with Dirichlet conditions u = 0 on the boundaries. The grid spacing is h = 1/(n + 1), giving a total of n² unknowns u_{i,j}. A typical problem despite the simplicity.

[Figure: the computational grid, with indices i, j running from 0 to n+1 and interior unknowns u_{i,j}.]

The Model Problem in MATLAB

In MATLAB:

    n=8; h=1/(n+1);
    e=ones(n,1);
    A1=spdiags([-e,2*e,-e],-1:1,n,n);
    A=kron(A1,speye(n,n))+kron(speye(n,n),A1);
    f=h^2*ones(n^2,1);

or simply

    A=delsq(numgrid('S',n+2));

The resulting linear system Au = f is sparse and banded.

[Figure: spy plot of A for n = 8, showing the banded sparsity pattern (nz = 288).]

Eigenvalues of Model Problem

A is symmetric positive definite, with eigenvalues λ_{ij} = λ_i + λ_j, i, j = 1, ..., n, where

    λ_k = 2(1 - cos(πk/(n+1)))

are the eigenvalues of the 1-D Laplace operator.

Largest eigenvalue: λ_n = 2(1 - cos(πn/(n+1))) ≈ 4
Smallest eigenvalue: λ_1 = 2(1 - cos(π/(n+1))) ≈ π²/(n+1)²
Condition number: κ(A) = λ_n/λ_1 ≈ 4(n+1)²/π²
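
As a sanity check, the closed-form eigenvalues can be compared against eig on a small instance. A minimal sketch, reusing the model-problem setup above (the implicit expansion in lam' + lam assumes MATLAB R2016b or later):

    n = 8;
    e = ones(n,1);
    A1 = spdiags([-e,2*e,-e],-1:1,n,n);
    A = kron(A1,speye(n,n)) + kron(speye(n,n),A1);
    lam = 2*(1 - cos(pi*(1:n)/(n+1)));         % 1-D eigenvalues lambda_k
    lam2d = sort(reshape(lam' + lam, [], 1));  % all sums lambda_i + lambda_j
    norm(lam2d - sort(eig(full(A))))           % should be near machine epsilon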

Stationary Iterative Methods

Iterative methods for Ax = b that can be written

    x^{(k+1)} = Rx^{(k)} + c

with constant R and c are called stationary iterative methods. A splitting of A is a decomposition A = M - K with nonsingular M. A splitting gives a stationary iterative method:

    Ax = Mx - Kx = b  =>  x = M^{-1}Kx + M^{-1}b = Rx + c

The iteration x^{(k+1)} = Rx^{(k)} + c converges to the solution x = A^{-1}b if and only if the spectral radius ρ(R) < 1.

Proof. Blackboard.
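
A minimal MATLAB sketch of this generic splitting iteration, assuming A, b, and a splitting A = M - K with M cheap to solve with are given (maxit and tol are hypothetical parameters):

    x = zeros(size(b));
    for k = 1:maxit
        x = M \ (K*x + b);                 % x^{(k+1)} = M^{-1}(K x^{(k)} + b)
        if norm(b - A*x) <= tol*norm(b)    % stop once the relative residual is small
            break
        end
    end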

Choosing a Splitting

Find a splitting A = M - K such that
(1) Rx = M^{-1}Kx and c = M^{-1}b are easy to evaluate
(2) ρ(R) is small

Example: M = I makes M^{-1} trivial, but probably does not make ρ(R) small.
Example: M = A gives K = 0 and ρ(R) = ρ(M^{-1}K) = 0, but applying M^{-1} is expensive.

We will study splittings based on the diagonal and the strictly lower/upper triangular parts of A:

    A = D - L - U

The Jacobi Method

The simplest splitting is the Jacobi method, where M = D and K = L + U:

    x^{(k+1)} = D^{-1}((L + U)x^{(k)} + b)

In words: solve for x_i from equation i, assuming the other entries are fixed.

Implementation for the model problem: trivial in MATLAB, but a temporary array is required with for-loops. The following code performs one step of the Jacobi method:

    for i = 1 to n
        for j = 1 to n
            u_{i,j}^{(k+1)} = (u_{i-1,j}^{(k)} + u_{i+1,j}^{(k)} + u_{i,j-1}^{(k)} + u_{i,j+1}^{(k)} + h²)/4
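
A minimal vectorized sketch of one Jacobi sweep, assuming U is an (n+2)-by-(n+2) array holding the current iterate, with the zero Dirichlet boundary values stored in its first/last rows and columns:

    Unew = U;                              % temporary array: all reads use the old iterate
    Unew(2:end-1,2:end-1) = (U(1:end-2,2:end-1) + U(3:end,2:end-1) ...
                           + U(2:end-1,1:end-2) + U(2:end-1,3:end) + h^2)/4;
    U = Unew;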

Convergence of the Jacobi Method

The following results can be shown. The Jacobi method converges if
- A is strictly row diagonally dominant: |a_{ii}| > Σ_{j≠i} |a_{ij}|, or
- A is weakly row diagonally dominant: |a_{ii}| ≥ Σ_{j≠i} |a_{ij}| with strict inequality at least once, and A is irreducible (strongly connected graph).

Our model problem is weakly row diagonally dominant and irreducible, so the Jacobi method converges.

More specifically, the splitting is A = D - L - U = 4I - (4I - A), so

    R_J = (4I)^{-1}(4I - A) = I - A/4

with eigenvalues 1 - λ_{ij}/4, and

    ρ(R_J) = max_{1≤i,j≤n} |1 - λ_{ij}/4| = 1 - λ_{11}/4 = cos(π/(n+1)) ≈ 1 - π²/(2(n+1)²)

Therefore, the error is reduced by a constant factor every O(n²) iterations.
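
This spectral radius is easy to confirm numerically on a small instance. A sketch, assuming n and the model-problem matrix A from the MATLAB snippet above:

    RJ = eye(n^2) - full(A)/4;           % Jacobi iteration matrix R_J = I - A/4
    [max(abs(eig(RJ))), cos(pi/(n+1))]   % the two values should agree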

The Gauss-Seidel Method

In the Gauss-Seidel method, we choose M = D - L (which is triangular and therefore easy to invert) and iterate:

    x^{(k+1)} = (D - L)^{-1}(Ux^{(k)} + b)

In words: while looping over the equations, use the most recent values x_i.

Implementation for the model problem: still easy in MATLAB using \, and for-loops are actually easier than for Jacobi, since no temporary array is needed:

    for i = 1 to n
        for j = 1 to n
            u_{i,j}^{(k+1)} = (u_{i-1,j}^{(k+1)} + u_{i+1,j}^{(k)} + u_{i,j-1}^{(k+1)} + u_{i,j+1}^{(k)} + h²)/4

The order matters! For Cartesian grids, the red-black ordering updates in a checkerboard pattern.
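
A minimal in-place Gauss-Seidel sweep, under the same assumed (n+2)-by-(n+2) layout for U as in the Jacobi sketch; overwriting U as the loops proceed is exactly what makes each update use the most recent values:

    for i = 2:n+1
        for j = 2:n+1
            U(i,j) = (U(i-1,j) + U(i+1,j) + U(i,j-1) + U(i,j+1) + h^2)/4;
        end
    end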

The Successive Overrelaxation Method

In the successive overrelaxation method, or SOR, the Gauss-Seidel step is extrapolated by a factor ω:

    x_i^{(k+1)} = ω x̄_i^{(k+1)} + (1 - ω) x_i^{(k)}

where x̄_i^{(k+1)} is the Gauss-Seidel iterate.

ω = 1: Gauss-Seidel; ω > 1: overrelaxation; ω < 1: underrelaxation.

In matrix form:

    x^{(k+1)} = (D - ωL)^{-1}(ωU + (1 - ω)D)x^{(k)} + ω(D - ωL)^{-1}b
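
A sketch of one SOR sweep for the model problem, again under the assumed layout of U; setting omega = 1 recovers the Gauss-Seidel sweep above:

    for i = 2:n+1
        for j = 2:n+1
            gs = (U(i-1,j) + U(i+1,j) + U(i,j-1) + U(i,j+1) + h^2)/4;  % Gauss-Seidel value
            U(i,j) = omega*gs + (1-omega)*U(i,j);                      % extrapolate by omega
        end
    end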

Convergence of Gauss-Seidel and SOR

It can be shown that for a symmetric positive definite matrix A, Gauss-Seidel and SOR converge for 0 < ω < 2.

In general it is hard to choose ω for SOR, but if the spectral radius of the Jacobi method ρ(R_J) is known, the optimal value is

    ω = 2/(1 + √(1 - ρ(R_J)²))

For the model problem with red-black ordering:
- Gauss-Seidel is twice as fast as Jacobi.
- For SOR, the optimal ω = 2/(1 + sin(π/(n+1))) gives a spectral radius ρ(R_SOR) ≈ 1 - 2π/(n+1), which is n times faster than Jacobi/Gauss-Seidel, i.e. a constant factor improvement every O(n) iterations.
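
Once ρ(R_J) is known, the optimal parameter is cheap to evaluate. A sketch for the model problem (n assumed as above; the last line uses the standard fact that optimally relaxed SOR has spectral radius ω - 1 for consistently ordered matrices):

    rhoJ   = cos(pi/(n+1));              % Jacobi spectral radius for the model problem
    omega  = 2/(1 + sqrt(1 - rhoJ^2));   % equals 2/(1 + sin(pi/(n+1)))
    rhoSOR = omega - 1;                  % approx 1 - 2*pi/(n+1)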

The Symmetric Successive Overrelaxation Method

To obtain an iteration matrix similar to a symmetric matrix, apply two SOR steps in opposite directions:

    x^{(k+1)} = B_1 B_2 x^{(k)} + ω(2 - ω)(D - ωU)^{-1} D (D - ωL)^{-1} b

where

    B_1 = (D - ωU)^{-1}(ωL + (1 - ω)D)
    B_2 = (D - ωL)^{-1}(ωU + (1 - ω)D)

This symmetric successive overrelaxation method, or SSOR, is useful as a preconditioner for symmetric matrices. By itself it is not very different from two steps of SOR.
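
Equivalently, one SSOR step is a forward SOR sweep followed by a backward sweep. A matrix-form sketch, assuming D = diag(diag(A)), L = -tril(A,-1), and U = -triu(A,1) from the splitting A = D - L - U (here U is the matrix factor, not the grid array of the earlier sketches):

    y = (D - omega*L) \ ((omega*U + (1-omega)*D)*x + omega*b);  % forward sweep
    x = (D - omega*U) \ ((omega*L + (1-omega)*D)*y + omega*b);  % backward sweep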