Theory of Iterative Methods

Based on Strang's Introduction to Applied Mathematics

The Iterative Idea

To solve Ax = b, write

    Mx^(k+1) = (M - A)x^(k) + b,    k = 0, 1, 2, ...

Then the error e^(k) = x^(k) - x satisfies

    Me^(k+1) = (M - A)e^(k),    i.e.    e^(k+1) = Be^(k),

where the iteration matrix is B = M^{-1}(M - A). Now

    ||e^(k)|| = ||B^k e^(0)|| ≈ ρ^k → 0    iff    ρ < 1,

where ρ, the maximum modulus of the eigenvalues of B, is the spectral radius of B.

Convergence of Iterative Methods

Convergence. Does x^(k) → x? If x^(k) → x*, then Mx* = (M - A)x* + b, so Ax* = b, which implies x* = x. The rate of convergence is governed by powers of B = M^{-1}(M - A). Since e^(k) = B^k e^(0), we have e^(k) → 0 and x^(k) → x (convergence) iff B^k → 0 (stability).

Theorem. B^k → 0 iff every eigenvalue of B satisfies |λ_i| < 1. The rate of convergence is governed by the spectral radius of B, ρ = max_i |λ_i|.
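Numerically the theorem is easy to watch; here is a minimal sketch (the 2 × 2 iteration matrix and the initial error are arbitrary illustrative choices):

```python
# Check that e^(k) = B^k e^(0) decays like rho^k when rho(B) < 1.
import numpy as np

B = np.array([[0.0, 0.5],
              [0.5, 0.0]])            # example iteration matrix, rho(B) = 1/2
rho = max(abs(np.linalg.eigvals(B)))  # spectral radius

e = np.array([1.0, -2.0])             # arbitrary initial error e^(0)
for k in range(1, 8):
    e = B @ e                         # e^(k) = B e^(k-1)
    print(k, np.linalg.norm(e), rho**k)
```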

Proof using eigenvectors. Expand the initial error in terms of the eigenvectors of B (assuming a complete set of eigenvectors),

    e^(0) = c_1 v_1 + c_2 v_2 + ... + c_n v_n,

where Bv_i = λ_i v_i. Then the error at iteration k is

    e^(k) = B^k e^(0) = c_1 λ_1^k v_1 + c_2 λ_2^k v_2 + ... + c_n λ_n^k v_n → 0    iff    |λ_i| < 1, i = 1, ..., n,

and ||e^(k)|| ≈ |c_j| |λ_j|^k ||v_j|| ≈ ρ^k, where λ_j is the eigenvalue of largest modulus.

Proof in matrix form. Assuming a complete set of eigenvectors, write B = SΛS^{-1}, where S = [v_1 v_2 ... v_n] is the eigenvector matrix of B and Λ = diag{λ_1, λ_2, ..., λ_n} is the eigenvalue matrix of B. Then

    B^k = (SΛS^{-1})(SΛS^{-1}) ... (SΛS^{-1}) = SΛ^k S^{-1} → 0    iff    |λ_i| < 1, i = 1, ..., n.

If B does not have a complete set of eigenvectors, the matrix form of the proof simply involves the Jordan form J of B instead of Λ.

Choice of M

Mx^(k+1) = (M - A)x^(k) + b should be easy to solve, so diagonal or lower triangular M's are good choices. M should also be close to A, so that the eigenvalues of B = M^{-1}(M - A) = I - M^{-1}A are as small in magnitude as possible (they must lie inside the unit circle in the complex plane).

For A = L + D + U (strictly lower triangular, diagonal, strictly upper triangular), Jacobi takes M = D while Gauss-Seidel takes M = D + L. SOR (successive over-relaxation) introduces a relaxation factor 1 < ω < 2 into Gauss-Seidel, adjusted to make the spectral radius ρ as small as possible. For a wide class of finite-difference matrices, Young's formula (1950) relates the eigenvalues µ of Jacobi to the eigenvalues λ of SOR:

    (λ + ω - 1)^2 = λ ω^2 µ^2.

Minimizing ρ_SOR = max{|λ|} = ω - 1 (using the quadratic formula) gives

    ω_opt = 2(1 - √(1 - ρ_J^2)) / ρ_J^2 = 2 / (1 + √(1 - ρ_J^2)),    ρ_SOR = ω_opt - 1.

For the model Laplace problem on an N × N grid with h = 1/N,

    ρ_J = cos(πh),    ρ_GS = ρ_J^2,    ρ_SOR = (1 - sin πh) / (1 + sin πh).
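These closed-form radii are easy to tabulate; a short sketch (the grid sizes are illustrative choices):

```python
# Evaluate the model-problem spectral radii and omega_opt for a few grids.
import numpy as np

for N in (10, 100, 1000):
    h = 1.0 / N
    rho_J = np.cos(np.pi * h)                         # Jacobi
    rho_GS = rho_J**2                                 # Gauss-Seidel
    omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_J**2))
    rho_SOR = omega_opt - 1.0                         # = (1 - sin(pi h))/(1 + sin(pi h))
    print(f"N={N}: rho_J={rho_J:.6f}  rho_GS={rho_GS:.6f}  rho_SOR={rho_SOR:.6f}")
```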

Model Laplace Problem Spectral Radii

To show that the Jacobi spectral radius is ρ_J = cos(πh) for Laplace's equation on the unit square with second-order accurate central differences, first consider the 1D problem. In 1D, A = tridiag[-1 2 -1], so the iteration matrix is B = (1/2) tridiag[1 0 1]. Then show that Bv = cos(πh) v, where the 1D eigenvector is

    v = [sin(πh), sin(2πh), ..., sin(nπh)],    n = N - 1,    h = 1/N = 1/(n + 1).

The other eigenvectors of B replace π by 2π, 3π, ..., nπ in v, with eigenvalues cos(2πh), cos(3πh), ..., cos(nπh). In 2D, the eigenvector with indices (j, k) has components sin(jπh) sin(kπh), ordered lexicographically:

    v = [sin(πh)sin(πh), sin(πh)sin(2πh), ..., sin(πh)sin(nπh), sin(2πh)sin(πh), ..., sin(2πh)sin(nπh), ..., sin(nπh)sin(nπh)],

with eigenvalue (cos(jπh) + cos(kπh))/2; for j = k = 1 this is again cos(πh), so ρ_J = cos(πh). Using ρ_J in Young's formula yields the SOR spectral radius

    ρ_SOR = (1 - sin πh) / (1 + sin πh).

Iteration Matrices

Solve Ax = b iteratively. Decompose A = L + D + U and define the residual r^(k) = Ax^(k) - b.

Jacobi Iteration (M = D)

    Dx^(k+1) = -(L + U)x^(k) + b = Dx^(k) - r^(k)
    x^(k+1) = x^(k) - D^{-1} r^(k)
    B_J = -D^{-1}(L + U)
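A minimal Jacobi sketch following the update x^(k+1) = x^(k) - D^{-1} r^(k) just derived (the test matrix, right-hand side, and tolerance are illustrative choices):

```python
# Jacobi iteration via x^(k+1) = x^(k) - D^{-1} r^(k), r^(k) = A x^(k) - b.
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=10000):
    d = np.diag(A)                 # the diagonal D, stored as a vector
    x = np.array(x0, dtype=float)
    for k in range(maxit):
        r = A @ x - b              # residual r^(k)
        if np.linalg.norm(r) < tol:
            break
        x = x - r / d              # componentwise division applies D^{-1}
    return x, k

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 1.0])
x, k = jacobi(A, b, np.zeros(2))
print(x, k)                        # converges to [1, 1]
```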

Gauss-Seidel Iteration (M = D + L)

    (D + L)x^(k+1) = -Ux^(k) + b
    Dx^(k+1) = Dx^(k) - (Lx^(k+1) + Dx^(k) + Ux^(k) - b) = Dx^(k) - r̃^(k)
    x^(k+1) = x^(k) - D^{-1} r̃^(k)
    B_GS = -(D + L)^{-1} U

Here r̃^(k) is the residual evaluated with the new components of x as soon as they are computed.

SOR/SUR Iteration (M = D/ω + L)

    (D/ω)x^(k+1) = (D/ω)x^(k) - (Lx^(k+1) + Dx^(k) + Ux^(k) - b)
    (D/ω + L)x^(k+1) = (D/ω - D - U)x^(k) + b
    B_SOR = (D/ω + L)^{-1}(D/ω - D - U) = (D + ωL)^{-1}((1 - ω)D - ωU)

Note that for SOR/SUR, det{B_SOR} = (1 - ω)^n with 0 < ω < 2, and

    ρ_SOR = ω_opt - 1 (over-relaxation, 1 < ω_opt < 2),    ρ_SUR = 1 - ω_opt (under-relaxation, 0 < ω_opt < 1).

Example

Recall that det{A} = λ_1 λ_2 ... λ_n and Tr{A} = λ_1 + λ_2 + ... + λ_n; these identify the eigenvalues of the 2 × 2 iteration matrices below. Take

    A = [  2  -1 ]
        [ -1   2 ]

Jacobi:

    M_J = [ 2  0 ]    B_J = (1/2) [ 0  1 ]    λ_± = ±1/2,  ρ_J = 1/2
          [ 0  2 ]                [ 1  0 ]

Gauss-Seidel:

    M_GS = [  2  0 ]    B_GS = [ 0  1/2 ]    λ_1 = 0,  λ_2 = 1/4,  ρ_GS = 1/4
           [ -1  2 ]           [ 0  1/4 ]

SOR:

    M_SOR = [ 2/ω   0  ]    B_SOR = [ 1 - ω         ω/2            ]
            [ -1   2/ω ]            [ ω(1 - ω)/2    1 - ω + ω^2/4  ]

With ρ_J = 1/2, ω_opt = 4(2 - √3) ≈ 1.0718, and then λ_1 = λ_2 = ω_opt - 1 = ρ_SOR ≈ 0.0718.
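These spectral radii can be confirmed numerically; a sketch that builds each iteration matrix directly from its splitting:

```python
# Form B_J, B_GS, B_SOR from the splitting A = L + D + U and compare radii.
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

rho = lambda B: max(abs(np.linalg.eigvals(B)))

B_J = -np.linalg.solve(D, L + U)
B_GS = -np.linalg.solve(D + L, U)
omega = 4.0 * (2.0 - np.sqrt(3.0))        # omega_opt for rho_J = 1/2
B_SOR = np.linalg.solve(D + omega * L, (1.0 - omega) * D - omega * U)
print(rho(B_J), rho(B_GS), rho(B_SOR))    # 0.5, 0.25, ~0.0718
```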

3 × 3 Examples

(i) Example where Jacobi converges but Gauss-Seidel diverges:

    A = [ 1  2 -2 ]    M_J = [ 1 0 0 ]    B_J = [  0 -2  2 ]
        [ 1  1  1 ]          [ 0 1 0 ]          [ -1  0 -1 ]
        [ 2  2  1 ]          [ 0 0 1 ]          [ -2 -2  0 ]

λ_1 = λ_2 = λ_3 = 0, ρ_J = 0, so Jacobi converges. Note that here B_J^3 = 0 and e^(3) = 0, which is an unusual situation! See the Cayley-Hamilton Theorem in the web notes on Conjugate Gradients.

    M_GS = [ 1 0 0 ]    B_GS = [ 0 -2  2 ]
           [ 1 1 0 ]           [ 0  2 -3 ]
           [ 2 2 1 ]           [ 0  0  2 ]

λ_1 = 0, λ_2 = λ_3 = 2, ρ_GS = 2, so Gauss-Seidel diverges.

(ii) Example where Jacobi diverges but Gauss-Seidel converges:

    A = [ 2 1 1 ]    M_J = [ 2 0 0 ]    B_J = -(1/2) [ 0 1 1 ]
        [ 1 2 1 ]          [ 0 2 0 ]                 [ 1 0 1 ]
        [ 1 1 2 ]          [ 0 0 2 ]                 [ 1 1 0 ]

λ_1 = -1, λ_2 = λ_3 = 0.5, ρ_J = 1, so Jacobi fails to converge (the λ = -1 component of the error never decays).

    M_GS = [ 2 0 0 ]    B_GS = (1/8) [ 0 -4 -4 ]
           [ 1 2 0 ]                 [ 0  2 -2 ]
           [ 1 1 2 ]                 [ 0  1  3 ]

λ_1 = 0, λ_± = 0.3125 ± 0.1654i, ρ_GS = 0.3536, so Gauss-Seidel converges.
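A sketch checking both examples (eigenvalues are computed numerically, so the nilpotent case shows up as ρ_J ≈ 0 up to roundoff):

```python
# Jacobi vs Gauss-Seidel spectral radii for the two 3 x 3 examples.
import numpy as np

def radii(A):
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    r = lambda B: max(abs(np.linalg.eigvals(B)))
    return r(-np.linalg.solve(D, L + U)), r(-np.linalg.solve(D + L, U))

A1 = np.array([[1.0, 2.0, -2.0],
               [1.0, 1.0,  1.0],
               [2.0, 2.0,  1.0]])
A2 = np.array([[2.0, 1.0, 1.0],
               [1.0, 2.0, 1.0],
               [1.0, 1.0, 2.0]])
print(radii(A1))   # (~0, 2.0): Jacobi converges, Gauss-Seidel diverges
print(radii(A2))   # (1.0, ~0.3536): Jacobi stalls, Gauss-Seidel converges
```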

Chebyshev SOR

By dynamically adjusting ω, the Chebyshev SOR method ensures that the norm of the error always decreases. Make two half-sweeps on even/odd (black/white) meshes; the odd (even) points depend only on the even (odd) mesh values. Define (µ is the Jacobi spectral radius):

    ω^(0) = 1
    ω^(1/2) = 1/(1 - µ^2/2)
    ω^(k+1/2) = 1/(1 - µ^2 ω^(k)/4),    k = 1/2, 1, 3/2, ...

Then ω^(∞) = ω_opt and ||e^(k)|| always decreases.
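A sketch of the ω schedule itself, collapsing the half-sweep indexing into one update per step (the value of µ is an illustrative choice):

```python
# Follow the omega schedule; it settles at omega_opt = 2/(1 + sqrt(1 - mu^2)).
import numpy as np

mu = np.cos(np.pi / 10)                    # Jacobi spectral radius, e.g. N = 10
omegas = [1.0, 1.0 / (1.0 - mu**2 / 2.0)]  # omega^(0) and omega^(1/2)
for _ in range(20):                        # one update per half sweep
    omegas.append(1.0 / (1.0 - mu**2 * omegas[-1] / 4.0))

omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - mu**2))
print(omegas[-1], omega_opt)               # the schedule converges to omega_opt
```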