Computational Linear Algebra
PD Dr. rer. nat. habil. Ralf Peter Mundani
Computation in Engineering / BGU, Scientific Computing in Computer Science / INF
Winter Term 2017/18

Part 3: Iterative Methods

Overview
- definitions
- splitting methods
- projection and KRYLOV subspace methods
- multigrid methods

Definitions: iteration methods
consider a linear system
Ax = b (3.0.1)
with given right-hand side b and regular matrix A
an iteration method successively computes approximations x_m to the exact solution A^{-1}b via repeated execution of a defined rule with given start vector x_0:
x_{m+1} = Φ(x_m, b) for m = 0, 1, ...

Definitions: iteration methods (cont'd)
Definition 3.1: An iteration method Φ given via the mapping Φ: R^n × R^n → R^n is called linear if matrices M, N ∈ R^{n×n} exist such that
Φ(x, b) = Mx + Nb
applies. Matrix M is called the iteration matrix of the iteration method Φ.
Definition 3.2: A vector x̃ ∈ R^n is denoted fixed point of the iteration method Φ for b if
x̃ = Φ(x̃, b)
applies.

Definitions: consistency vs. convergence
Definition 3.3: An iteration method Φ is called consistent w.r.t. matrix A if, for all b, the solution A^{-1}b is a fixed point of Φ for b. An iteration method Φ is called convergent if, for all b and all start vectors x_0, a limit x̃ = lim_{m→∞} x_m exists that is independent of the start vector.
Note: consistency poses a necessary constraint for any iteration method. In case of a linear iteration method, consistency can be determined directly from the matrices M and N.

Definitions: consistency vs. convergence (cont'd)
Theorem 3.4: A linear iteration method Φ is consistent w.r.t. matrix A iff
M = I − NA
applies.

Definitions: consistency vs. convergence (cont'd)
Theorem 3.5: A linear iteration method Φ is convergent iff the spectral radius of the iteration matrix M fulfils the constraint ρ(M) < 1.
Theorem 3.6: Let Φ be a convergent and w.r.t. A consistent linear iteration method; then the limit x̃ of the sequence x_m = Φ(x_{m−1}, b) for m = 1, 2, ... fulfils, for each x_0, the linear system (3.0.1).
Both proofs are lengthy.
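Both properties are easy to check numerically for a concrete pair (M, N). A minimal NumPy sketch (my own illustration, not part of the slides; function names are hypothetical):

    import numpy as np

    def is_consistent(M, N, A, tol=1e-12):
        # theorem 3.4: consistent w.r.t. A iff M = I - N A
        I = np.eye(A.shape[0])
        return np.allclose(M, I - N @ A, atol=tol)

    def is_convergent(M):
        # theorem 3.5: convergent iff rho(M) < 1
        return np.max(np.abs(np.linalg.eigvals(M))) < 1.0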

Splitting methods

basic concept
based on the partitioning of matrix A as
A = B − (B − A), B ∈ R^{n×n}, (3.1.1)
from Ax = b the equivalent system
Bx = (B − A)x + b
follows

basic concept (cont'd)
if B is regular, we get
x = B^{-1}(B − A)x + B^{-1}b
and hereby define the linear iteration method
x_{m+1} = Φ(x_m, b) = Mx_m + Nb for m = 0, 1, ...
with M := B^{-1}(B − A) and N := B^{-1}

basic concept (cont'd)
Theorem 3.7: Let B be regular; then the linear iteration method
x_{m+1} = Φ(x_m, b) = B^{-1}(B − A)x_m + B^{-1}b
is consistent w.r.t. A.
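In practice one never forms B^{-1} explicitly but solves a system with B in every step, which is why B should be cheap to solve with (diagonal, triangular, ...). A minimal sketch of the generic scheme (my own illustration; names are hypothetical):

    import numpy as np

    def splitting_iteration(A, b, B, x0, m_max=100):
        # x_{m+1} = B^{-1}((B - A) x_m + b); by theorem 3.7 this is
        # consistent w.r.t. A for any regular B
        x = x0.astype(float)
        for _ in range(m_max):
            x = np.linalg.solve(B, (B - A) @ x + b)
        return x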

basic concept (cont'd)
Theorem 3.8: Let Φ be a consistent linear iteration method w.r.t. A for whose iteration matrix M a norm exists such that q := ‖M‖ < 1 applies. Then for given ε > 0 follows ‖x_m − A^{-1}b‖ ≤ ε for all m with
m ≥ ln(ε(1 − q) / ‖x_1 − x_0‖) / ln(q)
and x_1 = Φ(x_0, b).
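Assuming the a-priori bound of theorem 3.8 as stated above, the number of iterations needed for a target accuracy can be estimated before iterating. A small sketch with a worked example (values chosen purely for illustration):

    import math

    def min_iterations(q, eps, delta0):
        # smallest m with q**m / (1 - q) * delta0 <= eps,
        # where delta0 = ||x_1 - x_0|| and q = ||M|| < 1
        return math.ceil(math.log(eps * (1 - q) / delta0) / math.log(q))

    print(min_iterations(q=0.5, eps=1e-6, delta0=1.0))  # -> 21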

basic concept (cont'd)
observation: if 0 < q_1 < 1 and q_1 = q_2² applies, then the following holds: considering two convergent linear iteration methods Φ_1 and Φ_2 whose respective iteration matrices M_1 and M_2 fulfil the property ρ(M_1) = ρ(M_2)², theorem 3.8 delivers an assured accuracy for method Φ_1 normally after half of the iterations required by Φ_2; i.e., the number of required iterations halves if the spectral radius, for instance, is reduced from 0.9 to 0.81.

JACOBI method
for solving the linear system Ax = b with regular matrix A, the JACOBI method requires non-vanishing diagonal elements a_ii ≠ 0, i = 1, ..., n; thus the diagonal matrix D = diag(a_11, ..., a_nn) is regular
hence, the linear system can be rewritten in equivalent form as
x = D^{-1}(D − A)x + D^{-1}b with M_J := D^{-1}(D − A) and N_J := D^{-1}
due to theorem 3.7, the linear iteration method
x_{m+1} = D^{-1}(D − A)x_m + D^{-1}b for m = 0, 1, 2, ...
herewith is consistent w.r.t. matrix A; the new iterate x_{m+1} is computed solely based on the old one x_m
this method is also called total step method; its speed of convergence depends on the spectral radius ρ(M_J) only

JACOBI method (cont'd)
Theorem 3.9: If the regular matrix A with a_ii ≠ 0, i = 1, ..., n, fulfils the strong row sum criterion
Σ_{j≠i} |a_ij| < |a_ii| for i = 1, ..., n,
or the strong column sum criterion
Σ_{i≠j} |a_ij| < |a_jj| for j = 1, ..., n,
or the square sum criterion
Σ_{i=1}^{n} Σ_{j≠i} (a_ij / a_ii)² < 1,
then the JACOBI method converges for arbitrary start vector x_0 and for arbitrary right-hand side b towards A^{-1}b.

JACOBI method (cont'd)
remark: matrices fulfilling the strong row sum criterion are called strictly diagonally dominant

JACOBI method (cont'd)
anyhow, many matrices do not fulfil any of the three criteria stated in theorem 3.9 (e.g., an FD discretisation of the POISSON equation)
here, the JACOBI method converges if the regular matrix A is diagonally dominant, i.e.
Σ_{j≠i} |a_ij| ≤ |a_ii| for all i = 1, ..., n,
and if some k ∈ {1, ..., n} exists with
Σ_{j≠k} |a_kj| < |a_kk|

JACOBI method (cont'd)
example: the simple problem
Ax = b with A = [ 0.7 −0.4 ; −0.2 0.5 ], x = (x_1, x_2)^T, b = (0.3, 0.3)^T (3.1.2)
has the solution A^{-1}b = (1, 1)^T; with the strong row sum criterion fulfilled, convergence of the JACOBI method is proved

JACOBI method (cont'd)
eigenvalues of the iteration matrix M_J = D^{-1}(D − A) are λ_{1,2} = ±√(8/35); thus ρ(M_J) ≈ 0.4781 follows

JACOBI method (cont'd)
using start vector x_0 = (21, 19)^T leads to the following iteration

JACOBI method
m    x_m,1           x_m,2           ε_m := ‖x_m − A^{-1}b‖
0    2.100000e+01    1.900000e+01    2.000000e+01
15   9.996275e-01    1.000261e+00    3.725165e-04
30   1.000000e+00    1.000000e+00    4.856900e-09
45   1.000000e+00    1.000000e+00    9.037215e-14
48   1.000000e+00    1.000000e+00    8.437695e-15
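The behaviour shown in the table is easy to reproduce. A minimal NumPy sketch of the JACOBI method, run on example (3.1.2) (my own illustration; function name is hypothetical):

    import numpy as np

    def jacobi(A, b, x0, m_max=48):
        # total step method: x_{m+1} = D^{-1}((D - A) x_m + b)
        D = np.diag(np.diag(A))
        x = x0.astype(float)
        for _ in range(m_max):
            x = np.linalg.solve(D, (D - A) @ x + b)
        return x

    A = np.array([[0.7, -0.4], [-0.2, 0.5]])
    b = np.array([0.3, 0.3])
    print(jacobi(A, b, np.array([21.0, 19.0])))  # -> approx. [1. 1.]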

GAUSS-SEIDEL method
we again consider a linear system Ax = b with regular matrix A and a regular diagonal matrix D = diag(a_11, ..., a_nn)
we further define the strict lower triangular matrix L = (l_ij)_{i,j = 1, ..., n} with
l_ij = −a_ij for i > j, 0 otherwise
and the strict upper triangular matrix R = (r_ij)_{i,j = 1, ..., n} with
r_ij = −a_ij for i < j, 0 otherwise
and hereby define the equivalent linear system
(D − L)x = Rx + b (3.1.3)

GAUSS-SEIDEL method (cont'd)
hence, linear system (3.1.3) can be rewritten as
x = (D − L)^{-1}Rx + (D − L)^{-1}b with M_GS := (D − L)^{-1}R and N_GS := (D − L)^{-1}
thus M_GS = (D − L)^{-1}(D − L − A) = I − N_GS A applies and the linear iteration method
x_{m+1} = (D − L)^{-1}Rx_m + (D − L)^{-1}b for m = 0, 1, 2, ... (3.1.4)
is consistent w.r.t. matrix A

GAUSS-SEIDEL method (cont'd)
to obtain a component-based notation we consider the i-th row of (3.1.4) in the form of (3.1.3): let x_{m+1,j} for j = 1, ..., i−1 be known; then x_{m+1,i} can be computed via
x_{m+1,i} = (b_i − Σ_{j<i} a_ij x_{m+1,j} − Σ_{j>i} a_ij x_{m,j}) / a_ii (3.1.5)
for i = 1, ..., n and m = 0, 1, 2, ...
hence, for the computation of the i-th component (within iteration m+1) both old components of x_m and the first i−1 new components of x_{m+1} are used
this method is called single step method; due to the better approximation of A via (D − L), a smaller spectral radius and faster convergence are to be expected
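Formula (3.1.5) translates almost literally into code. A minimal sketch (my own illustration; function name is hypothetical):

    import numpy as np

    def gauss_seidel(A, b, x0, m_max=25):
        # single step method (3.1.5): the first i-1 components of the
        # new iterate are reused immediately
        n = len(b)
        x = x0.astype(float)
        for _ in range(m_max):
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
        return x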

GAUSS-SEIDEL method (cont'd)
Theorem 3.10: Let the regular matrix A with a_ii ≠ 0 for i = 1, ..., n be given. If the recursively defined numbers p_1, ..., p_n with
p_i := (Σ_{j=1}^{i−1} |a_ij| p_j + Σ_{j=i+1}^{n} |a_ij|) / |a_ii| for i = 1, ..., n
fulfil the condition
p := max_{i = 1, ..., n} p_i < 1,
then the GAUSS-SEIDEL method converges for arbitrary start vector x_0 and for any arbitrary right-hand side b towards A^{-1}b.
Corollary 3.11: Let the regular matrix A be strictly diagonally dominant; then the GAUSS-SEIDEL method converges for arbitrary start vector x_0 and for any arbitrary right-hand side b towards A^{-1}b.

GAUSS-SEIDEL method (cont'd)
example: matrix A from the linear system (3.1.2) is strictly diagonally dominant, which ensures convergence of the GAUSS-SEIDEL method
the corresponding iteration matrix M_GS = (D − L)^{-1}R has eigenvalues λ_1 = 0 and λ_2 = 8/35; thus
ρ(M_GS) = ρ(M_J)² ≈ 0.22857
applies and we can expect twice the speed of convergence compared to the JACOBI method

GAUSS-SEIDEL method (cont'd)
for start vector x_0 = (21, 19)^T and right-hand side b = (0.3, 0.3)^T we get, according to our expectations, the following iteration

GAUSS-SEIDEL method
m    x_m,1           x_m,2           ε_m := ‖x_m − A^{-1}b‖
0    2.100000e+01    1.900000e+01    2.000000e+01
5    9.688054e-01    9.875222e-01    3.119462e-02
10   9.999805e-01    9.999922e-01    1.946209e-05
15   1.000000e+00    1.000000e+00    1.214225e-08
20   1.000000e+00    1.000000e+00    7.575385e-12
25   1.000000e+00    1.000000e+00    4.551914e-15

Relaxation methods

relaxation methods
rewriting the linear iteration method x_{m+1} = B^{-1}(B − A)x_m + B^{-1}b as follows
x_{m+1} = x_m + B^{-1}(b − Ax_m), (3.1.6)
x_{m+1} can be interpreted as a correction of x_m by the vector c_m := B^{-1}(b − Ax_m)
when confining further considerations to total step methods, the objective of relaxation is to improve the speed of convergence of method (3.1.6) via weighting the correction vector c_m; hence, we modify (3.1.6) to
x_{m+1} = x_m + ωB^{-1}(b − Ax_m) with ω > 0

relaxation methods (cont'd)
based on x_m we are searching for the optimal x_{m+1} in the direction of c_m, i.e., such that the spectral radius of the iteration matrix becomes minimal
(figure: x_{m+1} = x_m + ωc_m along the correction direction c_m)

relaxation methods (cont'd)
with
x_{m+1} = x_m + ωB^{-1}(b − Ax_m) = (I − ωB^{-1}A)x_m + ωB^{-1}b (3.1.7)
with M(ω) := I − ωB^{-1}A and N(ω) := ωB^{-1}
the determination of ω, such that ρ(M(ω)) becomes minimal, reads
ω = arg min_ω ρ(M(ω))
the weighting factor ω is called relaxation parameter; method (3.1.7) is denoted under-relaxation method for ω < 1 and over-relaxation method for ω > 1

JACOBI relaxation method
according to (3.1.6) we write the JACOBI method as
x_{m+1} = x_m + D^{-1}(b − Ax_m) for m = 0, 1, 2, ...
hence, the JACOBI relaxation method looks as follows:
x_{m+1} = x_m + ωD^{-1}(b − Ax_m) = (I − ωD^{-1}A)x_m + ωD^{-1}b for m = 0, 1, 2, ...
with M_J(ω) := I − ωD^{-1}A and N_J(ω) := ωD^{-1}

JACOBI relaxation method (cont'd)
in component-based notation we get
x_{m+1,i} = x_{m,i} + (ω/a_ii)(b_i − Σ_{j=1}^{n} a_ij x_{m,j})
and finally
x_{m+1,i} = (1 − ω)x_{m,i} + (ω/a_ii)(b_i − Σ_{j≠i} a_ij x_{m,j})
for i = 1, ..., n and m = 0, 1, 2, ...
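A minimal NumPy sketch of the weighted JACOBI update (my own illustration; function name is hypothetical):

    import numpy as np

    def jacobi_relaxed(A, b, x0, omega, m_max=50):
        # JACOBI relaxation: x_{m+1} = x_m + omega * D^{-1} (b - A x_m)
        d = np.diag(A)
        x = x0.astype(float)
        for _ in range(m_max):
            x = x + omega * (b - A @ x) / d
        return x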

JACOBI relaxation method (cont'd)
Theorem 3.12: Let the iteration matrix M_J of the JACOBI method have only real eigenvalues λ_1 ≥ ... ≥ λ_n with respective linearly independent eigenvectors u_1, ..., u_n, and let ρ(M_J) < 1. Then the iteration matrix M_J(ω) of the JACOBI relaxation method has the eigenvalues
μ_i = 1 − ω + ωλ_i for i = 1, ..., n,
and
ω_opt = arg min_ω ρ(M_J(ω)) = 2 / (2 − λ_1 − λ_n)
applies.

JACOBI relaxation method (cont'd)
consider the relaxation functions f_ω: R → R with f_ω(λ) = 1 − ω(1 − λ) for different values of ω
(figure: f_{1/2}, f_1, f_{3/2} over λ, with marked values f_ω(λ_1) and f_ω(λ_n))

JACOBI relaxation method (cont'd)
the optimal relaxation parameter can be determined via the condition
μ_n(ω) = f_ω(λ_n) = −f_ω(λ_1) = −μ_1(ω)
thus, from 1 − ω(1 − λ_n) = −1 + ω(1 − λ_1) follows ω_opt = 2 / (2 − λ_1 − λ_n); under the assumption ρ(M_J) < 1 we get ω_opt > 0
linear systems for which M_J has a symmetric distribution of eigenvalues around the origin yield, due to theorem 3.12, an optimal relaxation parameter ω_opt = 1; hence, the speed of convergence cannot be accelerated

GAUSS-SEIDEL relaxation method
we consider the component-based notation of the GAUSS-SEIDEL method (3.1.5) with weighted correction-vector component r_{m,i}:
x_{m+1,i} = x_{m,i} + ω r_{m,i} with r_{m,i} = (b_i − Σ_{j<i} a_ij x_{m+1,j} − Σ_{j≥i} a_ij x_{m,j}) / a_ii
for i = 1, ..., n and m = 0, 1, 2, ...
thus, we get
(I − ωD^{-1}L)x_{m+1} = [(1 − ω)I + ωD^{-1}R]x_m + ωD^{-1}b

GAUSS-SEIDEL relaxation method (cont'd)
with
D^{-1}(D − ωL)x_{m+1} = D^{-1}[(1 − ω)D + ωR]x_m + ωD^{-1}b
the GAUSS-SEIDEL relaxation method in the notation
x_{m+1} = (D − ωL)^{-1}[(1 − ω)D + ωR]x_m + ω(D − ωL)^{-1}b
follows, with M_GS(ω) := (D − ωL)^{-1}[(1 − ω)D + ωR] and N_GS(ω) := ω(D − ωL)^{-1}
Theorem 3.13: Let A with a_ii ≠ 0 for i = 1, ..., n; then
ρ(M_GS(ω)) ≥ |ω − 1|
applies for all ω ∈ R. (Proof is lengthy!)

GAUSS-SEIDEL relaxation method (cont'd)
from theorem 3.13 it follows that ω ≤ 0 always leads to a divergent method; hence the initial requirement ω > 0 finds its motivation here. Furthermore, the above theorem also implies the requirement ω < 2.
Corollary 3.14: The GAUSS-SEIDEL relaxation method converges at most for a relaxation parameter ω ∈ (0, 2).
Theorem 3.15: Let A be hermitian and positive definite; then the GAUSS-SEIDEL relaxation method converges iff ω ∈ (0, 2).
Both proofs are lengthy.

GAUSS-SEIDEL relaxation method (cont'd)
note: it can be shown that for ρ := ρ(M_J) < 1 the spectral radius of the iteration matrix M_GS(ω) becomes minimal for
ω_opt = 2 / (1 + √(1 − ρ²))
which yields
ρ(M_GS(ω_opt)) = ω_opt − 1

GAUSS-SEIDEL relaxation method (cont'd)
example: for the linear system (3.1.2) with A = [ 0.7 −0.4 ; −0.2 0.5 ] and b = (0.3, 0.3)^T we know that the eigenvalues of M_J are λ_{1,2} = ±√(8/35) and thus ρ(M_J) ≈ 0.4781 < 1 applies
hence, the optimal relaxation parameter is ω_opt ≈ 1.0648

GAUSS-SEIDEL relaxation method (cont'd)
which yields ρ(M_GS(ω_opt)) = ω_opt − 1 ≈ 0.0648
[for comparison: ρ(M_J) ≈ 0.4781, ρ(M_GS) = ρ(M_J)² ≈ 0.22857]
using start vector x_0 = (21, 19)^T leads to the following iteration

GAUSS-SEIDEL relaxation method
m    x_m,1           x_m,2           ε_m := ‖x_m − A^{-1}b‖
0    2.100000e+01    1.900000e+01    2.000000e+01
5    9.987226e-01    9.997003e-01    1.277401e-03
10   1.000000e+00    1.000000e+00    2.942099e-09
15   1.000000e+00    1.000000e+00    4.884918e-15
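A minimal sketch of the relaxed component form, including the optimal parameter for example (3.1.2) (my own illustration; ρ(M_J) is taken from the eigenvalue computation above, function name is hypothetical):

    import numpy as np

    def sor(A, b, x0, omega, m_max=15):
        # GAUSS-SEIDEL relaxation: x_{m+1,i} = x_{m,i} + omega * r_{m,i}
        n = len(b)
        x = x0.astype(float)
        for _ in range(m_max):
            for i in range(n):
                r = (b[i] - A[i, :i] @ x[:i] - A[i, i:] @ x[i:]) / A[i, i]
                x[i] += omega * r
        return x

    rho_j = np.sqrt(8 / 35)                      # rho(M_J) of example (3.1.2)
    omega_opt = 2 / (1 + np.sqrt(1 - rho_j**2))  # approx. 1.0648
    A = np.array([[0.7, -0.4], [-0.2, 0.5]])
    b = np.array([0.3, 0.3])
    print(sor(A, b, np.array([21.0, 19.0]), omega_opt))  # -> approx. [1. 1.]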

RICHARDSON method
based upon the basic algorithm x_{m+1} = (I − A)x_m + b with a weighting of the correction vector r_m = b − Ax_m by a number ω > 0, we get
x_{m+1} = (I − ωA)x_m + ωb (3.1.8)
with M_R(ω) := I − ωA and N_R(ω) := ωI
or in component-based notation
x_{m+1,i} = x_{m,i} + ω(b_i − Σ_{j=1}^{n} a_ij x_{m,j}) for i = 1, ..., n

RICHARDSON method (cont'd)
Theorem 3.16: Let A ∈ R^{n×n} with σ(A) ⊂ R⁺ and λ_max := max_{λ∈σ(A)} λ, λ_min := min_{λ∈σ(A)} λ; then the RICHARDSON method (3.1.8) converges iff
0 < ω < 2/λ_max
applies.
note: it can be shown that for σ(A) ⊂ R⁺ with λ_max, λ_min according to above, the spectral radius of the iteration matrix M_R(ω) becomes minimal for
ω_opt = 2 / (λ_max + λ_min) (3.1.9)
which yields
ρ(M_R(ω_opt)) = (λ_max − λ_min) / (λ_max + λ_min) (3.1.10)

RICHARDSON method (cont'd)
given are the functions g_max, g_min: R → R with g_max(ω) = |1 − ωλ_max| and g_min(ω) = |1 − ωλ_min|; hence, under consideration of
ω_opt = arg min_ω ρ(M_R(ω)) = arg min_ω max{g_max(ω), g_min(ω)}
we can determine from the figure the condition
ω_opt λ_max − 1 = 1 − ω_opt λ_min
and, thus, ω_opt = 2 / (λ_max + λ_min)
(figure: g_max and g_min over ω, intersecting at ω_opt)

RICHARDSON method (cont'd)
example: for the linear system (3.1.2) with A = [ 0.7 −0.4 ; −0.2 0.5 ], b = (0.3, 0.3)^T and
0 = det(A − λI) = (0.7 − λ)(0.5 − λ) − 0.08 = (λ − 0.6)² − 0.09
we obtain eigenvalues λ_{1,2} = 0.6 ± 0.3 and, thus, a spectrum σ(A) = {0.3, 0.9}
theorem 3.16 ensures convergence of the RICHARDSON method for all ω with 0 < ω < 2/0.9 ≈ 2.2
hence, with (3.1.9) we get ω_opt = 2/(0.9 + 0.3) = 5/3 ≈ 1.6667

RICHARDSON method (cont'd)
finally, with (3.1.10) we get ρ(M_R(ω_opt)) = (0.9 − 0.3)/(0.9 + 0.3) = 0.5
using start vector x_0 = (21, 19)^T leads to the following iteration

RICHARDSON method
m    x_m,1           x_m,2           ε_m := ‖x_m − A^{-1}b‖
0    2.100000e+01    1.900000e+01    2.000000e+01
15   9.989827e-01    1.000203e+00    1.017253e-03
30   1.000000e+00    1.000000e+00    1.862645e-08
45   1.000000e+00    1.000000e+00    9.473533e-13
52   1.000000e+00    1.000000e+00    4.662937e-15
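A minimal sketch of (3.1.8) with the optimal parameter of this example (my own illustration; function name is hypothetical):

    import numpy as np

    def richardson(A, b, x0, omega, m_max=52):
        # RICHARDSON method (3.1.8): x_{m+1} = x_m + omega * (b - A x_m)
        x = x0.astype(float)
        for _ in range(m_max):
            x = x + omega * (b - A @ x)
        return x

    A = np.array([[0.7, -0.4], [-0.2, 0.5]])
    b = np.array([0.3, 0.3])
    print(richardson(A, b, np.array([21.0, 19.0]), omega=2/1.2))  # -> approx. [1. 1.]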
