Iterative Methods. Splitting Methods

Direct Methods
Solving Ax = b using direct methods:
- Gaussian elimination (using LU decomposition)
- Variants of LU, including Crout and Doolittle
- Other decomposition methods, including QR
Advantages:
- The algorithm terminates in a fixed number of steps.
- It can take advantage of sparsity (to some extent).

Direct Methods
Possible disadvantages?
- We have to store the matrix, or at least know what its entries are.
- There is no obvious way to get a solution that is merely close enough to the exact solution for whatever purpose we have in mind.
- There is no way to reduce the operation count unless we have a special sparse structure.

Iterative methods - scalar case
Can we apply what we know about root-finding to the problem of finding the solution to a linear system of equations? Newton's method, applied to a scalar linear equation, converges in one step. With residual r = b - ax, take f(x) = b - ax, so f'(x) = -a, and
Newton: x_1 = x_0 + (b - a x_0)/a = b/a.

Iterative methods - scalar case
We can formulate a fixed point problem with the function g(x) defined as
g(x) = (1/m)(b - ax) + x = (1 - a/m) x + b/m,
where we choose m so that |g'(x)| < 1 (choose m = 2a, for example). For instance, with f(x) = 5 - 3x and m = 6, this gives g(x) = (1/2) x + 5/6.
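The scalar example on this slide is easy to check numerically; a minimal sketch (the starting guess and iteration count are arbitrary choices, not from the slides):

```python
# Scalar fixed-point iteration for f(x) = 5 - 3x (a = 3, b = 5),
# using m = 2a = 6, so g(x) = x/2 + 5/6 and |g'(x)| = 1/2 < 1.
a, b, m = 3.0, 5.0, 6.0

def g(x):
    return (b - a * x) / m + x  # equivalently (1 - a/m) x + b/m

x = 0.0  # arbitrary starting guess
for _ in range(50):
    x = g(x)
print(x)  # converges to the root b/a = 5/3
```

Since |g'(x)| = 1/2, the error is halved at every step, so 50 iterations reduce it to roundoff level.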

Iterative methods - matrix case
We can apply Newton's method to the function F(x) = b - Ax, where x, b ∈ R^n and A ∈ R^{n×n}. We get
x_{k+1} = x_k - [F'(x_k)]^{-1} F(x_k) = x_k + A^{-1}(b - A x_k).
This also gives us the answer in exactly one step!

Iterative methods - matrix case
Applying the fixed point idea to F(x) = b - Ax, we get
g(x) = x + M^{-1}(b - Ax),
where now M ∈ R^{n×n}. The fixed point iteration then looks like
x_{k+1} = x_k + M^{-1}(b - A x_k).
The choice of M will affect the convergence of the scheme.

Comparison
Newton step: x_{k+1} = x_k + A^{-1}(b - A x_k)
Fixed point step: x_{k+1} = x_k + M^{-1}(b - A x_k)
The Newton step is no different from using a direct method, since we have to invert A at each step. The fixed point approach is promising, if we can understand how different choices of M affect convergence of the scheme.

A simple iteration scheme
For the scalar case, we have
g(x) = (1/m)(b - ax) + x = (1 - a/m) x + b/m,
which will converge to a root of f(x) = b - ax if
|g'(x)| = |1 - a/m| < 1,
and the closer m is to a, the faster the convergence. In fact, if m = a, we get convergence in one step.

A simple iteration scheme
In the matrix case, we have
g(x) = x + M^{-1}(b - Ax) = (I - M^{-1}A) x + M^{-1} b.
We also want M to be as close to A as is feasible, while still being easily invertible. If M = A, we converge in one step. Do we also need a condition analogous to |g'(x)| < 1?

General splitting method
In a general splitting method, we write A = P - Q. This leads to the iterative scheme
P x_{k+1} = Q x_k + b, or x_{k+1} = P^{-1} Q x_k + P^{-1} b.
The matrix P^{-1} Q is sometimes called the iteration matrix.

General splitting method
For our method, we have P = M and Q = M - A. Then
x_{k+1} = M^{-1}(M - A) x_k + M^{-1} b = (I - M^{-1} A) x_k + M^{-1} b.
The iteration matrix is P^{-1} Q = I - M^{-1} A.

Classic splitting methods
Some classic schemes, each of the form x_{k+1} = (I - M^{-1} A) x_k + M^{-1} b. Note that for each scheme, M approximates A:
- M = D : the Jacobi iteration
- M = L : the Gauss-Seidel iteration
- M = (ω^{-1} - 1) D + L : the Successive Over-Relaxation (SOR) method
where D is the diagonal of A and L is the lower triangular portion of A (including the diagonal). The matrix M for each of these choices is easy to invert!
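As a sketch, the three choices of M can be built directly from A. The 3×3 matrix below is a made-up test case and omega = 1.2 is just a sample parameter, not values from the slides:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

D = np.diag(np.diag(A))   # diagonal of A
L = np.tril(A)            # lower triangular portion of A, including the diagonal
omega = 1.2               # SOR parameter, 0 < omega < 2

M_jacobi = D
M_gauss_seidel = L
M_sor = (1.0 / omega - 1.0) * D + L

# Each M is diagonal or triangular, so solving M z = r is cheap.
# Sanity check: since L contains D, the SOR matrix equals D/omega plus the
# strictly lower triangle of A.
print(np.allclose(M_sor, D / omega + np.tril(A, k=-1)))  # True
```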

Simple Iteration Algorithm
x_k = (I - M^{-1} A) x_{k-1} + M^{-1} b
Given an initial guess x_0, compute r_0 = b - A x_0, and solve M z_0 = r_0.
For k = 1, 2, ...
    Set x_k = x_{k-1} + z_{k-1}.
    Compute r_k = b - A x_k.    (matrix-vector multiply)
    Solve M z_k = r_k.          (M should be easy to solve with!)
    If ||z_k|| < tol, stop.
end
Note that z_{k-1} is the increment x_k - x_{k-1}.
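The algorithm on this slide can be sketched in a few lines. The function name and the Jacobi test case below are illustrative, not from the slides; a dense solve stands in for the cheap diagonal/triangular solve one would use in practice:

```python
import numpy as np

def splitting_solve(A, b, M, x0, tol=1e-10, max_iter=500):
    """Iterate x_k = x_{k-1} + z_{k-1}, where M z_k = r_k = b - A x_k."""
    x = x0.copy()
    z = np.linalg.solve(M, b - A @ x)      # z_0 from the initial residual
    for _ in range(max_iter):
        x = x + z                          # x_k = x_{k-1} + z_{k-1}
        r = b - A @ x                      # matrix-vector multiply
        z = np.linalg.solve(M, r)          # M is diagonal/triangular in practice
        if np.linalg.norm(z) < tol:
            break
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x = splitting_solve(A, b, np.diag(np.diag(A)), np.zeros(3))  # Jacobi: M = D
print(np.allclose(A @ x, b))  # True
```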

Convergence?
Let A be an n × n matrix. The spectral radius of a matrix is defined as
ρ(A) = max { |λ| : λ is an eigenvalue of A }.
The following are equivalent:
- ρ(A) < 1
- A^k → 0 as k → ∞
- A^k x → 0 as k → ∞ for any vector x.
If one of the above conditions is true, they are all true.

Convergence
Let e_k = A^{-1} b - x_k. Then
e_k = (I - M^{-1} A) e_{k-1} = ... = (I - M^{-1} A)^k e_0    (show this!)
so that
||e_k|| ≤ ||I - M^{-1} A||^k ||e_0||.
The iteration will converge if and only if
ρ(I - M^{-1} A) < 1.
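The convergence condition is easy to test numerically; a sketch using the Jacobi splitting on a sample diagonally dominant matrix (the matrix is a made-up example):

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
M = np.diag(np.diag(A))                  # Jacobi splitting: M = D
G = np.eye(3) - np.linalg.solve(M, A)    # iteration matrix I - M^{-1} A
rho = max(abs(np.linalg.eigvals(G)))     # spectral radius
print(rho < 1.0)  # True, so the Jacobi iteration converges for this A
```

For this matrix G = I - A/4, whose spectral radius works out to sqrt(2)/4 ≈ 0.354.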

Convergence
Depending on which norm we use, however, we might not see linear convergence. All we can say is that to get ||e_k|| ≤ ε, we require that k satisfy
k ≥ log(ε) / log(||I - M^{-1} A||).
This only says that we need at least this many iterations; in practice the convergence can be quite slow.
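As a quick worked instance of this bound, with an assumed norm ||I - M^{-1} A|| = 1/2 and target ε = 10^{-8} (both values chosen for illustration):

```python
import math

eps = 1e-8      # target error reduction
norm_G = 0.5    # assumed value of ||I - M^{-1} A||, must be < 1
k_min = math.ceil(math.log(eps) / math.log(norm_G))  # both logs are negative
print(k_min)  # 27: at least this many iterations are needed
```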

Convergence of splitting methods
Some relevant matrix properties for A ∈ R^{n×n}:
- Symmetric: A^T = A.
- Positive definite: x^T A x > 0 for any nonzero x ∈ R^n.
- Symmetric positive definite (SPD): A is symmetric and all of its eigenvalues are positive.
  Reminder: if A is symmetric, then the eigenvalues of A are positive if and only if A is positive definite.
- Strictly diagonally dominant: Σ_{j≠i} |a_ij| < |a_ii| for each row i.

Convergence of splitting methods
Suppose A ∈ R^{n×n} is a non-singular matrix. Then:
- If A is strictly diagonally dominant, then the Jacobi method converges for any starting vector x_0.
- If A is symmetric positive definite, then the Gauss-Seidel method converges for any starting vector x_0.
- If A is symmetric positive definite and 0 < ω < 2, then the SOR method converges for any starting vector x_0.
Note: there are many matrices for which the above conditions do not hold, and yet the iteration using one of the three methods converges.

Examples of convergence
[Figure: convergence histories of Jacobi, Gauss-Seidel, and SOR (with optimal ω) for the model tridiagonal [-1, 2, -1] matrix.]

Examples of convergence
[Figure: convergence histories of SOR, Gauss-Seidel, and Jacobi for a diagonally dominant system with a poorly chosen SOR parameter.]

Examples of convergence
[Figure: pathological convergence of Gauss-Seidel for a highly non-normal matrix; the errors initially increase before decaying.]