
Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR

Math 639d. Due Date: Feb. 7 (updated: February 5, 2018)

In the first part of this week's reading, we will prove Theorem 2 of the previous class. We will use the Jordan Decomposition Theorem. (Those of you who took my 640 class last semester saw an alternative proof using Schur's Theorem.) To this end, we make the following definition:

Definition 1. A Jordan Block is a square matrix of the form
\[
J_{ij} = \begin{cases} \lambda & \text{if } i = j,\\ 1 & \text{if } j = i+1,\\ 0 & \text{otherwise.} \end{cases} \tag{4.1}
\]

The following theorem can be found in any reasonably good text on linear algebra. For more details, see also http://mathworld.wolfram.com/jordancanonicalform.html.

Theorem 1 (Jordan Canonical Form Theorem). Any square (complex) matrix $A$ is similar to a block diagonal matrix with Jordan Blocks on the block diagonal, i.e., $A = NJN^{-1}$ with $N$ an $n\times n$ complex valued non-singular matrix and $J$ block diagonal with Jordan Blocks on the diagonal.

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal:
\[
J = \begin{pmatrix} J_1 & 0 & 0\\ 0 & J_2 & 0\\ 0 & 0 & J_3 \end{pmatrix}.
\]
Here the $J_i$ are Jordan Blocks of dimension $k_i\times k_i$ with $\lambda_i$ on the diagonal. The remaining zero blocks get their sizes from the diagonal blocks, e.g., the $1,3$ block has dimension $k_1\times k_3$.

Remark 1. The above theorem states that given a square complex matrix $A$, we can write $J = N^{-1}AN$ where $J$ is a block diagonal matrix with Jordan Blocks on the diagonal. The similarity matrix $N$ is, in general, complex even when $A$ has all real entries. Since similarity transformations preserve eigenvalues and the eigenvalues of an upper triangular matrix (a matrix with zeroes below the diagonal) are the values on the diagonal, the values of $\lambda$ appearing in the Jordan Blocks are the eigenvalues of the matrix $A$. As the eigenvalues of matrices with real entries are often complex, the similarity matrices even for matrices with real entries end up being complex as well.
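As a quick illustration (not part of the original notes; the helper `jordan_block` and all parameter values are our own), the following numpy sketch builds a block diagonal $J$ from three Jordan Blocks, conjugates it by a random nonsingular $N$, and confirms that $\rho(A)$ for $A = NJN^{-1}$ equals the largest $|\lambda_l|$:

```python
import numpy as np

def jordan_block(lam, k):
    """k x k Jordan Block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(k, dtype=complex) + np.diag(np.ones(k - 1), k=1)

# Block diagonal J with Jordan Blocks of sizes k_1, k_2, k_3 = 2, 3, 1.
blocks = [jordan_block(0.9, 2), jordan_block(0.5 + 0.2j, 3), jordan_block(-0.3, 1)]
n = sum(B.shape[0] for B in blocks)
J = np.zeros((n, n), dtype=complex)
start = 0
for B in blocks:
    k = B.shape[0]
    J[start:start + k, start:start + k] = B
    start += k

rng = np.random.default_rng(0)
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # generically nonsingular
A = N @ J @ np.linalg.inv(N)

# Similarity preserves eigenvalues, so rho(A) = max |lambda_l| = 0.9, up to roundoff
# (eigenvalues of defective matrices are numerically sensitive).
print(max(abs(np.linalg.eigvals(A))))
```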

We shall use Proposition 1 of Class Notes 2, which we restate here as a reminder.

Proposition 1. Let $B$ be an $m\times m$ matrix with possibly complex entries. Then
\[
\|B\| = \max_{j=1,\dots,m} \sum_{k=1}^{m} |B_{jk}|.
\]

Proof of Theorem 2 of Class Notes 3. We start by applying the Jordan Canonical Form Theorem and conclude that $J = N^{-1}AN$ with $J$ being a block diagonal matrix with $K$ Jordan Blocks on the diagonal. The $l$-th diagonal block of $J$ has the following form:
\[
J^l_{ij} = \begin{cases} \lambda_l & \text{if } i = j,\\ 1 & \text{if } j = i+1,\\ 0 & \text{otherwise.} \end{cases}
\]
As $\{\lambda_l\}$, $l = 1,\dots,K$, are the eigenvalues of $A$,
\[
\rho(A) = \max_{l=1,\dots,K} |\lambda_l|.
\]
Let $\epsilon > 0$ be given and set $M$ to be the diagonal matrix with entries $M_{ii} = \epsilon^i$. Further, set $\|x\|_* = \|M^{-1}N^{-1}x\|$. That this is a norm follows from Problem 2 of Class 2. Now
\[
\|A\|_* = \sup_{x\in\mathbb{C}^n,\,x\neq 0} \frac{\|Ax\|_*}{\|x\|_*}
= \sup_{x\in\mathbb{C}^n,\,x\neq 0} \frac{\|M^{-1}N^{-1}Ax\|}{\|M^{-1}N^{-1}x\|}
= \sup_{y\in\mathbb{C}^n,\,y\neq 0} \frac{\|M^{-1}N^{-1}ANMy\|}{\|y\|}
= \|M^{-1}JM\|.
\]
The second to last equality above followed from substituting $y = M^{-1}N^{-1}x$ and noting that as $x$ goes over all nonzero vectors, so does $y$. A direct computation gives
\[
(M^{-1}JM)_{ij} = \begin{cases} \lambda_l & \text{if } i = j,\\ 0 \text{ or } \epsilon & \text{if } j = i+1,\\ 0 & \text{otherwise} \end{cases}
\]
(the superdiagonal entry is $\epsilon$ inside a Jordan Block and $0$ between blocks). Applying the proposition gives
\[
\|A\|_* = \|M^{-1}JM\| \le \max_{l=1,\dots,K} |\lambda_l| + \epsilon = \rho(A) + \epsilon.
\]
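The scaling step is easy to check numerically. In this sketch (our own illustration, reusing the row-sum norm of Proposition 1), the choice $M_{ii} = \epsilon^i$ turns the superdiagonal ones of a Jordan Block into $\epsilon$'s, so the row-sum norm of $M^{-1}JM$ comes out as $|\lambda| + \epsilon$:

```python
import numpy as np

def scaled_row_sum_norm(J, eps):
    """Row-sum norm (Proposition 1) of M^{-1} J M with M = diag(eps, eps^2, ...)."""
    n = J.shape[0]
    M = np.diag(eps ** np.arange(1, n + 1)).astype(complex)
    JM = np.linalg.inv(M) @ J @ M        # superdiagonal 1s become eps
    return np.abs(JM).sum(axis=1).max()

J = 0.9 * np.eye(4, dtype=complex) + np.diag(np.ones(3), k=1)  # one 4x4 Jordan Block
rho = max(abs(np.linalg.eigvals(J)))                           # = 0.9
for eps in (0.5, 0.1, 0.01):
    print(eps, scaled_row_sum_norm(J, eps), "<=", rho + eps)
```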

This completes the proof.

We can get a slightly better version of the corollary of the previous class in the case when $G$ has real entries.

Corollary 1. Let $G$ be the reduction matrix for a linear iterative method with real entries. Then the iterative method converges for any starting iterate with real values and any right hand side with real values if and only if $\rho(G) < 1$.

Proof. The proof in the case of $\rho(G) < 1$ is identical to that of the previous corollary. The argument in the case when $\rho(G) \ge 1$ needs to be changed since the eigenvector $\phi$ used in the proof of the previous corollary may be complex. Assume that $G$ is an $n\times n$ matrix and $\rho(G)\ge 1$ and use the Jordan form theorem to write $G = N^{-1}JN$. In this case, there is a Jordan Block (of dimension $k_l\times k_l$) with diagonal entry $\lambda_l$ satisfying $|\lambda_l|\ge 1$. Suppose that this block corresponds to the indices $j-k_l+1,\dots,j$. There is a vector $\phi\in\mathbb{R}^n$ with $(N\phi)_j\neq 0$. (This would be obvious if we knew that $N$ were real, but this is not the case in general.) We prove this by contradiction. Suppose $(N\phi)_j = 0$ for all $\phi\in\mathbb{R}^n$; then, by linearity, $(N(\phi_r + i\phi_k))_j = 0$ for all $\phi_r,\phi_k\in\mathbb{R}^n$ (here $i = \sqrt{-1}$). This implies that $(N\phi)_j = 0$ for all $\phi\in\mathbb{C}^n$. This cannot be the case since $N$ is nonsingular and its range is all of $\mathbb{C}^n$.

Examining the structure of $J$ (the $j$-th row of $J$ is the last row of its Jordan Block, so its only nonzero entry is $\lambda_l$ on the diagonal), it is easy to check that $(J^i N\phi)_j = \lambda_l^i (N\phi)_j$ and hence $J^i N\phi$ does not converge to zero as $i$ tends to infinity. Thus, since $N^{-1}$ is nonsingular, $G^i\phi = N^{-1}J^i N\phi$ also does not converge to zero as $i$ tends to infinity.

The Successive Over Relaxation Method. In the remainder of this class, we consider the Successive Over Relaxation Method (SOR). This is another example of a splitting method. Writing, as before, $A = D + L + U$ with $D$ diagonal, $L$ strictly lower triangular and $U$ strictly upper triangular, we split
\[
A = (\omega^{-1}D + L) + ((1-\omega^{-1})D + U),
\]
i.e.,
\[
(\omega^{-1}D + L)x = ((\omega^{-1}-1)D - U)x + b,
\]
and obtain the iterative method
\[
(\omega^{-1}D + L)x_{i+1} = ((\omega^{-1}-1)D - U)x_i + b. \tag{4.2}
\]
As in the Gauss-Seidel and Jacobi iterations, we can only apply this method to matrices with non-vanishing diagonal entries.
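Since $\omega^{-1}D + L$ is lower triangular, (4.2) can be carried out by forward substitution, updating one entry at a time (anticipating the sweep remark below). Here is a minimal numpy sketch, our own illustration rather than code from the notes, with an assumed tridiagonal test system:

```python
import numpy as np

def sor_sweep(A, b, x, omega):
    """One sweep of (4.2): solve (D/omega + L) x_new = ((1/omega - 1) D - U) x_old + b
    by forward substitution, overwriting x in place."""
    n = len(b)
    for i in range(n):
        # A[i] @ x already uses updated entries x[0:i] and old entries x[i+1:].
        residual_i = b[i] - A[i] @ x + A[i, i] * x[i]   # b_i - sum_{j != i} a_ij x_j
        x[i] = (1.0 - omega) * x[i] + omega * residual_i / A[i, i]
    return x

# Example: a symmetric positive definite tridiagonal system (assumed data).
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)
for _ in range(500):
    sor_sweep(A, b, x, omega=1.5)
print(np.linalg.norm(A @ x - b))   # tiny residual: the iteration has converged
```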

In the above method, $\omega$ is an iteration parameter which we shall have to choose. When $\omega = 1$, the method reduces to Gauss-Seidel. Like Gauss-Seidel, this method can be implemented as a sweep, as in the sketch above.

We first develop a necessary condition on $\omega$ required for convergence of the SOR method. A simple computation shows that the reduction matrix for the SOR method is
\[
G_{SOR} = (\omega^{-1}D + L)^{-1}((\omega^{-1}-1)D - U).
\]
From the corollary of the previous class, a necessary condition for SOR to converge for all starting iterates and right hand sides is that $\rho(G_{SOR}) < 1$. The eigenvalues of $G_{SOR}$ are the roots of the characteristic polynomial
\[
P(\lambda) = \det(G_{SOR} - \lambda I) = (-1)^n\lambda^n + C_{n-1}\lambda^{n-1} + \cdots + C_0.
\]
Note that $C_0 = P(0) = \det(G_{SOR})$ and that
\[
P(\lambda) = (-1)^n \prod_{i=1}^{n} (\lambda - \lambda_i)
\]
where $\lambda_1,\lambda_2,\dots,\lambda_n$ are the roots of $P$. Expanding the above expression shows that
\[
C_0 = \prod_{i=1}^{n} \lambda_i = \det(G_{SOR}).
\]
Note that if $|C_0| \ge 1$, there has to be at least one root whose absolute value is at least one, i.e., $\rho(G_{SOR}) \ge 1$. Consequently, a necessary condition for $\rho(G_{SOR}) < 1$ and the convergence of the SOR iteration is that $|C_0| = |\det(G_{SOR})| < 1$.

Now, the determinant of an upper or lower triangular matrix is the product of the entries on the diagonal. Using this and other simple properties of the determinant gives
\[
\det(G_{SOR}) = \frac{\det((\omega^{-1}-1)D - U)}{\det(\omega^{-1}D + L)} = \frac{\prod_{i=1}^{n}(\omega^{-1}-1)D_{i,i}}{\prod_{i=1}^{n}\omega^{-1}D_{i,i}} = (1-\omega)^n.
\]
For this product to be less than one in absolute value, it is necessary that $|1-\omega| < 1$, i.e., $0 < \omega < 2$. We restate this as a proposition.

Proposition 2. A necessary condition for the SOR method to converge for any starting vector and right hand side is that $0 < \omega < 2$.

We shall provide a convergence theorem for the SOR method. Note that the reduction matrix $G_{SOR}$ is generally not symmetric even when $A$ is symmetric. Accordingly, even if $A$ has real entries, any analysis of $G_{SOR}$ will have to involve complex numbers since, in general, $G_{SOR}$ will have complex eigenvalues. We shall therefore provide an analysis for complex Hermitian matrices $A$.
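Before turning to the Hermitian theory, Proposition 2 is easy to see in action. Scanning $\rho(G_{SOR})$ over $\omega$ for a small test matrix (a sketch of our own, with assumed data) shows $\rho(G_{SOR}) < 1$ only inside $(0,2)$; at $\omega = 2$ the determinant bound $|\det(G_{SOR})| = |1-\omega|^n$ already forces $\rho(G_{SOR}) \ge 1$:

```python
import numpy as np

def sor_reduction_matrix(A, omega):
    """G_SOR = (D/omega + L)^{-1} ((1/omega - 1) D - U)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return np.linalg.solve(D / omega + L, (1.0 / omega - 1.0) * D - U)

A = 2.0 * np.eye(20) - np.eye(20, k=1) - np.eye(20, k=-1)   # same test matrix as above
for omega in (0.5, 1.0, 1.5, 1.9, 2.0, 2.5):
    rho = max(abs(np.linalg.eigvals(sor_reduction_matrix(A, omega))))
    print(f"omega = {omega:3.1f}   rho(G_SOR) = {rho:.4f}")
# rho < 1 only for omega in (0, 2), in line with Proposition 2.
```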

Definition 2. The conjugate transpose $N^*$ of a general $n\times m$ matrix $N$ with complex entries is the $m\times n$ matrix with entries $(N^*)_{i,j} = \overline{N_{j,i}}$. Here the bar denotes complex conjugation. An $n\times n$ matrix $A$ with complex entries is called Hermitian (or conjugate symmetric) if $A^* = A$.

We shall use the Hermitian inner product $(\cdot,\cdot)$ on $\mathbb{C}^n\times\mathbb{C}^n$ defined by
\[
(x,y) = \sum_{i=1}^{n} x_i \bar{y}_i.
\]
We shall discuss more general Hermitian inner products in later classes. We note that $(x,y) = \overline{(y,x)}$, $(\alpha x, y) = \alpha(x,y)$, and $(x,\alpha y) = \bar{\alpha}(x,y)$ for all $x,y\in\mathbb{C}^n$ and $\alpha\in\mathbb{C}$. In addition, it is easy to see that when $N$ is an $n\times m$ matrix, $N^*$ is the unique matrix satisfying (check it by applying it to the standard basis vectors!)
\[
(Nx, y) = (x, N^*y)\qquad\text{for all } x\in\mathbb{C}^m,\ y\in\mathbb{C}^n.
\]

We also note the following properties of a Hermitian $n\times n$ matrix $A$:
- $(Ax,x)$ is real since $(Ax,x) = (x,Ax) = \overline{(Ax,x)}$.
- If $A$ is positive definite (a complex $n\times n$ matrix $A$ is positive definite if $(Ax,x) > 0$ for all nonzero $x\in\mathbb{C}^n$), then $A_{i,i} > 0$ since $A_{i,i} = (Ae_i,e_i) > 0$, where $e_i$ denotes the $i$-th standard basis vector for $\mathbb{C}^n$.

We finish this class by stating a convergence theorem for SOR. Its proof will be given in the next class.

Theorem 2. Let $A$ be an $n\times n$ Hermitian positive definite matrix and $\omega$ be in $(0,2)$. Then the SOR method for iteratively solving $Ax = b$ converges for any starting vector and right hand side.
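Theorem 2 can be sanity-checked numerically before we prove it. The sketch below (our own; the matrix is random, assumed data) builds a Hermitian positive definite $A = B^*B + I$, confirms that $(Ax,x)$ is real and positive as noted above, and verifies $\rho(G_{SOR}) < 1$ across a sample of $\omega\in(0,2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = B.conj().T @ B + np.eye(8)        # Hermitian positive definite by construction

# (Ax, x) = sum_i (Ax)_i conj(x_i) is real and positive for Hermitian positive definite A.
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
q = (A @ x) @ x.conj()
assert abs(q.imag) < 1e-8 * abs(q) and q.real > 0

D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)
for omega in np.linspace(0.1, 1.9, 10):
    G = np.linalg.solve(D / omega + L, (1.0 / omega - 1.0) * D - U)
    assert max(abs(np.linalg.eigvals(G))) < 1.0   # Theorem 2: SOR converges on (0, 2)
print("rho(G_SOR) < 1 for all sampled omega in (0, 2)")
```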