Finding Eigenvalues

Up to this point, our main theoretical tools for finding eigenvalues without using det{A - λI} = 0 have been the trace and determinant formulas plus the facts that

- det{A} = λ_1 λ_2 ⋯ λ_n and Tr{A} = λ_1 + λ_2 + ⋯ + λ_n,
- the eigenvalues of a triangular matrix are the diagonal elements,
- similar matrices B = S^{-1}AS have the same eigenvalues,
- the eigenvalues of a real symmetric matrix are real,
- the eigenvalues of an orthogonal matrix have |λ| = 1.

The first fact can be generalized to block triangular matrices. If M is block upper triangular,

    M = [ A  B ]
        [ 0  C ]

then the eigenvalues of M are the eigenvalues of A plus the eigenvalues of C, since

    0 = det{M - λI} = det [ A - λI     B    ] = det{A - λI} det{C - λI}.
                          [   0     C - λI  ]

Here each identity matrix I has the appropriate dimensions to match its partner. A similar statement holds for block lower triangular M. Thus for block triangular matrices, the eigenvalue problem can be broken up into smaller subproblems. (A matrix M is called reducible if it can be written in block triangular form by reordering rows and columns, P M P^T, where P is a permutation matrix.)
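The block triangular fact is easy to confirm numerically. Below is a minimal sketch (my addition, not part of the notes) that assembles a block upper triangular M from two arbitrary, made-up diagonal blocks and checks that the spectrum of M is the union of the blocks' spectra.

```python
import numpy as np

# Two arbitrary diagonal blocks, chosen only for illustration.
A = np.array([[5.0, 0.6],
              [1.0, 6.0]])
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, -1.0],
              [0.5,  2.0]])   # off-diagonal block; it does not affect the spectrum

# Assemble the block upper triangular matrix M = [[A, B], [0, C]].
M = np.block([[A, B], [np.zeros((2, 2)), C]])

eig_M = np.sort_complex(np.linalg.eigvals(M))
eig_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                             np.linalg.eigvals(C)]))
print(eig_M)
print(eig_blocks)
# Spectrum of M = spectrum of A plus spectrum of C.
assert np.allclose(eig_M, eig_blocks)
```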

Gershgorin Circle Theorem

Also called the Gershgorin Disk Theorem. The theorem statement and Examples 1 and 2 are based on LeVeque's Finite Difference Methods for Ordinary and Partial Differential Equations.

Theorem. Let A ∈ C^{n×n} and let D_i be the closed disk in the complex plane centered at A_ii with radius given by the off-diagonal row sum r_i = Σ_{j≠i} |A_ij|:

    D_i = { z ∈ C : |z - A_ii| ≤ r_i } = D(A_ii, r = r_i).

Then all the eigenvalues of A lie in the union of the disks D_i for i = 1, ..., n. If some set of k overlapping disks is disjoint from all the other disks, then exactly k eigenvalues lie in the union of these k disks.

Note the following:

- If a disk D_i is disjoint from all other disks, then it contains exactly one eigenvalue of A.
- If a disk D_i overlaps other disks, then it need not contain any eigenvalues, although the union of the overlapping disks contains the appropriate number.
- If A is real, A^T has the same eigenvalues as A, so the theorem can also be applied to A^T (or equivalently, the disk radii can be defined by summing the magnitudes of the off-diagonal elements of the columns of A rather than the rows).
- (If A is irreducible, a stronger version of the theorem states that an eigenvalue cannot lie on the boundary of a disk unless it lies on the boundary of every disk.)

Proof: Ax = λx implies

    (λ - A_ii) x_i = Σ_{j≠i} A_ij x_j,

so, choosing i such that |x_i| = max_j |x_j|,

    |λ - A_ii| ≤ Σ_{j≠i} |A_ij| |x_j| / |x_i| ≤ Σ_{j≠i} |A_ij| = r_i.
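In practice it is convenient to compute the disk centers and radii with a short helper. The following sketch is my addition (not from LeVeque or the notes); the 3x3 test matrix is arbitrary and only exercises the helper.

```python
import numpy as np

def gershgorin_disks(A):
    """Return (centers, radii): center A[i, i] and radius sum_{j != i} |A[i, j]| for each row."""
    A = np.asarray(A)
    centers = np.diag(A).astype(complex)
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return centers, radii

# Arbitrary test matrix, chosen only to exercise the helper.
A = np.array([[ 4.0, 1.0, 0.5],
              [ 0.2, -3.0, 0.3],
              [ 1.0, 0.0, 1.0]])
centers, radii = gershgorin_disks(A)
for c, r in zip(centers, radii):
    print(f"disk: center {c}, radius {r}")

# Every eigenvalue must lie in at least one disk.
lam = np.linalg.eigvals(A)
for ev in lam:
    assert np.any(np.abs(ev - centers) <= radii + 1e-12), ev
print("eigenvalues:", lam)
```

Applying the same helper to A.T gives the column version of the bound mentioned above.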

Examples

Here we assume all matrices are real.

1. 3×3 example

       A = [  5    0.6   0.1 ]
           [ -1     6   -0.1 ]
           [  1     0     2  ]

   The Gershgorin theorem applied to A implies that the eigenvalues lie within the union of D(5, r = 0.7), D(6, r = 1.1), and D(2, r = 1). There is exactly one eigenvalue in D(2, r = 1) and two eigenvalues in D(5, r = 0.7) ∪ D(6, r = 1.1). All eigenvalues have real parts between 1 and 7.1 (and hence positive real parts, in particular). The eigenvalue in D(2, r = 1) must be real, since complex eigenvalues must appear in conjugate pairs. Applying the theorem to A^T gives a tighter bound on the single eigenvalue λ near 2: λ must lie within D(2, r = 0.2), so 1.8 ≤ λ ≤ 2.2. The actual eigenvalues of A are λ_1 = 1.9639 and λ_± = 5.5180 ± 0.6142i.

2. Second difference matrix

   The matrix D^(2) = tridiag[1 -2 1] is symmetric, so all its eigenvalues are real. By the Gershgorin theorem they must lie in the circle of radius 2 centered at -2: -4 ≤ λ_j ≤ 0. We know that D^(2) is nonsingular, so all the eigenvalues of D^(2) are negative. (Also, since D^(2) is irreducible and two of the circles have radii r = 1, in fact -4 < λ_j < 0, which proves that D^(2) is nonsingular.) The eigenvalues can actually be calculated:

       λ_j = 2(cos(jπh) - 1),   j = 1, ..., n,

   where h = 1/(n+1), and thus they are distributed in -4 < λ_j < 0. The Jacobi iteration matrix is B = tridiag[1/2 0 1/2] with -1 < λ_j < 1 (since B is irreducible), and Jacobi converges since ρ(B) < 1.
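The closed-form eigenvalues in Example 2 and the resulting Jacobi spectral radius are easy to check numerically. A minimal sketch, not part of the original notes; the size n = 8 is an arbitrary choice.

```python
import numpy as np

n = 8                       # arbitrary size for the check
h = 1.0 / (n + 1)

# Second difference matrix D2 = tridiag[1 -2 1] and its Jacobi iteration matrix B = tridiag[1/2 0 1/2].
D2 = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
B  = np.diag(0.5 * np.ones(n - 1), 1) + np.diag(0.5 * np.ones(n - 1), -1)

lam_D2 = np.sort(np.linalg.eigvalsh(D2))
lam_formula = np.sort(2.0 * (np.cos(np.arange(1, n + 1) * np.pi * h) - 1.0))

assert np.allclose(lam_D2, lam_formula)          # matches lambda_j = 2(cos(j*pi*h) - 1)
assert np.all((lam_D2 > -4.0) & (lam_D2 < 0.0))  # strict Gershgorin bound for irreducible D2

rho_jacobi = np.max(np.abs(np.linalg.eigvalsh(B)))
print("rho(B_J) =", rho_jacobi)                  # equals cos(pi*h) < 1, so Jacobi converges
assert rho_jacobi < 1.0
```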

3. Absolute row sums < 1

   Suppose B has absolute row sums < 1. Then

       r_i = |B_i1| + ⋯ + |B_i,i-1| + |B_i,i+1| + ⋯ + |B_in| < 1 - |B_ii|,

   so by the Gershgorin theorem every eigenvalue satisfies |λ - B_ii| ≤ r_i < 1 - |B_ii|, hence all |λ_i| < 1 and ρ(B) < 1.

4. Diagonally dominant matrix

   Suppose A is (strictly) diagonally dominant:

       r_i = |A_i1| + ⋯ + |A_i,i-1| + |A_i,i+1| + ⋯ + |A_in| < |A_ii|,   i = 1, ..., n.

   Then by the Gershgorin theorem λ_i ≠ 0 and A is invertible. For Jacobi iteration, B = I - D^{-1}A has all absolute row sums < 1, so Jacobi converges. Gauss-Seidel and SOR are also guaranteed to converge.

In Examples 5-9, see how much information you can extract about the actual eigenvalues using just the Gershgorin Circle Theorem and (block) triangularity. The actual eigenvalues are given.

5. From HW6

       A = [  0   1/2   0  ]
           [ 1/2   0   1/2 ]
           [  0   1/2   0  ]

   λ_1 = 0, λ_± = ±1/√2.

6. From HW7

       A = [ 0  1 ]
           [ 1  1 ]

   λ_± = (1 ± √5)/2.

The last examples are from the Theory of Iterative Methods notes.

7. For

       A = [  1   1/2 ]
           [ 1/2   1  ]

   the Jacobi and Gauss-Seidel iteration matrices are

       B_J = [   0   -1/2 ]        B_GS = [ 0  -1/2 ]
             [ -1/2    0  ]               [ 0   1/4 ]

   with λ_± = ±1/2, ρ_J = 1/2, and λ_1 = 0, λ_2 = 1/4, ρ_GS = 1/4.

8. Example where Jacobi converges but Gauss-Seidel diverges

       A = [ 1  2  -2 ]        B_J = I - D^{-1}A = [  0  -2   2 ]
           [ 1  1   1 ]                            [ -1   0  -1 ]
           [ 2  2   1 ]                            [ -2  -2   0 ]

   λ_1 = λ_2 = λ_3 = 0, ρ_J = 0, so Jacobi converges.

       B_GS = [ 0  -2   2 ]
              [ 0   2  -3 ]
              [ 0   0   2 ]

   λ_1 = 0, λ_2 = λ_3 = 2, ρ_GS = 2, so Gauss-Seidel diverges.

9. Example where Jacobi diverges but Gauss-Seidel converges

       A = [  1   1/2  1/2 ]        B_J = -(1/2) [ 0  1  1 ]
           [ 1/2   1   1/2 ]                     [ 1  0  1 ]
           [ 1/2  1/2   1  ]                     [ 1  1  0 ]

   λ_1 = -1, λ_2 = λ_3 = 0.5, ρ_J = 1, so Jacobi fails to converge.

       B_GS = (1/8) [ 0  -4  -4 ]
                    [ 0   2  -2 ]
                    [ 0   1   3 ]

   λ_1 = 0, λ_± = 0.3125 ± 0.1654i, ρ_GS = 0.3536, so Gauss-Seidel converges.
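The spectral radii quoted in Examples 7-9 can be verified numerically from the splitting A = D + L + U. This is a quick check I added (not from the notes); it prints ρ_J and ρ_GS for the matrices of Examples 8 and 9, and the same helper works for Example 7.

```python
import numpy as np

def iteration_matrices(A):
    """Jacobi and Gauss-Seidel iteration matrices for the splitting A = D + L + U."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    B_J  = np.eye(A.shape[0]) - np.linalg.solve(D, A)   # I - D^{-1} A
    B_GS = -np.linalg.solve(D + L, U)                    # -(D + L)^{-1} U
    return B_J, B_GS

def spectral_radius(B):
    return np.max(np.abs(np.linalg.eigvals(B)))

# Example 8: rho_J = 0 exactly (B_J is nilpotent; the computed value is only roundoff),
# while rho_GS = 2, so Gauss-Seidel diverges.
A8 = np.array([[1.0, 2.0, -2.0],
               [1.0, 1.0,  1.0],
               [2.0, 2.0,  1.0]])
# Example 9: rho_J = 1 (Jacobi fails to converge), rho_GS ~ 0.3536 (Gauss-Seidel converges).
A9 = np.array([[1.0, 0.5, 0.5],
               [0.5, 1.0, 0.5],
               [0.5, 0.5, 1.0]])

for name, A in [("Example 8", A8), ("Example 9", A9)]:
    B_J, B_GS = iteration_matrices(A)
    print(name, "rho_J =", spectral_radius(B_J), "rho_GS =", spectral_radius(B_GS))
```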