AN ITERATION

In part as motivation, we consider an iteration method for solving a system of linear equations which has the form
$$x - Ax = b$$
In this, $A$ is an $n \times n$ matrix and $b \in \mathbb{R}^n$. Systems of this form arise in a number of applications. One way in which such systems are solved is as follows. First rewrite the system as
$$x = b + Ax \qquad (1)$$
Choose an initial guess $x^{(0)}$. Then define a sequence of iterates $x^{(1)}, x^{(2)}, \dots$ by
$$x^{(k+1)} = b + Ax^{(k)}, \qquad k = 0, 1, 2, \dots \qquad (2)$$
Do these iterates converge to $x$? Subtract (2) from (1), obtaining
$$x - x^{(k+1)} = Ax - Ax^{(k)} = A\left(x - x^{(k)}\right)$$

By recursion,
$$x - x^{(k+1)} = A\left(x - x^{(k)}\right), \qquad k \ge 0$$
$$x - x^{(k)} = A^k\left(x - x^{(0)}\right), \qquad k \ge 0$$
This error depends on both $A^k$ and the initial error $x - x^{(0)}$:
$$\left\|x - x^{(k)}\right\| \le \left\|A^k\right\| \left\|x - x^{(0)}\right\|$$
Generally we want a method which works for all possible initial guesses, in part because rounding error will eventually introduce some randomness into the error. Consequently, in order to determine convergence of $x^{(k)}$ to $x$, we need to know whether $A^k \to 0$ as $k \to \infty$.
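
As a quick illustration, here is a minimal sketch of iteration (2) in Python with NumPy. The particular $A$ and $b$ are illustrative choices, not taken from these notes; what matters is that the spectral radius of $A$ is below 1.

```python
import numpy as np

# Illustrative 2x2 system x - Ax = b; this A has eigenvalues 0.5 and 0.2.
A = np.array([[0.4, 0.1],
              [0.2, 0.3]])
b = np.array([1.0, 2.0])

x_true = np.linalg.solve(np.eye(2) - A, b)   # exact solution of (I - A)x = b

x = np.zeros(2)                              # initial guess x^(0)
for k in range(30):
    x = b + A @ x                            # x^(k+1) = b + A x^(k)

print(np.linalg.norm(x_true - x))            # error has decayed like ||A^k||
```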

THEOREM

Let $A$ be an $n \times n$ matrix. Then $A^k \to 0$ as $k \to \infty$ if and only if
$$r_\sigma(A) < 1$$

PROOF: There are a number of ways to prove this; in the book, I use the Jordan canonical form for $A$. To simplify things for this class presentation, we assume that the Jordan canonical form of $A$ is a diagonal matrix: there is a nonsingular matrix $P$ for which
$$P^{-1}AP = D = \operatorname{diag}[\lambda_1, \dots, \lambda_n]$$
We can rewrite this as
$$A = PDP^{-1}$$

Then
$$A^k = \left(PDP^{-1}\right)^k = \left(PDP^{-1}\right)\left(PDP^{-1}\right) \cdots \left(PDP^{-1}\right) = PD^kP^{-1}$$
Then $A^k \to 0$ if and only if $D^k \to 0$. For a diagonal matrix $D = \operatorname{diag}[\lambda_1, \dots, \lambda_n]$,
$$D^k = \operatorname{diag}\left[\lambda_1^k, \dots, \lambda_n^k\right]$$
Thus $D^k \to 0$ if and only if $\lambda_i^k \to 0$ as $k \to \infty$ for $i = 1, 2, \dots, n$. This is true if and only if
$$r_\sigma(A) \equiv \max_{1 \le i \le n} |\lambda_i| < 1$$

COROLLARY

Suppose $\|A\| < 1$. Then $A^k \to 0$ as $k \to \infty$, because
$$r_\sigma(A) \le \|A\| < 1$$
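
The theorem is easy to test numerically. A small sketch (with an arbitrary test matrix of my own choosing): compute $r_\sigma(A)$ from the eigenvalues and watch $\|A^k\|$ as $k$ grows.

```python
import numpy as np

def spectral_radius(A):
    """r_sigma(A): the maximum |lambda_i| over the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

A = np.array([[0.5, 0.6],
              [0.1, 0.4]])
print(spectral_radius(A))        # 0.7 here, so A^k -> 0 as k -> infinity
for k in (5, 20, 50):
    print(k, np.linalg.norm(np.linalg.matrix_power(A, k)))  # steadily shrinking
```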

SOLVABILITY OF A LINEAR SYSTEM

Return to the linear system
$$(I - A)x = x - Ax = b$$
Is this system uniquely solvable? Equivalently, is $I - A$ an invertible matrix? The answer to this question turns out to be of fundamental importance in the numerical analysis of linear algebra problems.

THEOREM

Let $A$ be an $n \times n$ matrix. If $r_\sigma(A) < 1$, then $(I - A)^{-1}$ exists; moreover, it can be expressed as the convergent infinite series
$$(I - A)^{-1} = I + A + A^2 + A^3 + \cdots$$
Conversely, if this series is convergent, then $r_\sigma(A) < 1$.

PROOF: Assume $r_\sigma(A) < 1$. Then $I - A$ must be nonsingular. Otherwise there would be a vector $x \ne 0$ for which
$$(I - A)x = 0, \qquad Ax = x$$
and this says $1$ is an eigenvalue of $A$, contrary to assumption. To prove the series converges, look at its partial sums
$$B_k = I + A + A^2 + A^3 + \cdots + A^k$$
for $k = 1, 2, 3, \dots$ Then
$$(I - A)B_k = (I - A)\left(I + A + \cdots + A^k\right) = I - A^{k+1}$$
$$B_k = (I - A)^{-1}\left(I - A^{k+1}\right)$$
Then the partial sums $B_k$ converge to $(I - A)^{-1}$ because $A^{k+1} \to 0$.

If instead we assume the series $I + A + A^2 + A^3 + \cdots$ is convergent, then the general term $A^k$ must converge to the zero matrix $0$ as $k \to \infty$. By the preceding theorem, that in turn implies
$$r_\sigma(A) < 1$$
This result,
$$(I - A)^{-1} = I + A + A^2 + A^3 + \cdots$$
is called the Geometric Series Theorem. Note its resemblance to the well-known geometric series
$$\frac{1}{1 - z} = 1 + z + z^2 + z^3 + \cdots, \qquad |z| < 1$$
This is not an accidental resemblance. We can do similar things with series such as that for $e^z$, leading to
$$e^A = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots$$
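
As a side illustration, here is a hedged sketch of summing that series for $e^A$ term by term (for demonstration only; production codes use more careful algorithms such as scaling and squaring).

```python
import numpy as np

def expm_series(A, nterms=40):
    """Approximate e^A by the partial sum I + A + A^2/2! + ... (illustration only)."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, nterms):
        term = term @ A / j    # builds A^j / j! incrementally
        S = S + term
    return S

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # nilpotent, so e^A = I + A exactly
print(expm_series(A))          # [[1, 1], [0, 1]]
```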

THEOREM

Let $A$ be an $n \times n$ matrix with $\|A\| < 1$. Then $(I - A)^{-1}$ exists, with
$$(I - A)^{-1} = I + A + A^2 + A^3 + \cdots$$
Moreover,
$$\left\|(I - A)^{-1}\right\| \le \frac{1}{1 - \|A\|}$$

PROOF: The only thing to be proven is the bound. Recall that
$$B_k = I + A + A^2 + A^3 + \cdots + A^k = (I - A)^{-1}\left(I - A^{k+1}\right)$$
Then
$$\|B_k\| \le \|I\| + \|A\| + \left\|A^2\right\| + \cdots + \left\|A^k\right\| \le 1 + \|A\| + \|A\|^2 + \cdots + \|A\|^k = \frac{1 - \|A\|^{k+1}}{1 - \|A\|} \le \frac{1}{1 - \|A\|}$$

Also, $B_k \to (I - A)^{-1}$:
$$\left\|(I - A)^{-1} - B_k\right\| = \left\|(I - A)^{-1}A^{k+1}\right\| \le \left\|(I - A)^{-1}\right\| \|A\|^{k+1} \to 0$$
Combining these results proves the bound
$$\left\|(I - A)^{-1}\right\| \le \frac{1}{1 - \|A\|}$$
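
Both the series representation and the norm bound are easy to check numerically. A minimal sketch, using an arbitrary $A$ whose row norm is below 1 (in NumPy, `ord=np.inf` gives the row norm):

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.2]])                  # row norm 0.5 < 1
n = A.shape[0]

target = np.linalg.inv(np.eye(n) - A)       # (I - A)^{-1}

# Partial sums B_k = I + A + ... + A^k converge to (I - A)^{-1}
B, term = np.eye(n), np.eye(n)
for k in range(1, 60):
    term = term @ A
    B = B + term
print(np.linalg.norm(target - B, ord=np.inf))   # essentially zero

# The bound ||(I - A)^{-1}|| <= 1/(1 - ||A||) in the row norm
norm_A = np.linalg.norm(A, ord=np.inf)
print(np.linalg.norm(target, ord=np.inf), "<=", 1.0 / (1.0 - norm_A))
```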

AN APPROXIMATION RESULT

Suppose we are considering a system $Ax = b$ where $A$ is some nonsingular $n \times n$ matrix. What happens to the solvability of this system if we change the matrix $A$ by a small amount to a new square matrix $B$? And what is the relation of the solution of
$$Bz = b$$
to the original solution $x$?

THEOREM

Let $A$ and $B$ be two $n \times n$ matrices, and suppose $A$ is nonsingular. Moreover, assume
$$\|A - B\| < \frac{1}{\left\|A^{-1}\right\|}$$
Then the matrix $B$ is also nonsingular, with
$$\left\|B^{-1}\right\| \le \frac{\left\|A^{-1}\right\|}{1 - \left\|A^{-1}\right\| \|A - B\|}$$
For the solutions of $Ax = b$ and $Bz = b$,
$$\|x - z\| \le \left\|B^{-1}\right\| \|A - B\| \|x\|$$
This theorem tells us that if $A$ is invertible, then all nearby matrices $B$ are also invertible.

PROOF: Write
$$B = A - (A - B) = A\left[I - A^{-1}(A - B)\right]$$
Examine the matrix $I - A^{-1}(A - B)$:
$$\left\|A^{-1}(A - B)\right\| \le \left\|A^{-1}\right\| \|A - B\| < 1$$
Then by the Geometric Series Theorem, $\left[I - A^{-1}(A - B)\right]^{-1}$ exists, and
$$\left\|\left[I - A^{-1}(A - B)\right]^{-1}\right\| \le \frac{1}{1 - \left\|A^{-1}(A - B)\right\|} \le \frac{1}{1 - \left\|A^{-1}\right\| \|A - B\|}$$

Returning to
$$B = A\left[I - A^{-1}(A - B)\right]$$
we have that $B$ is the product of invertible matrices, and therefore it is itself invertible:
$$B^{-1} = \left[I - A^{-1}(A - B)\right]^{-1} A^{-1}$$
$$\left\|B^{-1}\right\| \le \left\|\left[I - A^{-1}(A - B)\right]^{-1}\right\| \left\|A^{-1}\right\|$$
The bound for $\left\|B^{-1}\right\|$ follows from this formula. For the equations $Ax = b$ and $Bz = b$, write
$$x - z = A^{-1}b - B^{-1}b = \left(A^{-1} - B^{-1}\right)b = B^{-1}(B - A)A^{-1}b = B^{-1}(B - A)x$$
$$\|x - z\| \le \left\|B^{-1}\right\| \|B - A\| \|x\|$$
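
A sketch verifying both conclusions of the theorem on a small, hand-picked perturbation (test data of my own, row norm throughout):

```python
import numpy as np

nrm = lambda M: np.linalg.norm(M, ord=np.inf)   # row norm (max norm on vectors)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A + np.array([[0.0, 0.1],
                  [0.0, 0.0]])                  # a small perturbation of A
b = np.array([1.0, 1.0])

print(nrm(A - B), "<", 1.0 / nrm(np.linalg.inv(A)))   # hypothesis: 0.1 < 1.0

lhs = nrm(np.linalg.inv(B))
rhs = nrm(np.linalg.inv(A)) / (1.0 - nrm(np.linalg.inv(A)) * nrm(A - B))
print(lhs, "<=", rhs)                                 # bound on ||B^{-1}||

x = np.linalg.solve(A, b)
z = np.linalg.solve(B, b)
print(nrm(x - z), "<=", nrm(np.linalg.inv(B)) * nrm(A - B) * nrm(x))
```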

EXAMPLE

Let
$$B = \begin{bmatrix} c & 1 & 0 & 0 & 0 \\ 1 & c & 1 & 0 & 0 \\ 0 & 1 & c & 1 & 0 \\ 0 & 0 & 1 & c & 1 \\ 0 & 0 & 0 & 1 & c \end{bmatrix}$$
Approximate this by the matrix $A = cI$. Then
$$B - A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$
Using the row norm,
$$\|B - A\| = 2$$

Also, $A^{-1} = \frac{1}{c}I$, so
$$\left\|A^{-1}\right\| = \frac{1}{|c|}$$
The condition $\|A - B\| < 1/\left\|A^{-1}\right\|$ becomes
$$2 < |c|$$
Then $B$ is invertible if this is true; and then the bound
$$\left\|B^{-1}\right\| \le \frac{\left\|A^{-1}\right\|}{1 - \left\|A^{-1}\right\| \|A - B\|}$$
becomes
$$\left\|B^{-1}\right\| \le \frac{1/|c|}{1 - 2/|c|} = \frac{1}{|c| - 2} \qquad (3)$$

This also tells us something about the eigenvalues of $B$. Let $Bx = \lambda x$ for some $x \ne 0$. Then
$$|\lambda| \le \|B\| = |c| + 2 \qquad (4)$$
Multiply both sides of $Bx = \lambda x$ by both $B^{-1}$ and $\lambda^{-1}$ to get
$$\lambda^{-1}x = B^{-1}x$$
Therefore the eigenvalues of $B^{-1}$ are the reciprocals of those of $B$. Recall that
$$r_\sigma\left(B^{-1}\right) \le \left\|B^{-1}\right\|$$
Then (3) implies
$$\frac{1}{|\lambda|} \le \frac{1}{|c| - 2}, \qquad |\lambda| \ge |c| - 2$$
With (4),
$$|c| - 2 \le |\lambda| \le |c| + 2$$
for all eigenvalues $\lambda$ of the matrix $B$.
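
These bounds are simple to confirm numerically. A sketch for, say, $c = 4$ (a value with $|c| > 2$, chosen here purely for illustration):

```python
import numpy as np

c, n = 4.0, 5
B = c * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)    # the tridiagonal B above

eigs = np.linalg.eigvalsh(B)                            # B symmetric: real eigenvalues
print(eigs)                                             # all lie in [c - 2, c + 2]
print(abs(c) - 2 <= eigs.min() and eigs.max() <= abs(c) + 2)

# Bound (3): ||B^{-1}|| <= 1/(|c| - 2) in the row norm
print(np.linalg.norm(np.linalg.inv(B), ord=np.inf), "<=", 1.0 / (abs(c) - 2))
```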