Computing Eigenvalues and/or Eigenvectors; Part 1, Generalities and Symmetric Matrices


1 Computing Eigenvalues and/or Eigenvectors; Part 1, Generalities and Symmetric Matrices
Tom Lyche, Centre of Mathematics for Applications, Department of Informatics, University of Oslo
November 8, 2009

2 Today
Given a matrix $A \in \mathbb{C}^{n,n}$:
- Finding the eigenvalues using the characteristic polynomial?
- Perturbation theory
- Reduction to Hessenberg form
- Sylvester's inertia theorem
- Find one or more selected eigenvalues of a symmetric, tridiagonal matrix
- Find one or more selected eigenvectors (next time)
- Find all eigenvalues and eigenvectors (next time)

3 Eigenvalues and Characteristic Polynomial
The eigenvalues of $A \in \mathbb{C}^{n,n}$ are the $n$ roots of the characteristic equation $\pi_A(\lambda) := \det(A - \lambda I) = 0$.
$\pi_A(\lambda)$ is of exact degree $n$.
Except for some special matrices the eigenvalues must be found numerically.
Recall the notation $\sigma(A)$ for the set of eigenvalues of $A$.

4 Characteristic Polynomial
Possible method: compute the characteristic polynomial $\pi_A(\lambda)$ and apply a numerical method like Newton's method to find one or more of its roots. This is not suitable as an all-purpose method. Reason: a small change in one of the coefficients of $\pi_A(\lambda)$ can lead to a large change in the roots of the polynomial.
Example: $\pi_A(\lambda) = \lambda^{16}$ and $q(\lambda) = \lambda^{16} - 10^{-16}$.
The roots of $\pi_A$ are all equal to zero. The roots of $q$ are $\lambda_j = 10^{-1} e^{2\pi i j/16}$, $j = 1, \ldots, 16$, so the roots of $q$ all have absolute value $0.1$.
Computed roots can be very inaccurate. It is better to work directly with the matrix.
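
As a quick numerical illustration (a sketch added here, not part of the original slides), one can ask Matlab for the roots of the perturbed polynomial and observe that a coefficient perturbation of size $10^{-16}$ moves all sixteen roots out to magnitude 0.1:

% Roots of q(lambda) = lambda^16 - 1e-16.
p = [1, zeros(1,15), -1e-16];  % coefficients of q in Matlab's convention
abs(roots(p))                  % all sixteen magnitudes are approximately 0.1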

5 Gerschgorin's circle theorem
Where are the eigenvalues?
Theorem. Suppose $A \in \mathbb{C}^{n,n}$. Define for $i, j = 1, 2, \ldots, n$
$$R_i = \{z \in \mathbb{C} : |z - a_{ii}| \le r_i\}, \quad r_i := \sum_{j=1,\, j \ne i}^{n} |a_{ij}|,$$
$$C_j = \{z \in \mathbb{C} : |z - a_{jj}| \le c_j\}, \quad c_j := \sum_{i=1,\, i \ne j}^{n} |a_{ij}|.$$
Then any eigenvalue of $A$ lies in $\mathcal{R} \cap \mathcal{C}$, where $\mathcal{R} = R_1 \cup R_2 \cup \cdots \cup R_n$ and $\mathcal{C} = C_1 \cup C_2 \cup \cdots \cup C_n$.
If $A^H = A$ then $C_i = R_i$, and since the eigenvalues are real they lie in the intervals $[a_{ii} - r_i, a_{ii} + r_i]$.

6 Examples
Locate the eigenvalues $\lambda$ of the symmetric matrix
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$$
Here $\lambda_1 = 3$ and $\lambda_2 = 1$. We find $R_1 = R_2 = [2 - 1, 2 + 1] = [1, 3] = \mathcal{R}$, so $\lambda \in [1, 3]$; not bad.
Next, $T = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m,m}$: here $R_1 = R_m = [1, 3]$ and $R_i = [0, 4]$ for $i = 2, 3, \ldots, m-1$, so $\mathcal{R} = [0, 4]$.
The exact eigenvalues are $\lambda_j = 4 \sin^2\!\big(\tfrac{j\pi}{2(m+1)}\big) \in (0, 4)$, $j = 1, 2, \ldots, m$.
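
The following small Matlab sketch (an illustration added here, with the size m chosen arbitrarily) computes the Gerschgorin row intervals for $T$ and checks that the eigenvalues lie inside them:

m = 10;
T = diag(2*ones(m,1)) + diag(-ones(m-1,1),1) + diag(-ones(m-1,1),-1);
r = sum(abs(T),2) - abs(diag(T));   % row radii r_i
intervals = [diag(T)-r, diag(T)+r]; % R_i = [a_ii - r_i, a_ii + r_i]
lam = eig(T);                       % all eigenvalues lie in [0,4], in fact in (0,4)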

7 Proof of Gerschgorin
Suppose $(\lambda, x)$ is an eigenpair for $A$. We claim that $\lambda \in R_i$, where $i$ is such that $|x_i| = \|x\|_\infty$. Indeed, $Ax = \lambda x$ implies that $\sum_j a_{ij} x_j = \lambda x_i$, or $(\lambda - a_{ii}) x_i = \sum_{j \ne i} a_{ij} x_j$. Dividing by $x_i$ and taking absolute values we find
$$|\lambda - a_{ii}| = \Big|\sum_{j \ne i} a_{ij} x_j / x_i\Big| \le \sum_{j \ne i} |a_{ij}|\, |x_j| / |x_i| \le \sum_{j \ne i} |a_{ij}| = r_i.$$
Thus $\lambda \in R_i$. Since $\lambda$ is also an eigenvalue of $A^T$, it must be in one of the row disks of $A^T$. But these are the column disks $C_j$ of $A$. Hence $\lambda \in C_j$ for some $j$.

8 Disjoint circles
Sometimes some of the Gerschgorin disks are disjoint from the others, and we have:
Corollary. If $p$ of the Gerschgorin row disks are disjoint from the others, the union of these $p$ disks contains precisely $p$ eigenvalues. The same result holds for the column disks.
Example:
$$A = \begin{bmatrix} 1 & \epsilon_1 & \epsilon_2 \\ \epsilon_3 & 2 & \epsilon_4 \\ \epsilon_5 & \epsilon_6 & 3 \end{bmatrix}, \quad |\epsilon_j| \le \epsilon \text{ for all } j.$$
Each row disk has radius at most $2\epsilon$, so for $\epsilon$ small enough the three disks are disjoint and $|\lambda_j - j| \le 2\epsilon$ for $j = 1, 2, 3$.

9 Perturbation of eigenvalues; Example 1
$A_0 := 0$. Then $\sigma(A_0 + E) = \sigma(E)$, and any $\lambda \in \sigma(E)$ satisfies $|\lambda| \le \|E\|$.
Thus any eigenvalue of $A_0$ is perturbed by at most $\|E\|$ (why?).

10 Perturbation of eigenvalues; Example 2
$A_1 := \mathrm{diag}(\mathrm{ones}(n-1,1), 1)$, $E := \epsilon\, e_n e_1^T$:
$$A_1 = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix}, \quad E = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ \epsilon & \cdots & 0 \end{bmatrix}.$$
The characteristic polynomial of the perturbed matrix is $\det(A_1 + E - \lambda I) = (-1)^n (\lambda^n - \epsilon)$ (show this), so the eigenvalues of $A_1$ are perturbed by the amount $|\lambda| = \epsilon^{1/n} = \|E\|^{1/n}$.
Thus, for $n = 16$, a perturbation of say $\epsilon = 10^{-16}$ gives a change in eigenvalue of $0.1$.
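
One can verify this numerically; the sketch below (assuming, as above, ones on the superdiagonal of $A_1$ and $\epsilon$ in the bottom-left corner of $E$) shows that a perturbation of size 1e-16 moves the eigenvalues from 0 to magnitude roughly 0.1:

n = 16;
A1 = diag(ones(n-1,1),1);     % all eigenvalues of A1 are zero
E = zeros(n); E(n,1) = 1e-16; % norm(E) = 1e-16
abs(eig(A1+E))                % all magnitudes roughly 0.1 = norm(E)^(1/n)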

11 The factor 1/n
Theorem (Elsner's Theorem). Suppose $A, E \in \mathbb{C}^{n,n}$. To every $\mu \in \sigma(A + E)$ there is a $\lambda \in \sigma(A)$ such that
$$|\mu - \lambda| \le \big(\|A\|_2 + \|A + E\|_2\big)^{1 - 1/n}\, \|E\|_2^{1/n}. \qquad (1)$$

12 Proof
Suppose $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n$. We use:
- $\det(A) = \lambda_1 \cdots \lambda_n$;
- $U^H U = I$ implies $|\det(U)| = 1$;
- $|\det(A)| \le \prod_{j=1}^{n} \|a_j\|_2$, Hadamard's inequality, where $a_j$ is the $j$th column of $A$.
Details of the proof will be given on the blackboard. If you did not come to the lecture then you should study the notes.

13 Can we improve on $\|E\|_2^{1/n}$?
Recall perturbation theory for linear systems: if $Ax = b$ and $Ay = b + e$, then
$$\frac{\|y - x\|_p}{\|x\|_p} \le K_p(A)\, \frac{\|e\|_p}{\|b\|_p}, \quad \text{where } K_p(A) := \|A\|_p \|A^{-1}\|_p.$$
The relative error in $y$ as an approximation to $x$ can possibly be $K_p(A)$ times as large as the relative error $\|e\|_p / \|b\|_p$ in the right hand side $b$. But the perturbation term $\|e\|_p$ is only raised to the first power.

14 The Eigenvector Matrix
Theorem. Suppose $A \in \mathbb{C}^{n,n}$ has linearly independent eigenvectors $\{x_1, \ldots, x_n\}$ and let $X = [x_1, \ldots, x_n]$ be the eigenvector matrix. If $(\mu, x)$ is an eigenpair for $A + E$, then we can find an eigenvalue $\lambda$ of $A$ such that
$$|\lambda - \mu| \le K_p(X)\, \|E\|_p, \quad 1 \le p \le \infty. \qquad (2)$$
If $A$ is symmetric then
$$|\lambda - \mu| \le \|E\|_2. \qquad (3)$$
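
As an illustration of the bound (2) with $p = 2$ (a sketch with an arbitrarily chosen test matrix, not an example from the slides):

A = [1 100; 0 2];            % nonnormal: eigenvector matrix is ill conditioned
[X,~] = eig(A);
E = 1e-8*randn(2);
mu = eig(A+E); lam = [1; 2]; % exact eigenvalues of A
d = [min(abs(mu(1)-lam)), min(abs(mu(2)-lam))];
max(d) <= cond(X)*norm(E)    % the bound (2) holds; it can be pessimistic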

15 Two observations
The eigenvalue problem for symmetric matrices is well conditioned.
It is difficult, or sometimes impossible, to compute accurate eigenvalues and eigenvectors of a matrix whose eigenvectors are linearly dependent or nearly linearly dependent.

16 Reduction to Upper Hessenberg using orthonormal similarity transformations
Before attempting to find eigenvalues and eigenvectors of a matrix (exceptions are made for certain sparse matrices), it is often advantageous to reduce it by similarity transformations to a simpler form. An orthonormal similarity transformation takes a full matrix $A$ to upper Hessenberg form $B = Q^T A Q$, e.g. for $n = 6$:
$$A = \begin{bmatrix} x&x&x&x&x&x \\ x&x&x&x&x&x \\ x&x&x&x&x&x \\ x&x&x&x&x&x \\ x&x&x&x&x&x \\ x&x&x&x&x&x \end{bmatrix} \longrightarrow B = Q^T A Q = \begin{bmatrix} x&x&x&x&x&x \\ x&x&x&x&x&x \\ 0&x&x&x&x&x \\ 0&0&x&x&x&x \\ 0&0&0&x&x&x \\ 0&0&0&0&x&x \end{bmatrix}.$$
Orthonormal similarity transformations are particularly important since they are insensitive to noise in the elements of the matrix. We use Householder transformations.

17 The Householder reduction
The first step zeros out the first column below the subdiagonal, the second step the second column, and so on; e.g. for $n = 4$:
$$A_1 = \begin{bmatrix} b_{11} & x & x & x \\ x & x & x & x \\ x & x & x & x \\ x & x & x & x \end{bmatrix} \;\longrightarrow\; H_1 A_1 H_1 = \begin{bmatrix} b_{11} & b_{12} & x & x \\ b_{21} & b_{22} & x & x \\ 0 & x & x & x \\ 0 & x & x & x \end{bmatrix} \;\longrightarrow\; H_2 A_2 H_2 = \begin{bmatrix} b_{11} & b_{12} & b_{13} & b_{14} \\ b_{21} & b_{22} & b_{23} & b_{24} \\ 0 & b_{32} & b_{33} & b_{34} \\ 0 & 0 & b_{43} & b_{44} \end{bmatrix}.$$
In general, partition
$$A_k = \begin{bmatrix} B_k & C_k \\ D_k & E_k \end{bmatrix}, \qquad H_1 = \begin{bmatrix} 1 & 0 \\ 0 & V_1 \end{bmatrix}, \quad H_k = \begin{bmatrix} I_k & 0 \\ 0 & V_k \end{bmatrix},$$
where $B_k \in \mathbb{R}^{k,k}$ is upper Hessenberg and $D_k = [0, 0, \ldots, 0, d_k] \in \mathbb{R}^{n-k,k}$. Here $V_k = I - v_k v_k^T \in \mathbb{R}^{n-k,n-k}$ is a Householder transformation such that $V_k d_k = \alpha_k e_1$, where $\alpha_k^2 = d_k^T d_k$.

18 Reduction
$$A_{k+1} = H_k A_k H_k = \begin{bmatrix} I_k & 0 \\ 0 & V_k \end{bmatrix} \begin{bmatrix} B_k & C_k \\ D_k & E_k \end{bmatrix} \begin{bmatrix} I_k & 0 \\ 0 & V_k \end{bmatrix} = \begin{bmatrix} B_k & C_k V_k \\ V_k D_k & V_k E_k V_k \end{bmatrix}.$$
Since $V_k = I - v_k v_k^T$, we have $V_k C = C - v_k (v_k^T C)$ and $C V_k = C - (C v_k) v_k^T$.
Premultiplication by $V_k$: with $C := A_k(k{+}1{:}n,\, k{+}1{:}n)$ we update $A_k(k{+}1{:}n,\, k{+}1{:}n) = C - v_k (v_k^T C)$.
Postmultiplication by $V_k$: with $C := A_k(1{:}n,\, k{+}1{:}n)$ we update $A_{k+1}(1{:}n,\, k{+}1{:}n) = C - (C v_k) v_k^T$.

19 Algorithm

function [L,B] = hesshousegen(A)
n=length(A); L=zeros(n,n); B=A;
for k=1:n-2
    [v,B(k+1,k)]=housegen(B(k+1:n,k));
    L(k+1:n,k)=v;
    B(k+2:n,k)=zeros(n-k-1,1);
    C=B(k+1:n,k+1:n);
    B(k+1:n,k+1:n)=C-v*(v'*C);
    C=B(1:n,k+1:n);
    B(1:n,k+1:n)=C-(C*v)*v';
end

Here [u,a]=housegen(x) computes a = α and the vector u so that (I − uu^T)x = αe_1. The vector v_k is stored below the diagonal in column k of L.
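
The helper housegen is not listed on the slide; a minimal version consistent with the call above could look as follows (a sketch, to be saved as housegen.m; the sign choice avoids cancellation):

function [u,a] = housegen(x)
% Compute u with norm(u) = sqrt(2) and a scalar a such that
% (I - u*u')*x = a*e_1, i.e. I - u*u' is a Householder transformation.
a = norm(x); u = x;
if a == 0, u(1) = sqrt(2); return; end
if u(1) == 0
    r = 1;
else
    r = u(1)/abs(u(1));
end
u = r*u/a;            % now norm(u) = 1 and u(1) >= 0
u(1) = u(1) + 1;
u = u/sqrt(u(1));     % now norm(u)^2 = 2
a = -r*a;             % sign chosen so that no cancellation occurs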

20 Complexity: 5 Gaussian eliminations
Suppose $A \in \mathbb{R}^{m,n}$, $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$. The computations $A - u(u^T A)$ and $A - (Av)v^T$ both cost $O(4mn)$ flops.
- $C - v_k (v_k^T C)$ with $C \in \mathbb{R}^{n-k,n-k}$: about $4(n-k)^2$ flops.
- $C - (C v_k) v_k^T$ with $C \in \mathbb{R}^{n,n-k}$: about $4n(n-k)$ flops.
$$\int_0^n \big(4(n-k)^2 + 4n(n-k)\big)\, dk = 10n^3/3 = 5 \cdot (2n^3/3),$$
i.e. about five times the cost of a Gaussian elimination.

21 The symmetric case
If $A_1 = A$ is symmetric, the matrix $A_{n-1}$ will also be symmetric, since $A_k^T = A_k$ implies
$$A_{k+1}^T = (H_k A_k H_k)^T = H_k A_k^T H_k = A_{k+1}.$$
Since $A_{n-1}$ is upper Hessenberg and symmetric, it must be tridiagonal. Thus the algorithm above reduces $A$ to symmetric tridiagonal form if $A$ is symmetric.

22 Symmetric tridiagonal
Let $A^T = A \in \mathbb{R}^{n,n}$ have eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Using Householder similarity transformations we can assume that $A$ is symmetric and tridiagonal,
$$A = \begin{bmatrix} d_1 & c_1 & & & \\ c_1 & d_2 & c_2 & & \\ & \ddots & \ddots & \ddots & \\ & & c_{n-2} & d_{n-1} & c_{n-1} \\ & & & c_{n-1} & d_n \end{bmatrix}. \qquad (4)$$

23 Split tridiagonal A into irreducible components
$A$ is reducible if $c_i = 0$ for at least one $i$. Example: suppose $n = 4$ and $c_2 = 0$. Then
$$A = \begin{bmatrix} d_1 & c_1 & 0 & 0 \\ c_1 & d_2 & 0 & 0 \\ 0 & 0 & d_3 & c_3 \\ 0 & 0 & c_3 & d_4 \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}.$$
The eigenvalues of $A$ are the union of the eigenvalues of $A_1$ and $A_2$. Thus if $A$ is reducible then the eigenvalue problem can be split into smaller irreducible problems. So assume that $A$ is irreducible.

24 Lemma
An irreducible, tridiagonal and symmetric matrix $A \in \mathbb{R}^{n,n}$ has $n$ real and distinct eigenvalues.
Idea of proof: Since $A$ is symmetric, the eigenvalues are real. Define for $x \in \mathbb{R}$ the polynomials $p_k(x) := \det(xI - A(1{:}k, 1{:}k))$ for $k = 1, \ldots, n$. One shows by induction on $k$ that the roots $z_1, \ldots, z_{k-1}$ of $p_{k-1}$ separate the roots $y_1, \ldots, y_k$ of $p_k$:
$$y_1 < z_1 < y_2 < z_2 < \cdots < z_{k-1} < y_k.$$
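
A quick numerical illustration of the lemma (a sketch with randomly chosen entries):

n = 8;
c = rand(n-1,1) + 1;    % strictly nonzero off-diagonals: A is irreducible
d = randn(n,1);
T = diag(d) + diag(c,1) + diag(c,-1);
sort(eig(T))            % n real and distinct values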

25 The inertia theorem
We say that two matrices $A, B \in \mathbb{C}^{n,n}$ are congruent if $A = E^H B E$ for some nonsingular matrix $E \in \mathbb{C}^{n,n}$. Let $\pi(A)$, $\zeta(A)$ and $\upsilon(A)$ denote the number of positive, zero and negative eigenvalues of $A$. If $A$ is Hermitian then $\pi(A) + \zeta(A) + \upsilon(A) = n$.
Theorem (Sylvester's Inertia Theorem). If $A, B \in \mathbb{C}^{n,n}$ are Hermitian and congruent then $\pi(A) = \pi(B)$, $\zeta(A) = \zeta(B)$ and $\upsilon(A) = \upsilon(B)$.

26 Idea of Proof
Suppose $A = E^H B E$.
We can find a factorization $D_1 = U_1^H A U_1$, where $D_1$ is a diagonal matrix (why?).
We can find a factorization $B = U_2 D_2 U_2^H$, where $D_2$ is a diagonal matrix (why?).
Then
$$D_1 = U_1^H A U_1 = U_1^H E^H B E U_1 = U_1^H E^H U_2 D_2 U_2^H E U_1 = F^H D_2 F, \qquad F = U_2^H E U_1,$$
and $F$ is nonsingular (why?). So it is enough to show the theorem for diagonal matrices (why?).

27 Symmetric LU-factorization
If $A = LDL^T$ is a symmetric LU-factorization of $A$, then $A$ and $D$ are congruent, so $\pi(A) = \pi(D)$, $\zeta(A) = \zeta(D)$ and $\upsilon(A) = \upsilon(D)$.
Example:
$$\begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -5 \end{bmatrix} \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix},$$
so $\begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix}$ has one positive and one negative eigenvalue (why?).
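
Checking the factorization and the claim in Matlab (a small verification sketch):

L = [1 0; 3 1]; D = diag([1 -5]);
L*D*L'           % reproduces [1 3; 3 4]
eig([1 3; 3 4])  % one positive and one negative eigenvalue, as D predicts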

28 Corollary
Suppose for some $x \in \mathbb{R}$ that $A - xI$ has a symmetric LU-factorization $A - xI = LDL^T$. Then the number of eigenvalues of $A$ strictly less than $x$ equals the number of negative diagonal entries in $D$. Explain why.

29 Counting eigenvalues in an interval
Suppose $A^T = A \in \mathbb{R}^{n,n}$. Using for example Gerschgorin's theorem we can find an interval $[a, b)$ containing the eigenvalues of $A$. For $x \in [a, b)$ let $\rho(x)$ be the number of negative diagonal entries in $D$ in a symmetric LU-factorization of $A - xI$. Then:
- $\rho(x)$ is the number of eigenvalues of $A$ which are strictly less than $x$;
- $\rho(a) = 0$ and $\rho(b) = n$;
- $\rho(e) - \rho(d)$ is the number of eigenvalues in $[d, e)$.

30 Approximating $\lambda_m$
Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ and suppose $1 \le m \le n$. We find $\lambda_m$ using interval bisection. Let $c = (a + b)/2$ and $k := \rho(c)$. If $k \ge m$ then $\lambda_m < c$ and $\lambda_m \in [a, c]$, while if $k < m$ then $\lambda_m \ge c$ and $\lambda_m \in [c, b]$. Continuing with the interval containing $\lambda_m$ we generate a sequence $\{[a_j, b_j]\}$ of intervals, each containing $\lambda_m$, with $b_j - a_j = 2^{-j}(b - a)$.

31 function k=count(c,d,x)
Suppose $A = \mathrm{tridiag}(c, d, c)$ is symmetric and tridiagonal with entries $d_1, \ldots, d_n$ on the diagonal and $c_1, \ldots, c_{n-1}$ on the neighboring diagonals. For given $x$ this function counts the number of eigenvalues of $A$ strictly less than $x$, as in the sketch below.
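
A possible implementation (a sketch, not the official lecture code) uses the pivots of the LDL^T factorization of $A - xI$, computed by the recurrence $u_1 = d_1 - x$, $u_i = d_i - x - c_{i-1}^2/u_{i-1}$; by the corollary above, the number of negative pivots equals the number of eigenvalues less than $x$:

function k = count(c,d,x)
% Count the eigenvalues of A = tridiag(c,d,c) strictly less than x.
n = length(d); k = 0;
u = d(1) - x;                 % first pivot of A - xI
if u < 0, k = k+1; end
for i = 2:n
    if u == 0, u = eps; end   % crude guard against a zero pivot (simplification)
    u = d(i) - x - c(i-1)^2/u;
    if u < 0, k = k+1; end
end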

32 function lambda=findeigv(c,d,m)
Suppose $A = \mathrm{tridiag}(c, d, c)$ is symmetric and tridiagonal with entries $d_1, \ldots, d_n$ on the diagonal and $c_1, \ldots, c_{n-1}$ on the neighboring diagonals. We first estimate an interval $[a, b]$ containing all eigenvalues of $A$ and then generate a sequence $\{[a_k, b_k]\}$ of intervals, each containing $\lambda_m$. We iterate until $b_k - a_k \le (b - a)\epsilon_M$, where $\epsilon_M$ is Matlab's machine epsilon eps. Typically $\epsilon_M \approx 2.2 \times 10^{-16}$.
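
A matching sketch of findeigv, using the count sketch above and Gerschgorin intervals for the initial bracket:

function lambda = findeigv(c,d,m)
% Bisection for the m-th smallest eigenvalue of A = tridiag(c,d,c).
n = length(d);
r = zeros(n,1);                       % Gerschgorin row radii
r(1) = abs(c(1)); r(n) = abs(c(n-1));
for i = 2:n-1, r(i) = abs(c(i-1)) + abs(c(i)); end
a = min(d - r); b = max(d + r);       % [a,b] contains all eigenvalues
tol = (b - a)*eps;
while b - a > tol
    mid = (a + b)/2;
    if count(c,d,mid) >= m
        b = mid;                      % lambda_m lies in [a, mid]
    else
        a = mid;                      % lambda_m lies in [mid, b]
    end
end
lambda = (a + b)/2;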

33 Example
Given $T := \mathrm{tridiag}(-1, 2, -1)$ of size $100$, estimate $l_5 \approx \lambda_5$. Using findeigv we find an approximation $l_5$; using Matlab's eig we find $\mu_5$. Which is most accurate? The exact value is $\lambda_5 = 4 \sin^2(5\pi/202)$, and
$$|\mu_5 - \lambda_5| = 8.6 \cdot 10^{-16}, \text{ not bad!} \qquad |l_5 - \lambda_5| = 3.4 \cdot 10^{-16}, \text{ better!}$$
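
To reproduce the experiment (assuming the count and findeigv sketches above are on the Matlab path):

n = 100;
c = -ones(n-1,1); d = 2*ones(n,1);
l5 = findeigv(c,d,5);
exact = 4*sin(5*pi/202)^2;
abs(l5 - exact)                 % error of the order of machine epsilon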
