Computing Eigenvalues and/or Eigenvectors; Part 1: Generalities and Symmetric Matrices
1 Computing Eigenvalues and/or Eigenvectors; Part 1: Generalities and Symmetric Matrices. Tom Lyche, Centre of Mathematics for Applications, Department of Informatics, University of Oslo. November 9, 2008.
2 Today. Given a matrix $A \in \mathbb{C}^{n,n}$:
- Finding the eigenvalues using the characteristic polynomial?
- Perturbation theory
- Reduction to Hessenberg form
- Sylvester's inertia theorem
- Find one or more selected eigenvalues of a symmetric, tridiagonal matrix
- Find one or more selected eigenvectors (next time)
- Find all eigenvalues and eigenvectors (next time)
3 Eigenvalues and Characteristic Polynomial. The eigenvalues of $A \in \mathbb{C}^{n,n}$ are the $n$ roots of the characteristic polynomial $\pi_A(\lambda) := \det(A - \lambda I)$. $\pi_A(\lambda)$ has exact degree $n$. Except for some special matrices, the eigenvalues must be found numerically.
4 Characteristic Polynomial. Possible method: compute the characteristic polynomial $\pi_A(\lambda)$ and apply a numerical method like Newton's method to find one or more of its roots. This is not suitable as an all-purpose method. Reason: a small change in one of the coefficients of $\pi_A(\lambda)$ can lead to a large change in the roots of the polynomial. Example: $\pi_A(\lambda) = \lambda^{16}$ and $q(\lambda) = \lambda^{16} - 10^{-16}$. The roots of $\pi_A$ are all equal to zero, while the roots of $q$ are $\lambda_j = 10^{-1} e^{2\pi i j/16}$, $j = 1, \ldots, 16$, all of absolute value $0.1$. Computed roots can be very inaccurate. We need to work directly with the matrix.
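The effect is easy to reproduce in MATLAB; the following snippet is my own illustration, not part of the slides.

    % Perturb the constant coefficient of lambda^16 by 1e-16
    p = [1, zeros(1,16)];      % coefficients of pi_A(lambda) = lambda^16
    q = p; q(end) = -1e-16;    % q(lambda) = lambda^16 - 1e-16
    max(abs(roots(p)))         % 0: all roots of pi_A are zero
    max(abs(roots(q)))         % about 0.1: a huge change in the roots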
5 Gerschgorin's circle theorem. Where are the eigenvalues? Theorem. Suppose $A \in \mathbb{C}^{n,n}$. Define for $i, j = 1, 2, \ldots, n$
$R_i = \{z \in \mathbb{C} : |z - a_{ii}| \le r_i\}$, $r_i := \sum_{j \ne i} |a_{ij}|$,
$C_j = \{z \in \mathbb{C} : |z - a_{jj}| \le c_j\}$, $c_j := \sum_{i \ne j} |a_{ij}|$.
Then any eigenvalue of $A$ lies in $R \cap C$, where $R = R_1 \cup R_2 \cup \cdots \cup R_n$ and $C = C_1 \cup C_2 \cup \cdots \cup C_n$. If $A^H = A$ then the eigenvalues are real and $C_i = R_i = [a_{ii} - r_i, a_{ii} + r_i]$.
6 Examples. Locate the eigenvalues $\lambda$ of the symmetric matrix
$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$.
$R_1 = R_2 = [2 - 1, 2 + 1] = [1, 3] = R$, so $\lambda \in [1, 3]$. In fact $\lambda_1 = 3$ and $\lambda_2 = 1$, so in this case $[1, 3]$ is the smallest interval possible.
Let $T = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m,m}$ be the second derivative matrix. $R_1 = R_m = [1, 3]$ and $R_i = [0, 4]$ for $i = 2, 3, \ldots, m-1$, so $R = [0, 4]$. The exact eigenvalues are $\lambda_j = 4\sin^2\!\big(\frac{j\pi}{2(m+1)}\big)$, $j = 1, 2, \ldots, m$, and $\lambda_j \in [\delta, 4 - \delta]$, where $\delta = \big(\frac{1}{2(m+1)}\big)^2$. Gerschgorin's theorem gives a remarkably good estimate.
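As a sanity check, one can compare the Gerschgorin set $R = [0, 4]$ with the computed spectrum of $T$; this snippet is my own verification, not from the slides.

    m = 10;
    T = full(gallery('tridiag', m, -1, 2, -1));   % second derivative matrix
    lam = eig(T);
    [min(lam), max(lam)]    % both lie in [0, 4], close to the endpoints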
7 Proof of Gerschgorin. Proof. Suppose $(\lambda, x)$ is an eigenpair for $A$. We claim that $\lambda \in R_i$, where $i$ is such that $|x_i| = \|x\|_\infty$. Indeed, $Ax = \lambda x$ implies that $\sum_j a_{ij} x_j = \lambda x_i$, or $(\lambda - a_{ii}) x_i = \sum_{j \ne i} a_{ij} x_j$. Dividing by $x_i$ and taking absolute values we find
$|\lambda - a_{ii}| = \big|\sum_{j \ne i} a_{ij} x_j / x_i\big| \le \sum_{j \ne i} |a_{ij}|\,|x_j|/|x_i| \le r_i$.
Thus $\lambda \in R_i$. Since $\lambda$ is also an eigenvalue of $A^T$, it must lie in one of the row disks of $A^T$. But these are the column disks $C_j$ of $A$. Hence $\lambda \in C_j$ for some $j$.
8 Disjoint circles. Sometimes some of the Gerschgorin disks are disjoint from the others, and we have: Corollary. If $p$ of the Gerschgorin row disks are disjoint from the others, the union of these $p$ disks contains precisely $p$ eigenvalues. The same result holds for the column disks. Example:
$A = \begin{pmatrix} 1 & \epsilon_1 & \epsilon_2 \\ \epsilon_3 & 2 & \epsilon_4 \\ \epsilon_5 & \epsilon_6 & 3 \end{pmatrix}$, where $|\epsilon_j| \le \epsilon$ for all $j$. Each row disk has radius at most $2\epsilon$, so if $2\epsilon < 1/2$ the three disks are disjoint and $|\lambda_j - j| \le 2\epsilon$ for $j = 1, 2, 3$.
9 Perturbation Analysis. Recall for linear systems: if $Ax = b$ and $Ay = b + e$, then
$\frac{\|y - x\|_p}{\|x\|_p} \le K_p(A) \frac{\|e\|_p}{\|b\|_p}$, where $K_p(A) := \|A\|_p \|A^{-1}\|_p$.
This means that the relative error $\|y - x\|_p / \|x\|_p$ in $y$ as an approximation to $x$ can be as much as $K_p(A)$ times the relative error $\|e\|_p / \|b\|_p$ in the right-hand side $b$. Consider now the eigenvalue problem. We restrict the discussion to nondefective matrices.
10 Absolute errors. Theorem. Suppose $A \in \mathbb{C}^{n,n}$ has linearly independent eigenvectors $\{x_1, \ldots, x_n\}$ and let $X = [x_1, \ldots, x_n]$ be the eigenvector matrix. If $(\mu, x)$ is an eigenpair for $A + E$, then we can find an eigenvalue $\lambda$ of $A$ such that
$|\lambda - \mu| \le K_p(X)\,\|E\|_p$, $1 \le p \le \infty$. (1)
If $A$ is symmetric then
$|\lambda - \mu| \le \|E\|_2$. (2)
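A small experiment (my own, assuming the perturbation is kept symmetric) illustrating the bound (2):

    A = [2 1; 1 2];
    E = 1e-8*randn(2); E = (E + E')/2;             % small symmetric perturbation
    err = max(abs(sort(eig(A+E)) - sort(eig(A))));
    [err, norm(E)]                                 % err is bounded by norm(E,2)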
11 Two observations. It is difficult, or sometimes impossible, to compute accurate eigenvalues and eigenvectors of a matrix whose eigenvectors are linearly dependent or almost linearly dependent. The eigenvalue problem for symmetric matrices is well conditioned.
12 Upper Hessenberg. Before attempting to find eigenvalues and eigenvectors of a matrix (exceptions are made for certain sparse matrices), it is often advantageous to reduce it by similarity transformations to a simpler form. Orthogonal similarity transformations are particularly important since they are insensitive to noise in the elements of the matrix. Recall that a matrix $A \in \mathbb{R}^{n,n}$ is upper Hessenberg if $a_{i,j} = 0$ for $j = 1, 2, \ldots, i-2$, $i = 3, 4, \ldots, n$. For example, for $n = 5$ the nonzero pattern is
$\begin{pmatrix} x & x & x & x & x \\ x & x & x & x & x \\ 0 & x & x & x & x \\ 0 & 0 & x & x & x \\ 0 & 0 & 0 & x & x \end{pmatrix}$.
13 A matrix $H \in \mathbb{R}^{n,n}$ of the form $H := I - uu^T$, where $u \in \mathbb{R}^n$ and $u^T u = 2$, is called a Householder transformation.
14 Zero out entries in a vector x. Find a Householder transformation $H := I - uu^T$ such that $Hx = \alpha e_1$. Take
$u := \dfrac{x/\alpha - e_1}{\sqrt{1 - x_1/\alpha}}$ if $x \ne 0$, and $u := \sqrt{2}\,e_1$ otherwise, where $\alpha := -\|x\|_2$ if $x_1 > 0$ and $\alpha := +\|x\|_2$ otherwise; $H = \mathrm{diag}(-1, 1, \ldots, 1)$ if $\alpha = 0$. The sign of $\alpha$ is chosen so that $1 - x_1/\alpha \ge 1$, which avoids cancellation.
Assume $\alpha \ne 0$. Since $x^T x = \alpha^2$,
$u^T u = \dfrac{(x/\alpha - e_1)^T(x/\alpha - e_1)}{1 - x_1/\alpha} = \dfrac{\alpha^2/\alpha^2 - 2x_1/\alpha + 1}{1 - x_1/\alpha} = 2$,
$u^T x = \dfrac{(x/\alpha - e_1)^T x}{\sqrt{1 - x_1/\alpha}} = \dfrac{x^T x/\alpha - x_1}{\sqrt{1 - x_1/\alpha}} = \dfrac{\alpha - x_1}{\sqrt{1 - x_1/\alpha}} = \alpha\sqrt{1 - x_1/\alpha}$,
$Hx = x - (u^T x)u = x - \alpha(x/\alpha - e_1) = \alpha e_1$.
15 Computing u. Recall
$u := \dfrac{x/\alpha - e_1}{\sqrt{1 - x_1/\alpha}}$ if $x \ne 0$, $u := \sqrt{2}\,e_1$ otherwise, with $\alpha = \pm\|x\|_2$. In pseudocode:
- If $\alpha = 0$ then $u = \sqrt{2}\,e_1$; exit.
- $v = x/\alpha - e_1$
- $u = v/\sqrt{-v(1)}$
(Note that $v(1) = x_1/\alpha - 1$, so $-v(1) = 1 - x_1/\alpha$.)
16 Recall Algorithm housegen. Given $x \in \mathbb{R}^n$, the following algorithm computes $a = \alpha$ and the vector $u$ so that $(I - uu^T)x = \alpha e_1$.

    function [u,a]=housegen(x)
    % HOUSEGEN: compute u and a so that (I - u*u')*x = a*e1
    a=norm(x);
    u=x;
    if a==0
        u(1)=sqrt(2);    % x = 0: take u = sqrt(2)*e1, H = diag(-1,1,...,1)
        return;
    end
    if u(1)>0            % choose the sign of a to avoid cancellation
        a=-a;
    end
    u=u/a;               % v = x/alpha ...
    u(1)=u(1)-1;         % ... - e1
    u=u/sqrt(-u(1));     % normalize so that u'*u = 2
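A quick usage check (my own example): for $x = (3, 4)^T$ we have $\|x\|_2 = 5$ and $x_1 > 0$, so $\alpha = -5$.

    x = [3; 4];
    [u, a] = housegen(x);
    (eye(2) - u*u')*x      % returns [a; 0] = [-5; 0] up to rounding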
17 Reduction to upper Hessenberg. We define $A_1 = A$. Suppose for $k \ge 1$ that $A_k$ is upper Hessenberg in its first $k-1$ columns:
$A_k = \begin{pmatrix} B_k & C_k \\ D_k & E_k \end{pmatrix}$,
where $B_k \in \mathbb{R}^{k,k}$ is upper Hessenberg and $D_k = [0, 0, \ldots, 0, d_k] \in \mathbb{R}^{n-k,k}$. Let
$H_k = \begin{pmatrix} I & 0 \\ 0 & V_k \end{pmatrix}$,
where $V_k = I - v_k v_k^T \in \mathbb{R}^{n-k,n-k}$ is a Householder transformation such that $V_k d_k = \alpha_k e_1$, where $\alpha_k^2 = d_k^T d_k$.
18 Reduction 2.
$A_{k+1} = H_k A_k H_k = \begin{pmatrix} I_k & 0 \\ 0 & V_k \end{pmatrix} \begin{pmatrix} B_k & C_k \\ D_k & E_k \end{pmatrix} \begin{pmatrix} I_k & 0 \\ 0 & V_k \end{pmatrix} = \begin{pmatrix} B_k & C_k V_k \\ V_k D_k & V_k E_k V_k \end{pmatrix}$.
Now $V_k D_k = [V_k 0, \ldots, V_k 0, V_k d_k] = [0, \ldots, 0, \alpha_k e_1]$, and the matrix $B_k$ is not affected by the $H_k$ transformation. Therefore the matrix $A_{k+1}$ is upper Hessenberg in its first $k$ columns, and the reduction is carried one step further. The reduction stops with $A_{n-1}$, which is upper Hessenberg.
19 An algorithm using about $10n^3/3$ flops.

    function [L,B] = hesshousegen(A)
    % Reduce A to upper Hessenberg form B by Householder similarity
    % transformations; the Householder vectors are stored in L below the
    % diagonal.
    n=length(A); L=zeros(n,n); B=A;
    for k=1:n-2
        [v,B(k+1,k)]=housegen(B(k+1:n,k));  % zero column k below the subdiagonal
        L(k+1:n,k)=v;                       % save the Householder vector
        B(k+2:n,k)=zeros(n-k-1,1);
        C=B(k+1:n,k+1:n);
        B(k+1:n,k+1:n)=C-v*(v'*C);          % left multiplication by V_k
        C=B(1:n,k+1:n);
        B(1:n,k+1:n)=C-(C*v)*v';            % right multiplication by V_k
    end

This algorithm uses Householder similarity transformations to reduce a matrix $A \in \mathbb{R}^{n,n}$ to upper Hessenberg form $B$. Details of the transformations are stored under the diagonal in a matrix $L$. The entries of $L$ can be used to assemble an orthogonal matrix $Q$ such that $B = Q^T A Q$ (a sketch follows below). Algorithm housegen is used in each step of the reduction.
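The slides do not list code for assembling $Q$; the following is a minimal sketch of one way to do it, assuming $L$ stores the Householder vectors $v_k$ exactly as in hesshousegen above (the function name assembleQ is mine).

    function Q = assembleQ(L)
    % Accumulate Q = H_1*H_2*...*H_{n-2} from the stored Householder vectors
    n = size(L,1);
    Q = eye(n);
    for k = n-2:-1:1
        v = L(k+1:n,k);
        Q(k+1:n,:) = Q(k+1:n,:) - v*(v'*Q(k+1:n,:));  % apply H_k from the left
    end

With this, $B$ should equal Q'*A*Q up to rounding errors.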
20 The symmetric case. If $A_1 = A$ is symmetric, the matrix $A_{n-1}$ will also be symmetric, since $A_k^T = A_k$ implies $A_{k+1}^T = (H_k A_k H_k)^T = H_k A_k^T H_k = A_{k+1}$. Since $A_{n-1}$ is upper Hessenberg and symmetric, it must be tridiagonal. Thus the algorithm above reduces $A$ to symmetric tridiagonal form if $A$ is symmetric.
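A quick numerical confirmation (my own check) that the reduction of a symmetric matrix comes out tridiagonal:

    A = randn(5); A = A + A';       % random symmetric test matrix
    [L, B] = hesshousegen(A);
    B.*(abs(B) > 1e-12)             % only the three central diagonals remain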
21 Symmetric tridiagonal. Let $A = A^T \in \mathbb{R}^{n,n}$ have eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Using Householder similarity transformations we can assume that $A$ is symmetric and tridiagonal:
$A = \begin{pmatrix} d_1 & c_1 \\ c_1 & d_2 & c_2 \\ & \ddots & \ddots & \ddots \\ & & c_{n-2} & d_{n-1} & c_{n-1} \\ & & & c_{n-1} & d_n \end{pmatrix}$. (3)
22 Split tridiagonal A into irreducible components. Recall that $A$ is reducible if $c_i = 0$ for at least one $i$. Example: suppose $n = 4$ and $c_2 = 0$:
$A = \begin{pmatrix} d_1 & c_1 & 0 & 0 \\ c_1 & d_2 & 0 & 0 \\ 0 & 0 & d_3 & c_3 \\ 0 & 0 & c_3 & d_4 \end{pmatrix} = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}$.
The eigenvalues of $A$ are the union of the eigenvalues of $A_1$ and $A_2$. Thus if $A$ is reducible then the eigenvalue problem can be split into smaller irreducible problems. So assume that $A$ is irreducible. Theorem: An irreducible, symmetric, tridiagonal matrix has distinct eigenvalues.
23 The inertia theorem. We say that two matrices $A, B \in \mathbb{C}^{n,n}$ are congruent if $A = E^H B E$ for some nonsingular matrix $E \in \mathbb{C}^{n,n}$. Let $\pi(A)$, $\zeta(A)$ and $\upsilon(A)$ denote the number of positive, zero and negative eigenvalues of $A$. If $A$ is Hermitian then $\pi(A) + \zeta(A) + \upsilon(A) = n$. Theorem (Sylvester's Inertia Theorem). If $A, B \in \mathbb{C}^{n,n}$ are Hermitian and congruent, then $\pi(A) = \pi(B)$, $\zeta(A) = \zeta(B)$ and $\upsilon(A) = \upsilon(B)$.
24 LDLT factorization. If $A = LDL^T$ is an LDLT-factorization of $A$, then $A$ and $D$ are congruent, so $\pi(A) = \pi(D)$, $\zeta(A) = \zeta(D)$ and $\upsilon(A) = \upsilon(D)$. Example:
$\begin{pmatrix} 1 & 3 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -5 \end{pmatrix} \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}$,
so this matrix has one positive and one negative eigenvalue.
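The factorization is easy to verify directly (my own check):

    L = [1 0; 3 1]; D = diag([1 -5]);
    L*D*L'                 % reproduces [1 3; 3 4]
    eig([1 3; 3 4])        % one positive, one negative eigenvalue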
25 Corollary. Suppose for some $x \in \mathbb{R}$ that $A - xI$ has an LDLT-factorization $A - xI = LDL^T$. Then the number of eigenvalues of $A$ strictly less than $x$ equals the number of negative diagonal entries in $D$. Indeed, by the inertia theorem $\upsilon(A - xI) = \upsilon(D)$, and if $Az = \lambda z$ then $(A - xI)z = (\lambda - x)z$, so $\upsilon(A - xI)$ equals the number of eigenvalues of $A$ which are less than $x$.
26 Counting eigenvalues in an interval. Suppose $A^T = A \in \mathbb{R}^{n,n}$. Using for example Gerschgorin's theorem we can find an interval $[a, b)$ containing the eigenvalues of $A$. For $x \in [a, b)$ let $\rho(x)$ be the number of negative diagonal entries in $D$ in an LDLT-factorization of $A - xI$. Then $\rho(x)$ is the number of eigenvalues of $A$ which are strictly less than $x$. In particular $\rho(a) = 0$, $\rho(b) = n$, and $\rho(e) - \rho(d)$ is the number of eigenvalues in $[d, e)$.
27 Approximating $\lambda_m$. Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ and suppose $1 \le m \le n$. We find $\lambda_m$ using interval bisection: let $c = (a + b)/2$ and $k := \rho(c)$. If $k \ge m$ then $\lambda_m < c$ and $\lambda_m \in [a, c]$, while if $k < m$ then $\lambda_m \ge c$ and $\lambda_m \in [c, b]$. Continuing with the interval containing $\lambda_m$ we generate a sequence $\{[a_j, b_j]\}$ of intervals, each containing $\lambda_m$, with $b_j - a_j = 2^{-j}(b - a)$.
28 Fixing a possible failure. The method will fail if one of the diagonal entries in $D$ is zero or very close to zero. We replace such an entry by a suitable small number, say $\delta_k = \pm|c_k|\epsilon_M$, where the negative sign is used if $c_k < 0$, and $\epsilon_M$ is the machine epsilon, typically $\epsilon_M \approx 2.2 \times 10^{-16}$ for MATLAB. This replacement is done if $|d_k(x)| < |\delta_k|$.
29 Algorithm function k=count(c,d,x). Suppose $A = \mathrm{tridiag}(c, d, c)$ is symmetric and tridiagonal with entries $d_1, \ldots, d_n$ on the diagonal and $c_1, \ldots, c_{n-1}$ on the neighboring diagonals. For given $x$ this function counts the number of eigenvalues of $A$ strictly less than $x$. We use the replacement described above if one of the $d_k(x)$ is close to zero.
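The slide gives only the specification; the following is a minimal sketch of such a function under the stated assumptions. The pivot recursion comes from the LDLT-factorization of $A - xI$; the zero-pivot fix is a simplified version of the replacement rule on slide 28, and the course's actual code may differ.

    function k = count(c, d, x)
    % Sketch: number of eigenvalues of A = tridiag(c,d,c) strictly less than x.
    % dk runs through the diagonal of D in A - x*I = L*D*L'. Assumes n >= 2.
    n = length(d);
    k = 0;
    dk = d(1) - x;                          % first pivot d_1(x)
    for j = 1:n
        if j > 1
            dk = d(j) - x - c(j-1)^2/dk;    % pivot recursion d_j(x)
        end
        delta = abs(c(min(j,n-1)))*eps;     % threshold as on slide 28
        if abs(dk) < delta
            dk = -delta;   % replace a (nearly) zero pivot; simplified sign choice
        end
        if dk < 0
            k = k + 1;
        end
    end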
30 Algorithm function lambda=findeigv(c,d,m). Suppose $A = \mathrm{tridiag}(c, d, c)$ is symmetric and tridiagonal with entries $d_1, \ldots, d_n$ on the diagonal and $c_1, \ldots, c_{n-1}$ on the neighboring diagonals. This function computes an approximation to the $m$th smallest eigenvalue $\lambda_m$. We first estimate an interval $[a, b]$ containing all eigenvalues of $A$ and then generate a sequence $\{[a_k, b_k]\}$ of intervals, each containing $\lambda_m$. We iterate until $b_k - a_k \le (b - a)\epsilon_M$, where $\epsilon_M$ is MATLAB's machine epsilon eps, typically $\epsilon_M \approx 2.2 \times 10^{-16}$.
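Again only the specification appears on the slide; a minimal sketch under those assumptions might look as follows. It uses the count sketch above and Gerschgorin's theorem for the initial interval $[a, b]$.

    function lambda = findeigv(c, d, m)
    % Sketch: bisection for the m-th smallest eigenvalue of A = tridiag(c,d,c).
    c = c(:); d = d(:);
    r = abs([0; c]) + abs([c; 0]);   % Gerschgorin radii r_i = |c_{i-1}| + |c_i|
    a = min(d - r); b = max(d + r);  % all eigenvalues lie in [a, b]
    tol = (b - a)*eps;
    while b - a > tol
        x = (a + b)/2;
        if count(c, d, x) >= m
            b = x;                   % lambda_m <= x: keep [a, x]
        else
            a = x;                   % lambda_m >= x: keep [x, b]
        end
    end
    lambda = (a + b)/2;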
31 Example. Given $T := \mathrm{tridiag}(-1, 2, -1)$ of size 100, estimate $l_5 \approx \lambda_5$. Using findeigv we obtain an approximation $l_5$; using MATLAB's eig we obtain $\mu_5$. Which is most accurate? The exact value is $\lambda_5 = 4\sin^2(5\pi/202)$. Here $|\mu_5 - \lambda_5| = 8.6\mathrm{e}{-16}$: not bad! And $|l_5 - \lambda_5| = 3.4\mathrm{e}{-16}$: better!
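The experiment can be reproduced along the following lines with the sketches above (my own script; the exact digits depend on the implementation).

    m = 100;
    c = -ones(m-1,1); d = 2*ones(m,1);
    l5 = findeigv(c, d, 5);                       % bisection estimate
    T = full(gallery('tridiag', m, -1, 2, -1));
    lam = sort(eig(T)); mu5 = lam(5);             % MATLAB's eig
    exact = 4*sin(5*pi/202)^2;
    [abs(l5 - exact), abs(mu5 - exact)]           % compare the two errors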
More information