Computing Eigenvalues and/or Eigenvectors; Part 2, The Power Method and QR-algorithm
1 Computing Eigenvalues and/or Eigenvectors; Part 2, The Power Method and QR-algorithm
Tom Lyche, Centre of Mathematics for Applications, Department of Informatics, University of Oslo
November 13, 2009
2 Today
- The power method to find the dominant eigenvector
- The shifted power method to speed up convergence
- The inverse power method
- The Rayleigh quotient iteration
- The QR-algorithm
3 The Power Method
Find the eigenvector corresponding to the dominant (largest in absolute value) eigenvalue. With a simple modification we can also find the corresponding eigenvalue.
4 Assumptions
Let $A \in \mathbb{C}^{n,n}$ have eigenpairs $(\lambda_j, v_j)$, $j = 1,\dots,n$. Given $z_0 \in \mathbb{C}^n$ we assume that
(i) $|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|$,
(ii) $z_0 = c_1 v_1 + \cdots + c_n v_n$ with $c_1 \ne 0$,
(iii) $A$ has $n$ linearly independent eigenvectors.
The first assumption means that $A$ has a dominant eigenvalue $\lambda_1$ of algebraic multiplicity one. The second assumption says that $z_0$ has a component in the direction $v_1$. The third assumption is not necessary; it is included to simplify the analysis.
5 Powers
Given $A \in \mathbb{C}^{n,n}$ and a vector $z_0 \in \mathbb{C}^n$, assume that (i), (ii), (iii) hold. Define a sequence $\{z_k\}$ of vectors in $\mathbb{C}^n$ by
\[ z_k := A^k z_0 = A z_{k-1}, \qquad k = 1, 2, \dots . \]
Since $A^k v_j = \lambda_j^k v_j$ for $k = 0, 1, 2, \dots$ and $j = 1,\dots,n$, we obtain
\[ z_k = c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \cdots + c_n \lambda_n^k v_n
      = \lambda_1^k \Bigl( c_1 v_1 + c_2 \bigl(\tfrac{\lambda_2}{\lambda_1}\bigr)^k v_2 + \cdots + c_n \bigl(\tfrac{\lambda_n}{\lambda_1}\bigr)^k v_n \Bigr), \qquad k = 0, 1, 2, \dots . \]
Hence $z_k/\lambda_1^k \to c_1 v_1$ as $k \to \infty$.
6 The Power Method
Need to normalize the vectors $z_k$. Choose a norm on $\mathbb{C}^n$, set $x_0 = z_0/\|z_0\|$ and generate unit vectors $\{x_k\}$ for $k = 1, 2, \dots$ as follows:
\[ \text{(i)}\quad y_k = A x_{k-1}, \qquad \text{(ii)}\quad x_k = y_k / \|y_k\|. \qquad (1) \]
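A minimal MATLAB/Octave sketch of iteration (1); the function name powerbasic and the fixed iteration count K are illustrative assumptions, not part of the slides:

    function x = powerbasic(A, z0, K)
    % Basic power iteration (1): assumes A is square and z0 has a nonzero
    % component along the dominant eigenvector (assumption (ii)).
    x = z0 / norm(z0);            % x_0 = z_0 / ||z_0||
    for k = 1:K
        y = A * x;                % (i)  y_k = A x_{k-1}
        x = y / norm(y);          % (ii) x_k = y_k / ||y_k||
    end
    end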
7 Example
$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, $z_0 = [1.0, 1.0]^T$, $x_0 = [0.707, 0.707]^T$,
$x_1 = [0.39, 0.92]^T$, $x_2 \approx [0.4175, 0.9087]^T$, $x_3 \approx [0.4159, 0.9094]^T$, ...
The sequence converges to an eigenvector of $A$.
8 Convergence
The way $\{x_k\}$ converges to an eigenvector will depend on the sign of $\lambda_1$. Suppose $x_k = K v_1$ for some $K \ne 0$. Then $A x_k = \lambda_1 x_k$ and
\[ x_{k+1} = A x_k / \|A x_k\| = \frac{\lambda_1}{|\lambda_1|}\, x_k. \]
Thus $x_{k+1} = -x_k$ if $\lambda_1 < 0$.
9 The eigenvalue
Suppose we know an approximate eigenvector of $A \in \mathbb{C}^{n,n}$. How should we estimate the corresponding eigenvalue? If $(\lambda, u)$ is an exact eigenpair then $Au - \lambda u = 0$. If $u$ is an approximate eigenvector we can minimize the function $\rho : \mathbb{C} \to \mathbb{R}$ given by
\[ \rho(\mu) := \|Au - \mu u\|_2 . \]
Theorem. $\rho$ is minimized when $\mu = \nu := \dfrac{u^H A u}{u^H u}$, the Rayleigh quotient of $u$. Proof on blackboard.
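The proof is only referenced above; the following completion-of-the-square argument is a sketch of the omitted derivation, not taken from the slides:

    % Sketch: complete the square in \mu (not from the slides).
    \[
    \rho(\mu)^2 = \|Au - \mu u\|_2^2
      = \|Au\|_2^2 - \bar\mu\, u^H A u - \mu\, (u^H A u)^{*} + |\mu|^2 \|u\|_2^2
      = \|u\|_2^2\, |\mu - \nu|^2 + \|Au\|_2^2 - \frac{|u^H A u|^2}{\|u\|_2^2},
    \qquad \nu = \frac{u^H A u}{u^H u},
    \]

so $\rho(\mu)$ is minimized exactly when $\mu = \nu$.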
10 Power with Rayleigh quotient

    function [l, x, it] = powerit(a, z, K, tol)
    % Power iteration with Rayleigh quotient estimate of the eigenvalue.
    af = norm(a, 'fro');
    x = z / norm(z);
    for k = 1:K
        y = a * x;
        l = x' * y;                      % Rayleigh quotient (x has unit norm)
        if norm(y - l*x) / af < tol      % relative residual test
            it = k;
            x = y / norm(y);
            return
        end
        x = y / norm(y);
    end
    it = K + 1;                          % signals that the tolerance was not met
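An assumed example call of powerit, using the matrix $A_1$ from the next slide:

    A1 = [1 2; 3 4];                        % the 2x2 test matrix A_1
    [l, x, it] = powerit(A1, randn(2,1), 100, 1e-6);
    % l approximates the dominant eigenvalue (5 + sqrt(33))/2, x an eigenvector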
11 Example
$A_1 := \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, together with two further test matrices $A_2$ and $A_3$. Start with a random vector and tol $= 10^{-6}$. We get convergence in 7 iterations for $A_1$, 174 iterations for $A_2$, and no convergence for $A_3$. $A_3$ has two complex eigenvalues, so assumption (i) is not satisfied. The rate of convergence depends on $r = |\lambda_2/\lambda_1|$; convergence is faster for smaller $r$. We have $r \approx 0.07$ for $A_1$ and $r = 0.95$ for $A_2$.
12 The shifted power method
A variant of the power method is the shifted power method. In this method we choose a number $s$ and apply the power method to the matrix $A - sI$. The number $s$ is called a shift since it shifts an eigenvalue $\lambda$ of $A$ to the eigenvalue $\lambda - s$ of $A - sI$. Sometimes the convergence can be faster if the shift is chosen intelligently. For example, for $A_2$ with shift $s = 1.8$ we get convergence in 17 iterations instead of 174 without a shift.
13 The inverse power method
We apply the power method to the inverse matrix $(A - sI)^{-1}$, where $s$ is a shift. If $A$ has eigenvalues $\lambda_1, \dots, \lambda_n$ in no particular order, then $(A - sI)^{-1}$ has eigenvalues
\[ \mu_1(s) = (\lambda_1 - s)^{-1},\quad \mu_2(s) = (\lambda_2 - s)^{-1},\quad \dots,\quad \mu_n(s) = (\lambda_n - s)^{-1}. \]
Suppose $\lambda_1$ is a simple eigenvalue of $A$. Then $\lim_{s\to\lambda_1} |\mu_1(s)| = \infty$, while $\lim_{s\to\lambda_1} \mu_j(s) = (\lambda_j - \lambda_1)^{-1} < \infty$ for $j = 2,\dots,n$. Hence, by choosing $s$ sufficiently close to $\lambda_1$, the inverse power method will converge to that eigenvalue.
14 For the inverse power method, (1) is replaced by
\[ \text{(i)}\quad (A - sI)\, y_k = x_{k-1}, \qquad \text{(ii)}\quad x_k = y_k / \|y_k\|. \qquad (2) \]
Note that we solve the linear system rather than computing the inverse matrix. Normally the PLU-factorization of $A - sI$ is pre-computed in order to speed up the iteration.
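A minimal MATLAB/Octave sketch of iteration (2) with a pre-computed PLU-factorization, plus an added Rayleigh quotient estimate of the eigenvalue; the function name invpowerit and the fixed iteration count are illustrative assumptions:

    function [l, x] = invpowerit(A, z, s, K)
    % Inverse power iteration (2) with shift s.
    n = size(A, 1);
    [L, U, P] = lu(A - s*eye(n));    % PLU-factorization computed once
    x = z / norm(z);
    for k = 1:K
        y = U \ (L \ (P * x));       % (i)  solve (A - sI) y_k = x_{k-1}
        x = y / norm(y);             % (ii) normalize
    end
    l = x' * (A * x);                % Rayleigh quotient: eigenvalue estimate
    end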
15 Rayleigh quotient iteration
We can combine the inverse power method with the Rayleigh quotient calculation:
\[ \text{(i)}\ (A - s_{k-1} I)\, y_k = x_{k-1}, \quad \text{(ii)}\ x_k = y_k / \|y_k\|, \quad \text{(iii)}\ s_k = x_k^H A x_k, \quad \text{(iv)}\ r_k = A x_k - s_k x_k . \]
We can avoid the calculation of $A x_k$ in (iii) and (iv).
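A minimal MATLAB/Octave sketch of steps (i)-(iv); the function name rayleighit and the stopping rule are illustrative assumptions, and for simplicity this sketch computes A*x directly rather than avoiding it as mentioned above:

    function [s, x] = rayleighit(A, z, s, K, tol)
    % Rayleigh quotient iteration: the shift is updated in every step.
    n = size(A, 1);
    x = z / norm(z);
    for k = 1:K
        y = (A - s*eye(n)) \ x;      % (i)   solve the shifted linear system
        x = y / norm(y);             % (ii)  normalize
        Ax = A * x;
        s = x' * Ax;                 % (iii) new Rayleigh quotient shift
        r = Ax - s*x;                % (iv)  residual r_k = A x_k - s_k x_k
        if norm(r) < tol, return, end
    end
    end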
16 Example
Apply Rayleigh quotient iteration to $A_1 := \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and its eigenvalue $\lambda = (5 - \sqrt{33})/2 \approx -0.37$. Start with $x = [1, 1]^T$. Within a few iterations the residual norm $\|r_k\|_2$ drops to about $10^{-20}$ and the error $|s_k - \lambda|$ to about $10^{-16}$ (Table: quadratic convergence of Rayleigh quotient iteration).
17 Problem with singularity?
The linear system in (i) becomes closer and closer to singular as $s_k$ converges to the eigenvalue. Thus the system becomes more and more ill-conditioned, and we can expect large errors in the computed $y_k$. This is indeed true, but we are lucky: most of the error occurs in the direction of the eigenvector, and this error disappears when we normalize $y_k$ in (ii). Miraculously, the normalized eigenvector will be quite accurate.
18 Discussion
Since the shift changes from iteration to iteration, the computation of $y_k$ will require $O(n^3)$ flops for a full matrix. For such a matrix it might pay to reduce it to upper Hessenberg or tridiagonal form before starting the iteration. However, if we have a good approximation to an eigenpair, then only a few iterations are necessary to obtain close to machine accuracy. If Rayleigh quotient iteration converges, the convergence is quadratic and sometimes even cubic.
19 The QR-algorithm
An iterative method to compute all eigenvalues and eigenvectors of a matrix $A \in \mathbb{C}^{n,n}$. The matrix is reduced to triangular form by a sequence of unitary similarity transformations computed from the QR-factorization of $A$. Recall that for a square matrix the QR-factorization and the QR-decomposition are the same: if $A = QR$ is a QR-factorization then $Q \in \mathbb{C}^{n,n}$ is unitary, $Q^H Q = I$, and $R \in \mathbb{C}^{n,n}$ is upper triangular.
20 Basic QR

    A_1 = A
    for k = 1, 2, ...
        Q_k R_k = A_k          (QR-factorization of A_k)
        A_{k+1} = R_k Q_k
    end                                                        (3)
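A direct MATLAB/Octave sketch of (3); the function name basicqr and the fixed iteration count K are illustrative assumptions:

    function Ak = basicqr(A, K)
    % Basic QR-iteration (3). In practice A is first reduced to upper
    % Hessenberg form and shifts are used; see the following slides.
    Ak = A;
    for k = 1:K
        [Q, R] = qr(Ak);             % QR-factorization of A_k
        Ak = R * Q;                  % A_{k+1} = R_k Q_k, unitarily similar to A_k
    end
    end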
21 Example
\[ A_1 = A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
A_1 = Q_1 R_1 = \Bigl( \tfrac{1}{\sqrt{5}} \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix} \Bigr) \Bigl( \tfrac{1}{\sqrt{5}} \begin{pmatrix} 5 & 4 \\ 0 & 3 \end{pmatrix} \Bigr), \]
\[ A_2 = R_1 Q_1 = \tfrac{1}{5} \begin{pmatrix} 14 & 3 \\ 3 & 6 \end{pmatrix} = \begin{pmatrix} 2.8 & 0.6 \\ 0.6 & 1.2 \end{pmatrix}, \qquad
A_3 \approx \begin{pmatrix} 2.976 & 0.220 \\ 0.220 & 1.024 \end{pmatrix}, \quad \dots \]
$A_9$ is almost diagonal and contains the eigenvalues $\lambda_1 = 3$ and $\lambda_2 = 1$ on the diagonal.
22 Example 2
Applying the basic QR-iteration to a second, larger real matrix $A_1 = A$, we obtain after 13 steps a matrix $A_{14}$ that is close to quasi-triangular.
23 Example 2
In $A_{14}$, the two $1 \times 1$ diagonal blocks give us two real eigenvalues, and the middle $2 \times 2$ block has complex eigenvalues, giving a complex conjugate pair. From Gerschgorin's circle theorem it follows that the approximations to the real eigenvalues are quite accurate. We would also expect the complex eigenvalues to have small absolute errors.
24 Why QR works
Since $Q_k^H A_k = R_k$ we obtain
\[ A_{k+1} = R_k Q_k = Q_k^H A_k Q_k = \hat Q_k^H A \hat Q_k, \qquad \text{where } \hat Q_k = Q_1 Q_2 \cdots Q_k. \qquad (4) \]
Thus $A_{k+1}$ is similar to $A_k$ and hence to $A$. The QR-algorithm combines both the power method and the Rayleigh quotient iteration. If $A \in \mathbb{R}^{n,n}$ has real eigenvalues then, under fairly general conditions, the sequence $\{A_k\}$ converges to an upper triangular matrix, the Schur form. If $A$ is real but has some complex eigenvalues, then the convergence is to the quasi-triangular Schur form.
25 The shifted QR-algorithms
As in the inverse power method, it is possible to speed up the convergence by introducing shifts. The explicitly shifted QR-algorithm works as follows:

    A_1 = A
    for k = 1, 2, ...
        choose a shift s_k
        Q_k R_k = A_k - s_k I      (QR-factorization of A_k - s_k I)
        A_{k+1} = R_k Q_k + s_k I
    end                                                        (5)
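A minimal MATLAB/Octave sketch of (5) using the Rayleigh quotient shift from remark 7 below; the function name shiftedqr and the fixed iteration count are illustrative assumptions, and the splitting/deflation of remark 6 is omitted:

    function Ak = shiftedqr(A, K)
    % Explicitly shifted QR-iteration (5) with s_k = e_n' A_k e_n.
    n = size(A, 1);
    Ak = hess(A);                    % reduce to upper Hessenberg form first
    for k = 1:K
        s = Ak(n, n);                % Rayleigh quotient shift
        [Q, R] = qr(Ak - s*eye(n));  % QR-factorization of A_k - s_k I
        Ak = R*Q + s*eye(n);         % A_{k+1} = R_k Q_k + s_k I
    end
    end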
26 Remarks
1. $A_{k+1} = Q_k^H A_k Q_k$ is unitarily similar to $A_k$.
2. Before applying this algorithm we reduce $A$ to upper Hessenberg form.
3. If $A$ is upper Hessenberg then all matrices $\{A_k\}_{k \ge 1}$ will be upper Hessenberg.
4. Why? Because $A_{k+1} = R_k A_k R_k^{-1}$, and a product of two upper triangular matrices and an upper Hessenberg matrix is upper Hessenberg.
5. Givens rotations are used to compute the QR-factorization of $A_k - s_k I$ (a sketch follows after this list).
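For remark 5, a sketch of the QR-factorization of an upper Hessenberg matrix by n-1 Givens rotations, using only O(n^2) flops; the function name hessqr is an illustrative assumption:

    function [Q, R] = hessqr(H)
    % QR-factorization of an upper Hessenberg H with Givens rotations.
    n = size(H, 1);
    Q = eye(n);
    R = H;
    for i = 1:n-1
        a = R(i, i);  b = R(i+1, i);
        r = hypot(a, b);
        if r == 0, continue, end
        c = a / r;  s = b / r;
        G = [c s; -s c];                 % rotation with G*[a; b] = [r; 0]
        R(i:i+1, :) = G * R(i:i+1, :);   % zero the subdiagonal entry (i+1, i)
        Q(:, i:i+1) = Q(:, i:i+1) * G';  % accumulate Q, so that Q*R = H
    end
    end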
27 More remarks
6. If $a_{i+1,i} = 0$ we can split the eigenvalue problem into two smaller problems. We do this splitting whenever some $a_{i+1,i}$ becomes sufficiently small.
7. $s_k := e_n^T A_k e_n$ is called the Rayleigh quotient shift.
8. The Wilkinson shift is the eigenvalue of the lower right $2 \times 2$ corner of $A_k$ closest to its $(n,n)$ entry (a small helper sketch follows after this list).
9. Convergence is at least quadratic for both kinds of shifts.
10. In the implicitly shifted QR-algorithm we combine two shifts at a time. It can find both real and complex eigenvalues without using complex arithmetic.
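For remark 8, a small helper computing the Wilkinson shift; the function name wilkinsonshift is an illustrative assumption:

    function s = wilkinsonshift(Ak)
    % Wilkinson shift: eigenvalue of the lower right 2x2 block of Ak
    % closest to the (n,n) entry.
    n = size(Ak, 1);
    e = eig(Ak(n-1:n, n-1:n));       % eigenvalues of the 2x2 corner
    [~, j] = min(abs(e - Ak(n, n))); % pick the one closest to A_k(n,n)
    s = e(j);
    end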
28 More remarks
11. To find the eigenvectors we first find the eigenvectors of the triangular or quasi-triangular matrix. We then compute the eigenvectors of the upper Hessenberg matrix and finally the eigenvectors of $A$.
12. Normally we find all eigenvalues in $O(n^3)$ flops. This is because we need $O(n^3)$ flops to find the upper Hessenberg form, each iteration requires $O(n^2)$ flops if $A_k$ is upper Hessenberg and $O(n)$ flops if $A_k$ is tridiagonal, and we normally need $O(n)$ iterations.