Introduction to Arnoldi method
Introduction to Arnoldi method
SF2524 Matrix Computations for Large-scale Systems
KTH Royal Institute of Technology (Elias Jarlebring)
Agenda lecture 2

- Introduction to Arnoldi method
- Gram-Schmidt: efficiency and roundoff errors
- Derivation of Arnoldi method
- (Convergence characterization)
Main eigenvalue algorithms in this course

- Fundamental eigenvalue techniques (Lecture 1).
- Arnoldi method (Lectures 2-3). Typically suitable when we are interested in a small number of eigenvalues and the matrix is large and sparse. Currently solvable size on a desktop: m ≈ 10^6 (depending on structure).
- QR-method (Lectures 8-9). Typically suitable when we want to compute all eigenvalues and the matrix does not have any particular easy structure. Solvable size on a desktop is far smaller.
Finite-element method example (slide 1/2)

Laplace eigenvalue problem −Δu = λu with Dirichlet boundary conditions outside a wing profile.
Finite-element method example (slide 2/2)

Problem: Compute eigenvalues of a matrix A ∈ R^{m×m} where m is large but A has few non-zero elements: nz = 3550.

In MATLAB with the very efficient QR-method:

tic; eig(full(A)); toc

More than 10 hours: we need a different method.
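To make the scaling concrete, here is a small timing sketch (in Python with NumPy rather than MATLAB; sizes and timings are illustrative only, not from the lecture). Dense eigenvalue computation costs O(m^3), so each doubling of m multiplies the time by roughly eight, which is why a large sparse matrix stored densely becomes hopeless.

```python
import time
import numpy as np

# Dense eigenvalue computation (the QR-method, as in MATLAB's eig) costs O(m^3).
# Timing a few sizes illustrates the cubic growth that rules out large m.
rng = np.random.default_rng(0)
for m in (100, 200, 400):
    A = rng.standard_normal((m, m))
    t0 = time.perf_counter()
    np.linalg.eigvals(A)          # computes all m eigenvalues, ignores sparsity
    elapsed = time.perf_counter() - t0
    print(f"m = {m:4d}: {elapsed:.3f} s")
```

Extrapolating the cubic cost from such measurements to m in the hundreds of thousands gives hours to days, matching the "more than 10 hours" observation above.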
Idea of Arnoldi method (slide 1/3)

Eigenvalue problem Ax = λx.

Suppose Q = (q_1, ..., q_n) ∈ R^{m×n} and x ∈ span(q_1, ..., q_n). Then there exists z ∈ R^n such that

  AQz = λQz,

and, if Q is orthogonal,

  Q^T A Q z = λ Q^T Q z = λz.
Idea of Arnoldi method (slide 2/3)

Rayleigh-Ritz procedure: Let Q ∈ R^{m×n} be an orthogonal basis of some subspace. Solutions (μ, z) to the eigenvalue problem

  Q^T A Q z = μz

are called Ritz pairs. The procedure returns an exact eigenvalue if x ∈ span(q_1, ..., q_n), and approximates an eigenvalue of A well if x ≈ x̃ for some x̃ ∈ span(q_1, ..., q_n).

What is a good subspace span(Q)?
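The exactness property can be verified numerically. The following sketch (Python/NumPy, not part of the lecture material) builds an orthonormal Q whose span contains one eigenvector of a symmetric test matrix and checks that the corresponding Ritz value matches the exact eigenvalue up to roundoff.

```python
import numpy as np

# Rayleigh-Ritz: given orthonormal Q (m x n), Ritz pairs are eigenpairs of Q^T A Q.
# If an eigenvector of A lies in span(Q), the corresponding Ritz value is exact.
rng = np.random.default_rng(1)
m = 8
A = rng.standard_normal((m, m))
A = A + A.T                              # symmetric, so eigenpairs are real
lam, X = np.linalg.eigh(A)

# Build a 3-dimensional subspace containing the eigenvector of the largest eigenvalue.
x = X[:, -1]
V = np.column_stack([x, rng.standard_normal((m, 2))])
Q, _ = np.linalg.qr(V)                   # orthonormal basis with x in span(Q)

mu = np.linalg.eigvalsh(Q.T @ A @ Q)     # the three Ritz values
print(np.min(np.abs(mu - lam[-1])))      # tiny: one Ritz value is exact
```

The other two Ritz values are only approximations, since the rest of the subspace is random; choosing a subspace whose span nearly contains the wanted eigenvectors is exactly the question the Krylov subspace answers next.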
Idea of Arnoldi method (slide 3/3)

Recall: The power method approximates the largest eigenvalue well.

Definition: Krylov matrix / subspace

  K_n(A, q_1) := (q_1, Aq_1, ..., A^{n−1} q_1)
  𝒦_n(A, q_1) := span(q_1, Aq_1, ..., A^{n−1} q_1).

Justification of Arnoldi method: Use Rayleigh-Ritz on Q = (q_1, ..., q_n) where span(q_1, ..., q_n) = 𝒦_n(A, q_1).

The Arnoldi method is a clever procedure to construct H_n = Q^T A Q. "Clever":
(1) We exploit sparsity (we will only need matrix-vector products).
(2) We expand Q with one column in each iteration, iterating until we are happy.
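The Krylov justification can be checked on a toy problem. The sketch below (Python/NumPy, not from the lecture) forms the Krylov matrix explicitly, which is precisely what Arnoldi avoids in practice because the columns become nearly linearly dependent as n grows; for this tiny example with n = m the subspace is all of R^m, so Rayleigh-Ritz reproduces every eigenvalue.

```python
import numpy as np

# Krylov matrix K_n(A, q1) = (q1, A q1, ..., A^{n-1} q1).  With a generic start
# vector, distinct eigenvalues, and n = m, span(K) = R^m, so Rayleigh-Ritz on an
# orthonormal basis of it returns the exact spectrum.
rng = np.random.default_rng(2)
m = 6
B = rng.standard_normal((m, m))
A = np.diag(np.arange(1.0, m + 1)) + 0.05 * (B + B.T)   # symmetric test matrix
q1 = rng.standard_normal(m)

K = np.empty((m, m))
K[:, 0] = q1
for j in range(1, m):
    K[:, j] = A @ K[:, j - 1]            # next Krylov vector, one matvec each

Q, _ = np.linalg.qr(K)                   # orthonormal basis of the Krylov subspace
ritz = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))
exact = np.sort(np.linalg.eigvalsh(A))
print(np.max(np.abs(ritz - exact)))      # near machine precision
```

For n < m the Ritz values approximate the outer eigenvalues first, consistent with the power-method intuition above.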
Arnoldi method graphically

Graphical illustration of the algorithm, with Q = (q_1, q_2, q_3, q_4) and Hessenberg matrix H:

- Multiply the last basis vector by A: w = A q_4.
- Orthogonalize w against q_1, ..., q_4; the coefficients form a new column of H.
- Normalize what remains of w and append it to Q as the next basis vector.

After the iteration: Take the eigenvalues of H as approximate eigenvalues.

* show arnoldi.m and Hessenberg matrix in MATLAB *

We will now (1) derive a good orthogonalization procedure: variants of Gram-Schmidt, (2) show that Arnoldi generates a Rayleigh-Ritz approximation, and (3) characterize the convergence.
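The arnoldi.m demo itself is not reproduced here; the steps above can be sketched independently in Python/NumPy as follows (modified Gram-Schmidt is used for the orthogonalization step; the test matrix and sizes are illustrative assumptions, not from the lecture).

```python
import numpy as np

def arnoldi(A, q1, n):
    """n steps of the Arnoldi method: returns Q (m x (n+1)) with orthonormal
    columns spanning the Krylov subspace, and Hessenberg H ((n+1) x n) such
    that A @ Q[:, :n] = Q @ H up to roundoff."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for k in range(n):
        w = A @ Q[:, k]                  # one matrix-vector product per step
        for j in range(k + 1):           # modified Gram-Schmidt against q_1..q_{k+1}
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] == 0.0:           # breakdown: an invariant subspace was found
            return Q[:, :k + 1], H[:k + 1, :k]
        Q[:, k + 1] = w / H[k + 1, k]
    return Q, H

# Eigenvalues of the square part H[:n, :n] = Q_n^T A Q_n are the Ritz values.
rng = np.random.default_rng(3)
m, n = 400, 30
A = np.diag(np.linspace(1.0, 100.0, m))
A[0, 0] = 150.0                          # one well-separated dominant eigenvalue
Q, H = arnoldi(A, rng.standard_normal(m), n)
ritz = np.linalg.eigvals(H[:n, :n])
print(np.max(ritz.real))                 # close to the dominant eigenvalue 150
```

Note that only matrix-vector products with A are needed, so sparsity is exploited automatically, and each iteration appends one column to Q and one column to H, as in the graphical illustration.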
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview
More informationAn Asynchronous Algorithm on NetSolve Global Computing System
An Asynchronous Algorithm on NetSolve Global Computing System Nahid Emad S. A. Shahzadeh Fazeli Jack Dongarra March 30, 2004 Abstract The Explicitly Restarted Arnoldi Method (ERAM) allows to find a few
More informationAlgorithms that use the Arnoldi Basis
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to
More informationMath 261 Lecture Notes: Sections 6.1, 6.2, 6.3 and 6.4 Orthogonal Sets and Projections
Math 6 Lecture Notes: Sections 6., 6., 6. and 6. Orthogonal Sets and Projections We will not cover general inner product spaces. We will, however, focus on a particular inner product space the inner product
More informationMULTIGRID ARNOLDI FOR EIGENVALUES
1 MULTIGRID ARNOLDI FOR EIGENVALUES RONALD B. MORGAN AND ZHAO YANG Abstract. A new approach is given for computing eigenvalues and eigenvectors of large matrices. Multigrid is combined with the Arnoldi
More informationIDR(s) as a projection method
Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics IDR(s) as a projection method A thesis submitted to the Delft Institute
More informationRational Krylov methods for linear and nonlinear eigenvalue problems
Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational
More informationAlternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G.
Universiteit-Utrecht * Mathematical Institute Alternative correction equations in the Jacobi-Davidson method by Menno Genseberger and Gerard L. G. Sleijpen Preprint No. 173 June 1998, revised: March 1999
More informationKeeping σ fixed for several steps, iterating on µ and neglecting the remainder in the Lagrange interpolation one obtains. θ = λ j λ j 1 λ j σ, (2.
RATIONAL KRYLOV FOR NONLINEAR EIGENPROBLEMS, AN ITERATIVE PROJECTION METHOD ELIAS JARLEBRING AND HEINRICH VOSS Key words. nonlinear eigenvalue problem, rational Krylov, Arnoldi, projection method AMS subject
More information1 Extrapolation: A Hint of Things to Come
Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.
More informationTHE QR METHOD A = Q 1 R 1
THE QR METHOD Given a square matrix A, form its QR factorization, as Then define A = Q 1 R 1 A 2 = R 1 Q 1 Continue this process: for k 1(withA 1 = A), A k = Q k R k A k+1 = R k Q k Then the sequence {A
More informationDirect methods for symmetric eigenvalue problems
Direct methods for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 4, 2008 1 Theoretical background Posing the question Perturbation theory
More informationLecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm
CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction
More informationApplied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization
More informationSolution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method
Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts
More informationUNIT 6: The singular value decomposition.
UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T
More informationLecture 6, Sci. Comp. for DPhil Students
Lecture 6, Sci. Comp. for DPhil Students Nick Trefethen, Thursday 1.11.18 Today II.3 QR factorization II.4 Computation of the QR factorization II.5 Linear least-squares Handouts Quiz 4 Householder s 4-page
More informationA Jacobi Davidson Method for Nonlinear Eigenproblems
A Jacobi Davidson Method for Nonlinear Eigenproblems Heinrich Voss Section of Mathematics, Hamburg University of Technology, D 21071 Hamburg voss @ tu-harburg.de http://www.tu-harburg.de/mat/hp/voss Abstract.
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationLecture 3: QR-Factorization
Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization
More information