Solving large sparse eigenvalue problems


Mario Berljafa and Stefan Güttel
June 2015

Contents

1 Introduction
2 Extracting approximate eigenpairs
3 Accuracy of the approximate eigenpairs
4 Expanding the rational Arnoldi decomposition
5 References

1 Introduction

The first use of rational Krylov methods was for the solution of large sparse eigenvalue problems Ax = λBx, where A and B are N-by-N matrices and (λ, x) are the wanted eigenpairs; see [3, 4, 5, 6]. Let b be an N-by-1 vector and m a positive integer. The rational Krylov space Q_{m+1}(A, b, q_m) is defined as

    Q_{m+1}(A, b, q_m) = q_m(A)^{-1} span{b, Ab, ..., A^m b}.

Here, q_m is a polynomial of degree at most m having roots ξ_1, ..., ξ_m, called the poles of the rational Krylov space. If q_m is a constant nonzero polynomial, then Q_{m+1}(A, b, q_m) is a standard (polynomial) Krylov space. The rational Arnoldi method [4, 5] can be used to compute an orthonormal basis V_{m+1} of Q_{m+1}(A, b, q_m) such that a rational Arnoldi decomposition

    A V_{m+1} K_m = B V_{m+1} H_m

is satisfied, where H_m and K_m are upper Hessenberg matrices of size (m+1)-by-m.

In the following we initialize the two matrices of the pencil (A, B) and plot the full spectrum Λ(A, B). For simplicity we take B = I and choose a rather small problem size so that all eigenvalues can be computed exactly.

    load west0479
    A = west0479; B = speye(479);
    ee = eig(full(A), full(B)); % Cannot do this for larger matrices!
    figure, plot(ee, 'ko', 'linewidth', 2), hold on
    title('eigenvalues of (A,B)')
    legend('exact eigenvalues')

[Figure: spectrum of (A,B); circles mark the exact eigenvalues.]

We now construct a rational Krylov space with m = 10 poles, all set to zero; that is, we are interested in the generalized eigenvalues of (A, B) closest to zero. The inner product for the rational Krylov space can be defined by the user; otherwise it is the standard Euclidean one. (Since B = I in this example, the two coincide.)

    rng(0);
    b = randn(479, 1);
    m = 10; xi = repmat(0, 1, m);
    param.inner_product = @(x,y) y'*B*x;
    [V, K, H] = rat_krylov(A, B, b, xi, param);
    warning off, nrma = normest(A); nrmb = normest(B); warning on

We can easily check the validity of the rational Arnoldi decomposition by verifying that the residual norm is close to machine precision:

    disp(norm(A*V*K - B*V*H)/(nrma + nrmb))

    2.6796e-18

The basis V_{m+1} is close to orthonormal too:

    disp(norm(param.inner_product(V, V) - eye(m+1)))

    7.8236e-16

2 Extracting approximate eigenpairs

A common approach for extracting approximate eigenpairs from a search space is to use Ritz approximations or variants thereof. Let C be an N-by-N matrix, and X an N-by-m matrix. The pair (ϑ, y = Xz) is called a

- Ritz pair for C with respect to R(X) if Cy − ϑy ⊥ R(X);
- harmonic Ritz pair for C with respect to R(X) if Cy − ϑy ⊥ R(CX).
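These two orthogonality conditions can be checked directly. The following Python/NumPy sketch (with an arbitrary random matrix C and search space, unrelated to the pencil (A, B) above) extracts Ritz and harmonic Ritz pairs from an orthonormal basis X and verifies both Galerkin conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 6
C = rng.standard_normal((n, n))                    # arbitrary test matrix
X, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal basis of the search space

# Ritz pairs: eigenpairs (theta, z) of the projected matrix X^* C X, with y = Xz
theta, Z = np.linalg.eig(X.T @ C @ X)
Y = X @ Z
ritz_gap = np.linalg.norm(X.T @ (C @ Y - Y @ np.diag(theta)))      # Cy - theta*y is orth. to R(X)

# harmonic Ritz pairs: same idea, but the test space is R(CX),
# i.e. solve (CX)^* C X z = theta (CX)^* X z
W = C @ X
theta_h, Zh = np.linalg.eig(np.linalg.solve(W.T @ X, W.T @ C @ X))
Yh = X @ Zh
harm_gap = np.linalg.norm(W.T @ (C @ Yh - Yh @ np.diag(theta_h)))  # Cy - theta*y is orth. to R(CX)

print(ritz_gap, harm_gap)   # both are zero up to rounding
```

Note the design choice: instead of a generalized eigensolver, the harmonic problem is reduced to a standard one by inverting (CX)^* X, which is nonsingular for a generic search space.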

Assume that B is nonsingular. It follows easily (see [1, Lemma 2.4] and [2, Theorem 2.1]) from A V_{m+1} K_m = B V_{m+1} H_m and the definition of (harmonic) Ritz pairs given above that

- Ritz pairs (ϑ, y = V_{m+1} K_m z) for B^{-1}A with respect to R(V_{m+1} K_m) arise from solutions of the generalized eigenvalue problem K_m^* H_m z = ϑ K_m^* K_m z. Since K_m is of full rank, K_m^* K_m is nonsingular, and we can equivalently solve the standard eigenvalue problem K_m^† H_m z = ϑ z;

- harmonic Ritz pairs (ϑ, y = V_{m+1} K_m z) for B^{-1}A with respect to R(V_{m+1} K_m) arise from solutions of the generalized eigenvalue problem H_m^* H_m z = ϑ H_m^* K_m z. Since H_m is of full rank, H_m^* H_m is nonsingular, and we can equivalently solve the standard eigenvalue problem H_m^† K_m z = λ z and take ϑ = 1/λ.

    [Xr, Dr] = eig(K\H); [Xh, Dh] = eig(H\K);
    ritz = diag(Dr); hrm_ritz = 1./diag(Dh);
    plot(real(ritz), imag(ritz), 'bx', 'linewidth', 3)
    plot(real(hrm_ritz), imag(hrm_ritz), 'm+', 'linewidth', 2)
    axis([-0.02, 0.02, -0.1, 0.1])
    legend('exact eigenvalues', ...
           'Ritz approximations', ...
           'harmonic Ritz')

[Figure: zoom near the origin showing exact eigenvalues, Ritz approximations, and harmonic Ritz values.]

3 Accuracy of the approximate eigenpairs

We can evaluate the accuracy of the (harmonic) Ritz pairs via the relative residual norm

    ‖Ay − ϑBy‖_2 / ((‖A‖_2 + |ϑ| ‖B‖_2) ‖y‖_2).

From the rational Arnoldi decomposition A V_{m+1} K_m = B V_{m+1} H_m we have

    Ay − ϑBy = A V_{m+1} K_m z − ϑ B V_{m+1} K_m z = B V_{m+1} (H_m − ϑ K_m) z.

Hence, a cheap estimate of the accuracy of an approximate eigenpair is the norm ‖(H_m − ϑ K_m) z‖_2. If this norm is small compared to ‖B^{-1}A‖_2, we have computed an eigenpair of a nearby problem. It appears that in this example two eigenpairs have already converged to very high accuracy:

    approx_residue = @(X) arrayfun(@(i) norm(X(:, i)), 1:size(X, 2));
    approx_res = [approx_residue(H*Xr - K*Xr*Dr).' ...
                  approx_residue(H*Xh - K*Xh*diag(hrm_ritz)).'];
    disp(approx_res)

       5.1976e-08   4.5063e-19
       5.1976e-08   6.1961e-17
       4.5828e-08   8.2255e-07
       4.5828e-08   8.2255e-07
       6.6227e-08   1.5529e-07
       6.6227e-08   1.5529e-07
       6.6756e-08   1.5303e-07
       6.6756e-08   1.5303e-07
       1.2580e-17   2.3565e-07
       1.5728e-15   2.3565e-07

4 Expanding the rational Arnoldi decomposition

Let us perform 8 further iterations with rat_krylov, with the 2 repeated poles being the harmonic Ritz values expected to converge next. Since the poles appear in complex-conjugate pairs, we can turn on the real flag for rat_krylov and end up with a real-valued quasi-rational Arnoldi decomposition [6].

    [~, ind] = sort(approx_res(:, 2));
    xi = repmat(hrm_ritz(ind([3, 4]))', 1, 4);
    param.real = 1;
    [V, K, H] = rat_krylov(A, B, V, K, H, xi, param);

Let us check the residual norm of the extended rational Arnoldi decomposition, and verify that the decomposition has the original 10 poles at zero and the 8 newly selected poles.

    disp(norm(A*V*K - B*V*H)/(nrma + nrmb))
    disp(util_pencil_poles(K, H).')

    2.9296e-18
    2.8261e-04 - 1.2537e-02i

    2.8261e-04 - 1.2537e-02i
    2.8261e-04 - 1.2537e-02i
    2.8261e-04 - 1.2537e-02i

Finally, the (improved) 18 Ritz pairs, both standard and harmonic, are evaluated.

    [Xr, Dr] = eig(K\H); [Xh, Dh] = eig(H\K);
    ritz = diag(Dr); hrm_ritz = 1./diag(Dh);
    approx_res = [approx_residue(H*Xr - K*Xr*Dr).' ...
                  approx_residue(H*Xh - K*Xh*diag(hrm_ritz)).'];
    disp(approx_res)

       7.8102e-10   1.0176e-19
       7.8102e-10   2.2607e-19
       1.0521e-09   1.2433e-13
       1.5917e-08   1.2433e-13
       3.4226e-10   3.3683e-11
       3.4226e-10   2.3079e-10
       5.3472e-10   3.5878e-13
       5.3472e-10   3.5878e-13
       1.1481e-09   2.3904e-11
       1.1481e-09   2.3904e-11
       3.9385e-09   9.2915e-09
       3.9385e-09   9.2915e-09
       2.5184e-16   4.0670e-09
       2.5439e-11   4.0670e-09
       2.5439e-11   3.3345e-09
       9.7244e-12   3.3345e-09
       9.7244e-12   9.9539e-09
       1.7705e-16   9.9539e-09

    figure, plot(ee, 'ko', 'linewidth', 2), hold on
    plot(real(ritz), imag(ritz), 'bx', 'linewidth', 3)
    plot(real(hrm_ritz), imag(hrm_ritz), 'm+', 'linewidth', 2)
    axis([-0.08, 0.08, -0.1, 0.1])
    legend('exact eigenvalues', ...
           'Ritz approximations', ...
           'harmonic Ritz')
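As a sanity check on the cheap residual estimate ‖(H_m − ϑK_m)z‖_2 used above, here is a self-contained Python/NumPy sketch for the special case B = I with all poles at zero (a hypothetical well-conditioned random matrix stands in for west0479). In that case rational Arnoldi reduces to standard Arnoldi applied to A^{-1}, giving A V_{m+1} K_m = V_{m+1} H_m with H_m = [I_m; 0], and the cheap estimate coincides with the true residual ‖Ay − ϑy‖_2 because V_{m+1} is orthonormal.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 8
# hypothetical well-conditioned test matrix (stand-in for west0479)
A = np.diag(np.linspace(2.0, 20.0, n)) + 0.05*rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Build A V K = V H by running Arnoldi on A^{-1}; here H = [I_m; 0]
V = np.zeros((n, m+1)); V[:, 0] = b/np.linalg.norm(b)
K = np.zeros((m+1, m)); H = np.eye(m+1, m)
for j in range(m):
    w = np.linalg.solve(A, V[:, j])          # apply A^{-1} to the latest basis vector
    for i in range(j+1):                     # modified Gram-Schmidt
        K[i, j] = V[:, i] @ w
        w -= K[i, j]*V[:, i]
    K[j+1, j] = np.linalg.norm(w)
    V[:, j+1] = w/K[j+1, j]

# Ritz pairs from K^† H z = theta z, with Ritz vectors y = V K z
theta, Z = np.linalg.eig(np.linalg.pinv(K) @ H)
gaps = []
for t, z in zip(theta, Z.T):
    y = V @ (K @ z)
    true_res = np.linalg.norm(A @ y - t*y)   # ||A y - theta y||
    cheap = np.linalg.norm((H - t*K) @ z)    # ||(H - theta K) z||
    gaps.append(abs(true_res - cheap))
max_gap = max(gaps)
print(max_gap)   # the cheap estimate matches the true residual up to rounding
```

For B ≠ I the two quantities differ by the factor hidden in B V_{m+1}, which is why the toolbox example above normalizes by estimated norms of A and B.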

[Figure: zoom near the origin after the expansion, showing exact eigenvalues, Ritz approximations, and harmonic Ritz values.]

Harmonic Ritz pairs are typically better than (standard) Ritz pairs for interior eigenvalues, though this is not yet fully understood. Also, for symmetric matrices the two sets of Ritz values interlace each other, hence their distance is not large, and ultimately both sets converge to the same eigenvalues.

5 References

[1] G. De Samblanx and A. Bultheel. Using implicitly filtered RKS for generalised eigenvalue problems, J. Comput. Appl. Math., 107(2):195-218, 1999.

[2] R. B. Lehoucq and K. Meerbergen. Using generalized Cayley transformations within an inexact rational Krylov sequence method, SIAM J. Matrix Anal. Appl., 20(1):131-148, 1998.

[3] A. Ruhe. Rational Krylov sequence methods for eigenvalue computation, Linear Algebra Appl., 58:391-405, 1984.

[4] A. Ruhe. Rational Krylov algorithms for nonsymmetric eigenvalue problems, in Recent Advances in Iterative Methods, Springer, New York, pp. 149-164, 1994.

[5] A. Ruhe. Rational Krylov: A practical algorithm for large sparse nonsymmetric matrix pencils, SIAM J. Sci. Comput., 19(5):1535-1551, 1998.

[6] A. Ruhe. The rational Krylov algorithm for nonsymmetric eigenvalue problems. III: Complex shifts for real matrices, BIT, 34(1):165-176, 1994.