Chebyshev acceleration techniques for large complex non-Hermitian eigenvalue problems


Reliable Computing 2 (2) (1996), pp. 111-117

Vincent Heuveline and Miloud Sadkane

Abstract. We propose an extension of the Arnoldi-Chebyshev algorithm to the large complex non-Hermitian case. We demonstrate the algorithm on two applied problems.

© V. Heuveline, M. Sadkane, 1996

1. Introduction

The computation of a few eigenvalues and the corresponding eigenvectors of large complex non-Hermitian matrices arises in many applications in science and engineering, such as magnetohydrodynamics or electromagnetism [6], where the eigenvalues of interest often belong to some region of the complex plane. If the size of the matrices is relatively small, the problem can be solved by the standard and robust QZ algorithm [9]. The QZ algorithm, however, computes all the eigenvalues when only a few may be of interest, has a high computational cost, and does not exploit the sparsity of the matrices; it is therefore suitable only for small matrices. When the size of the matrices becomes large, Krylov subspace methods are a good alternative, since they take the structure of the matrix into account and approximate only a small part of the spectrum. Among these methods, Arnoldi's algorithm appears to be the most suitable. Arnoldi's method builds an orthogonal basis in which the matrix is represented in Hessenberg form, whose spectrum, or at least part of it, approximates the sought eigenvalues. Saad [11] has proved that this method gives a good approximation to the outermost part of the spectrum. However, convergence can be very slow, especially when the distribution of the spectrum is unfavorable. On the other hand, to ensure convergence, the dimension of the Krylov subspace must be large, which increases the cost and the storage. To overcome these difficulties, one solution is to use the method iteratively; that is, the process is restarted periodically with the best estimated eigenvectors associated with the required eigenvalues. These eigenvectors can also be improved by choosing a polynomial that amplifies the components in the required eigendirections while damping those in the unwanted ones.

This amounts to finding, at each restart, a polynomial $p^*$ that solves the minimax problem

$$\min_{p \in \mathcal{P}}\ \max_{z \in D} |p(z)| \qquad (1)$$

where $\mathcal{P}$ is some set of polynomials and $D$ is a domain that contains the unwanted eigenvalues. These techniques have already been used for linear systems [7, 8] and for eigenvalue problems [1, 5, 11-13]. The domain $D$ is often delimited by an ellipse, and the polynomial $p^*$ is a Chebyshev polynomial. However, the acceleration schemes described in these works deal with real matrices and rely heavily on the symmetry of the spectrum of such matrices with respect to the real axis. This property does not hold for complex matrices, and therefore the construction of the domain $D$ and the polynomial $p^*$ is more difficult.

In this paper, we consider a new acceleration scheme for the complex non-Hermitian eigenvalue problem. We restrict ourselves to the case where the rightmost eigenvalues are needed. Section 2 provides background and notation for the Arnoldi-Chebyshev method. Section 3 exhibits a new polynomial acceleration scheme for complex non-Hermitian matrices. Section 4 is devoted to numerical tests taken from practical problems.

2. The Arnoldi-Chebyshev method

Let $A$ be a large sparse complex non-Hermitian diagonalizable $n \times n$ matrix whose eigenvalues $\lambda_1, \ldots, \lambda_n$ are labeled in decreasing order of their real parts, $\mathrm{Re}\,\lambda_1 \ge \cdots \ge \mathrm{Re}\,\lambda_n$, and let $u_1, \ldots, u_n$ be the corresponding eigenvectors. We suppose that we are interested in computing $(\lambda_i, u_i)$, $i = 1, \ldots, r$ ($r \ll n$). These eigenpairs may be approximated in the following way. From $V_1 \in \mathbb{C}^{n \times r}$, let $\mathcal{V}_m = [V_1, \ldots, V_m] \in \mathbb{C}^{n \times mr}$ ($mr \ll n$) be the block Arnoldi¹ basis of the Krylov subspace $\mathcal{K}_m = \mathrm{span}\{V_1, AV_1, \ldots, A^{m-1}V_1\}$ satisfying

$$A\mathcal{V}_m = \mathcal{V}_m H_m + [0, \ldots, 0, V_{m+1} H_{m+1,m}] \qquad (2)$$

where $H_m = \mathcal{V}_m^H A \mathcal{V}_m$ is a block Hessenberg matrix and $H_{m+1,m} = V_{m+1}^H A V_m$. Let $H_m Y = Y\Theta$, with $\Theta = \mathrm{diag}(\theta_1, \ldots, \theta_r)$ and $Y = [y_1, \ldots, y_r]$ corresponding to the $r$ rightmost eigenpairs of $H_m$, and let $X = \mathcal{V}_m Y$ be the matrix whose columns approximate the sought eigenvectors of $A$. Then (2) reduces to

$$\|AX - X\Theta\|_2 = \|H_{m+1,m} \tilde{Y}\|_2 \qquad (3)$$

where $\tilde{Y}$ is the last $r \times r$ block of $Y$. In order to improve the approximate eigenpairs $(\Theta, X)$ of $A$, we look for a polynomial $p^*$ such that the columns of $p^*(A)X$ are a better approximation of $u_1, \ldots, u_r$ than $X$. This may be done in the following way:

Algorithm 1: Arnoldi-Chebyshev

1. Start: Choose an initial block $V_1 \in \mathbb{C}^{n \times r}$ and the size $m$ of the Krylov subspace.
2. Block Arnoldi: Orthonormalize the columns of $V_1$ and generate an orthogonal basis $\mathcal{V}_m$ of the Krylov subspace $\mathcal{K}_m(A, V_1) = \mathrm{span}\{V_1, AV_1, \ldots, A^{m-1}V_1\}$. Compute the rightmost eigenpairs $(\tilde{\lambda}_i, y_i)$, $1 \le i \le r$, of $H_m = \mathcal{V}_m^H A \mathcal{V}_m$.
3. Restart: Compute the corresponding approximate eigenvectors $\tilde{x}_i = \mathcal{V}_m y_i$, $i = 1, \ldots, r$. If converged, stop. Else compute a polynomial $p^*$ that solves (1) approximately, set $V_1 = [p^*(A)\tilde{x}_1, \ldots, p^*(A)\tilde{x}_r]$, and go to 2.

¹ The block version may, in certain cases, be more efficient than the standard one [12].
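For illustration, here is a minimal NumPy sketch of the underlying Arnoldi process in the unblocked case (block size r = 1); the function name and interface are ours, not code from the paper. It returns the basis and Hessenberg matrix of (2), from which the cheap residual estimate (3) follows.

```python
import numpy as np

def arnoldi(A, v1, m):
    """Unblocked Arnoldi (r = 1): returns V in C^{n x (m+1)} and the
    (m+1) x m Hessenberg matrix H with A V[:, :m] = V @ H, cf. (2)."""
    n = v1.size
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)      # breakdown (== 0) not handled
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Ritz pairs and the residual estimate (3) with r = 1: for an eigenpair
# (theta, y) of H[:m, :m], the Ritz vector x = V[:, :m] @ y satisfies
# ||A x - theta x||_2 = |H[m, m - 1]| * |y[-1]|.
```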

The aim of the polynomial acceleration is to enhance the components of the restarting block $V_1$ in the needed eigendirections $\{u_i\}_{1 \le i \le r}$. Writing $\tilde{x}_j = \sum_{k=1}^{n} \alpha_k u_k$, each $p(A)\tilde{x}_j$, $j = 1, \ldots, r$, can be expanded in the basis $u_1, \ldots, u_n$ as follows:

$$p(A)\tilde{x}_j = \alpha_j p(\lambda_j) u_j + \sum_{k \ne j} \alpha_k p(\lambda_k) u_k, \qquad j = 1, \ldots, r. \qquad (4)$$

Obviously we are thus seeking a polynomial $p^*$ which achieves the minimum

$$\min_{p \in \Pi_k,\ p(\lambda_r) = 1}\ \max_{\lambda \in C} |p(\lambda)| \qquad (5)$$

where $C$ is a domain containing the unwanted eigenvalues $\lambda_i$ for $i \in [r+1, n]$, and $\Pi_k$ denotes the space of all polynomials of degree not exceeding $k$. The condition $p(\lambda_r) = 1$ is used only for normalization purposes.

Let $E = E(c, e, a)$ be an ellipse containing the set $\{\lambda_{r+1}, \ldots, \lambda_n\}$ and having center $c$, foci $c + e$ and $c - e$, and major semi-axis $a$. An asymptotically best min-max polynomial for (5) is the polynomial [3]

$$p^*(z) = \frac{T_k\!\left(\frac{z - c}{e}\right)}{T_k\!\left(\frac{\lambda_r - c}{e}\right)} \qquad (6)$$

where $T_k$ is the Chebyshev polynomial of degree $k$ of the first kind. In the next section we discuss the strategy that we have adopted for constructing the ellipse $E$.
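In practice $p^*(A)$ is never formed explicitly: since $T_{j+1}(z) = 2z\,T_j(z) - T_{j-1}(z)$, the restart block $[p^*(A)\tilde{x}_1, \ldots, p^*(A)\tilde{x}_r]$ can be computed with one matrix-vector product per polynomial degree, so the sparsity of $A$ is fully exploited. The following NumPy sketch applies (6) to a block of vectors; the helper name is ours, and $k \ge 1$ is assumed.

```python
import numpy as np

def chebyshev_accelerate(A, X, k, c, e, lam_r):
    """Apply p*(A) = T_k((A - cI)/e) / T_k((lam_r - c)/e), cf. (6),
    to the block X via the three-term recurrence (assumes k >= 1)."""
    t_prev = X                               # T_0(B) X, with B = (A - cI)/e
    t_curr = (A @ X - c * X) / e             # T_1(B) X
    for _ in range(k - 1):
        t_next = 2.0 * (A @ t_curr - c * t_curr) / e - t_prev
        t_prev, t_curr = t_curr, t_next
    # Normalization factor T_k((lam_r - c)/e), by the same recurrence.
    w = (lam_r - c) / e
    s_prev, s_curr = 1.0 + 0.0j, w
    for _ in range(k - 1):
        s_prev, s_curr = s_curr, 2.0 * w * s_curr - s_prev
    return t_curr / s_curr
```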

3. Computation of the ellipse E

In practice we replace the set $\{\lambda_{r+1}, \ldots, \lambda_n\}$ by $\mathcal{R}_u = \{\tilde{\lambda}_{r+1}, \ldots, \tilde{\lambda}_{mr}\}$, which corresponds to the rest of the spectrum of $H_m$. An infinity of ellipses may enclose the unwanted eigenvalues in $\mathcal{R}_u$ and exclude the wanted ones $\mathcal{R}_w = \{\tilde{\lambda}_1, \ldots, \tilde{\lambda}_r\}$. Hence we add two more conditions that ensure the uniqueness of the desired ellipse. Let $\tilde{\lambda}_l \in \mathcal{R}_u$ be the approximate unwanted eigenvalue nearest to the wanted ones with respect to the Euclidean norm. We require that:

- $\tilde{\lambda}_l$ belongs to the ellipse and is a minimal point of $|p^*(z)|$ on the ellipse;
- the desired ellipse has minimal surface.

The motivation of the first condition is to discriminate as much as possible between wanted and unwanted eigenvalues that are close to each other. Indeed, it is well known that Arnoldi's method may converge very slowly on clustered eigenvalues. Moreover, the approximate wanted eigenvalues may, at least at the beginning of the process, be far from the exact ones, and therefore there is no guarantee that $\mathcal{R}_u$ and the wanted eigenvalues will not intersect. The motivation of the second condition is to minimize this risk. Notice that even in the real case, there is no way to fully circumvent this difficulty.

Given the above conditions, the computation of the ellipse $E(c, e, a)$ proceeds in two steps: (1) choice of the center $c$, and (2) computation of the main axis direction, $|a|$ and $|e|$. These two points are discussed in more detail in [4]; we only mention them briefly here.

Choice of the center: The choice of the center of the ellipse is difficult, since it depends greatly on the location of the wanted and unwanted eigenvalues. Moreover, an inappropriate choice may imply that any ellipse enclosing the unwanted eigenvalues also contains the wanted ones. To avoid such situations we consider a set of centers $\{c_i\}_{i=1,\ldots,n_c}$ defined by the heuristic scheme

$$c_i = \sum_{k=r+1}^{mr} \omega_k^{(i)} \tilde{\lambda}_k, \qquad \omega_k^{(i)} \ge 0, \quad \sum_{k=r+1}^{mr} \omega_k^{(i)} = 1, \qquad i = 1, \ldots, n_c, \qquad (7)$$

which ensures that the centers lie in the convex hull of the unwanted eigenvalues $\mathcal{R}_u$. The choice of the weights $\{\omega_k^{(i)}\}$ is discussed in [4].

Computation of the main axis direction: Using the Joukowski mapping, one can show that the Chebyshev polynomial of degree $k$, $T_k\!\left(\frac{z-c}{e}\right)$, maps the ellipse $E(c, e, a)$ onto the ellipse $E(0, 1, a')$ with $a' > 1$ [4, 10]. Moreover, one rotation around the initial ellipse $E(c, e, a)$ is mapped onto $k$ rotations around the target ellipse $E(0, 1, a')$. Consequently, for $z \in E(c, e, a)$, $|p^*(z)|$ as defined in (6) has $2k$ minima. We impose that $\tilde{\lambda}_l$ defined above correspond to one of these minima. Scanning all possible constructions such that the sought ellipse contains $\mathcal{R}_u$ and excludes $\mathcal{R}_w$, we then choose the ellipse of minimal surface. These two steps involve only scalar operations, and therefore the computation of the ellipse is cheap.
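Since only the Ritz values enter, such a scan costs little compared to a single matrix-vector product with $A$. The sketch below is a simplified, brute-force stand-in for this construction: it scans candidate centers (e.g., convex combinations of the unwanted Ritz values, as in (7)), focal-axis angles, and aspect ratios, and returns the admissible ellipse of minimal surface. Unlike the heuristic of [4], it does not enforce that $\tilde{\lambda}_l$ be a minimum of $|p^*|$; all names are ours.

```python
import numpy as np

def min_area_ellipse(lam_u, lam_w, centers, n_phi=18,
                     ratios=(0.2, 0.4, 0.6, 0.8)):
    """Return (c, e, a) of the smallest-area scanned ellipse enclosing
    all unwanted Ritz values lam_u and excluding all wanted ones lam_w.
    Simplified sketch of Section 3, not the authors' algorithm."""
    lam_u, lam_w = np.asarray(lam_u), np.asarray(lam_w)
    best, best_area = None, np.inf
    for c in centers:
        for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
            w_u = (lam_u - c) * np.exp(-1j * phi)   # rotate to axis frame
            w_w = (lam_w - c) * np.exp(-1j * phi)
            for rho in ratios:                      # rho = b/a (minor/major)
                # smallest a enclosing every unwanted point
                a = np.sqrt(np.max(w_u.real**2 + (w_u.imag / rho)**2))
                if np.any(w_w.real**2 + (w_w.imag / rho)**2 <= a * a):
                    continue                        # a wanted point is inside
                area = np.pi * rho * a * a
                if area < best_area:
                    e = np.exp(1j * phi) * a * np.sqrt(1.0 - rho * rho)
                    best, best_area = (c, e, a), area
    return best
```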

4. Numerical experiments

The numerical experiments described in this section were performed on an IBM RISC 6000-590 computer in double precision. The stopping criterion is that the relative residual of a computed pair $(\lambda, x)$ must satisfy

$$\frac{\|Ax - \lambda x\|_2}{\|x\|_2} \le \varepsilon\, \|A\|_2, \qquad \varepsilon = 10^{-8}.$$

If the spectral norm $\|A\|_2$ is not known, we use the Frobenius norm $\|A\|_F$. Before considering our test examples, it is worth mentioning that the code we have implemented uses a block version of Arnoldi with a deflation technique that eliminates all the eigenpairs that converged in previous steps. Let us illustrate the behavior of the algorithm on two representative test problems.

The Orr-Sommerfeld operator [6]: Consider the operator defined by

$$\frac{1}{\alpha R} L^2 y - i\,(U L y - U'' y) - \lambda L y = 0,$$

where $\alpha$ and $R$ are positive parameters, $\lambda$ is the spectral parameter, $U = 1 - x^2$, $y$ is a function defined on $[-1, +1]$ with $y(\pm 1) = y'(\pm 1) = 0$, and $L = \frac{d^2}{dx^2} - \alpha^2$. Discretizing this operator on the grid $x_j = -1 + jh$, $h = \frac{2}{n+1}$, with

$$L_h = \frac{1}{h^2}\,\mathrm{Tridiag}(1,\, -2 - \alpha^2 h^2,\, 1), \qquad U_h = \mathrm{diag}(1 - x_1^2, \ldots, 1 - x_n^2),$$

gives rise to the eigenvalue problem $Au = \lambda u$ with $A = \frac{1}{\alpha R} L_h - i L_h^{-1}(U_h L_h + 2 I_n)$. Taking $\alpha = 1$, $R = 5000$, $n = 2000$ yields a complex non-Hermitian matrix $A$ (order = 2000, number of nonzero elements = $4 \times 10^6$, $\|A\|_F = 21929$), whose spectrum is plotted in Figure 1. We computed the five rightmost eigenpairs of $A$. The results are shown in Figure 1 and Table 1.

[Figure 1. Spectrum of the Orr-Sommerfeld matrix (left) and residual norm versus number of iterations (right; panel title "Cheby. 70 / 5 Eigenvalues / Krylov 70").]

Results for Orr-Sommerfeld, 5 eigenvalues, α = 1, R = 5000, n = 2000:

Krylov basis   Deflation   Cheby. degree   nmult    Iterations
40             No          No              ∞        ∞
40             Yes         No              ∞        ∞
40             Yes         40              67840    189
40             Yes         70              48530    81
70             No          No              ∞        ∞
70             Yes         No              103000   1482
70             Yes         40              38940    101
70             Yes         70              9620     21

Table 1. nmult: number of matrix-vector multiplications; ∞ means no convergence.
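The test matrix above is straightforward to reproduce. The following NumPy sketch assembles it densely (the factor $L_h^{-1}$ makes $A$ full, consistent with the $4 \times 10^6$ nonzeros reported); the function name is ours.

```python
import numpy as np

def orr_sommerfeld_matrix(n, alpha=1.0, R=5000.0):
    """A = (1/(alpha R)) L_h - i L_h^{-1} (U_h L_h + 2 I_n) from the
    finite-difference discretization above; dense, one O(n^3) solve."""
    h = 2.0 / (n + 1)
    x = -1.0 + h * np.arange(1, n + 1)
    L = (np.diag(np.full(n, -2.0 - (alpha * h)**2))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2       # L_h
    U = np.diag(1.0 - x**2)                          # U_h
    return L / (alpha * R) - 1j * np.linalg.solve(L, U @ L + 2.0 * np.eye(n))

# A = orr_sommerfeld_matrix(2000) reproduces the test problem
# (alpha = 1, R = 5000, n = 2000) of Figure 1 and Table 1.
```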

The matrix Young1c: This matrix comes from the Harwell-Boeing set of test matrices [2]. It arises in modeling the acoustic scattering phenomenon. It is complex, of order 841, and contains 4089 nonzero elements; $\|A\|_F = 6484$. The spectrum is plotted in Figure 2. The results obtained with a basis of size m = 40 are given in Figure 2 and Table 2.

[Figure 2. Spectrum of Young1c (left) and residual norm versus number of iterations (right; panel title "Cheby. 30 / 8 Eigenvalues / Krylov 40").]

                   6 eigenvalues           8 eigenvalues
Method             nmult    Iterations     nmult    Iterations
Block Arnoldi      36000    1000           ∞        ∞
+ Deflation        1908     53             14160    354
+ Chebyshev T30    1182     7              4723     21

Table 2. nmult: number of matrix-vector multiplications; ∞ means no convergence.

5. Conclusion

We have shown how to apply Chebyshev eigenvalue acceleration in the complex case. Numerical results indicate that the algorithm performs well and that the acceleration is useful and may even be necessary in some practical problems.

References

[1] Chatelin, F., Ho, D., and Bennani, M. Arnoldi-Tchebychev procedure for large scale nonsymmetric matrices. Mathematical Modelling and Numerical Analysis 24 (1990), pp. 53-65.

[2] Duff, I. S., Grimes, R. G., and Lewis, J. G. Sparse matrix test problems. ACM Trans. Math. Softw. 15 (1989), pp. 1-14.

[3] Fischer, B. and Freund, R. Chebyshev polynomials are not always optimal. J. Approximation Theory 65 (1991), pp. 261-273.

[4] Heuveline, V. Polynomial acceleration in the complex plane. Technical report, IRISA/INRIA, 1995.

[5] Ho, D. Tchebychev acceleration technique for large scale nonsymmetric matrices. Numer. Math. 56 (1990), pp. 721-734.

[6] Kerner, W. Large-scale complex eigenvalue problems. J. Comput. Phys. 85 (1989), pp. 1-85.

[7] Manteuffel, T. A. The Tchebychev iteration for nonsymmetric linear systems. Numer. Math. 28 (1977), pp. 307-327.

[8] Manteuffel, T. A. Adaptive procedure for estimating parameters for the nonsymmetric Tchebychev iteration. Numer. Math. 31 (1978), pp. 183-208.

[9] Moler, C. B. and Stewart, G. W. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 10 (1973), pp. 241-256.

[10] Rivlin, T. J. The Chebyshev polynomials: from approximation theory to algebra and number theory. J. Wiley and Sons, New York, 1990.

[11] Saad, Y. Numerical methods for large eigenvalue problems. Algorithms and Architectures for Advanced Scientific Computing. Manchester University Press, Manchester, 1992.

[12] Sadkane, M. A block Arnoldi-Chebyshev method for computing the leading eigenpairs of large sparse unsymmetric matrices. Numer. Math. 64 (1993), pp. 181-193.

[13] Yamamoto, Y. and Ohtsubo, H. Subspace iteration accelerated by using Chebyshev polynomials for eigenvalue problems with symmetric matrices. Int. J. Numer. Methods Eng. 10 (1976), pp. 935-944.

Received: October 30, 1995
Revised version: December 4, 1995

IRISA/INRIA
Campus de Beaulieu
35042 Rennes
France