Finding Rightmost Eigenvalues of Large Sparse Non-symmetric Parameterized Eigenvalue Problems


Applied Mathematics and Scientific Computation Program
Department of Mathematics
University of Maryland, College Park, MD
wu@math.umd.edu

Advisor: Professor Howard Elman
Department of Computer Sciences
University of Maryland, College Park, MD
elman@cs.umd.edu

Abstract

This report has four main parts. In the first part, I state the eigenvalue problem I am trying to solve in my project, give a brief introduction to its background and application, and analyze the computational difficulties. In the second part, I explain the methods I use to solve the problem, mainly the eigenvalue solvers and matrix transformations. In the third part, I present the results I have obtained and the codes I developed to implement the methods in the previous part. In the last part of my report, I give an outline of what I will do in the future (AMSC 664).

Introduction

Consider the eigenvalue problem

    A_S x = λ B_S x    (1)

where A_S and B_S are large sparse non-symmetric real N × N matrices and S is a set of parameters given by the underlying Partial Differential Equation (PDE). For simplicity, I will drop the subscript S in the following discussion. People are interested in computing its rightmost eigenvalues (namely, eigenvalues with the largest real parts). The motivation lies in the determination of the stability of steady state solutions of non-linear systems of the form

    B du/dt = f(u),    f: R^N → R^N,  u ∈ R^N    (2)

with large N, where u represents a state variable (velocity, pressure, temperature, etc.). B is often called the mass matrix. Define the Jacobian matrix at the steady state u* by A = df/du(u*); then u* is stable if all the eigenvalues of (1) have negative real parts. Typically, f arises from the spatial discretization of a PDE. Interesting applications of this kind occur in stability analyses in fluid mechanics, structural engineering and chemical reactions. The problem of finding rightmost eigenvalues also frequently occurs in Markov chain models, economic modeling, simulation of power systems and magnetohydrodynamics.

When finite differences are used to discretize a PDE, then often B = I and (1) is called a standard eigenproblem. If the equations are discretized by finite elements, then the mass matrix B ≠ I and (1) is called a generalized eigenvalue problem. For problems arising from fluid mechanics, B is often singular.

The major computational difficulties of this kind of problem are: (1) both A and B are large and sparse, so the algorithm we use must be efficient in dealing with large systems; (2) in many applications, the rightmost eigenvalues are complex, so we must consider complex arithmetic; (3) B is often singular, so it gives rise to spurious eigenvalues.

Besides the numerical algorithm for computing the rightmost eigenvalues of (1), how the parameter set S gives rise to bifurcation phenomena, i.e., how the steady state solution exchanges stability, is also of interest. Examples are the Rayleigh number in the nonlinear diffusion equation (Olmstead model) and the Damköhler number in the tubular reactor model. As the parameters vary, the rightmost eigenvalues might cross the imaginary axis, and the steady state solution then becomes unstable.

Methodology

Eigenvalue Solvers

Since both A and B are large and sparse, direct methods such as the QZ-algorithm for the generalized problem and the QR-algorithm for the standard problem are not feasible.
A more efficient approach is the solution of the standard eigenvalue problem Tx = θx, which is a transformation of Ax = λBx, by iterative methods like Arnoldi's method, subspace iteration and Lanczos' method. Another reason why iterative methods are more suitable for this type of problem is that we are usually not interested in computing the full spectrum of the eigenvalue problem (1). Instead, we are only interested in computing a small set of the spectrum, which, in this project, is the rightmost eigenvalues. The eigensolver I want to explore in my project is Arnoldi's algorithm and its variants, such as the Implicitly Restarted Arnoldi algorithm.

1. Basic Arnoldi Algorithm

The Arnoldi algorithm is an iterative eigensolver based on Krylov subspaces. Given a matrix A and a vector u_1, an m-dimensional Krylov subspace is spanned by the columns of the following matrix:

    K_m(A, u_1) = [u_1  Au_1  A²u_1  ⋯  A^{m-1}u_1]

provided that they are linearly independent. m steps of the Arnoldi algorithm give us the following decomposition:

    A U_m = U_m H_m + β_m u_{m+1} e_m^T    (3)

where U_m is an orthonormal basis of the m-dimensional Krylov subspace and [U_m  u_{m+1}] is an orthonormal basis of the (m+1)-dimensional Krylov subspace, H_m is an m × m upper Hessenberg matrix, β_m is a scalar and e_m is the vector [0 0 ⋯ 0 1]^T. One thing worth mentioning is that we usually select m such that m ≪ N. Premultiply (3) by U_m^T:

    U_m^T A U_m = U_m^T U_m H_m + β_m U_m^T u_{m+1} e_m^T = H_m.    (4)

Therefore, H_m is the projection of A onto the m-dimensional Krylov subspace. We hope the eigenvalues of H_m can be good approximations of those of A, because if that is true, we can solve a much smaller eigenvalue problem instead. Suppose (λ, z) is an eigenpair of H_m. Multiplying by z:

    A(U_m z) ≈ U_m (H_m z) = U_m (λz) = λ(U_m z).    (5)

Then (λ, U_m z) is an approximation of an eigenpair of A and we define the residual to be A(U_m z) − λ(U_m z). We can also obtain from (5) the following:

    U_m^T ( A(U_m z) − λ(U_m z) ) = 0.

This tells us the residual is always orthogonal to the m-dimensional Krylov subspace. Fig. 1 illustrates A(U_m z), λ(U_m z) and their residual.

Fig. 1
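The decomposition (3) and the residual property derived from (5) are easy to check numerically. Below is an illustrative NumPy sketch of the basic Arnoldi process (not the project's Matlab routine), with one reorthogonalization pass added for numerical robustness:

```python
import numpy as np

def arnoldi(A, u1, m):
    """m steps of the Arnoldi process. Returns U (n x (m+1)) and
    H ((m+1) x m) with A U_m = U_m H_m + beta_m u_{m+1} e_m^T as in (3)."""
    n = A.shape[0]
    U = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    U[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(m):
        w = A @ U[:, j]
        for _ in range(2):                  # Gram-Schmidt + reorthogonalization
            for i in range(j + 1):
                s = U[:, i] @ w
                H[i, j] += s
                w -= s * U[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / H[j + 1, j]
    return U, H

rng = np.random.default_rng(0)
n, m = 200, 30
A = rng.standard_normal((n, n))
U, H = arnoldi(A, rng.standard_normal(n), m)
Um, Hm, beta = U[:, :m], H[:m, :], H[m, m - 1]
em = np.zeros(m); em[-1] = 1.0
# decomposition (3): A U_m = U_m H_m + beta_m u_{m+1} e_m^T
assert np.allclose(A @ Um, Um @ Hm + beta * np.outer(U[:, m], em))
# a Ritz pair (lambda, U_m z) of H_m; its residual is orthogonal to the subspace
lam, Z = np.linalg.eig(Hm)
x = Um @ Z[:, 0]
r = A @ x - lam[0] * x
assert np.abs(Um.T @ r).max() < 1e-8
```

Even though the Ritz values are only approximations for m ≪ N, the decomposition itself holds to machine precision, which is what the assertions confirm.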

As m increases, the residual will decrease and eventually, when m = N, A(U_m z) = λ(U_m z), which means (λ, U_m z) is an exact eigenpair of A. Though the Arnoldi algorithm is powerful, it is not a good idea to apply it naïvely to our eigenvalue problem. There are basically two reasons: (1) since we are dealing with large systems, increasing m to improve the performance of Arnoldi is not practical. For example, when A is of size 10,000 × 10,000 and m = 100, then we need about 10 megabytes to store the Krylov basis U_100 in double precision; (2) in many real world applications, the matrix B is singular. This will give rise to spurious eigenvalues if we don't do anything clever.

A lot of variants of the Arnoldi algorithm follow the line of restarting the Arnoldi process with a more carefully chosen starting vector u_1 (in the basic Arnoldi algorithm, u_1 is randomly chosen). In my project I will explore one of them, the Implicitly Restarted Arnoldi algorithm (IRA).

2. Implicitly Restarted Arnoldi (IRA) Algorithm

The basic idea of IRA is to filter out unwanted eigendirections from the original starting vector u_1 by using the most recent spectrum information and a clever filtering technique. Suppose we first use the Arnoldi algorithm and find m (m > k) approximate eigenpairs (μ_1, x_1), (μ_2, x_2), …, (μ_m, x_m) (with Re(μ_i) ≥ Re(μ_j) for i < j) and the following Arnoldi decomposition:

    A U_m = U_m H_m + β_m u_{m+1} e_m^T.    (6)

We want to filter out the eigendirections x_{k+1}, …, x_m from the starting vector u_1 so that it can be richer in the eigendirections we are interested in. Suppose now we want to filter out the eigendirection x_{k+1} from the starting vector.
We first subtract μ_{k+1} U_m from both sides of (6):

    (A − μ_{k+1} I) U_m = U_m (H_m − μ_{k+1} I) + β_m u_{m+1} e_m^T    (7)

then compute the QR decomposition of H_m − μ_{k+1} I (suppose H_m − μ_{k+1} I = Q_1 R_1, where Q_1 is orthonormal and R_1 is upper triangular):

    (A − μ_{k+1} I) U_m = U_m Q_1 R_1 + β_m u_{m+1} e_m^T,

next, multiply both sides of (7) from the right by Q_1:

    (A − μ_{k+1} I)(U_m Q_1) = (U_m Q_1)(R_1 Q_1) + β_m u_{m+1} e_m^T Q_1    (8)

and add μ_{k+1} U_m Q_1 to both sides of (8):

    A (U_m Q_1) = (U_m Q_1)(R_1 Q_1 + μ_{k+1} I) + β_m u_{m+1} e_m^T Q_1.

This way we have a new Arnoldi decomposition:

    A U_m^{(1)} = U_m^{(1)} H_m^{(1)} + β_m u_{m+1} e_m^T Q_1

where U_m^{(1)} = U_m Q_1 and H_m^{(1)} = R_1 Q_1 + μ_{k+1} I results from one QR step with shift μ_{k+1} applied to H_m. The first column of U_m^{(1)} is proportional to (A − μ_{k+1} I) u_1, so the unwanted eigendirection x_{k+1} has been filtered out from the starting vector. We repeat the process with μ_{k+2}, …, μ_m and end up with the new Krylov basis U_m^{(m−k)} whose first column is proportional to (A − μ_{k+1} I) ⋯ (A − μ_m I) u_1. All the unwanted eigendirections have been filtered out from the original starting vector u_1. So we don't need to restart the Arnoldi process explicitly; instead, we apply the shifted QR algorithm to the upper Hessenberg matrix H_m to obtain a new Arnoldi decomposition which is equivalent to the decomposition we would obtain if we started the Arnoldi process with the filtered vector. That's why this method is called Implicitly Restarted Arnoldi. A typical IRA cycle consists of the following three steps:

1. Compute the m eigenpairs of H_m (k < m ≪ N)
2. Apply the shifted QR algorithm m − k times (with shifts μ_{k+1}, …, μ_m) to H_m to compute a new Arnoldi decomposition with filtered starting vector
3. Go back to step 1 if the wanted eigenpairs have not converged

Matrix Transformation

Matrix transformation is crucial in solving problems like (1). There are two important reasons for this approach. First, a practical reason is that iterative methods like Arnoldi's method and subspace iteration cannot solve generalized eigenvalue problems, which makes a transformation necessary. A second reason is of a numerical nature. It is well known that iterative eigenvalue solvers applied to A quickly converge to the well-separated extreme eigenvalues of A. When A arises from the spatial discretization of a PDE, the rightmost eigenvalues of A are in general not well separated. This implies slow convergence, and the iterative method may converge to a wrong eigenvalue. Instead, one applies eigenvalue solvers to a transformation T, with the aim of transforming the rightmost eigenvalues of A to well-separated extremal eigenvalues of T, which are easily found by the eigenvalue solvers we consider.
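The key fact behind the implicit restart, that one shifted QR step applied to H_m leaves a basis whose first column is proportional to (A − μ_{k+1} I) u_1, can be verified directly. A small NumPy sketch (illustrative only; an arbitrary real shift stands in for an unwanted Ritz value):

```python
import numpy as np

def arnoldi(A, u1, m):
    """m Arnoldi steps with a reorthogonalization pass (sketch, not the
    project's Matlab code)."""
    n = A.shape[0]
    U = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    U[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(m):
        w = A @ U[:, j]
        for _ in range(2):
            for i in range(j + 1):
                s = U[:, i] @ w
                H[i, j] += s
                w -= s * U[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / H[j + 1, j]
    return U, H

rng = np.random.default_rng(1)
n, m = 120, 12
A = rng.standard_normal((n, n))
U, H = arnoldi(A, rng.standard_normal(n), m)
Um, Hm = U[:, :m], H[:m, :]

mu = 0.3                                    # stand-in for a filter shift mu_{k+1}
Q1, R1 = np.linalg.qr(Hm - mu * np.eye(m))  # H_m - mu I = Q_1 R_1
U1 = Um @ Q1                                # U^(1) = U_m Q_1
v = A @ Um[:, 0] - mu * Um[:, 0]            # (A - mu I) u_1
cos = abs(v @ U1[:, 0]) / np.linalg.norm(v)
assert abs(cos - 1.0) < 1e-10               # first column parallel to (A - mu I) u_1
```

In a full IRA cycle the shifts are the m − k unwanted Ritz values and the step is repeated, after which the decomposition is contracted back to k steps.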
I will explore two kinds of matrix transformation: the shift-invert transformation and the Cayley transformation.

1. Shift-Invert Transformation

The shift-invert transformation is defined as follows:

    T_SI(A, B; σ) = (A − σB)^{-1} B

where σ is called the shift. After the transformation, we solve the standard eigenvalue problem T_SI x = θx instead of the generalized eigenvalue problem Ax = λBx. After that, we use the relationship between λ and θ:

    θ = 1 / (λ − σ)

to recover the eigenvalues of the original problem. The shift-invert transformation maps λ that are close to σ away from the origin and maps λ far from σ close to the origin. Fig. 2 and Fig. 3 illustrate this property.

Fig. 2    Fig. 3

So an obvious choice of σ is an approximation of the eigenvalue we want to compute.

2. Cayley Transformation

The Cayley transformation is defined by

    T_C(A, B; σ, τ) = I + (σ − τ)(A − σB)^{-1} B = (A − σB)^{-1}(A − τB)

where σ is called the shift and τ is called the anti-shift. After the transformation, we solve the standard eigenvalue problem T_C x = θx instead of the generalized eigenvalue problem Ax = λBx. After that, we use the relationship between λ and θ:

    θ = (λ − τ) / (λ − σ)

to recover the eigenvalues of the original problem. The Cayley transformation maps λ close to σ to θ far from the unit circle and maps λ close to τ to θ with small modulus. That's why τ is called the anti-shift. The most interesting property is that the line Re(λ) = (σ + τ)/2 is mapped to the unit circle: λ to the left of the line are mapped inside the unit circle, and λ to the right of the line are mapped outside the unit circle. Fig. 4 and Fig. 5 demonstrate this property.

Fig. 4    Fig. 5

So if we are interested in computing the r rightmost eigenvalues, we should choose σ and τ such that λ_{r+1} lies on the line Re(λ) = (σ + τ)/2.

Discretization of PDEs

In this project, the finite difference and finite element methods will be used to discretize the PDEs. They are the most commonly used methods in the discretization of PDEs.

Complex Arithmetic

Since the rightmost eigenvalues are complex in many real-world problems, complex arithmetic is considered in this project.

Mid-Year Progress Report Overview

AMSC 663:
- Before November: solve the first test problem; explore the effect of the Rayleigh number in the problem
- November: solve the second test problem (in progress); explore the effect of the Damköhler number in the problem (in progress)
- December: modify the Implicitly Restarted Arnoldi algorithm; solve the third test problem; finish midterm report; give midterm presentation

AMSC 664:
- January & February: implement iterative linear system solver
- March: give mid-term presentation; discretize a Navier-Stokes equation
- April: solve the eigenvalue problem arising from the discretization; explore the effect of the parameter in the last problem; start writing final report
- May: finish final report; give final presentation

The First Test Problem: Olmstead Model

1. Problem Statement

The first test problem is from a paper by Olmstead et al. in 1986. The model is governed by a coupled system of PDEs:

    u_t = c u_xx + (1 − c) S_xx + Ru − u³
    b S_t = u − S

with boundary conditions:

    u = S = 0 at x = 0, π.

This model represents the flow of a layer of viscoelastic fluid heated from below. u is the speed of the fluid, S is a quantity related to viscoelastic forces, and b, c, R are all scalars. In particular, R is called the Rayleigh number and is the parameter I study in this problem.

2. Discretization of the PDEs

I use the second order centered difference method to discretize the system of PDEs. The grid size is h = 1/(N/2). Order the unknowns by grid points, i.e.:

    y = [u_1  S_1  u_2  S_2  ⋯  u_{N/2}  S_{N/2}]^T.

Then the system can be written as dy/dt = f(y). The matrix B in this case is the identity. We evaluate the Jacobian matrix A at the trivial (all 0) steady state solution y*: A = df/dy(y*), with N = 1000, b = 2, c = 0.1 and R = 0.6. The resulting matrix is a large (500 × 500) sparse nonsymmetric matrix with bandwidth 6. Fig. 6 illustrates its structure.
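As a sketch of this discretization (NumPy/SciPy rather than the project's Matlab; the helper name, the number of interior grid points, and the spacing h = π/(M+1) are choices made here for illustration), the linearized Jacobian can be assembled and its rightmost eigenvalue checked at the critical value R = c + 1/b suggested by the linear stability analysis:

```python
import numpy as np
import scipy.sparse as sp

def olmstead_jacobian(M, b, c, R):
    """Jacobian of the linearized Olmstead model at the trivial steady state,
    second-order centered differences on (0, pi) with M interior grid points,
    unknowns interleaved as [u_1, S_1, u_2, S_2, ...]. Illustrative sketch."""
    h = np.pi / (M + 1)
    A = sp.lil_matrix((2 * M, 2 * M))
    for i in range(M):
        ui, si = 2 * i, 2 * i + 1
        # row for u_i:  u_t = c u_xx + (1 - c) S_xx + R u
        A[ui, ui] = R - 2.0 * c / h**2
        A[ui, si] = -2.0 * (1.0 - c) / h**2
        if i > 0:
            A[ui, ui - 2] = c / h**2
            A[ui, si - 2] = (1.0 - c) / h**2
        if i < M - 1:
            A[ui, ui + 2] = c / h**2
            A[ui, si + 2] = (1.0 - c) / h**2
        # row for S_i:  b S_t = u - S
        A[si, ui] = 1.0 / b
        A[si, si] = -1.0 / b
    return A.tocsr()

b, c = 2.0, 0.1
R = c + 1.0 / b            # critical Rayleigh number from the linear analysis
J = olmstead_jacobian(200, b, c, R)
lam = np.linalg.eigvals(J.toarray())
rm = lam[np.argmax(lam.real)]          # rightmost eigenvalue
# at the critical R the rightmost pair sits (up to discretization error)
# on the imaginary axis, near +/- 0.4472i for b = 2, c = 0.1
assert abs(rm.real) < 1e-3
assert abs(abs(rm.imag) - 0.4472) < 1e-3
```

With b = 2 and c = 0.1 this gives R = 0.6, and the interleaved ordering produces the narrow banded structure that is convenient for the linear system solvers.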

Fig. 6

The nonzero elements of the matrix all sit on the six bands in the middle. This kind of matrix structure is good for linear system solvers. That's why I order the unknowns by the grid points.

3. Matlab Implementation of the Arnoldi Algorithm

To implement the Arnoldi algorithm together with the shift-invert matrix transformation, I wrote a Matlab routine: [v, X, U, H] = si_arnoldi(A, B, k, sigma). The input of the routine is:

- A, B: the matrices A and B in the problem Ax = λBx;
- k: the number of eigenpairs wanted;
- sigma: the shift σ in the shift-invert transformation.

The output of the routine is:

- v: a vector of computed eigenvalues;
- X: a matrix whose columns are the eigenvectors;
- U: the orthonormal basis of the (k+1)-dimensional Krylov subspace;
- H: the upper Hessenberg matrix.

4. Computational Result

Use k = 10 and sigma = −0.549994 + 2.085i. As discussed in the "Matrix Transformation" section, a good choice of σ is an approximation of the eigenvalue of interest. When b = 2, c = 0.1 and R = 0.3, the rightmost eigenvalue is −0.549994 ± 2.085i, so −0.549994 + 2.085i can be viewed as an approximation to the rightmost eigenvalue when R = 0.6. The computational result is:

    rightmost eigenvalues: λ_{1,2} = 0 ± 0.4472i
    residual: ‖A x_i − λ_i x_i‖ = 8.4504e-02, i = 1, 2

and the result agrees with the literature. I also computed the critical Rayleigh number. The critical Rayleigh number is the value of R at which the rightmost eigenvalue just crosses the imaginary axis

and thus makes the steady state solution change from stable to unstable. To compute the critical Rayleigh number R_C, I fix b and c, start from a large R and compute the rightmost eigenvalue, then decrease R and compute the rightmost eigenvalue again using the rightmost eigenvalue of the previous step as the shift σ, and repeat this process until the real part of the rightmost eigenvalue becomes negative. Fig. 7 shows the value of R_C under different values of b and c.

Fig. 7

The graph shows that R_C follows the rule

    R_C = c + 1/b  if b(1 − c) > 1,
    R_C = 1        otherwise,

which also agrees with the literature.

The Third Test Problem

1. Problem Statement

This test problem is from a paper by Meerbergen and Spence (1995). Consider the following eigenvalue problem:

    [ K    C ]         [ M  0 ]
    [ C^T  0 ] x  = λ  [ 0  0 ] x

where K is a 200 × 200 matrix, C is a 200 × 100 matrix, and M is a 200 × 200 matrix. They are all of full rank. K, C and M are generated by using the Matlab function "rand".
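This test problem is easy to reproduce. A NumPy/SciPy sketch using the dimensions above (illustrative, not the project's Matlab script), which also shows the eigenvalues at infinity contributed by the singular mass matrix:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n, p = 200, 100
K = rng.random((n, n))
C = rng.random((n, p))
M = rng.random((n, n))

# block pencil A x = lambda B x with singular B
A = np.block([[K, C], [C.T, np.zeros((p, p))]])
B = np.block([[M, np.zeros((n, p))], [np.zeros((p, n)), np.zeros((p, p))]])

lam, V = eig(A, B)
finite = np.abs(lam) < 1e6     # the singular B yields eigenvalues at infinity
assert np.count_nonzero(~finite) >= 1
i = np.argmax(np.where(finite, lam.real, -np.inf))
rightmost, x = lam[i], V[:, i]
# residual check of the rightmost finite eigenpair
assert np.linalg.norm(A @ x - rightmost * (B @ x)) < 1e-6
```

The infinite (numerically huge) eigenvalues are exactly the kind of spurious values a naive Arnoldi run can pull in, which is what the purification variant of IRA [2] is designed to avoid.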

Although this problem is generated artificially, eigenvalue problems with this kind of block structure appear in the stability analysis of steady state solutions of the Navier-Stokes equations for incompressible flow. Eventually, I want to solve the eigenvalue problem arising from the discretization of the Navier-Stokes equations.

2. Matlab Implementation

I use two algorithms to solve this problem and compare their performance. The first one is the basic Arnoldi algorithm and the second one is the Implicitly Restarted Arnoldi (IRA) algorithm. To implement the first algorithm, I again use the Matlab code from the first problem. For the IRA, I use a code written by Fei Xue. The IRA code has the following important functions:

- IRADirectMain: the main function
- ArnoldiExpd: given a k-step Arnoldi decomposition, expands it to an m-step Arnoldi decomposition (k ≤ m)
- sglshiftqr, dblshiftqr: implement the shifted QR algorithm (sglshiftqr if the shift is real, dblshiftqr if the shift is complex)
- contractionira: given an m-step Arnoldi decomposition, contracts it to a k-step Arnoldi decomposition (k ≤ m)
- extracteigenpairs: outputs the eigenpairs of interest and also the error

Fig. 8 illustrates the basic structure of the IRA code.

Fig. 8
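The expand/filter/contract structure in the list above can be sanity-checked in a few lines: after applying m − k shifted QR steps to H_m, using the unwanted leftmost Ritz values as shifts, the leading k columns still form a valid Arnoldi decomposition. An illustrative NumPy sketch in complex arithmetic (not Fei Xue's code; single complex shifts here take the place of sglshiftqr/dblshiftqr):

```python
import numpy as np

def arnoldi(A, u1, m):
    """m complex-arithmetic Arnoldi steps with reorthogonalization (sketch)."""
    n = A.shape[0]
    U = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    U[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(m):
        w = A @ U[:, j]
        for _ in range(2):
            for i in range(j + 1):
                s = np.vdot(U[:, i], w)
                H[i, j] += s
                w = w - s * U[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / H[j + 1, j]
    return U, H

rng = np.random.default_rng(4)
n, m, k = 150, 20, 8
A = rng.standard_normal((n, n))
U, H = arnoldi(A, rng.standard_normal(n), m)
Um, Hm = U[:, :m], H[:m, :].copy()
f = H[m, m - 1] * U[:, m]                  # beta_m u_{m+1}

mu = np.sort_complex(np.linalg.eigvals(Hm))   # Ritz values, leftmost first
Q = np.eye(m, dtype=complex)
for shift in mu[: m - k]:                  # filter the m - k leftmost Ritz values
    Qj, _ = np.linalg.qr(Hm - shift * np.eye(m))
    Hm = Qj.conj().T @ Hm @ Qj             # one shifted QR step
    Q = Q @ Qj
Up = Um @ Q
fk = Up[:, k] * Hm[k, k - 1] + f * Q[m - 1, k - 1]
Uk, Hk = Up[:, :k], Hm[:k, :k]             # contraction to a k-step decomposition
ek = np.zeros(k); ek[-1] = 1.0
# the contracted k-step Arnoldi decomposition still holds
assert np.linalg.norm(A @ Uk - Uk @ Hk - np.outer(fk, ek)) < 1e-8
assert np.allclose(Uk.conj().T @ Uk, np.eye(k), atol=1e-10)
```

The final assertions mirror what ArnoldiExpd and contractionira must preserve between restarts: an orthonormal basis and an exact residual term confined to the last column.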

3. Computational Result

I first use the Matlab function "eig" to compute the exact eigenvalues of the system, and then use the Arnoldi algorithm and IRA to compute the 10 rightmost eigenvalues. The shift σ for both Arnoldi and IRA is 60. Fig. 9 shows their results.

Fig. 9

The IRA gives a very accurate result, but Arnoldi gives rise to spurious eigenvalues, which are highlighted in red. The computed eigenvalues highlighted in blue are approximations of true eigenvalues, but unfortunately, they are not the 10 rightmost eigenvalues. Not only does Arnoldi give rise to spurious eigenvalues, it also computes the wrong eigenvalues. It is obvious that IRA outperforms Arnoldi.

Future Work (AMSC 664)

1. Finish test problem 2: the tubular reactor model
2. Implement iterative linear system solvers and compare with the direct solver
3. Implement the Cayley matrix transformation and compare with shift-invert
4. Apply the algorithm to the Navier-Stokes equations

References

1. Meerbergen, K. & Roose, D. 1996 Matrix transformations for computing rightmost eigenvalues of large sparse non-symmetric eigenvalue problems. IMA J. Numer. Anal. 16, 297-346.
2. Meerbergen, K. & Spence, A. 1997 Implicitly restarted Arnoldi with purification for the shift-invert transformation. Math. Comput. 66, 667-689.
3. Olmstead, W. E., Davis, S. H., Rosenblat, S. & Kath, W. L. 1986 Bifurcation with memory. SIAM J. Appl. Math. 46, 171-188.
4. Stewart, G. W. 2001 Matrix Algorithms, Volume II: Eigensystems. SIAM.
5. Sorensen, D. C. 1992 Implicit application of polynomial filters in a k-step Arnoldi method. SIAM J. Matrix Anal. Appl. 13, 357-385.
6. Meerbergen, K. & Roose, D. 1997 The restarted Arnoldi method applied to iterative linear system solvers for the computation of rightmost eigenvalues. SIAM J. Matrix Anal. Appl. 18, 1-20.