AN INEXACT INVERSE ITERATION FOR COMPUTING THE SMALLEST EIGENVALUE OF AN IRREDUCIBLE M-MATRIX

MICHIEL E. HOCHSTENBACH, WEN-WEI LIN, AND CHING-SUNG LIU

Abstract. In this paper, we present an inexact inverse iteration method to find the smallest eigenvalue and the associated eigenvector of an irreducible M-matrix. We propose two different relaxation strategies for solving the linear systems of the inner iterations, and show that the resulting two iterations converge globally linearly and superlinearly, respectively. Numerical examples are provided to illustrate the theory.

Key words. Inexact Noda iteration, M-matrix, positive matrix, inexact Rayleigh quotient iteration, Perron vector, Perron root.

AMS subject classifications. 65F15, 65F50.

Version May 22, 2012. This work was partially supported by the National Science Council and the National Center for Theoretical Sciences in Taiwan. Author affiliations: Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB, The Netherlands (www.win.tue.nl/~hochsten); Department of Applied Mathematics, National Chiao Tung University, Hsinchu 300, Taiwan (wwlin@math.nctu.edu.tw); Department of Mathematics, National Tsinghua University, Hsinchu 300, Taiwan (chingsungliu@gmail.com).

1. Introduction. We consider the eigenvalue problem

    Ax = λx

of computing the smallest eigenvalue λ and the associated eigenvector x of an irreducible M-matrix A. In this paper, the smallest eigenvalue is denoted by λ := min_{λ_i ∈ Λ(A)} λ_i, where Λ(A) is the spectrum of A. Since A is an M-matrix, it can be expressed in the form A = σI − B with B ≥ 0 and some constant σ > ρ(B), where ρ(·) denotes the spectral radius. Thus the smallest eigenvalue λ is equal to σ − ρ(B). It is well known [7, p. 487] that the largest eigenvalue of B is the Perron root, which is simple and equal to the spectral radius of B, and that the associated eigenvector is positive. Consequently, we only need to compute the Perron root and the Perron vector of the nonnegative irreducible matrix B.

When A is large and sparse, there are a number of methods for computing (λ, x), such as inverse iteration (INVIT) [13, 15, 17], Rayleigh quotient iteration (RQI) [13, 17], and shift-invert Arnoldi [17]. These methods are inner-outer iterations that can be applied to obtain a specific eigenpair: the so-called inner iteration solves a linear system, while the outer iteration updates the approximate eigenpair. However, they require the exact solution of a possibly ill-conditioned linear system at each step. This is generally difficult, and often impractical for a direct solver, since a factorization of a shifted A may be expensive. There is therefore considerable interest in inexact methods for eigenvalue problems. Among these, inexact inverse iteration (Inexact INVIT) [3, 10] and inexact RQI (IRQI) [8, 9, 12] are the simplest and most basic ones. In addition, they are key ingredients of other sophisticated and practical inexact methods, such as inverse subspace iteration [14] and the Jacobi–Davidson method [16, 17].
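The reduction above (A = σI − B, λ = σ − ρ(B)) is easy to check numerically. The following is a minimal NumPy sketch; it is our own illustration, not part of the paper, and all names are ours:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    B = rng.random((n, n))                      # B >= 0, irreducible with probability 1
    rho = np.max(np.abs(np.linalg.eigvals(B)))  # Perron root rho(B)
    sigma = rho + 1.0                           # any sigma > rho(B) works
    A = sigma * np.eye(n) - B                   # an irreducible M-matrix
    lam = np.min(np.linalg.eigvals(A).real)     # smallest eigenvalue of A
    print(np.isclose(lam, sigma - rho))         # True: lambda = sigma - rho(B)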

In [3, 8, 9, 10, 12, 14, 16, 17] it is also shown that these methods have good convergence rates; for example, IRQI exhibits cubic or quadratic convergence. But the convergence of IRQI depends strongly on the initial guess. If the initial guess is not well chosen, the method may waste many iterations searching for the right direction, and it may fail to converge or may converge to an undesired eigenpair.

First, based on the Noda iteration [11], we propose an inexact Noda iteration (INI) to find the largest eigenvalue and the associated eigenvector of a nonnegative irreducible matrix B. The advantage of the Noda iteration is that it creates a decreasing sequence of approximate eigenvalues converging to ρ(B); furthermore, the convergence is proven to be superlinear. In [11], the main task in each step is to solve the linear system

    (λ_k I − B) y_{k+1} = x_k,    (1.1)

where λ_k is the approximate shift, which is always larger than ρ(B). The major contribution of this paper is to provide two relaxation strategies for solving (1.1). The first strategy uses min(x_k) as an upper bound on the tolerance for the inner iterations. The second strategy solves (1.1) with a decreasing tolerance for the inner iterations. We show that the convergence of the former iteration is globally linear and that of the latter is superlinear, respectively.

In this paper, we use INI to find the smallest eigenvalue and the associated eigenvector of an M-matrix A, as mentioned in [1, 18]. The greatest difference between [1, 18] and INI is the inexact technique we use for solving the linear systems, i.e.,

    (A − λ_k I) y_{k+1} = x_k,    (1.2)

where λ_k is the approximate shift, which is always smaller than 1/ρ(A^{-1}). Similarly, we provide two relaxation strategies for solving (1.2), and we show that the convergence of these two strategies is globally linear and superlinear, respectively.

When A is a symmetric M-matrix (or a symmetric nonnegative irreducible matrix), we provide an integrated algorithm combining INI with IRQI. First, exploiting the global convergence of INI, we use INI to generate a good approximate vector x, which serves as the initial vector for IRQI; we then use the cubic or quadratic convergence behaviour of IRQI [8] to accelerate the overall convergence. This algorithm leverages the inexact techniques: we allow the inner tolerance to be as large as possible at each inner iteration, in order to greatly enhance the computational efficiency.

The rest of this paper is organized as follows. In Section 2, we introduce the Noda iteration and some preliminaries. In Section 3, we propose the INI algorithm and develop its convergence theory. In Section 4, we use INI to find the smallest eigenvalue and the associated eigenvector of an M-matrix. In Section 5, we provide an integrated algorithm combining INI with IRQI for M-matrices or nonnegative matrices. We present numerical experiments confirming our results in Section 6. Finally, we draw some conclusions in Section 7.

2. Preliminaries and Notation. For any matrix B = (b_{ij}), we denote |B| = (|b_{ij}|). If the entries of a matrix B are all nonnegative (positive), we write B ≥ 0 (B > 0). For real matrices B and C of the same size, if B − C is a nonnegative matrix, we write B ≥ C. A nonnegative matrix B is said to be reducible if there exists a permutation matrix P such that

    P^T B P = [ E  F ]
              [ O  G ],

where E and G are square matrices; B is called irreducible if it is not reducible. The basic eigenvalue properties of irreducible nonnegative matrices are summed up in the Perron–Frobenius theorem (see Horn and Johnson [7]); here we formulate only the portion of it relevant to the scope of this paper. We denote e = (1, 1, ..., 1)^T. A matrix A is called an M-matrix if it can be expressed in the form A = σI − B with B ≥ 0 and σ > ρ(B).

Theorem 2.1. Let A be an M-matrix. Then the following statements are equivalent (see, e.g., [2]):
(i) A = (a_{ij}) with a_{ij} ≤ 0 for i ≠ j, and A^{-1} ≥ 0;
(ii) A = σI − B with B ≥ 0 and σ > ρ(B).

Theorem 2.2 ([7]). Let B be a real irreducible nonnegative matrix. Then λ := ρ(B), the spectral radius of B, is a simple eigenvalue of B. Moreover, there exists an eigenvector x with positive elements associated with this eigenvalue, and no eigenvalue λ ≠ ρ(B) has a positive eigenvector.

For a pair of vectors x, y with y > 0, we define

    max(x/y) = max_i (x_i / y_i),    min(x/y) = min_i (x_i / y_i).

The following theorem, from [7, p. 508], gives bounds on the spectral radius of a nonnegative square matrix.

Theorem 2.3 ([7]). Let B be a nonnegative irreducible matrix. If x > 0 is not an eigenvector of B, then

    min(Bx/x) < ρ(B) < max(Bx/x).    (2.1)

2.1. Bounds for eigenvectors. Since B is a nonnegative irreducible matrix, Theorem 2.2 shows that the largest eigenvalue of B is simple. Assume that the eigenvalues of B are ordered as

    ρ(B) > |λ_2| ≥ ... ≥ |λ_n|.    (2.2)

Let x, x_2, ..., x_n be the unit eigenvectors corresponding to ρ(B), λ_2, ..., λ_n. Since ρ(B) is simple, there exists a nonsingular matrix [x X] with inverse [v V]^T such that [6]

    [v V]^T B [x X] = [ ρ(B)  0 ]
                      [ 0     L ].    (2.3)

Note that v is the left eigenvector of B and V^T B = L V^T. In addition, if µ is not an eigenvalue of L, the sep function for µ and L is defined as

    sep(µ, L) = ‖(µI − L)^{-1}‖^{-1}.

Given a pair (µ, z) as an approximation to (λ, x), the following lemma [17] relates sin∠(x, z) to the residual Bz − µz.

Lemma 2.4 ([17, Th. 3.13]). Let z be a unit vector. For any µ ∉ Λ(L),

    sin∠(x, z) ≤ ‖Bz − µz‖ / sep(µ, L).    (2.4)
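The Collatz–Wielandt bounds (2.1) drive all the eigenvalue updates used below, and they are easy to check numerically. A minimal NumPy check (our own sketch, not from the paper; names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.random((5, 5))                      # nonnegative, irreducible with prob. 1
    x = rng.random(5) + 0.1                     # positive, generically not an eigenvector
    r = B @ x / x                               # componentwise ratios (Bx)_i / x_i
    rho = np.max(np.abs(np.linalg.eigvals(B)))  # spectral radius rho(B)
    print(r.min() < rho < r.max())              # True: the bounds (2.1)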

2.2. The Noda iteration. In [11], Noda provided an inverse iteration for computing the Perron root of a nonnegative irreducible matrix; the method was shown to be quadratically convergent by Elsner [5]. The Noda iteration consists of three steps:

    (λ_k I − B) y_{k+1} = x_k,    (2.5)
    x_{k+1} = y_{k+1} / ‖y_{k+1}‖,    (2.6)
    λ_{k+1} = max(B x_{k+1} / x_{k+1}).    (2.7)

The main step computes a new approximation x_{k+1} to x by solving the linear system (2.5), called the inner iteration. The update of the approximate eigenpair (λ_{k+1}, x_{k+1}) is called the outer iteration. Since λ_k > ρ(B) as long as x_k is not a scalar multiple of the eigenvector x, the matrix λ_k I − B is an M-matrix; therefore x_{k+1} is again a positive vector. After a change of variables, we get the relation between λ_{k+1} and λ_k:

    λ_{k+1} = λ_k − min(x_k / y_{k+1}).

Thus {λ_k} is decreasing. The algorithm to be developed here is based on inverse iteration shifted by a Rayleigh-quotient-like approximation of the eigenvalue. This process is summarized as Algorithm 2.1.

Algorithm 2.1 Noda iteration.
1. Set x_0 > 0, compute λ_0 = max(B x_0 / x_0).
2. for k = 0, 1, 2, ...
3.   Solve the linear system (λ_k I − B) y_{k+1} = x_k.
4.   Normalize the vector x_{k+1} = y_{k+1} / ‖y_{k+1}‖.
5.   Compute λ_{k+1} = max(B x_{k+1} / x_{k+1}).
6. until convergence.
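To make the procedure concrete, here is a minimal NumPy sketch of Algorithm 2.1 with the inner system (2.5) solved directly; it is our own illustration, not the paper's code:

    import numpy as np

    def noda_iteration(B, tol=1e-10, maxit=100):
        # Exact Noda iteration (Algorithm 2.1) for the Perron pair of an
        # irreducible nonnegative matrix B.
        n = B.shape[0]
        x = np.ones(n) / np.sqrt(n)       # positive starting vector
        lam = np.max(B @ x / x)           # lambda_0 = max(Bx_0/x_0) > rho(B)
        for _ in range(maxit):
            y = np.linalg.solve(lam * np.eye(n) - B, x)   # inner solve (2.5)
            x = y / np.linalg.norm(y)                     # normalize (2.6)
            lam = np.max(B @ x / x)                       # update shift (2.7)
            if np.linalg.norm(B @ x - lam * x) < tol:     # outer residual
                break
        return lam, x

For large sparse B, the exact solve in step 3 is precisely what the inexact variants in the next section avoid.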

3. The inexact Noda iteration and convergence theory. Based on the Noda iteration, in this section we propose an inexact Noda iteration (INI) for the computation of the spectral radius of a nonnegative irreducible matrix A. For practical applications, we provide two different relaxation strategies for solving the linear system inexactly in each iterative step of INI. Furthermore, we show that the convergence of the two resulting variants of INI is globally linear and superlinear, respectively.

3.1. The inexact Noda iteration. Since A is large and sparse, an iterative linear solver must be used in step 3 of Algorithm 2.1 to obtain an approximate solution. Reducing the computational cost of Algorithm 2.1 in this way leads to an inexact Noda iteration, in which y_{k+1} in step 3 of Algorithm 2.1 is solved inexactly so that

    (λ_k I − A) y_{k+1} = x_k + f_k,    (3.1)
    x_{k+1} = y_{k+1} / ‖y_{k+1}‖,    (3.2)

where f_k is the residual vector between (λ_k I − A) y_{k+1} and x_k, and ‖·‖ denotes the vector 2-norm. Here, the residual norm (inner tolerance) ξ_k := ‖f_k‖ can be changed at each iterative step.

Lemma 3.1. Let A be a nonnegative irreducible matrix and let 0 ≤ γ < 1 be a fixed constant. For x_k > 0, if the residual vector f_k in (3.1) satisfies

    |(λ_k I − A) y_{k+1} − x_k| = |f_k| ≤ γ x_k,    (3.3)

then x_{k+1} > 0. Furthermore, the sequence {λ_k} with λ_k = max(A x_k / x_k) is monotonically decreasing and bounded below by ρ(A), i.e.,

    λ_k > λ_{k+1} ≥ ρ(A).    (3.4)

Proof. Since λ_k I − A is an M-matrix and |f_k| ≤ γ x_k, the vector y_{k+1} satisfies

    y_{k+1} = (λ_k I − A)^{-1} (x_k + f_k) > 0.

This implies x_{k+1} = y_{k+1}/‖y_{k+1}‖ > 0 and min((x_k + f_k)/y_{k+1}) > 0. From (3.1) and the definition of λ_{k+1} it follows that

    λ_{k+1} = max(A x_{k+1} / x_{k+1}) = max(A y_{k+1} / y_{k+1})
            = max((λ_k y_{k+1} − x_k − f_k) / y_{k+1})
            = λ_k − min((x_k + f_k) / y_{k+1}) < λ_k.    (3.5)

By Theorem 2.3 we have λ_k > λ_{k+1} ≥ ρ(A).

Based on (3.1)–(3.2) and Lemma 3.1, we propose the inexact Noda iteration as follows.

Algorithm 3.1 Inexact Noda iteration (INI).
1. Given x_0 > 0 with ‖x_0‖ = 1, 0 ≤ γ < 1, and tol > 0.
2. Compute λ_0 = max(A x_0 / x_0).
3. for k = 0, 1, 2, ...
4.   Solve (λ_k I − A) y_{k+1} = x_k by an iterative solver such that
         |(λ_k I − A) y_{k+1} − x_k| = |f_k| ≤ γ x_k.
5.   Normalize the vector x_{k+1} = y_{k+1} / ‖y_{k+1}‖.
6.   Compute λ_{k+1} = max(A x_{k+1} / x_{k+1}).
7. until convergence: ‖A x_{k+1} − λ_{k+1} x_{k+1}‖ < tol.

Note that if γ = 0, i.e., f_k = 0 in (3.1) for all k, Algorithm 3.1 becomes the standard (exact) Noda iteration.
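The componentwise test |f_k| ≤ γ x_k in step 4 is the only requirement on the inner solver. As a hedged illustration (our own sketch, not the paper's implementation, which uses Krylov solvers; see Section 6), one can pair Algorithm 3.1 with any convergent inner iteration and stop as soon as (3.3) holds. Here we use a plain fixed-point (Neumann series) inner solver, which converges because ρ(A) < λ_k:

    import numpy as np

    def inner_solve(A, lam, x, gamma, maxit=100_000):
        # Approximately solve (lam*I - A) y = x, stopping as soon as the
        # componentwise condition |f| <= gamma*x of (3.3) holds.
        y = x / lam
        for _ in range(maxit):
            f = lam * y - A @ y - x            # inner residual f_k
            if np.all(np.abs(f) <= gamma * x):
                break
            y = (x + A @ y) / lam              # contraction: rho(A)/lam < 1
        return y

    def ini(A, gamma=0.5, tol=1e-8, maxit=500):
        # Inexact Noda iteration (Algorithm 3.1); gamma = 0 would demand an
        # (essentially) exact inner solve, recovering Algorithm 2.1.
        x = np.ones(A.shape[0]); x /= np.linalg.norm(x)
        lam = np.max(A @ x / x)
        for _ in range(maxit):
            y = inner_solve(A, lam, x, gamma)
            x = y / np.linalg.norm(y)
            lam = np.max(A @ x / x)
            if np.linalg.norm(A @ x - lam * x) < tol:
                break
        return lam, x

Any Krylov solver (the paper uses BICGSTAB and MINRES) can replace the fixed-point loop, as long as it is stopped by the componentwise test above.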

In the following, we give conditions for the convergence λ_k → ρ(A) as k → ∞.

Lemma 3.2. Let x > 0 be a unit eigenvector associated with ρ(A). For any vector z > 0 with ‖z‖ = 1, it holds that cos∠(z, x) > min(x), and

    inf_{‖z‖=1, z>0} cos∠(z, x) = min(x).    (3.6)

Proof. Since x > 0 and z > 0 with ‖x‖ = ‖z‖ = 1, we have cos∠(z, x) = z^T x > 0. Therefore the infimum of z^T x is attained in the limit z → e_i, where e_i is the i-th column of the identity matrix. That is,

    inf_{‖z‖=1, z>0} cos∠(z, x) = min_i { lim_{z→e_i} cos∠(z, x) } = min_i {x_i} = min(x).

Let {x_k} be generated by Algorithm 3.1. We decompose x_k into the orthogonal direct sum

    x_k = x cos φ_k + p_k sin φ_k,   p_k ∈ span{x}^⊥,    (3.7)

with ‖p_k‖ = 1 and φ_k = ∠(x_k, x) the acute angle between x_k and x. Now define

    ε_k = λ_k − ρ(A),   A_k = λ_k I − A.

Similarly to (2.3) we have the spectral decomposition

    [v V]^T A_k [x X] = [ ε_k  0   ]
                        [ 0    L_k ],    (3.8)

where L_k = λ_k I − L.

Theorem 3.3. Let A be a nonnegative irreducible matrix. Assume (ρ(A), x) is the largest eigenpair of A with x > 0 and ‖x‖ = 1. If x_k, λ_k, y_k, and f_k are generated by Algorithm 3.1 (INI), then the following statements are equivalent:
(i) lim_{k→∞} x_k = x;
(ii) lim_{k→∞} λ_k = ρ(A);
(iii) lim_{k→∞} ‖y_{k+1}‖^{-1} = 0.

Proof. (i) ⇒ (ii): By the definition of λ_k, we get

    lim_{k→∞} λ_k = lim_{k→∞} max(A x_k / x_k) = max(A x / x) = λ.

(ii) ⇒ (iii): Since |f_k| ≤ γ x_k, from (3.7) we have

    ‖y_{k+1}‖ = ‖A_k^{-1} (x_k + f_k)‖ ≥ (1 − γ) ‖A_k^{-1} x_k‖
              = (1 − γ) ‖ε_k^{-1} x cos φ_k + A_k^{-1} p_k sin φ_k‖.    (3.9)

The second term in (3.9) can be bounded by

    ‖A_k^{-1} p_k sin φ_k‖ ≤ ‖(ε_k^{-1} x v^T + X L_k^{-1} V^T) p_k‖ = ‖X L_k^{-1} V^T p_k‖
                           ≤ ‖X‖ ‖V‖ / sep(λ_k, L) ≤ ‖X‖ ‖V‖ / sep(ρ(A), L).    (3.10)

From Lemma 3.2 it follows that cos φ_k is uniformly bounded below by min(x). Combining (3.9) with (3.10) we have

    ‖y_{k+1}‖ ≥ (1 − γ) ( ‖(λ_k − ρ(A))^{-1} x cos φ_k‖ − ‖A_k^{-1} p_k sin φ_k‖ )
             ≥ (1 − γ) ( min(x) (λ_k − ρ(A))^{-1} − ‖X‖ ‖V‖ / sep(ρ(A), L) ) → ∞

as k → ∞.

(iii) ⇒ (i): Let (λ_k, x_{k+1}) be an approximation to (ρ(A), x). From Lemma 2.4, we have

    sin∠(x, x_{k+1}) ≤ ‖A x_{k+1} − λ_k x_{k+1}‖ / sep(λ_k, L)
                     = ‖(λ_k I − A) y_{k+1}‖ / (‖y_{k+1}‖ sep(λ_k, L))
                     = ‖x_k + f_k‖ / (‖y_{k+1}‖ sep(λ_k, L))
                     ≤ 2 / (‖y_{k+1}‖ sep(λ_k, L)) → 0.

Thus it holds that lim_{k→∞} x_k = x. Note that from Lemma 3.1 it follows that {λ_k}, being monotonically decreasing and bounded below, must converge.

Corollary 3.4. Under the assumptions of Theorem 3.3, if λ_k converges to α > ρ(A), then it holds that (i) ‖y_k‖ is bounded; (ii) lim_{k→∞} min(x_k + f_k) = 0; (iii) sin∠(x, x_k) ≥ ζ for some ζ > 0.

Proof. (i) Since |f_k| ≤ γ x_k, we get

    ‖y_{k+1}‖ = ‖(λ_k I − A)^{-1} (x_k + f_k)‖ ≤ 2 ‖(λ_k I − A)^{-1}‖
              = 2 / sep(λ_k, A) ≤ 2 / sep(α, A) < ∞.    (3.11)

(ii) From the relation in (3.5) it follows that

    lim_{k→∞} min((x_k + f_k)/y_{k+1}) = lim_{k→∞} (λ_k − λ_{k+1}) = 0.    (3.12)

From (3.11) and (3.12) we have

    min((x_k + f_k)/y_{k+1}) ≥ min(x_k + f_k)/max(y_{k+1}) ≥ min(x_k + f_k) · sep(α, A)/2 > 0.

Thus it holds that

    lim_{k→∞} min(x_k + f_k) = 0.    (3.13)

(iii) Suppose there is a subsequence {sin∠(x_{k_j}, x)} that converges to zero. Then by Theorem 3.3 the subsequence {λ_{k_j}} converges to ρ(A). This is a contradiction.

3.2. Convergence analysis. We now propose two practical relaxation strategies for the inexactness in step 4 of Algorithm 3.1 (INI):

INI_1: the residual norm satisfies ξ_k = ‖f_k‖ ≤ γ min(x_k), for some constant 0 < γ < 1;
INI_2: the residual vector satisfies |f_k| ≤ d_k x_k, where d_k = 1 − λ_k/λ_{k−1}.

It is easily seen that the conditions of INI_1 and INI_2 imply |f_k| ≤ γ x_k for some constant 0 < γ < 1, as required in step 4 of Algorithm 3.1. From Lemma 3.1, we see that both INI_1 and INI_2 generate a monotonically decreasing sequence {λ_k} bounded below by ρ(A) and a sequence of positive vectors {x_k}.
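As a sketch of the second strategy (our own illustration, reusing inner_solve from the ini() sketch above; the first-step tolerance 0.5 below is an arbitrary choice of ours, since d_0 needs a predecessor shift), INI_2 only changes the inner tolerance: the componentwise bound shrinks as the shifts stall, which is what yields the superlinear convergence established below:

    import numpy as np

    def ini2(A, tol=1e-8, maxit=500):
        # INI with relaxation strategy INI_2: componentwise inner tolerance
        # d_k = 1 - lam_k/lam_{k-1}, decreasing along the iteration.
        x = np.ones(A.shape[0]); x /= np.linalg.norm(x)
        lam_prev, lam = None, np.max(A @ x / x)
        for _ in range(maxit):
            d = 0.5 if lam_prev is None else 1.0 - lam / lam_prev
            y = inner_solve(A, lam, x, max(d, 1e-15))   # enforce |f_k| <= d_k x_k
            x = y / np.linalg.norm(y)
            lam_prev, lam = lam, np.max(A @ x / x)
            if np.linalg.norm(A @ x - lam * x) < tol:
                break
        return lam, x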

From the decomposition (3.8), the vector x_{k+1} can be decomposed as

    x_{k+1} = x γ_{k+1} + X S_{k+1},    (3.14)

where γ_{k+1} = v^T x_{k+1} and S_{k+1} = V^T x_{k+1}. Furthermore, we have

    t_{k+1} := ‖S_{k+1}‖ |γ_{k+1}|^{-1} = ‖V^T x_{k+1}‖ |v^T x_{k+1}|^{-1} = ‖V^T y_{k+1}‖ |v^T y_{k+1}|^{-1}
            = ‖V^T A_k^{-1} (x_k + f_k)‖ |v^T A_k^{-1} (x_k + f_k)|^{-1}
            = ‖L_k^{-1} V^T (x_k + f_k)‖ |ε_k^{-1} v^T (x_k + f_k)|^{-1}
            ≤ ‖L_k^{-1} ε_k‖ ‖V^T (x_k + f_k)‖ |v^T (x_k + f_k)|^{-1}
            = ‖L_k^{-1} ε_k‖ ‖V^T (x_k + f_k)‖ |γ_k|^{-1} |1 + v^T f_k γ_k^{-1}|^{-1}
            ≤ ‖L_k^{-1} ε_k‖ (t_k + ‖V‖ ‖f_k‖ |γ_k|^{-1}) (1 − ‖v‖ ‖f_k‖ |γ_k|^{-1})^{-1}.    (3.15)

Note that [14, Proposition 2.1] shows that x_k → x if and only if t_k → 0, and that

    sin φ_k ≤ ‖S_k‖,   |γ_k|^{-1} ≤ 1/(1 − sin φ_k) = (1 + sin φ_k)/cos² φ_k.    (3.16)

Theorem 3.5 (Main theorem). Let A be a nonnegative irreducible matrix. If {λ_k} is generated by INI_1 and ρ(A) > ‖L‖, where L is as in (2.3), then {λ_k} decreases monotonically to ρ(A) as k → ∞.

Proof. Suppose not, and assume that {λ_k} → α > ρ(A). Then ‖L_k^{-1} ε_k‖ can be bounded by

    ‖L_k^{-1} ε_k‖ ≤ (λ_k − ρ(A)) / sep(λ_k, L) → (α − ρ(A)) / sep(α, L) =: β < 1    (3.17)

as k → ∞; here β < 1 because ρ(A) > ‖L‖ implies sep(α, L) ≥ α − ‖L‖ > α − ρ(A). Since ξ_k = ‖f_k‖ ≤ γ min(x_k), we have |f_k| ≤ γ x_k, so min(x_k + f_k) ≥ (1 − γ) min(x_k). From Corollary 3.4 (ii) it follows that 0 = lim_{k→∞} min(x_k + f_k) ≥ lim_{k→∞} (1 − γ) min(x_k). Thus we have

    lim_{k→∞} min(x_k) = 0.    (3.18)

Furthermore, from Lemma 3.2 and Corollary 3.4 (iii) we know that sin φ_k and cos φ_k are uniformly bounded below by a positive number: there is an m > 0 such that m ≤ sin φ_k and m ≤ cos φ_k. From the inequalities (3.16) it follows that

    |γ_k|^{-1} ≤ 2/m²   and   ‖S_k‖ ≥ sin φ_k ≥ m.    (3.19)

Let 0 < δ ≤ η, where

    η = min( m²/(2‖v‖), (1 − β) m² / (γ (β m ‖V‖ + (ρ + 1) ‖v‖)) ).

Then from (3.18), there is an N > 0 such that min(x_k) < δ for all k ≥ N. From (3.19), this implies that γ min(x_k) |γ_k|^{-1} ‖v‖ < 1 for all k ≥ N. Using (3.15), (3.17), and (3.19) we obtain

    t_{k+1} ≤ ‖L_k^{-1} ε_k‖ (t_k + ‖V‖ ‖f_k‖ |γ_k|^{-1}) / (1 − ‖v‖ ‖f_k‖ |γ_k|^{-1})
            ≤ β (t_k + γ ‖V‖ min(x_k) |γ_k|^{-1}) / (1 − γ ‖v‖ min(x_k) |γ_k|^{-1})
            ≤ β (t_k + (2γ/m²) ‖V‖ δ) / (1 − (2γ/m²) ‖v‖ δ)
            ≤ ((1 + β)/2) t_k,

so t_k → 0, which implies x_k → x. This contradicts Theorem 3.3, since λ_k → α ≠ ρ(A). From Lemma 3.1 it then follows that {λ_k} converges to ρ(A), monotonically decreasing.

When A is symmetric, ρ(A) > |λ_2| = ‖L‖, so the condition

    ‖L_k^{-1} ε_k‖ = (λ_k − ρ(A)) / (λ_k − λ_2) ≤ (λ_0 − ρ(A)) / (λ_0 − λ_2) < 1

is automatically satisfied.

Theorem 3.6. Let A be a nonnegative irreducible matrix. If {λ_k} is generated by INI_2 and ρ(A) > ‖L‖, then {λ_k} decreases monotonically to ρ(A) as k → ∞.

Proof. Assume that {λ_k} → α > ρ(A). Since |f_k| ≤ d_k x_k with d_k = (λ_{k−1} − λ_k)/λ_{k−1}, we have ξ_k = ‖f_k‖ → 0 as k → ∞. Choose 0 < δ ≤ γη as in the proof of Theorem 3.5. Then there is an N > 0 such that ξ_k < δ for all k ≥ N. From (3.19) it is easily seen that ξ_k |γ_k|^{-1} ‖v‖ < 1. Using (3.15), (3.17), and (3.19) we have

    t_{k+1} ≤ β (t_k + ‖V‖ ξ_k |γ_k|^{-1}) / (1 − ‖v‖ ξ_k |γ_k|^{-1})
            ≤ β (t_k + (2/m²) ‖V‖ δ) / (1 − (2/m²) ‖v‖ δ)
            ≤ ((1 + β)/2) t_k.

This is a contradiction, as in the proof of Theorem 3.5. Hence it holds that lim_{k→∞} λ_k = ρ(A).

Corollary 3.7. Under the assumptions of Theorem 3.6 it holds that lim_{k→∞} ε_k y_{k+1} = x.

Proof. From (3.8), we have

    A_k = ε_k x v^T + X L_k V^T.

Hence,

    ε_k y_{k+1} = ε_k A_k^{-1} (x_k + f_k) = (x v^T + ε_k X L_k^{-1} V^T)(x_k + f_k).    (3.20)

Since ε_k L_k^{-1} → 0, it follows from (3.20) that lim_{k→∞} ε_k X L_k^{-1} V^T (x_k + f_k) = 0. From Theorems 3.6 and 3.3, we have lim_{k→∞} (x_k + f_k) = x. This implies that

    lim_{k→∞} ε_k y_{k+1} = lim_{k→∞} (x v^T + ε_k X L_k^{-1} V^T)(x_k + f_k) = x v^T x = x.

3.3. Convergence rates. In this section, we show that the convergence rates of INI_1 and INI_2 are globally linear and superlinear, respectively. From the definition of λ_{k+1} we have

    λ_{k+1} = λ_k − min((x_k + f_k)/y_{k+1}),    (3.21)

or, equivalently,

    ε_{k+1} = ε_k (1 − min((x_k + f_k)/(ε_k y_{k+1}))) =: ε_k ρ_k.    (3.22)

Since λ_k − λ_{k+1} < λ_k − λ, from (3.22) and (3.21),

    ρ_k = 1 − min((x_k + f_k)/(ε_k y_{k+1})) = 1 − (λ_k − λ_{k+1})/(λ_k − λ) < 1.    (3.23)

Theorem 3.8. Under the assumptions of Theorem 3.5, it holds that ρ_k < 1 and lim_{k→∞} ρ_k < 1; i.e., the convergence of INI_1 is globally linear.

Proof. Since ξ_k ≤ γ min(x_k), it holds that |f_k| ≤ γ x_k. Because A_k^{-1} ≥ 0, we have

    (1 − γ) x_k ≤ x_k + f_k ≤ (1 + γ) x_k,
    (1 − γ) A_k^{-1} x_k ≤ y_{k+1} ≤ (1 + γ) A_k^{-1} x_k.

Then

    min((x_k + f_k)/(ε_k y_{k+1})) ≥ (1 − γ) min(x_k) / ((1 + γ) max(ε_k A_k^{-1} x_k)).    (3.24)

From Theorems 3.5 and 3.3 it follows that lim_{k→∞} x_k = x and then lim_{k→∞} ε_k A_k^{-1} x_k = x. It implies that

    lim_{k→∞} min((x_k + f_k)/(ε_k y_{k+1})) ≥ (1 − γ) min(x) / ((1 + γ) max(x)) > 0.

Hence

    lim_{k→∞} ρ_k ≤ 1 − (1 − γ) min(x) / ((1 + γ) max(x)) < 1.

Theorem 3.9. Under the assumptions of Theorem 3.6 it holds that

(i) lim_{k→∞} ε_{k+1}/ε_k = 0;
(ii) lim_{k→∞} (λ_k − λ_{k+1}) / (λ_{k−1} − λ_k) = 0;
(iii) lim_{k→∞} ‖r_{k+1}‖ / ‖r_k‖ = 0,

where r_k = λ_k x_{k+1} − A x_{k+1}.

Proof. (i): Since ξ_k = ‖f_k‖ → 0, from Corollary 3.7 and (3.22) we have

    lim_{k→∞} ε_{k+1}/ε_k = 1 − lim_{k→∞} min((x_k + f_k)/(ε_k y_{k+1}))
                          = 1 − min( lim_{k→∞} (x_k + f_k) / lim_{k→∞} (ε_k y_{k+1}) )
                          = 1 − min(x/x) = 0.    (3.25)

(ii): From (3.25) and (3.22), we have

    lim_{k→∞} (λ_k − λ_{k+1})/(λ_{k−1} − λ_k) = lim_{k→∞} (ε_k − ε_{k+1})/(ε_{k−1} − ε_k)
        = lim_{k→∞} ε_k (1 − ρ_k) / (ε_{k−1} (1 − ρ_{k−1}))
        = lim_{k→∞} ρ_{k−1} (1 − ρ_k)/(1 − ρ_{k−1}) = 0.

(iii): From (3.1), we have

    r_k = (λ_k I − A) x_{k+1} = (λ_k I − A) y_{k+1} / ‖y_{k+1}‖ = (x_k + f_k) / ‖y_{k+1}‖.

From Corollary 3.7, (3.25), and (3.22),

    lim_{k→∞} ‖r_{k+1}‖/‖r_k‖ = lim_{k→∞} (ε_{k+1}/ε_k) · (‖x_{k+1} + f_{k+1}‖ ‖ε_k y_{k+1}‖) / (‖x_k + f_k‖ ‖ε_{k+1} y_{k+2}‖) = 0.

4. Computing the smallest eigenpair of an M-matrix. In this section, we consider how to compute the smallest eigenvalue of an irreducible M-matrix A. Let A = σI − B be an M-matrix and let (λ, x) be the smallest eigenpair, with λ := min_{λ_i ∈ Λ(A)} λ_i = σ − ρ(B). From Theorem 3.5 (or 3.6), there is a decreasing sequence {λ_k} with λ_k > ρ(B) which converges to ρ(B). We denote λ̄_k = σ − λ_k; then the λ̄_k < λ form an increasing sequence which converges to λ. In the INI Algorithm 3.1, we are required to solve the linear system

    (λ_k I − B) y_{k+1} = x_k + f_k,    (4.1)

where λ_k = max(B x_k / x_k) and x_{k+1} = y_{k+1}/‖y_{k+1}‖. Since

    λ_k I − B = (λ_k − σ) I + (σI − B) = A − λ̄_k I,

the linear system (4.1) is equivalent to

    (A − λ̄_k I) y_{k+1} = x_k + f_k,

where

    λ̄_k = σ − max(B x_k / x_k) = min(A x_k / x_k).

Since A − λ̄_k I is an M-matrix, it holds that y_{k+1} > 0. Thus, we get the relation between λ̄_{k+1} and λ̄_k:

    λ̄_{k+1} = min(A x_{k+1} / x_{k+1}) = λ̄_k + min((x_k + f_k)/y_{k+1}).

Therefore, Algorithm 3.1 can be modified for M-matrices as follows.

Algorithm 4.1 INI for an M-matrix.
1. Set x_0 = e, compute λ̄_0 = min(A x_0 / x_0).
2. for k = 0, 1, 2, ...
3.   Solve the linear system (A − λ̄_k I) y_{k+1} = x_k inexactly with Type 1 or Type 2.
4.   Normalize the vector x_{k+1} = y_{k+1} / ‖y_{k+1}‖.
5.   Compute λ̄_{k+1} = min(A x_{k+1} / x_{k+1}).
6. until convergence.

Here, as in Subsection 3.2, we define:

INI Type 1: the residual norm satisfies ξ_k ≤ γ min(x_k) for some 0 < γ < 1;
INI Type 2: the residual vector satisfies |f_k| ≤ d_k x_k, with d_k = (λ̄_k − λ̄_{k−1})/λ̄_k.

As in Theorems 3.5, 3.6, 3.8, and 3.9, we can also show the following result.

Theorem 4.1. Let A be an irreducible M-matrix. If λ̄_k and x_k are generated by Algorithm 4.1, then {λ̄_k} increases monotonically to λ (the smallest eigenvalue of A) as k → ∞, and lim_{k→∞} x_k = x with x > 0. Furthermore, the convergence rates of INI Type 1 and INI Type 2 are globally linear and superlinear, respectively.
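For concreteness, here is a sketch of Algorithm 4.1 in the same style as the earlier ones (our illustration; the inner system is solved directly here for clarity, whereas the inexact method replaces it with an iterative solver stopped at the Type 1 or Type 2 tolerance):

    import numpy as np

    def ini_mmatrix(A, tol=1e-8, maxit=200):
        # Algorithm 4.1: smallest eigenpair of an irreducible M-matrix A.
        n = A.shape[0]
        x = np.ones(n); x /= np.linalg.norm(n * [1.0] and x)  # x_0 = e (normalized)
        lam = np.min(A @ x / x)                  # underestimates lambda
        for _ in range(maxit):
            # Direct solve for clarity; an inexact (Type 1/Type 2) solve
            # only needs the residual to satisfy the componentwise bound.
            y = np.linalg.solve(A - lam * np.eye(n), x)
            x = y / np.linalg.norm(y)
            lam = np.min(A @ x / x)              # increasing, -> lambda
            if np.linalg.norm(A @ x - lam * x) < tol:
                break
        return lam, x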

5. Computing a good initial vector for IRQI. For a symmetric matrix A, Jia [8] shows that IRQI with MINRES generally converges cubically, quadratically, and linearly provided that the inner tolerance satisfies ξ_k ≤ ξ with a constant ξ < 1 not near one, ξ_k = 1 − O(‖r_k‖), and ξ_k = 1 − O(‖r_k‖²), respectively. Here r_k = (A − θ_k I) u_k is the residual and θ_k = u_k^T A u_k is the Rayleigh quotient. This process is summarized in Algorithm 5.1.

It is well known that RQI has good convergence properties if u_0 is sufficiently close to x. From [8, Th. 5], we know that if the uniform positiveness condition x^T (u_k + g_k) ≥ d is satisfied with a constant d > 0 independent of k, where g_k = (A − θ_k I) w_{k+1} − u_k, then

    ‖r_{k+1}‖ ≤ (8 β² ξ_k / (d |λ − λ_2|)) ‖r_k‖².

Algorithm 5.1 Inexact RQI.
1. Choose a unit vector u_0.
2. for k = 0, 1, 2, ...
3.   Solve (A − θ_k I) w_{k+1} = u_k with an iterative solver such that
         ‖(A − θ_k I) w_{k+1} − u_k‖ = ξ_k ≤ ξ.
4.   Normalize the vector u_{k+1} = w_{k+1} / ‖w_{k+1}‖.
5. until convergence.

Unfortunately, as mentioned in [8, Thms. 2, 5, 6], for a larger β = (λ_max − λ_min)/(λ_max − λ_2), IRQI with MINRES may converge very slowly. If we choose the initial guess u_0 such that

    (8 β² ξ_0 / (d |λ − λ_2|)) ‖r_0‖ < 1,    (5.1)

then ‖r_1‖ < ‖r_0‖; that is, {‖r_k‖} becomes a strictly decreasing sequence. The following theorem gives a sufficient condition on the initial guess u_0 which ensures inequality (5.1).

Theorem 5.1. Let A be a symmetric nonnegative irreducible matrix, and let the vectors x_i and f_i be generated by INI (Algorithm 3.1) with outer tolerance tol1. If

    tol1 < d |λ − λ_2| / (8 β² ξ),

then ‖r_1‖ < ‖r_0‖; that is, IRQI converges quadratically.

Proof. Let u_0 = x_i. Since

    ‖r_0‖ = ‖(A − θ_0 I) u_0‖ ≤ ‖(A − λ_i I) u_0‖ = ‖(A − λ_i I) x_i‖ ≤ tol1 < d |λ − λ_2| / (8 β² ξ),

we obtain

    (8 β² ξ_0 / (d |λ − λ_2|)) ‖r_0‖ < ξ_0 / ξ ≤ 1,

which implies ‖r_1‖ < ‖r_0‖.

Thus, we combine INI_1 or INI_2 with IRQI to obtain INI1_IRQI or INI2_IRQI. Based on practical experiments we suggest tol2 < tol1 ≈ n^{-1/2}.
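A sketch of Algorithm 5.1 and the combined driver follows; it is our own illustration, assuming SciPy's MINRES (whose shift argument solves (A − shift·I)w = b; the relative-tolerance keyword is rtol in recent SciPy releases and tol in older ones), and ini() refers to the INI sketch of Section 3:

    import numpy as np
    from scipy.sparse.linalg import minres

    def irqi(A, u, tol=1e-10, xi=0.8, maxit=50):
        # Inexact RQI (Algorithm 5.1) for symmetric A: the inner MINRES
        # solve of (A - theta_k I) w = u_k is stopped at a fixed xi < 1.
        for _ in range(maxit):
            theta = u @ (A @ u)                        # Rayleigh quotient
            if np.linalg.norm(A @ u - theta * u) < tol:
                break
            w, _ = minres(A, u, shift=theta, rtol=xi)  # (A - theta*I) w = u
            u = w / np.linalg.norm(w)
        return theta, u

    # INI1-IRQI (Algorithm 5.2): run INI to a crude tolerance, then IRQI.
    # lam, x0 = ini(A, gamma=0.5, tol=A.shape[0] ** -0.5)  # tol1 ~ n^(-1/2)
    # theta, u = irqi(A, x0, tol=1e-10)                    # tol2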

6. Numerical experiments. In this section we present numerical experiments to support our theoretical results on INI and compare it with IRQI. We performed the tests on an Intel(R) Core(TM) i5 CPU 750 @ 2.67 GHz with 4 GB memory, using Matlab 7.8.0 under Microsoft Windows 7 (64-bit).

Algorithm 5.2 INI_IRQI.
1. Set tol1, tol2 > 0.
2. Run INI_1 or INI_2 until the residual is ≤ tol1, giving the approximation x_i.
3. Let u_0 = x_i be the initial vector for IRQI.
4. Run the IRQI algorithm until the residual is ≤ tol2.

6.1. INI for nonnegative matrices. Here we provide four examples to illustrate the numerical properties of NI, INI_1, and INI_2 for nonnegative matrices, and contrast their performance with that of IRQI. At each inner iteration step we solve the linear system (3.1), i.e., (λ_k I − A) y_{k+1} = x_k + f_k, exactly or inexactly, with the following stopping criteria:

for NI:    ‖f_k‖ ≤ 10^{-15};
for INI_1: ‖f_k‖ ≤ min(x_k);
for INI_2: ‖f_k‖ ≤ min{ min(x_k), (λ_{k−1} − λ_k)/λ_{k−1} }.

Note that INI_1 and INI_2 give similar numerical results when the minimal entry of the Perron vector is close to 0. In the following examples, the outer iterations are stopped once

    ‖A x_k − λ_k x_k‖ ≤ tol = 10^{-8}.

We use BICGSTAB and MINRES to solve the linear systems for unsymmetric and symmetric matrices, respectively (Matlab functions bicgstab and minres). The outer iteration starts with the normalized vector of (1, ..., 1)^T, except in Example 2. We denote by I_outer the number of outer iterations needed to achieve convergence, and by I_inner the total number of inner iterations.

Example 1. We consider a randomly chosen 2000×2000 nonnegative matrix A with normally distributed entries (MATLAB function randn). Figure 6.1 shows the convergence of NI, INI_1, and INI_2. As can be seen, INI_1 converges linearly, while NI and INI_2 converge superlinearly, which supports the results of Theorem 3.8 and Theorem 3.9.

Example 2. Consider a randomly chosen 10^5×10^5 nonnegative irreducible matrix with approximately 10^6 normally distributed nonzero entries and row sums equal to 1. Clearly, the largest eigenvalue of this matrix is λ = 1 and the associated eigenvector is x = (1, ..., 1)^T. We randomly choose a starting vector x_0 > 0. Figure 6.2 illustrates how the approximate eigenvalues λ_k evolve with the outer iterations for NI, INI_1, and INI_2. As it shows, λ_k decreases monotonically for all three methods and converges to λ = 1. In Table 1 we report the eigenvalues computed by NI, INI_1, and INI_2; the three methods achieve the same order of accuracy.
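A hedged sketch of how a test matrix like that of Example 2 can be generated (our own construction, since the paper does not give its exact recipe; adding the identity before scaling guards against the occasional empty row, and the result is irreducible with high probability):

    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    n, nnz = 10**5, 10**6
    rows = rng.integers(0, n, size=nnz)
    cols = rng.integers(0, n, size=nnz)
    vals = np.abs(rng.standard_normal(nnz))            # nonnegative entries
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n)) + sp.identity(n, format="csr")
    row_sums = np.asarray(A.sum(axis=1)).ravel()
    A = sp.diags(1.0 / row_sums) @ A                   # row sums 1: lambda = 1, x = e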

Fig. 6.1. The outer residual norms versus outer iterations in Example 1.

Fig. 6.2. Magnitude of the eigenvalue estimates versus outer iterations in Example 2.

    Method   λ
    Exact    1.000000000000000
    NI       1.000000000000037
    INI_1    1.000000000010168
    INI_2    1.000000000000002

    Table 1: The eigenvalue estimates at the final outer iteration.

Table 2 reports the total numbers of outer and inner iterations and the CPU times for NI, INI_1, and INI_2. We see that INI_1 improves the overall efficiency of the inner iterations considerably and reduces the computational cost to about 25% of that of NI; even INI_2 uses only about 33% of the inner iterations of NI. Figure 6.3 illustrates the steady increase in inner iterations: NI needs more inner iterations than INI_1 and INI_2 at each outer iteration. Figure 6.4 shows that the number of inner iterations grows with each outer iteration because λ_k gets close

to λ.

    Method     NI    INI_1  INI_2
    I_outer    21    23     21
    I_inner    157   40     50
    CPU time   19.1  7.1    8.5

    Table 2: The total numbers of outer and inner iterations in Example 2.

Fig. 6.3. The outer residual norms versus the cumulative number of inner iterations.

Fig. 6.4. The number of inner iterations versus outer iterations.

Example 3. Consider the sparse symmetric nonnegative matrix delaunay_n20 from the DIMACS10 test set [4]. The coefficient matrix is generated as the Delaunay triangulation of random points in the unit square. It is a binary matrix (a matrix each of whose elements is 0 or 1) of size n = 2^20 with 6,291,372 nonzero entries. NI_GE denotes the Noda iteration in which the linear system at each step is solved by Gaussian elimination (GE) (i.e., the MATLAB function \). INI1_IRQI is the combination of INI Type 1 and IRQI, as presented in Algorithm 5.2. We choose tol1 = n^{-1/2} and ξ = 0.8 for IRQI. Table 3 and Figure 6.5 show that INI_1 and INI1_IRQI need far fewer inner iterations than NI and IRQI; INI1_IRQI requires only about 1/14 of the total number of inner iterations of IRQI.

    Method     NI        INI_1      NI_GE     IRQI      INI1_IRQI
    I_outer    8         9          8         13        8
    I_inner    502       244        8         2453      178
    CPU time   73.9      38.1       233.2     333.4     28.6
    Residual   8.4e-09   2.9e-15    8.5e-09   5.1e-07   1.1e-10

    Table 3: Example 3.

Example 4. From the DIMACS10 test set [4], we consider the sparse symmetric nonnegative matrix rgg_n_2_21_s0. This matrix is a random geometric graph with 2^21 vertices: each vertex is a random point in the unit square, and edges connect vertices whose Euclidean distance is below 0.55 sqrt(log(n)/n). This threshold is chosen in order to ensure that the graph is almost connected. It is a binary matrix with n = 2^21 and 28,975,990 nonzero entries.

From Example 3, we know that INI1_IRQI is the most efficient method. In this case, we show that INI_1 and INI1_IRQI have the same efficiency.

Fig. 6.5. The numbers of inner iterations versus outer iterations.

Fig. 6.6. The outer residual norms versus outer iterations.

We compare only NI, INI_1, and INI1_IRQI, since the other two methods in Table 3 need more computational cost. Table 4 shows that INI1_IRQI and INI_1 are much more efficient and achieve the lowest numbers of inner iterations among the methods compared.

    Method     NI     INI_1  INI1_IRQI
    I_outer    6      6      6
    I_inner    317    133    133
    CPU time   112.7  54.9   55.1

    Table 4: Example 4.

6.2. INI for M-matrices. In this subsection, we use INI_1 to find the smallest eigenvalue and the associated eigenvector of an M-matrix. The numerical experiments illustrate the convergence behavior of INI as described in Section 4.

Example 5. We consider an M-matrix of the form A = σI − B, where B is the matrix from a 3D Human Face Mesh [?] with a suitable constant σ. This is a matrix of size n = 94865 with 661603 nonzero entries. Figure 6.7 shows that IRQI does not converge even though its number of inner iterations reaches 907, as reported in Table 5. Furthermore, Table 5 shows that INI1_IRQI needs only about one third of the total number of inner iterations of the NI method and is faster than INI_1.

Fig. 6.7. The outer residual norms versus the cumulative number of inner iterations in Example 5.

    Method     NI    INI_1  IRQI  INI1_IRQI
    I_outer    6     7      26    7
    I_inner    141   66     907   51
    CPU time   1.2   0.6    8.5   0.5

    Table 5: The total numbers of outer and inner iterations in Example 5.

7. Conclusions. We have considered the convergence of the inexact Noda iteration with two relaxation strategies in detail and have established a number of results on global linear and superlinear convergence. These results clearly show how the inner tolerance affects the convergence of the outer iterations, and they provide practical criteria for controlling the inner tolerance so as to achieve a desired convergence rate. Perhaps surprisingly, this appears to be the first time it is shown that the inexact Noda iteration with any linear solver converges linearly for ξ_k = ‖f_k‖ ≤ γ min(x_k) = O(min(x)), where x is the Perron eigenvector, and superlinearly for ξ_k decreasing towards zero. NI and INI are fundamentally different from existing methods, and these results have a strong impact on implementing the algorithm effectively so as to reduce the total computational cost very considerably.

Acknowledgments. This work was done while MH was visiting WWL. MH thanks WWL and the department for the hospitality.

REFERENCES

[1] A. S. Alfa, J. Xue, and Q. Ye, Accurate computation of the smallest eigenvalue of a diagonally dominant M-matrix, Math. Comp., 71 (2002), pp. 217–236.
[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, vol. 9 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994.
[3] J. Berns-Müller, I. G. Graham, and A. Spence, Inexact inverse iteration for symmetric matrices, Linear Algebra Appl., 416 (2006), pp. 389–413.
[4] DIMACS10 test set and the University of Florida Sparse Matrix Collection. Available at www.cise.ufl.edu/research/sparse/dimacs10/.
[5] L. Elsner, Inverse iteration for calculating the spectral radius of a non-negative irreducible matrix, Linear Algebra Appl., 15 (1976), pp. 235–242.
[6] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, London, 3rd ed., 1996.
[7] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[8] Z. Jia, On convergence of the inexact Rayleigh quotient iteration with MINRES, technical report, June 2010. http://arxiv.org/pdf/0906.2238v3.pdf.
[9] Z. Jia, On convergence of the inexact Rayleigh quotient iteration with the Lanczos method used for solving linear systems, technical report, September 2011. http://arxiv.org/pdf/0906.2239v4.pdf.
[10] Y.-L. Lai, K.-Y. Lin, and W.-W. Lin, An inexact inverse iteration for large sparse eigenvalue problems, Numer. Linear Algebra Appl., 4 (1997), pp. 425–437.
[11] T. Noda, Note on the computation of the maximal eigenvalue of a non-negative irreducible matrix, Numer. Math., 17 (1971), pp. 382–386.
[12] A. M. Ostrowski, On the convergence of the Rayleigh quotient iteration for the computation of the characteristic roots and vectors. V. (Usual Rayleigh quotient for non-Hermitian matrices and linear elementary divisors), Arch. Rational Mech. Anal., 3 (1959), pp. 472–481.
[13] B. N. Parlett, The Symmetric Eigenvalue Problem, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1998.
[14] M. Robbé, M. Sadkane, and A. Spence, Inexact inverse subspace iteration with preconditioning applied to non-Hermitian eigenvalue problems, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 92–113.

[15] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, Manchester, UK, 1992.
[16] G. L. G. Sleijpen and H. A. van der Vorst, A Jacobi–Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 401–425.
[17] G. W. Stewart, Matrix Algorithms. Vol. II, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.
[18] J. Xue, Computing the smallest eigenvalue of an M-matrix, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 748–762.