AN INEXACT INVERSE ITERATION FOR COMPUTING THE SMALLEST EIGENVALUE OF AN IRREDUCIBLE M-MATRIX
MICHIEL E. HOCHSTENBACH, WEN-WEI LIN, AND CHING-SUNG LIU

Abstract. In this paper, we present an inexact inverse iteration method to find the smallest eigenvalue and the associated eigenvector of an irreducible M-matrix. We propose two different relaxation strategies for solving the linear systems of the inner iterations, and show that the resulting iterations converge globally linearly and superlinearly, respectively. Numerical examples are provided to illustrate the convergence theory.

Key words. Inexact Noda iteration, M-matrix, positive matrix, inexact Rayleigh quotient iteration, Perron vector, Perron root.

AMS subject classifications. 65F15, 65F

1. Introduction. We consider the eigenvalue problem Ax = λx of computing the smallest eigenvalue λ and the associated eigenvector x of an irreducible M-matrix A. In this paper, the smallest eigenvalue is denoted by λ := min_{λ_i ∈ Λ(A)} λ_i, where Λ(A) is the spectrum of A. Since A is an M-matrix, it can be expressed in the form A = σI − B with B ≥ 0 for some constant σ > ρ(B), where ρ(·) denotes the spectral radius. Thus, the smallest eigenvalue λ is equal to σ − ρ(B). It is well known [7, p. 487] that the largest eigenvalue of B is the Perron root, which is simple and equal to the spectral radius of B, and that the associated eigenvector is positive. Consequently, we only need to compute the Perron root and the Perron vector of a nonnegative irreducible matrix B. When A is large and sparse, there are a number of methods for computing (λ, x), such as inverse iteration (INVIT) [13, 15, 17], Rayleigh quotient iteration (RQI) [13, 17], and shift-invert Arnoldi [17]. These methods are inner-outer iteration methods which can be applied to obtain a specific eigenpair: the so-called inner iteration solves a linear system, while the update of the approximate eigenpair constitutes the outer iteration.
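As a small illustration of this relation (not taken from the paper), the following pure-Python sketch estimates ρ(B) for a tiny nonnegative irreducible matrix by the power method and recovers the smallest eigenvalue of the M-matrix A = σI − B as σ − ρ(B); the matrix B, the shift σ, and all names are illustrative assumptions.

```python
# Sanity check (illustrative, not from the paper): for an M-matrix
# A = sigma*I - B with B >= 0 irreducible and sigma > rho(B), the smallest
# eigenvalue of A equals sigma - rho(B). Here rho(B) is estimated by the
# power method on a small example matrix.

def power_method(B, iters=200):
    """Estimate the Perron root of a nonnegative irreducible matrix B."""
    n = len(B)
    x = [1.0] * n                       # positive start vector
    lam = 0.0
    for _ in range(iters):
        y = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)                    # with max(x) = 1, max(Bx) -> rho(B)
        x = [yi / lam for yi in y]      # rescale so that max(x) = 1
    return lam

# Small irreducible nonnegative matrix and a shift sigma > rho(B)
B = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
sigma = 10.0

rho = power_method(B)
lam_min_A = sigma - rho                 # smallest eigenvalue of A = sigma*I - B
```

In exact arithmetic the smallest eigenvalue of A here is σ − ρ(B) ≈ 10 − 2.3028; the power method only serves as an independent check, since the paper's methods replace it by (inexact) inverse iterations.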
However, these methods require the exact solution of a possibly ill-conditioned linear system at each iteration. This is generally very difficult and may even be impractical for a direct solver, since a factorization of the shifted A may be expensive. There is therefore considerable interest in inexact methods for eigenvalue problems. Among these inexact methods, the inexact inverse iteration (inexact INVIT) [3, 10] and the inexact RQI (IRQI) [8, 9, 12] are the simplest and most basic ones. In addition, they are key ingredients of other sophisticated and practical inexact methods, such as inverse subspace iteration [14] and the Jacobi-Davidson method [17, 16]. In [8, 9, 12, 17, 16, 14, 3, 10] it is also shown that these methods have good

Version May 22. This work was partially supported by the National Science Council and the National Center for Theoretical Sciences in Taiwan.
Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB, The Netherlands (hochsten).
Department of Applied Mathematics, National Chiao Tung University, Hsinchu 300, Taiwan (wwlin@math.nctu.edu.tw).
Department of Mathematics, National Tsinghua University, Hsinchu 300, Taiwan (chingsungliu@gmail.com).
convergence rates; for example, IRQI has cubic or quadratic convergence. But the convergence of IRQI strongly depends on the initial guess: if the initial guess is not well selected, much time may be wasted searching for the right convergence direction, and the iteration may fail to converge or may converge to an undesired eigenpair.

First, based on the Noda iteration [11], we propose an inexact Noda iteration (INI) to find the largest eigenvalue and the associated eigenvector of a nonnegative irreducible matrix B. The advantage of the Noda iteration is that it creates a decreasing sequence of approximate eigenvalues which converges to ρ(B); furthermore, the convergence is proven to be superlinear. In [11], the main task in each step is to solve the linear system

(λ_k I − B) y_{k+1} = x_k, (1.1)

where λ_k is the approximate shift, which is always larger than ρ(B). The major contribution of this paper is to provide two relaxation strategies for solving (1.1). The first strategy uses min(x_k) as the upper bound of the tolerance for the inner iterations. The second strategy solves (1.1) with a decreasing tolerance for the inner iterations. We show that the convergence of the former iteration is globally linear and that of the latter is superlinear, respectively.

In this paper, we also use INI to find the smallest eigenvalue and the associated eigenvector of an M-matrix A, as considered in [18, 1]. The greatest difference between [18, 1] and INI is the inexact techniques which we use for solving the linear systems, i.e.,

(A − λ̄_k I) y_{k+1} = x_k, (1.2)

where λ̄_k is the approximate shift, which is always smaller than 1/ρ(A^{−1}). Similarly, we provide two relaxation strategies for solving (1.2) and show that the convergence of these two strategies is globally linear and superlinear, respectively. When A is a symmetric M-matrix (or a symmetric nonnegative irreducible matrix), we provide an integrated algorithm combining INI with IRQI.
First, exploiting the global convergence of INI, we use INI to generate a good approximate vector x, and use x as the initial vector for IRQI; we then use the cubic or quadratic convergence behaviour of IRQI [8] to accelerate the convergence. This algorithm leverages the idea behind the inexact techniques: we allow each inner iteration tolerance to be as large as possible, in order to greatly enhance the computational efficiency.

The rest of this paper is organized as follows. In Section 2, we introduce the Noda iteration and some preliminaries. In Section 3, we propose the INI algorithm and develop its convergence theory. In Section 4, we use INI to find the smallest eigenvalue and the associated eigenvector of an M-matrix. In Section 5, we provide an integrated algorithm combining INI with IRQI for M-matrices or nonnegative matrices. We perform numerical experiments to confirm our results in Section 6. Finally, we end with some conclusions in Section 7.

2. Preliminaries and Notation. For any matrix B = (b_ij), we denote |B| = (|b_ij|). If the entries of the matrix B are all nonnegative (positive), then we write B ≥ 0 (B > 0). For real matrices B and C of the same size, if B − C is a nonnegative matrix, we write B ≥ C. A nonnegative matrix B is said to be reducible if there exists a permutation matrix P such that

P^T B P = [ E  F
            O  G ],
where E and G are square matrices; it is called irreducible if it is not reducible. The basic eigenvalue properties of irreducible nonnegative matrices are summed up in the Perron-Frobenius Theorem (see Horn and Johnson [7]). Here we formulate only the portion of it relevant to the scope of this paper. We denote e = (1, 1, ..., 1)^T. A matrix A is called an M-matrix if it can be expressed in the form A = σI − B with B ≥ 0 and σ > ρ(B).

Theorem 2.1. Let A be an M-matrix. Then the following statements are equivalent (see, e.g., [2]): (i) A = (a_ij), a_ij ≤ 0 for i ≠ j, and A^{−1} ≥ 0; (ii) A = σI − B with B ≥ 0 and σ > ρ(B).

Theorem 2.2 ([7]). Let B be a real irreducible nonnegative matrix. Then λ := ρ(B), the spectral radius of B, is a simple eigenvalue of B. Moreover, there exists an eigenvector x with positive elements associated with this eigenvalue, and no eigenvalue λ ≠ ρ(B) has a positive eigenvector.

For a pair of vectors x, y with y > 0, we define

max(x/y) = max_i (x_i/y_i), min(x/y) = min_i (x_i/y_i).

The following theorem is from [7, p. 508]; it gives bounds for the spectral radius of a nonnegative square matrix.

Theorem 2.3 ([7]). Let B be a nonnegative irreducible matrix. If x > 0 is not an eigenvector of B, then

min(Bx/x) < ρ(B) < max(Bx/x). (2.1)

2.1. Bounds for eigenvectors. Since B is a nonnegative irreducible matrix, Theorem 2.2 shows that the largest eigenvalue of B is simple. Assume that the eigenvalues of B are ordered as

ρ(B) > |λ_2| ≥ ... ≥ |λ_n|. (2.2)

Let x, x_2, ..., x_n be the unit eigenvectors corresponding to ρ(B), λ_2, ..., λ_n. Since ρ(B) is simple, there exists a nonsingular matrix [x X] with inverse [v V]^T such that [6]

[v^T; V^T] B [x X] = [ ρ(B)  0
                        0    L ]. (2.3)

Note that v is the left eigenvector of B and V^T B = L V^T. In addition, if µ is not an eigenvalue of L, the sep function for µ and L is defined as sep(µ, L) = ‖(µI − L)^{−1}‖^{−1}.
Given a pair (µ, z) as an approximation to (λ, x), the following lemma [17] shows the relation between sin ∠(x, z) and the residual Bz − µz.

Lemma 2.4 ([17, Th. 3.13]). Let z be a unit vector. For any µ ∉ Λ(L),

sin ∠(x, z) ≤ ‖Bz − µz‖ / sep(µ, L). (2.4)
2.2. The Noda iteration. In [11], Noda provided an inverse iteration for computing the Perron root of a nonnegative irreducible matrix; this method has been shown to be quadratically convergent by Elsner [5]. The Noda iteration consists of three steps:

(λ_k I − B) y_{k+1} = x_k, (2.5)
x_{k+1} = y_{k+1} / ‖y_{k+1}‖, (2.6)
λ_{k+1} = max(B x_{k+1} / x_{k+1}). (2.7)

The main step is to compute a new approximation x_{k+1} to x by solving the linear system (2.5), called the inner iteration. The update of the approximate eigenpair (λ_{k+1}, x_{k+1}) is called the outer iteration. Since λ_k > ρ(B) as long as x_k is not a scalar multiple of the eigenvector x, the matrix λ_k I − B is an M-matrix; therefore, x_{k+1} is still a positive vector. After a change of variables, we get the relation between λ_{k+1} and λ_k as

λ_{k+1} = λ_k − min(x_k / y_{k+1}),

so the sequence λ_k is decreasing. The algorithm to be developed here is based on the inverse iteration shifted by a Rayleigh quotient like approximation of the eigenvalue. This process is summarized as Algorithm 2.1.

Algorithm 2.1 Noda iteration.
1. Set x_0 > 0, compute λ_0 = max(B x_0 / x_0).
2. for k = 0, 1, 2, ...
3. Solve the linear system (λ_k I − B) y_{k+1} = x_k.
4. Normalize the vector x_{k+1} = y_{k+1} / ‖y_{k+1}‖.
5. Compute λ_{k+1} = max(B x_{k+1} / x_{k+1}).
6. until convergence.

3. The inexact Noda iteration and convergence theory. Based on the Noda iteration, in this section we propose an inexact Noda iteration (INI) for the computation of the spectral radius of a nonnegative irreducible matrix A. For practical applications, we provide two different types of relaxation strategies for solving the linear system inexactly in each iterative step of INI. Furthermore, we show that the convergence of these two types of INI is globally linear and superlinear, respectively.

3.1. The inexact Noda iteration. Since A is large and sparse, in step 3 of Algorithm 2.1 an iterative linear solver must be used to obtain an approximate solution.
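To make Algorithm 2.1 concrete, here is a minimal pure-Python sketch on a tiny dense example; the matrix B is an illustrative assumption, and the inner system (2.5) is solved by Gaussian elimination rather than by the iterative solver one would use for large sparse problems.

```python
# A minimal sketch of the Noda iteration (Algorithm 2.1) on a tiny dense
# nonnegative irreducible matrix; matrix and names are illustrative.

def solve(M, b):
    """Solve M z = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            m = aug[r][c] / aug[c][c]
            for j in range(c, n + 1):
                aug[r][j] -= m * aug[c][j]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][j] * z[j] for j in range(r + 1, n))
        z[r] = (aug[r][n] - s) / aug[r][r]
    return z

def noda(B, tol=1e-10, maxit=100):
    """Noda iteration: Perron root and vector of nonnegative irreducible B."""
    n = len(B)
    x = [1.0 / n] * n                                    # positive start vector
    lam = 0.0
    for _ in range(maxit):
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(Bx[i] / x[i] for i in range(n))        # lambda_k = max(Bx/x)
        res = sum((Bx[i] - lam * x[i]) ** 2 for i in range(n)) ** 0.5
        if res < tol:                                    # converged
            break
        shifted = [[(lam if i == j else 0.0) - B[i][j] for j in range(n)]
                   for i in range(n)]                    # lambda_k I - B, (2.5)
        y = solve(shifted, x)                            # inner iteration
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]                         # normalization, (2.6)
    return lam, x

B = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
rho, perron = noda(B)
```

Because the shift λ_k approaches ρ(B) quadratically, only a handful of (increasingly ill-conditioned) inner solves are needed on this example.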
In order to reduce the computational cost of Algorithm 2.1, we arrive at an inexact Noda iteration by solving for y_{k+1} in step 3 of Algorithm 2.1 inexactly, satisfying

(λ_k I − A) y_{k+1} = x_k + f_k, (3.1)
x_{k+1} = y_{k+1} / ‖y_{k+1}‖, (3.2)
where f_k is the residual vector between (λ_k I − A) y_{k+1} and x_k, and ‖·‖ denotes the vector 2-norm. Here, the residual norm (inner tolerance) ξ_k := ‖f_k‖ can be changed at each iterative step.

Lemma 3.1. Let A be a nonnegative irreducible matrix and let 0 ≤ γ < 1 be a fixed constant. For x_k > 0, if the residual vector f_k in (3.1) satisfies

(λ_k I − A) y_{k+1} − x_k = f_k, with |f_k| ≤ γ x_k, (3.3)

then x_{k+1} > 0. Furthermore, the sequence {λ_k} with λ_k = max(A x_k / x_k) is monotonically decreasing and bounded below by ρ(A), i.e.,

λ_k > λ_{k+1} ≥ ρ(A). (3.4)

Proof. Since λ_k I − A is an M-matrix and |f_k| ≤ γ x_k, the vector y_{k+1} satisfies

y_{k+1} = (λ_k I − A)^{−1} (x_k + f_k) > 0.

This implies x_{k+1} = y_{k+1} / ‖y_{k+1}‖ > 0 and min((x_k + f_k)/y_{k+1}) > 0. From (3.1) and the definition of λ_{k+1} it follows that

λ_{k+1} = max(A x_{k+1} / x_{k+1}) = max(A y_{k+1} / y_{k+1}) = λ_k − min((x_k + f_k)/y_{k+1}) < λ_k. (3.5)

By Theorem 2.3 we have λ_k > λ_{k+1} ≥ ρ(A).

Based on (3.1)-(3.2) and Lemma 3.1, we propose the inexact Noda iteration as follows.

Algorithm 3.1 Inexact Noda Iteration (INI).
1. Given x_0 > 0 with ‖x_0‖ = 1, 0 ≤ γ < 1 and tol > 0.
2. Compute λ_0 = max(A x_0 / x_0).
3. for k = 0, 1, 2, ...
4. Solve (λ_k I − A) y_{k+1} = x_k by an iterative solver such that (λ_k I − A) y_{k+1} − x_k = f_k with |f_k| ≤ γ x_k.
5. Normalize the vector x_{k+1} = y_{k+1} / ‖y_{k+1}‖.
6. Compute λ_{k+1} = max(A x_{k+1} / x_{k+1}).
7. until convergence: ‖A x_{k+1} − λ_{k+1} x_{k+1}‖ < tol.

Note that if γ = 0, i.e., f_k = 0 in (3.1) for all k, Algorithm 3.1 becomes the standard (exact) Noda iteration. In the following, we give conditions for the convergence λ_k → ρ(A) as k → ∞.
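The effect of the inexact step 4 can be illustrated with a small experiment (not from the paper): we emulate an inexact solver by solving (λ_k I − A) y_{k+1} = x_k + f_k exactly with an admissible residual f_k obeying |f_k| ≤ γ x_k, as in (3.3); the matrix, the choice γ = 0.5, the alternating-sign f_k, and all names are illustrative assumptions.

```python
# A hedged sketch of inexact Noda iteration (Algorithm 3.1). Rather than a
# real iterative solver, the inexact inner solve is *emulated*: we solve
# (lambda_k I - A) y = x_k + f_k exactly for an admissible residual f_k with
# |f_k| <= gamma * x_k, which is condition (3.3).

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            m = aug[r][c] / aug[c][c]
            for j in range(c, n + 1):
                aug[r][j] -= m * aug[c][j]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][j] * z[j] for j in range(r + 1, n))
        z[r] = (aug[r][n] - s) / aug[r][r]
    return z

def ini(A, gamma=0.5, tol=1e-10, maxit=300):
    """Inexact Noda iteration for the Perron pair of nonnegative irreducible A."""
    n = len(A)
    x = [1.0 / n ** 0.5] * n
    lam = 0.0
    for _ in range(maxit):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(Ax[i] / x[i] for i in range(n))   # decreasing by Lemma 3.1
        res = sum((Ax[i] - lam * x[i]) ** 2 for i in range(n)) ** 0.5
        if res < tol:
            break
        # admissible inner residual: componentwise |f| <= gamma * x, cf. (3.3)
        f = [0.9 * gamma * ((-1) ** i) * x[i] for i in range(n)]
        shifted = [[(lam if i == j else 0.0) - A[i][j] for j in range(n)]
                   for i in range(n)]
        y = solve(shifted, [x[i] + f[i] for i in range(n)])
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]                    # stays positive (Lemma 3.1)
    return lam, x

A = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
rho, x = ini(A)
```

Despite an inner residual that never becomes small, λ_k still decreases monotonically to ρ(A), as Lemma 3.1 and the convergence theory below predict, at a linear rate.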
Lemma 3.2. Let x > 0 be the unit eigenvector associated with ρ(A). For any vector z > 0 with ‖z‖ = 1, it holds that cos ∠(z, x) > min(x), and

inf_{‖z‖=1, z>0} cos ∠(z, x) = min(x). (3.6)

Proof. Since x > 0 and z > 0 with ‖x‖ = ‖z‖ = 1, we have cos ∠(z, x) = z^T x > 0. Therefore, the infimum of z^T x is attained in the limit z → e_i, where e_i is the i-th column of the identity matrix. That is,

inf_{‖z‖=1, z>0} cos ∠(z, x) = min_i { lim_{z→e_i} cos ∠(z, x) } = min_i {x_i} = min(x).

Let {x_k} be generated by Algorithm 3.1. We decompose x_k into the orthogonal direct sum

x_k = x cos ϕ_k + p_k sin ϕ_k, p_k ∈ span(x)^⊥, (3.7)

with ‖p_k‖ = 1 and ϕ_k = ∠(x_k, x) the acute angle between x_k and x. Now define

ε_k = λ_k − ρ(A), A_k = λ_k I − A.

Similar to (2.3), we also have the spectral decomposition

[v^T; V^T] A_k [x X] = [ ε_k  0
                          0   L_k ], (3.8)

where L_k = λ_k I − L.

Theorem 3.3. Let A be a nonnegative irreducible matrix. Assume (ρ(A), x) is the largest eigenpair of A with x > 0 and ‖x‖ = 1. If x_k, λ_k, y_k and f_k are generated by Algorithm 3.1 (INI), then the following statements are equivalent:
(i) lim x_k = x; (ii) lim λ_k = ρ(A); (iii) lim ‖y_k‖^{−1} = 0.

Proof. (i) ⇒ (ii): By the definition of λ_k, we get

lim λ_k = lim max(A x_k / x_k) = max(A lim x_k / lim x_k) = max(Ax/x) = ρ(A).

(ii) ⇒ (iii): Since |f_k| ≤ γ x_k, from (3.7) we have

‖y_{k+1}‖ = ‖A_k^{−1}(x_k + f_k)‖ ≥ (1 − γ) ‖A_k^{−1} x_k‖ = (1 − γ) ‖ε_k^{−1} x cos ϕ_k + A_k^{−1} p_k sin ϕ_k‖. (3.9)
The second term of equation (3.9) can be bounded by

‖A_k^{−1} p_k sin ϕ_k‖ ≤ ‖(ε_k^{−1} x v^T + X L_k^{−1} V^T) p_k‖ = ‖X L_k^{−1} V^T p_k‖ ≤ ‖X‖ ‖V‖ / sep(λ_k, L) ≤ ‖X‖ ‖V‖ / sep(ρ(A), L). (3.10)

From Lemma 3.2 it follows that cos ϕ_k is uniformly bounded below by min(x). Combining (3.9) with (3.10), we have

‖y_{k+1}‖ ≥ (1 − γ) ((λ_k − ρ(A))^{−1} cos ϕ_k − ‖A_k^{−1} p_k sin ϕ_k‖) ≥ (1 − γ) (min(x) (λ_k − ρ(A))^{−1} − ‖X‖ ‖V‖ / sep(ρ(A), L)) → ∞

as k → ∞.

(iii) ⇒ (i): Let (λ_k, x_{k+1}) be an approximation to (ρ(A), x). From Lemma 2.4, we have

sin ∠(x, x_{k+1}) ≤ ‖A x_{k+1} − λ_k x_{k+1}‖ / sep(λ_k, L) = ‖(λ_k I − A) y_{k+1}‖ / (‖y_{k+1}‖ sep(λ_k, L)) = ‖x_k + f_k‖ / (‖y_{k+1}‖ sep(λ_k, L)) ≤ 2 / (‖y_{k+1}‖ sep(λ_k, L)) → 0.

Thus, it holds that lim x_k = x. Note that by Lemma 3.1, {λ_k} is monotonically decreasing and bounded below, so it must converge.

Corollary 3.4. Under the assumptions of Theorem 3.3, if λ_k converges to α > ρ(A), then it holds that (i) ‖y_k‖ is bounded; (ii) lim min(x_k + f_k) = 0; (iii) sin ∠(x_k, x) ≥ ζ for some ζ > 0.

Proof. (i) Since |f_k| ≤ γ x_k, we get

‖y_{k+1}‖ = ‖(λ_k I − A)^{−1} (x_k + f_k)‖ ≤ 2 ‖(λ_k I − A)^{−1}‖ = 2 / sep(λ_k, A) ≤ 2 / sep(α, A) < ∞. (3.11)

(ii) From the relation (3.5) it follows that

lim min((x_k + f_k)/y_{k+1}) = lim (λ_k − λ_{k+1}) = 0. (3.12)

From (3.11) and (3.12) we have

min((x_k + f_k)/y_{k+1}) ≥ min(x_k + f_k) / max(y_{k+1}) ≥ min(x_k + f_k) sep(α, A) / 2.

Thus, it holds that

lim min(x_k + f_k) = 0. (3.13)

(iii) Suppose there is a subsequence {sin ∠(x_{k_j}, x)} which converges to zero. Then by Theorem 3.3 there is a subsequence {λ_{k_j}} converging to ρ(A). This contradicts λ_k → α > ρ(A).
3.2. Convergence Analysis. We now propose two practical relaxation strategies for the inexactness in step 4 of Algorithm 3.1 (INI):

INI_1: the residual norm satisfies ξ_k = ‖f_k‖ ≤ γ min(x_k), for some constant 0 < γ < 1;
INI_2: the residual vector satisfies |f_k| ≤ d_k x_k, where d_k = 1 − λ_k/λ_{k−1}.

It is easily seen that both INI_1 and INI_2 satisfy the condition |f_k| ≤ γ x_k of step 4 of Algorithm 3.1 for some constant 0 < γ < 1. From Lemma 3.1, we see that INI_1 or INI_2 generates a monotonically decreasing sequence {λ_k} bounded below by ρ(A) and a sequence of positive vectors {x_k}. From the decomposition (3.8), the vector x_{k+1} can be decomposed as

x_{k+1} = x γ_{k+1} + X S_{k+1}, (3.14)

where γ_{k+1} = v^T x_{k+1} and S_{k+1} = V^T x_{k+1}. Furthermore, we have

t_{k+1} := ‖S_{k+1}‖ |γ_{k+1}|^{−1}
= ‖V^T x_{k+1}‖ |v^T x_{k+1}|^{−1} = ‖V^T y_{k+1}‖ |v^T y_{k+1}|^{−1}
= ‖V^T A_k^{−1}(x_k + f_k)‖ |v^T A_k^{−1}(x_k + f_k)|^{−1}
= ‖L_k^{−1} V^T (x_k + f_k)‖ |ε_k^{−1} v^T (x_k + f_k)|^{−1}
≤ ‖L_k^{−1}‖ ε_k ‖V^T (x_k + f_k)‖ |v^T (x_k + f_k)|^{−1}
= ‖L_k^{−1}‖ ε_k ‖V^T x_k + V^T f_k‖ |γ_k|^{−1} |1 + v^T f_k γ_k^{−1}|^{−1}
≤ ‖L_k^{−1}‖ ε_k (t_k + ‖V‖ ‖f_k‖ |γ_k|^{−1}) (1 − ‖v‖ ‖f_k‖ |γ_k|^{−1})^{−1}. (3.15)

Note that [14, Proposition 2.1] shows that x_k → x if and only if t_k → 0, and from (3.7) and (3.14) we have

sin ϕ_k ≤ ‖X‖ ‖S_k‖ and |γ_k| ≥ cos²ϕ_k / (1 + sin ϕ_k). (3.16)

Theorem 3.5 (Main Theorem). Let A be a nonnegative irreducible matrix. If {λ_k} is generated by INI_1 and ρ(A) > ‖L‖, where L is as in (2.3), then {λ_k} → ρ(A), monotonically decreasing, as k → ∞.

Proof. Suppose not, and assume that {λ_k} → α > ρ(A). Then ‖L_k^{−1}‖ ε_k can be bounded, for k sufficiently large, by

‖L_k^{−1}‖ ε_k = (λ_k − ρ(A)) / sep(λ_k, L) ≤ (α − ρ(A)) / sep(α, L) =: β < 1, (3.17)

where β < 1 because sep(α, L) ≥ α − ‖L‖ > α − ρ(A). Since ξ_k = ‖f_k‖ ≤ γ min(x_k), it follows that |f_k| ≤ γ x_k, and hence min(x_k + f_k) ≥ (1 − γ) min(x_k). From Corollary 3.4 (ii) it follows that 0 = lim min(x_k + f_k) ≥ lim (1 − γ) min(x_k). Thus, we have

lim min(x_k) = 0. (3.18)
Furthermore, from Lemma 3.2 and Corollary 3.4 (iii) we know that sin ϕ_k and cos ϕ_k are uniformly bounded below by a positive number: there is an m > 0 such that m ≤ sin ϕ_k and m ≤ cos ϕ_k for all k. From the inequalities (3.16) it follows that

|γ_k|^{−1} ≤ 2/m² and t_k = ‖S_k‖ |γ_k|^{−1} ≥ sin ϕ_k / ‖X‖ ≥ m / ‖X‖. (3.19)

Let δ > 0 be sufficiently small, depending only on m, β, γ, ‖v‖, ‖V‖ and ‖X‖. From (3.18), there is an N > 0 such that min(x_k) < δ for all k ≥ N; in particular, by (3.19), γ ‖v‖ min(x_k) |γ_k|^{−1} < 1 for all k ≥ N. Using (3.15), (3.17) and (3.19), we obtain, for k ≥ N,

t_{k+1} ≤ ‖L_k^{−1}‖ ε_k (t_k + ‖V‖ ξ_k |γ_k|^{−1}) / (1 − ‖v‖ ξ_k |γ_k|^{−1})
≤ β (t_k + 2γ m^{−2} ‖V‖ min(x_k)) / (1 − 2γ m^{−2} ‖v‖ min(x_k))
≤ β (t_k + 2γ m^{−2} ‖V‖ δ) / (1 − 2γ m^{−2} ‖v‖ δ)
≤ ((1 + β)/2) t_k < t_k,

where the last inequality uses t_k ≥ m/‖X‖ and the smallness of δ. Hence t_k → 0, which implies x_k → x. This contradicts the results of Theorem 3.3, since x_k → x forces λ_k → ρ(A) < α. From Lemma 3.1 it then follows that {λ_k} converges to ρ(A), monotonically decreasing.

When A is symmetric, ρ(A) > |λ_2| = ‖L‖, so the condition

‖L_k^{−1}‖ ε_k = (λ_k − ρ(A)) / (λ_k − λ_2) ≤ (λ_0 − ρ(A)) / (λ_0 − λ_2) < 1

is automatically satisfied.

Theorem 3.6. Let A be a nonnegative irreducible matrix. If {λ_k} is generated by INI_2 and ρ(A) > ‖L‖, then {λ_k} → ρ(A), monotonically decreasing, as k → ∞.

Proof. Assume that {λ_k} → α > ρ(A). Since |f_k| ≤ d_k x_k with d_k = (λ_{k−1} − λ_k)/λ_{k−1}, we have ξ_k = ‖f_k‖ → 0 as k → ∞. Choose δ > 0 as in the proof of Theorem 3.5. Then there is an N > 0 such that ξ_k < δ for all k ≥ N, and from (3.19) it is easily seen that ξ_k ‖v‖ |γ_k|^{−1} < 1. Using (3.15), (3.17) and (3.19), we have

t_{k+1} ≤ β (t_k + 2 m^{−2} ‖V‖ ξ_k) / (1 − 2 m^{−2} ‖v‖ ξ_k) ≤ β (t_k + 2 m^{−2} ‖V‖ δ) / (1 − 2 m^{−2} ‖v‖ δ) ≤ ((1 + β)/2) t_k < t_k.

As in the proof of Theorem 3.5, this yields a contradiction. Hence, it holds that lim λ_k = ρ(A).

Corollary 3.7. Under the assumptions of Theorem 3.6, it holds that lim ε_k y_{k+1} = x.

Proof. From (3.8), we have A_k = ε_k x v^T + X L_k V^T.
Hence,

ε_k y_{k+1} = ε_k A_k^{−1}(x_k + f_k) = (x v^T + ε_k X L_k^{−1} V^T)(x_k + f_k). (3.20)

Since ε_k L_k^{−1} → 0, it follows from (3.20) that lim ε_k X L_k^{−1} V^T (x_k + f_k) = 0. From Theorems 3.6 and 3.3, we have lim (x_k + f_k) = x. This implies that

lim ε_k y_{k+1} = lim (x v^T + ε_k X L_k^{−1} V^T)(x_k + f_k) = x v^T x = x.

3.3. Convergence Rates. In this subsection, we show that the convergence rates of INI_1 and INI_2 are globally linear and superlinear, respectively. From the definition of λ_{k+1}, we have

λ_{k+1} = λ_k − min((x_k + f_k)/y_{k+1}), (3.21)

or, equivalently,

ε_{k+1} = ε_k (1 − min((x_k + f_k)/(ε_k y_{k+1}))) =: ε_k ρ_k. (3.22)

Since λ_k − λ_{k+1} < λ_k − λ, from (3.22) and (3.21),

ρ_k = 1 − min((x_k + f_k)/(ε_k y_{k+1})) = 1 − (λ_k − λ_{k+1})/(λ_k − λ) < 1. (3.23)

Theorem 3.8. Under the assumptions of Theorem 3.5, it holds that ρ_k < 1 and lim sup ρ_k < 1, i.e., the convergence of INI_1 is globally linear.

Proof. Since ξ_k ≤ γ min(x_k), it holds that |f_k| ≤ γ x_k. Because A_k^{−1} ≥ 0, we have

(1 − γ) x_k ≤ x_k + f_k ≤ (1 + γ) x_k,
(1 − γ) A_k^{−1} x_k ≤ y_{k+1} ≤ (1 + γ) A_k^{−1} x_k.

Then

min((x_k + f_k)/(ε_k y_{k+1})) ≥ (1 − γ) min(x_k) / ((1 + γ) max(ε_k A_k^{−1} x_k)). (3.24)

From Theorems 3.5 and 3.3, it follows that lim x_k = x, and then lim ε_k A_k^{−1} x_k = x. This implies that

lim min((x_k + f_k)/(ε_k y_{k+1})) ≥ (1 − γ) min(x) / ((1 + γ) max(x)) > 0.
Hence

lim sup ρ_k ≤ 1 − (1 − γ) min(x) / ((1 + γ) max(x)) < 1.

Theorem 3.9. Under the assumptions of Theorem 3.6, it holds that

(i) lim ε_{k+1}/ε_k = 0; (ii) lim (λ_k − λ_{k+1})/(λ_{k−1} − λ_k) = 0; (iii) lim ‖r_{k+1}‖/‖r_k‖ = 0,

where r_k = λ_k x_{k+1} − A x_{k+1}.

Proof. (i) Since ξ_k = ‖f_k‖ → 0, from Corollary 3.7 and (3.22), we have

lim ε_{k+1}/ε_k = 1 − lim min((x_k + f_k)/(ε_k y_{k+1})) = 1 − min(lim (x_k + f_k) / lim (ε_k y_{k+1})) = 1 − min(x/x) = 0. (3.25)

(ii) From (3.25) and (3.22), we have

lim (λ_k − λ_{k+1})/(λ_{k−1} − λ_k) = lim (ε_k − ε_{k+1})/(ε_{k−1} − ε_k) = lim ε_k(1 − ρ_k)/(ε_{k−1}(1 − ρ_{k−1})) = lim ρ_{k−1}(1 − ρ_k)/(1 − ρ_{k−1}) = 0.

(iii) From (3.1), we have

r_k = (λ_k I − A) x_{k+1} = (λ_k I − A) y_{k+1} / ‖y_{k+1}‖ = (x_k + f_k)/‖y_{k+1}‖.

From Corollary 3.7, (3.25) and (3.22),

lim ‖r_{k+1}‖/‖r_k‖ = lim (ε_{k+1}/ε_k) · (‖x_{k+1} + f_{k+1}‖/‖x_k + f_k‖) · (ε_k ‖y_{k+1}‖)/(ε_{k+1} ‖y_{k+2}‖) = 0 · 1 · 1 = 0.

4. Computing the smallest eigenpair of an M-matrix. In this section, we consider how to compute the smallest eigenvalue of an irreducible M-matrix A. Let A = σI − B be an M-matrix and let (λ, x) be the smallest eigenpair, with λ := min λ_i = σ − ρ(B). By Theorem 3.5 (or 3.6), there is a decreasing sequence {λ_k} which converges to ρ(B) with λ_k > ρ(B). We denote λ̄_k = σ − λ_k; then the λ̄_k < λ form an increasing sequence which converges to λ. In the INI Algorithm 3.1, we are required to solve the linear system

(λ_k I − B) y_{k+1} = x_k + f_k, (4.1)
where λ_k = max(B x_k / x_k) and x_{k+1} = y_{k+1}/‖y_{k+1}‖. Since λ_k I − B = (λ_k − σ)I + (σI − B) = A − λ̄_k I, the linear system (4.1) is equivalent to

(A − λ̄_k I) y_{k+1} = x_k + f_k,

where

λ̄_k = σ − max(B x_k / x_k) = min(A x_k / x_k).

Since A − λ̄_k I is an M-matrix, it holds that y_{k+1} > 0. Thus, we get the relation between λ̄_{k+1} and λ̄_k:

λ̄_{k+1} = min(A x_{k+1} / x_{k+1}) = λ̄_k + min((x_k + f_k)/y_{k+1}).

Therefore, Algorithm 3.1 can be modified for M-matrices as follows:

Algorithm 4.1 INI for M-matrix.
1. Set x_0 = e, compute λ̄_0 = min(A x_0 / x_0).
2. for k = 0, 1, 2, ...
3. Solve the linear system (A − λ̄_k I) y_{k+1} = x_k inexactly, with Type 1 or Type 2 below.
4. Normalize the vector x_{k+1} = y_{k+1}/‖y_{k+1}‖.
5. Compute λ̄_{k+1} = min(A x_{k+1} / x_{k+1}).
6. until convergence.

Here, as in Subsection 3.2, we define:
INI Type 1: the residual norm satisfies ξ_k ≤ γ min(x_k) for some 0 < γ < 1;
INI Type 2: the residual vector satisfies |f_k| ≤ d_k x_k, with d_k = (λ̄_k − λ̄_{k−1})/λ̄_k.

As in Theorems 3.5, 3.6, 3.8, and 3.9, we can also show the following result.

Theorem 4.1. Let A be an M-matrix. If λ̄_k and x_k are generated by Algorithm 4.1, then {λ̄_k} converges, monotonically increasing, to λ (the smallest eigenvalue of A) as k → ∞, and lim x_k = x with x > 0. Furthermore, the convergence rates of INI Type 1 and INI Type 2 are globally linear and superlinear, respectively.

5. Computing a good initial vector for IRQI. For a symmetric matrix A, Jia [8] shows that IRQI with MINRES generally converges cubically, quadratically and linearly, provided that the inner tolerance satisfies ξ_k ≤ ξ with a constant ξ < 1 not near one, ξ_k = 1 − O(‖r_k‖), and ξ_k = 1 − O(‖r_k‖²), respectively. Here r_k = (A − θ_k I) u_k is the residual and θ_k = u_k^T A u_k is the Rayleigh quotient. This process is summarized in Algorithm 5.1. It is well known that RQI has good convergence properties if u_0 is sufficiently close to x. From [8, Th. 5], we know that if the uniform positiveness condition x^T (u_k + g_k) ≥ d is satisfied with a constant d > 0 uniformly independent of k, where g_k = (A − θ_k I) w_{k+1} − u_k, then

‖r_{k+1}‖ ≤ (8 β² ξ_k / (d |λ − λ_2|)) ‖r_k‖².
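As a point of reference, the exact-solve limit (ξ_k = 0) of this Rayleigh quotient iteration can be sketched in a few lines of pure Python for a small dense symmetric matrix; the matrix S, the starting vector, and all names are illustrative assumptions, and a direct solver replaces MINRES.

```python
# A minimal sketch of Rayleigh quotient iteration with exact inner solves
# (the xi_k = 0 limit of inexact RQI), on a tiny symmetric matrix.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            m = aug[r][c] / aug[c][c]
            for j in range(c, n + 1):
                aug[r][j] -= m * aug[c][j]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][j] * z[j] for j in range(r + 1, n))
        z[r] = (aug[r][n] - s) / aug[r][r]
    return z

def rqi(A, u, tol=1e-6, maxit=20):
    """Rayleigh quotient iteration: returns (theta, u) with A u ~ theta u."""
    n = len(A)
    nrm = sum(v * v for v in u) ** 0.5
    u = [v / nrm for v in u]
    theta = 0.0
    for _ in range(maxit):
        Au = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        theta = sum(u[i] * Au[i] for i in range(n))      # Rayleigh quotient
        res = sum((Au[i] - theta * u[i]) ** 2 for i in range(n)) ** 0.5
        if res < tol:                                    # residual r_k small
            break
        shifted = [[A[i][j] - (theta if i == j else 0.0) for j in range(n)]
                   for i in range(n)]                    # A - theta_k I
        w = solve(shifted, u)                            # inner solve
        nrm = sum(v * v for v in w) ** 0.5
        u = [v / nrm for v in w]                         # u_{k+1}
    return theta, u

S = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]                                    # eigenvalues 2, -1, -1
theta, u = rqi(S, [2.0, 1.0, 1.0])
```

Starting reasonably close to the Perron vector, θ_k settles on ρ(S) = 2 within a few iterations, reflecting the cubic local convergence of RQI for symmetric matrices.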
Algorithm 5.1 Inexact RQI.
1. Choose a unit vector u_0.
2. for k = 0, 1, 2, ...
3. Solve (A − θ_k I) w_{k+1} = u_k with an iterative solver such that ‖(A − θ_k I) w_{k+1} − u_k‖ = ξ_k ≤ ξ.
4. Normalize the vector u_{k+1} = w_{k+1}/‖w_{k+1}‖.
5. until convergence.

Unfortunately, as mentioned in [8, Thms. 2, 5, 6], for a larger β = (λ_max − λ_min)/(λ_max − λ_2), IRQI with MINRES may converge very slowly. If we choose the initial guess u_0 such that

(8 β² ξ_0 / (d |λ − λ_2|)) ‖r_0‖ < 1, (5.1)

then ‖r_1‖ < ‖r_0‖, that is, {‖r_k‖} becomes a strictly decreasing sequence. The following theorem gives a sufficient condition on the initial guess u_0 which ensures inequality (5.1).

Theorem 5.1. Let A be a symmetric and nonnegative irreducible matrix, and let the vectors x_i and f_i be generated by INI (Algorithm 3.1) with outer tolerance tol1. If tol1 < d |λ − λ_2| / (8 β² ξ), then ‖r_1‖ < ‖r_0‖, that is, IRQI converges quadratically.

Proof. Let u_0 = x_i. Since

‖r_0‖ = ‖(A − θ_0 I) u_0‖ ≤ ‖(A − λ_i I) u_0‖ = ‖(A − λ_i I) x_i‖ ≤ tol1 < d |λ − λ_2| / (8 β² ξ),

it follows that

(8 β² ξ_0 / (d |λ − λ_2|)) ‖r_0‖ < ξ_0/ξ ≤ 1,

which implies ‖r_1‖ < ‖r_0‖.

Thus, we combine INI_1 or INI_2 with IRQI to obtain INI1-IRQI or INI2-IRQI. Based on practical experiments, we suggest taking tol2 < tol1 ≈ n^{−1/2}.

6. Numerical experiments. In this section we present numerical experiments to support our theoretical results on INI and compare them with IRQI. We perform the numerical tests on an Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz with 4 GB memory, using Matlab under Microsoft Windows 7 64-bit.
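Before the large-scale experiments, Algorithm 4.1 can be illustrated on a tiny M-matrix: a hedged pure-Python sketch with exact inner solves (the γ = 0 limit), in which the matrix A = σI − B and all names are illustrative assumptions.

```python
# A hedged sketch of Algorithm 4.1 (smallest eigenpair of an irreducible
# M-matrix) with exact inner solves. The shift min(A x_k / x_k) increases
# monotonically to the smallest eigenvalue sigma - rho(B).

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            m = aug[r][c] / aug[c][c]
            for j in range(c, n + 1):
                aug[r][j] -= m * aug[c][j]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][j] * z[j] for j in range(r + 1, n))
        z[r] = (aug[r][n] - s) / aug[r][r]
    return z

def smallest_eigenpair(A, tol=1e-10, maxit=100):
    """Noda-type iteration (Algorithm 4.1, exact solves) for an M-matrix A."""
    n = len(A)
    x = [1.0] * n                                    # x_0 = e
    lam = 0.0
    for _ in range(maxit):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = min(Ax[i] / x[i] for i in range(n))    # increasing shift
        res = sum((Ax[i] - lam * x[i]) ** 2 for i in range(n)) ** 0.5
        if res < tol:
            break
        shifted = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)]
                   for i in range(n)]                # A - shift*I, an M-matrix
        y = solve(shifted, x)
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]                     # positive vector
    return lam, x

sigma = 10.0
B = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
A = [[(sigma if i == j else 0.0) - B[i][j] for j in range(3)] for i in range(3)]
lam_min, x = smallest_eigenpair(A)                   # ~ sigma - rho(B)
```

The increasing shifts here mirror the decreasing sequence of Section 3 under the sign change λ̄_k = σ − λ_k.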
Algorithm 5.2 INI-IRQI.
1. Set tol1, tol2 > 0.
2. Run INI_1 or INI_2 until the residual norm is at most tol1, giving the approximation x_i.
3. Let u_0 = x_i be the initial vector for IRQI.
4. Run the IRQI algorithm until the residual norm is at most tol2.

6.1. INI for nonnegative matrices. Here we provide four examples to illustrate the numerical properties of NI, INI_1 and INI_2 for nonnegative matrices, and contrast their performance with that of IRQI. At each inner iteration step we solve the linear system (3.1), i.e., (λ_k I − A) y_{k+1} = x_k + f_k, exactly or inexactly. We use the following stopping criteria for the inner iterations: for NI, we solve essentially exactly, requiring ‖f_k‖ to be at the level of machine precision; for INI_1, ‖f_k‖ ≤ γ min(x_k); for INI_2, ‖f_k‖ ≤ min{min(x_k), (λ_{k−1} − λ_k)/λ_{k−1}}. Note that INI_1 and INI_2 give similar numerical results when the minimal entry of the Perron vector is close to 0. In the following examples, we require the stopping criterion of the outer iterations to satisfy ‖A x_k − λ_k x_k‖ ≤ tol. We use BiCGSTAB and MINRES to solve the linear systems for unsymmetric and symmetric matrices, respectively (Matlab functions bicgstab and minres). The outer iteration starts with the normalized vector of (1, ..., 1)^T, except in Example 2. We denote by I_outer the number of outer iterations needed to achieve convergence, and by I_inner the total number of inner iterations.

Example 1. We consider a randomly chosen nonnegative matrix A with normally distributed entries (MATLAB function randn). Figure 6.1 shows the convergence of NI, INI_1 and INI_2. As can be seen, INI_1 converges linearly, while NI and INI_2 converge superlinearly, which supports the results of Theorems 3.8 and 3.9.

Example 2. Consider a randomly chosen nonnegative irreducible matrix with approximately 10^6 normally distributed nonzero entries and row sums equal to 1. Clearly, the largest eigenvalue of this matrix is λ = 1 and the associated eigenvector is x = (1, ..., 1)^T. We randomly choose a starting vector x_0 > 0. Figure 6.2 illustrates how the approximate eigenvalues λ_k evolve with the outer iterations for NI, INI_1 and INI_2.
As the figure shows, λ_k decreases monotonically for all three methods and converges to λ = 1. In Table 1 we report the eigenvalues computed by NI, INI_1 and INI_2; all three methods achieve the same order of accuracy.
Fig. 6.1. The outer residual norms versus outer iterations in Example 1.

Fig. 6.2. Magnitude of the estimated eigenvalue versus outer iterations.

Table 1: The magnitude of the estimated eigenvalue at the final outer iteration (exact value versus NI, INI_1, INI_2).

Table 2 reports the total numbers of outer and inner iterations and the CPU time for NI, INI_1 and INI_2. From Table 2, we see that INI_1 improves the overall efficiency of the inner iterations considerably and reduces the computational cost: its computing cost is about 25% of that of NI. Even INI_2 uses only about 33% of the inner iterations of NI. Figure 6.3 illustrates the steady increase in inner iterations; it shows that NI needs more inner iterations than INI_1 and INI_2 at each outer iteration. Figure 6.4 shows that the number of inner iterations grows at each outer iteration because λ_k gets close
to λ.

Table 2: The total numbers of outer and inner iterations in Example 2 (I_outer, I_inner and CPU time for NI, INI_1, INI_2).

Fig. 6.3. The outer residual norms versus the total number of inner iterations.

Fig. 6.4. The number of inner iterations per outer iteration.

Example 3. Consider the sparse symmetric nonnegative matrix delaunay_n20 from the DIMACS10 test set [4]. The coefficient matrix is generated as the Delaunay triangulation of random points in the unit square. It is a binary matrix (a matrix each of whose elements is 0 or 1) with size n = 2^20 and 6,291,372 nonzero entries. NI-GE is the Noda iteration in which the linear system at each step is solved by Gaussian elimination (GE) (i.e., the MATLAB function \). INI1-IRQI is the combination of INI Type 1 and IRQI as presented in Algorithm 5.2. We choose tol1 = n^{−1/2} and ξ = 0.8 for IRQI. Table 3 and Figure 6.5 show that INI_1 and INI1-IRQI need far fewer inner iterations than NI and IRQI; INI1-IRQI reduces the number of inner iterations to about 1/14 of the total number for IRQI.

Table 3: Results for Example 3 (I_outer, I_inner, CPU time and residual for NI, INI_1, NI-GE, IRQI, INI1-IRQI).

Example 4. From the DIMACS10 test set [4], we consider the sparse symmetric nonnegative matrix rgg_n_2_21_s0. This matrix is a random geometric graph with 2^21 vertices: each vertex is a random point in the unit square, and edges connect vertices whose Euclidean distance is below 0.55 (log(n)/n). This threshold is chosen in order to ensure that the graph is almost connected. The matrix is binary with n = 2^21 and 28,975,990 nonzero entries. From Example 3, we know that INI1-IRQI is the most efficient method; in this case, we show that INI_1 and INI1-IRQI have the same efficiency. We only compare
NI, INI_1 and INI1-IRQI, since the other two methods in Table 3 need more computational cost. Table 4 shows that INI1-IRQI and INI_1 are much more efficient, and usually achieve the lowest inner iteration counts compared with the other methods.

Fig. 6.5. The numbers of inner iterations versus outer iterations.

Fig. 6.6. The outer residual norms versus outer iterations.

Table 4: Results for Example 4 (I_outer, I_inner and CPU time for NI, INI_1, INI1-IRQI).

6.2. INI for M-matrices. In this subsection, we use INI_1 to find the smallest eigenvalue and the associated eigenvector of an M-matrix. The numerical experiments illustrate the convergence behavior of INI as described in Section 4.

Example 5. We consider an M-matrix of the form A = σI − B, where B is the matrix from a 3D human face mesh [?] and σ is a suitable constant. Figure 6.7 shows that IRQI does not converge even though its number of inner iterations is 907, as reported in Table 5. Furthermore, Table 5 shows that INI1-IRQI needs only about one third of the total number of inner iterations of the NI method, and is faster than INI_1.

Fig. 6.7. The outer residual norms versus the total number of inner iterations in Example 5.
Table 5: The total numbers of outer and inner iterations in Example 5 (I_outer, I_inner and CPU time for NI, INI_1, IRQI, INI1-IRQI).

7. Conclusions. We have considered the convergence of the inexact Noda iteration with two relaxation strategies in detail, and have established a number of results on global linear and superlinear convergence. These results clearly show how the inner tolerance affects the convergence of the outer iterations, and they provide practical criteria for controlling the inner tolerance so as to achieve a desired convergence rate. Perhaps surprisingly, the inexact Noda iteration with any linear solver converges linearly for ξ_k = ‖f_k‖ ≤ γ min(x_k) = O(min(x)), with x the Perron eigenvector, and superlinearly for ξ_k decreasing to zero, respectively. NI and INI are fundamentally different from the existing methods, and these results have a strong impact on implementing the algorithm effectively so as to reduce the total computational cost very considerably.

Acknowledgments. This work was done while MH was visiting WWL. MH thanks WWL and the department for the hospitality.

REFERENCES

[1] A. S. Alfa, J. Xue, and Q. Ye, Accurate computation of the smallest eigenvalue of a diagonally dominant M-matrix, Math. Comp., 71 (2002).
[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, vol. 9 of Classics in Applied Mathematics, SIAM, Philadelphia, PA.
[3] J. Berns-Müller, I. G. Graham, and A. Spence, Inexact inverse iteration for symmetric matrices, Linear Algebra Appl., 416 (2006).
[4] DIMACS10 test set and the University of Florida Sparse Matrix Collection.
[5] L. Elsner, Inverse iteration for calculating the spectral radius of a non-negative irreducible matrix, Linear Algebra Appl., 15 (1976).
[6] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 3rd ed.
[7] R. A. Horn and C. R.
Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, [8] Z. Jia, On convergence of the inexact Rayleigh quotient iteration with MINRES, technical report, June [9], On convergence of the inexact Rayleigh quotient iteration with the Lanczos method used for solving linear systems, technical report, September [10] Y.-L. Lai, K.-Y. Lin, and W.-W. Lin, An inexact inverse iteration for large sparse eigenvalue problems, Numer. Linear Algebra Appl., 4 (1997, pp [11] T. Noda, Note on the computation of the maximal eigenvalue of a non-negative irreducible matrix, Numer. Math., 17 (1971, pp [12] A. M. Ostrowsi, On the convergence of the Rayleigh quotient iteration for the computation of the characteristic roots and vectors. V. (Usual Rayleigh quotient for non-hermitian matrices and linear elementary divisors, Arch. Rational Mech. Anal., 3 (1959, pp [13] B. N. Parlett, The Symmetric Eigenvalue Problem, Society for Industrial and Applied Mathematics (SIAM, Philadelphia, PA, [14] M. Robbé, M. Sadane, and A. Spence, Inexact inverse subspace iteration with preconditioning applied to non-hermitian eigenvalue problems, SIAM J. Matrix Anal. Appl., 31 (2009, pp
[15] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, Manchester, UK.
[16] G. L. G. Sleijpen and H. A. van der Vorst, A Jacobi–Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl., 17 (1996).
[17] G. W. Stewart, Matrix Algorithms, Vol. II, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.
[18] J. Xue, Computing the smallest eigenvalue of an M-matrix, SIAM J. Matrix Anal. Appl., 17 (1996).