The Rate of Convergence of GMRES on a Tridiagonal Toeplitz Linear System


Numerische Mathematik manuscript No. (will be inserted by the editor)

The Rate of Convergence of GMRES on a Tridiagonal Toeplitz Linear System

Ren-Cang Li, Wei Zhang

Department of Mathematics, University of Texas at Arlington, P.O. Box 19408, Arlington, TX 76019, rcli@uta.edu. Department of Mathematics, University of Kentucky, Lexington, KY 40506, wzhang@ms.uky.edu.

Received: November 2007 / Revised version: April and August 2008

Summary. The Generalized Minimal Residual method (GMRES) is often used to solve a nonsymmetric linear system $Ax=b$, but its convergence analysis is a difficult task in general. A commonly used approach is to diagonalize $A=X\Lambda X^{-1}$ and then separate the study of GMRES convergence behavior into optimizing the condition number of $X$ and a polynomial minimization problem over $A$'s spectrum. This artificial separation can greatly overestimate GMRES residuals and likely yields error bounds that are far from the actual ones. On the other hand, accounting for the effects of both $A$'s spectrum and the conditioning of $X$ at the same time is a difficult challenge, perhaps impossible in general and tractable only for certain particular linear systems. This paper does so for a (nonsymmetric) tridiagonal Toeplitz system. Sharp error bounds on, and sometimes exact expressions for, the residuals are obtained. These expressions and/or bounds are given in terms of the three parameters that define $A$ and of Chebyshev polynomials of the first kind.

Key words: GMRES, rate of convergence, tridiagonal Toeplitz matrix, linear system, asymptotic speed

Mathematics Subject Classification (1991): 65F10

Send offprint requests to: Ren-Cang Li

Supported in part by the National Science Foundation under Grant No. DMS and DMS.

1 Introduction

The Generalized Minimal Residual method (GMRES) is often used to solve a nonsymmetric linear system $Ax=b$. The basic idea is to seek approximate solutions, optimal in a certain sense, from the so-called Krylov subspaces. Specifically, the $k$th approximation $x_k$ is sought so that the $k$th residual $r_k=b-Ax_k$ satisfies [1] (without loss of generality, we take $x_0=0$ initially and thus $r_0=b$)
\[
\|r_k\|=\min_{y\in\mathcal K_k}\|b-Ay\|,
\]
where the $k$th Krylov subspace $\mathcal K_k\equiv\mathcal K_k(A,b)$ of $A$ on $b$ is defined as
\[
\mathcal K_k\equiv\mathcal K_k(A,b)\stackrel{\rm def}{=}{\rm span}\{b,Ab,\dots,A^{k-1}b\}, \tag{1.1}
\]
and the generic norm $\|\cdot\|$ is the usual $\ell_2$ norm of a vector or the spectral norm of a matrix.

This paper is concerned with the convergence analysis of GMRES on a linear system $Ax=b$ whose coefficient matrix $A$ is a (nonsymmetric) tridiagonal Toeplitz matrix:
\[
A=\begin{pmatrix}
\lambda & \mu & & \\
\nu & \lambda & \ddots & \\
 & \ddots & \ddots & \mu\\
 & & \nu & \lambda
\end{pmatrix},
\]
where $\lambda$, $\mu$, $\nu$ are assumed nonzero and possibly complex. When $\mu=\nu$, $A$ is normal, including symmetric and symmetric positive definite as subcases, and the convergence analysis of GMRES (or the conjugate gradient method) on the corresponding linear systems has been well studied (see [14] and the references therein). This paper will focus on the nonsymmetric case: $\mu\ne\nu$.

Throughout this paper, exact arithmetic is assumed, $A$ is $N$-by-$N$, and $k$ is the GMRES iteration index. Since in exact arithmetic GMRES computes the exact solution in at most $N$ steps, $r_N=0$. For this reason, we restrict $k<N$ at all times.

Our first main contribution in this paper is the following error bound (Theorem 2.1):
\[
\frac{\|r_k\|}{\|r_0\|}\le\sqrt{2(k+1)}\,\Bigl[\sum_{j=0}^{k}\bigl|\zeta^j T_j(\tau)\bigr|^2\Bigr]^{-1/2}, \tag{1.2}
\]
where $T_j(t)$ is the $j$th Chebyshev polynomial of the first kind, and
\[
\xi=\sqrt{\mu/\nu},\qquad \tau=\frac{\lambda}{2\sqrt{\mu\nu}},\qquad \zeta=\min\{|\xi|,|\xi|^{-1}\}.
\]
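The right-hand side of (1.2) is computable directly from the three parameters $(\lambda,\mu,\nu)$ via the Chebyshev three-term recurrence. The following is an illustrative sketch (not the authors' code; the parameter values in the last line are made up):

```python
import numpy as np

def gmres_bound(lam, mu, nu, k):
    """Upper bound (1.2) on ||r_k||/||r_0|| for the tridiagonal Toeplitz
    system, from the three defining parameters.  Any fixed branch of the
    complex square roots is a valid choice."""
    tau = lam / (2 * np.sqrt(complex(mu * nu)))
    xi = np.sqrt(complex(mu) / nu)
    zeta = min(abs(xi), 1 / abs(xi))
    # Chebyshev polynomials of the first kind by the three-term recurrence
    # T_0 = 1, T_1 = tau, T_{j+1} = 2*tau*T_j - T_{j-1} (valid for complex tau).
    T = [1.0 + 0j, tau]
    for _ in range(2, k + 1):
        T.append(2 * tau * T[-1] - T[-2])
    s = sum(abs(zeta**j * T[j]) ** 2 for j in range(k + 1))
    return np.sqrt(2 * (k + 1)) * s ** (-0.5)

bounds = [gmres_bound(4.0, 1.0, 1.0, k) for k in range(1, 9)]
```

For instance, with $\lambda=4$, $\mu=\nu=1$ (a normal $A$), $\tau=2$ and the bound decays geometrically at roughly the rate $\rho^{-1}$ with $\rho=2+\sqrt3$.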

We will also prove that this upper bound is nearly achieved by $b=e_1$ (the first column of the identity matrix) when $|\xi|\ge1$ or by $b=e_N$ (the last column of the identity matrix) when $|\xi|\le1$. By nearly achieved, we mean it is within a factor of about at most $2(k+1)^{3/2}$ of the exact residual ratios.

Our second main contribution is about the worst asymptotic speed of $\|r_k\|$ among all possible $r_0$. It is proven that (Theorem 2.2)
\[
\lim_{k\to\infty}\inf_{N>k}\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}
=\lim_{k\to\infty}\sup_{N>k}\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}
=\min\{(\zeta\rho)^{-1},1\}, \tag{1.3}
\]
where $\rho=\max\{|\tau+\sqrt{\tau^2-1}|,|\tau-\sqrt{\tau^2-1}|\}\ge1$. Equivalently, the limits in (1.3) can be interpreted as follows: for any given $\epsilon>0$, there exists a positive integer $K$ such that
\[
\Bigl|\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}-\min\{(\zeta\rho)^{-1},1\}\Bigr|<\epsilon
\quad\text{for all }N>k\ge K. \tag{1.4}
\]
It is worth mentioning that $\sup_{r_0}$ among all possible $r_0$ in (1.3) can be replaced by $\sup_{r_0\in\{e_1,e_N\}}$.

A related work that also studied the asymptotic speed of convergence, but for the conjugate gradient method (CG), special right-hand sides, and $\lambda=2$, $\mu=\nu=-1$, is [ ], where $N/k$ is required to remain constant as $k\to\infty$, a much stronger restriction than $N>k$ needed in (1.3) and (1.4).

A by-product of (1.3) is that the worst asymptotic speed can be separated into the factor $\zeta^{-1}\ge1$ contributed by $A$'s departure from normality and the factor $\rho^{-1}\le1$ contributed by $A$'s eigenvalue distribution. Take, for example, $\lambda=0.5$, $\mu=-0.3$, and $\nu=0.7$, which was used in [4, p.56] to explain the effect of nonnormality on GMRES convergence. We have $(\zeta\rho)^{-1}\approx0.907$, whereas in [4, p.56] it is implied that $\|r_k\|/\|r_0\|\lesssim(0.913)^k$ for $N=50$, which is rather good, considering that $N=50$ is rather small.

Tridiagonal Toeplitz linear systems naturally arise from a convection-diffusion model problem [19].
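The worst-speed factor for the example just discussed can be reproduced in a few lines. This is an illustrative check only (the parameter values are those quoted above; any fixed branch of the complex square roots gives the same moduli):

```python
import numpy as np

# Worst asymptotic speed factor min{(zeta*rho)^(-1), 1} for the example
# lambda = 0.5, mu = -0.3, nu = 0.7 discussed above.
lam, mu, nu = 0.5, -0.3, 0.7
tau = lam / (2 * np.sqrt(complex(mu * nu)))   # purely imaginary here
xi = np.sqrt(complex(mu) / nu)                # purely imaginary here
zeta = min(abs(xi), 1 / abs(xi))
s = np.sqrt(tau**2 - 1)
rho = max(abs(tau + s), abs(tau - s))         # = |tau| + sqrt(|tau|^2 + 1) here
speed = min(1 / (zeta * rho), 1.0)
```

Here $\tau$ is purely imaginary, so $\rho=|\tau|+\sqrt{|\tau|^2+1}$, and the computed `speed` is about $0.907$.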
When $\mu$ or $\nu$ is zero, $A$ becomes a Jordan block, and GMRES residual bounds by Ipsen [1] and Liesen and Strakoš [18] suggest possibly slow convergence unless $\lambda$ is much bigger in magnitude than the nonzero one of $\mu$ and $\nu$. This is more or less expected, because otherwise $A$ is highly nonnormal, a case in which GMRES usually does poorly. Liesen and Strakoš also presented a detailed analysis aiming to show that GMRES for tiny $\mu$ behaves much like GMRES after setting $\mu$ to $0$. Their general residual expression and bound, however, involve the pseudo-inverse and the

smallest singular value, respectively, of a matrix, and they do not seem to lead to simple bounds like ours without further, likely complicated, analysis.

Ernst [9], in our notation, obtained the following inequality: if $A$'s field of values does not contain the origin, then
\[
\frac{\|r_k\|}{\|r_0\|}\le\bigl(|\xi|^{k}+|\xi|^{-k}\bigr)\frac{\widehat\rho^{\,-k}}{1-\widehat\rho^{\,-2k}}, \tag{1.5}
\]
where $\widehat\rho=\max\{|\widehat\tau+\sqrt{\widehat\tau^2-1}|,|\widehat\tau-\sqrt{\widehat\tau^2-1}|\}\ge1$ and $\widehat\tau=\bigl[\cos\frac{\pi}{N+1}\bigr]^{-1}\tau$. Our bound (1.2) is comparable to Ernst's bound for large $N$. This can be seen by noting that for $N$ large enough, $\widehat\tau\approx\tau$ and $\widehat\rho\approx\rho$, that $|T_j(\tau)|\approx\frac12\rho^{\,j}$ when $\rho>1$, and that $\frac12\zeta^{k}\le(|\xi|^{k}+|\xi|^{-k})^{-1}\le\zeta^{k}$. Ernst's bound also leads to
\[
\lim_{k\to\infty}\sup_{N>k}\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}\le\min\{(\zeta\rho)^{-1},1\}. \tag{1.6}
\]
In differentiating our contributions here from Ernst's, we use a different technique to arrive at (1.2) and (1.3). While our approach is not as elegant as Ernst's, which was based on $A$'s field of values (see also [5]), it allows us to establish both lower and upper bounds on relative residuals for special right-hand sides and thus to conclude that our bound is nearly achieved. Also, (1.3) is an equality, while only the inequality (1.6) can be deduced from Ernst's bound and approach.

We also obtain residual bounds and exact expressions especially for the right-hand sides $b=e_1$ and $b=e_N$ (Theorem 2.3). They suggest, besides the sharpness of (1.2), an interesting GMRES convergence behavior. For $b=e_1$, $|\xi|>1$ speeds up GMRES convergence, and in fact $\|r_k\|$ is roughly proportional to $|\xi|^{-k}$. So the bigger $|\xi|$ is, the faster the convergence will be. Note that as $|\xi|$ gets bigger, $A$ gets further away from a normal matrix. Thus, loosely speaking, the nonnormality contributes to the convergence rate in a positive way. Nonetheless this does not contradict our usual perception that high nonnormality is bad for GMRES if the worst behavior of GMRES among all $b$ is considered. This mystery can be best explained by looking at the extreme case $\xi=\infty$, i.e., $\nu=0$, for which $b=e_1$ is an eigenvector (and convergence occurs in just one step).
In general, for $\nu\ne0$, as $|\xi|$ gets bigger and bigger, roughly speaking $b=e_1$ comes closer and closer to $A$'s invariant subspaces of lower dimensions, and consequently speedier convergence is witnessed. Similar comments apply to the case $b=e_N$.

The rest of this paper is organized as follows. We state our main results in Section 2. Tedious proofs that rely on a residual reformulation

involving rectangular Vandermonde matrices and on complicated analysis are presented separately in Section 3. Exact residual norm formulas for the two special right-hand sides $b=e_1$ and $e_N$ are established in Section 4. Finally, in Section 5 we present our concluding remarks.

Notation. Throughout this paper, $\mathbb K^{n\times m}$ is the set of all $n\times m$ matrices with entries in $\mathbb K$, where $\mathbb K$ is $\mathbb C$ (the set of complex numbers) or $\mathbb R$ (the set of real numbers), $\mathbb K^n=\mathbb K^{n\times1}$, and $\mathbb K=\mathbb K^1$. $I_n$ (or simply $I$ if its dimension is clear from the context) is the $n\times n$ identity matrix, and $e_j$ is its $j$th column. The superscript $^*$ takes conjugate transpose, while $^{\rm T}$ takes transpose only. $\sigma_{\min}(X)$ denotes the smallest singular value of $X$. We shall also adopt MATLAB-like conventions to access the entries of vectors and matrices. The set of integers from $i$ to $j$ inclusive is $i\!:\!j$. For a vector $u$ and a matrix $X$, $u_{(j)}$ is $u$'s $j$th entry, $X_{(i,j)}$ is $X$'s $(i,j)$th entry, and ${\rm diag}(u)$ is the diagonal matrix with $({\rm diag}(u))_{(j,j)}=u_{(j)}$; $X$'s submatrices $X_{(k:l,i:j)}$, $X_{(k:l,:)}$, and $X_{(:,i:j)}$ consist of the intersections of rows $k$ to $l$ and columns $i$ to $j$, rows $k$ to $l$ and all columns, and all rows and columns $i$ to $j$, respectively. Finally, $\|\cdot\|_p$ ($1\le p\le\infty$) is the $\ell_p$ norm of a vector or the $\ell_p$ operator norm of a matrix, defined as
\[
\|u\|_p=\Bigl(\sum_j|u_{(j)}|^p\Bigr)^{1/p},\qquad \|X\|_p=\max_{\|u\|_p=1}\|Xu\|_p.
\]
$\lfloor\alpha\rfloor$ is the largest integer that is no bigger than $\alpha$, and $\lceil\alpha\rceil$ the smallest integer that is no less than $\alpha$.

2 Main Results

2.1 Setting the Stage

An $N\times N$ tridiagonal Toeplitz $A$ takes the form
\[
A=\begin{pmatrix}
\lambda & \mu & & \\
\nu & \lambda & \ddots & \\
 & \ddots & \ddots & \mu\\
 & & \nu & \lambda
\end{pmatrix}\in\mathbb C^{N\times N}. \tag{2.1}
\]

Throughout the rest of this paper, $\nu$, $\lambda$, and $\mu$ are reserved as the defining parameters of $A$, and we set
\[
\xi=\sqrt{\mu/\nu},\qquad \tau=\frac{\lambda}{2\sqrt{\mu\nu}},\qquad \zeta=\min\Bigl\{|\xi|,\frac1{|\xi|}\Bigr\}, \tag{2.2}
\]
\[
\rho=\max\bigl\{|\tau+\sqrt{\tau^2-1}|,\ |\tau-\sqrt{\tau^2-1}|\bigr\}. \tag{2.3}
\]
Matrix $A$ is diagonalizable when $\mu\ne0$ and $\nu\ne0$. In fact [ ] (see also [9,18]),
\[
A=X\Lambda X^{-1},\qquad X=SZ,\qquad \Lambda={\rm diag}(\lambda_1,\dots,\lambda_N), \tag{2.4}
\]
\[
\lambda_j=\lambda-2\sqrt{\mu\nu}\,t_j,\qquad t_j=\cos\theta_j,\qquad \theta_j=\frac{j\pi}{N+1}, \tag{2.5}
\]
\[
Z_{(:,j)}=\sqrt{\frac2{N+1}}\,(\sin j\theta_1,\dots,\sin j\theta_N)^{\rm T}, \tag{2.6}
\]
\[
S={\rm diag}(1,\xi^{-1},\dots,\xi^{-N+1}). \tag{2.7}
\]
It can be verified that $Z^{\rm T}Z=I_N$; so $A$ is normal if $|\xi|=1$, e.g., $\mu=\nu>0$. Set $\omega=-2\sqrt{\mu\nu}$. By (2.5), we have
\[
\lambda_j=\omega(t_j-\tau),\qquad 1\le j\le N. \tag{2.8}
\]
Any branch of $\sqrt{\mu\nu}$, once picked and fixed, is a valid choice in this paper. Note that $\rho\ge1$ always, because $(\tau+\sqrt{\tau^2-1})(\tau-\sqrt{\tau^2-1})=1$. In particular, if $\lambda\in\mathbb R$, $\mu<0$, and $\nu>0$, then $\rho=|\tau|+\sqrt{|\tau|^2+1}$.

A common starting point in existing quantitative analyses of GMRES [10, Page 54] on $Ax=b$ with diagonalizable $A=X\Lambda X^{-1}$ and $\Lambda={\rm diag}(\lambda_1,\dots,\lambda_N)$ is
\[
\frac{\|r_k\|}{\|r_0\|}\le\kappa(X)\min_{\phi_k(0)=1}\max_i|\phi_k(\lambda_i)|, \tag{2.9}
\]
as it seems that there is no easy way to do otherwise. It simplifies the analysis by separating the study of GMRES convergence behavior into optimizing the condition number of the eigenvector matrix $X$ and a polynomial minimization problem over $A$'s spectrum, but it can substantially overestimate GMRES residuals. This is partly because, as observed by Liesen and Strakoš [18], possible cancellations of huge components in $X$ and/or $X^{-1}$ are artificially ignored for the sake of the convergence analysis. For the tridiagonal Toeplitz matrix $A$ we are interested in here, however, rich structure allows us to do differently, as we shall do later.
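The explicit spectrum in (2.5) is easy to confirm numerically. A minimal sketch (the parameter values are illustrative only):

```python
import numpy as np

# Numerical check of the eigenvalue formula (2.5): the spectrum of the
# tridiagonal Toeplitz A equals {lambda - 2*sqrt(mu*nu)*cos(j*pi/(N+1))}.
N, lam, mu, nu = 7, 3.0, 2.0, 0.5
A = (np.diag(np.full(N, lam))
     + np.diag(np.full(N - 1, mu), 1)    # superdiagonal mu
     + np.diag(np.full(N - 1, nu), -1))  # subdiagonal nu
j = np.arange(1, N + 1)
formula = lam - 2 * np.sqrt(mu * nu) * np.cos(j * np.pi / (N + 1))
eigs = np.linalg.eigvals(A)
computed = np.sort(eigs.real)
```

With $\mu\nu>0$ the spectrum is real even though $A$ is nonsymmetric, which the check exploits by sorting both sides.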

Let $V_{k+1,N}$ be the $(k+1)\times N$ rectangular Vandermonde matrix
\[
V_{k+1,N}\stackrel{\rm def}{=}\begin{pmatrix}
1 & 1 & \cdots & 1\\
\lambda_1 & \lambda_2 & \cdots & \lambda_N\\
\vdots & \vdots & & \vdots\\
\lambda_1^k & \lambda_2^k & \cdots & \lambda_N^k
\end{pmatrix} \tag{2.10}
\]
having nodes $\{\lambda_j\}_{j=1}^N$, and
\[
Y=X\,{\rm diag}(X^{-1}b). \tag{2.11}
\]
Using $(b,Ab,\dots,A^kb)=YV_{k+1,N}^{\rm T}$ [5, Lemma 2.1], we have for GMRES
\[
\|r_k\|=\min_{u_{(1)}=1}\|(b,Ab,\dots,A^kb)u\|=\min_{u_{(1)}=1}\|YV_{k+1,N}^{\rm T}u\|. \tag{2.12}
\]
Recall the Chebyshev polynomials of the first kind:
\[
T_m(t)=\cos(m\arccos t)\quad\text{for real $t$ and }|t|\le1, \tag{2.13}
\]
\[
T_m(t)=\tfrac12\bigl(t+\sqrt{t^2-1}\bigr)^m+\tfrac12\bigl(t-\sqrt{t^2-1}\bigr)^m, \tag{2.14}
\]
and define the $m$th translated Chebyshev polynomial in $z$ of degree $m$ as
\[
T_m(z;\omega,\tau)\stackrel{\rm def}{=}T_m(z/\omega+\tau) \tag{2.15}
\]
\[
=a_{mm}z^m+a_{m-1\,m}z^{m-1}+\cdots+a_{1m}z+a_{0m}, \tag{2.16}
\]
where the $a_{jm}\equiv a_{jm}(\omega,\tau)$ are functions of $\omega$ and $\tau$; also define the upper triangular $R_m\in\mathbb C^{m\times m}$, a matrix-valued function of $\omega$ and $\tau$ too, as
\[
R_m\equiv R_m(\omega,\tau)\stackrel{\rm def}{=}\begin{pmatrix}
a_{00} & a_{01} & a_{02} & \cdots & a_{0\,m-1}\\
 & a_{11} & a_{12} & \cdots & a_{1\,m-1}\\
 & & a_{22} & \cdots & a_{2\,m-1}\\
 & & & \ddots & \vdots\\
 & & & & a_{m-1\,m-1}
\end{pmatrix}, \tag{2.17}
\]
i.e., the $j$th column consists of the coefficients of $T_{j-1}(z;\omega,\tau)$. Set
\[
\mathbb T_N=\begin{pmatrix}
T_0(t_1) & T_0(t_2) & \cdots & T_0(t_N)\\
T_1(t_1) & T_1(t_2) & \cdots & T_1(t_N)\\
\vdots & \vdots & & \vdots\\
T_{N-1}(t_1) & T_{N-1}(t_2) & \cdots & T_{N-1}(t_N)
\end{pmatrix} \tag{2.18}
\]

and $V_N=V_{N,N}$ for short. Then
\[
V_N^{\rm T}R_N=\mathbb T_N^{\rm T}. \tag{2.19}
\]
Equation (2.19) yields $V_N^{\rm T}=\mathbb T_N^{\rm T}R_N^{-1}$. Extracting the first $k+1$ columns from both sides of $V_N^{\rm T}=\mathbb T_N^{\rm T}R_N^{-1}$ yields
\[
V_{k+1,N}^{\rm T}=\mathbb T_{k+1,N}^{\rm T}R_{k+1}^{-1}, \tag{2.20}
\]
where $\mathbb T_{k+1,N}=(\mathbb T_N)_{(1:k+1,:)}$. Now notice $Y=X\,{\rm diag}(X^{-1}b)$ and $X=SZ$, with $Z$ in (2.6) being real and orthogonal, to get
\[
YV_{k+1,N}^{\rm T}=S\,Z\,{\rm diag}(Z^{\rm T}S^{-1}b)\,(\mathbb T_N^{\rm T})_{(:,1:k+1)}R_{k+1}^{-1}
=SM_{(:,1:k+1)}R_{k+1}^{-1} \tag{2.21}
\]
\[
=SM_{(:,1:k+1)}\widetilde S^{-1}\cdot\widetilde SR_{k+1}^{-1}, \tag{2.22}
\]
where $\widetilde S=S_{(1:k+1,1:k+1)}$, the $(k+1)$st leading principal submatrix of $S$, and
\[
M=Z\,{\rm diag}(Z^{\rm T}S^{-1}b)\,\mathbb T_N^{\rm T}. \tag{2.23}
\]
It follows from (2.21) and (2.22) that
\[
\sigma_{\min}(SM_{(:,1:k+1)}\widetilde S^{-1})\min_{u_{(1)}=1}\|\widetilde SR_{k+1}^{-1}u\|
\ \le\ \|r_k\|\ \le\
\min_{u_{(1)}=1}\|\widetilde SR_{k+1}^{-1}u\|\,\|SM_{(:,1:k+1)}\widetilde S^{-1}\|. \tag{2.24}
\]
The second inequality in (2.24) is our foundation for bounding GMRES residuals for a general right-hand side, through explicitly expressing $\min_{u_{(1)}=1}\|\widetilde SR_{k+1}^{-1}u\|$ in terms of Chebyshev polynomials and the parameters $\tau$ and $\xi$, and bounding $\|SM_{(:,1:k+1)}\widetilde S^{-1}\|$ in terms of $\|b\|$.

2.2 An Equivalence Principle

We shall now describe an equivalence principle between the linear system with $A$ and $b$ and the one with $A^{\rm T}$ and a permuted $b$. The principle will simplify our analysis by allowing us to focus on the case $|\xi|\le1$ only. Let $\Pi=(e_N,\dots,e_2,e_1)\in\mathbb R^{N\times N}$ be the permutation matrix. Notice $\Pi^{\rm T}A\Pi=A^{\rm T}$, and thus $Ax=b$ is equivalent to
\[
A^{\rm T}(\Pi^{\rm T}x)=(\Pi^{\rm T}A\Pi)(\Pi^{\rm T}x)=\Pi^{\rm T}b. \tag{2.25}
\]
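The permutation identity behind (2.25) is a one-liner to check numerically; a minimal sketch with made-up parameter values:

```python
import numpy as np

# Check that Pi^T A Pi = A^T, with Pi the reversal permutation, for a
# tridiagonal Toeplitz A (reversal swaps the roles of mu and nu).
N, lam, mu, nu = 6, 1.0, 0.35, 0.8929
A = (np.diag(np.full(N, lam))
     + np.diag(np.full(N - 1, mu), 1)
     + np.diag(np.full(N - 1, nu), -1))
Pi = np.fliplr(np.eye(N))        # columns e_N, ..., e_2, e_1
lhs = Pi.T @ A @ Pi
```
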

Note $\mathcal K_k(A^{\rm T},\Pi^{\rm T}b)=\mathcal K_k(\Pi^{\rm T}A\Pi,\Pi^{\rm T}b)=\Pi^{\rm T}\mathcal K_k(A,b)$, and
\[
\|r_k\|=\min_{y\in\mathcal K_k(A,b)}\|b-Ay\|
=\min_{\Pi^{\rm T}y\in\Pi^{\rm T}\mathcal K_k(A,b)}\|\Pi^{\rm T}(b-A\Pi\,\Pi^{\rm T}y)\|
=\min_{w\in\mathcal K_k(A^{\rm T},\Pi^{\rm T}b)}\|\Pi^{\rm T}b-A^{\rm T}w\|. \tag{2.26}
\]
The last expression is the GMRES residual for the equivalent system (2.25). Therefore:

Any result on GMRES for $Ax=b$ leads to one for $A^{\rm T}y=\Pi^{\rm T}b$ after performing the following substitutions:
\[
\mu\to\nu,\qquad \nu\to\mu,\qquad \xi\to\xi^{-1},\qquad b\to\Pi^{\rm T}b. \tag{2.27}
\]

2.3 General Right-hand Sides

Our first main result is given in Theorem 2.1, whose proof, along with the proofs of the other results in this section, involves complicated computations and will be postponed to Section 3.

Theorem 2.1. Consider $Ax=b$, where $A$ is tridiagonal Toeplitz as in (2.1) with nonzero (real or complex) parameters $\nu$, $\lambda$, and $\mu$. Then the $k$th GMRES residual $r_k$ satisfies, for $1\le k<N$,
\[
\frac{\|r_k\|}{\|r_0\|}\le\sqrt{2(k+1)}\,\Bigl[\frac12+\Phi^2(\tau,\zeta)\Bigr]^{-1/2}, \tag{2.28}
\]
where
\[
\Phi^2(t,s)\stackrel{\rm def}{=}{\sum_{j=0}^{k}}{}'\,\bigl|s^jT_j(t)\bigr|^2, \tag{2.29}
\]
and $\sum'$ means the first term is halved.

Figure 2.1 plots residual histories for several examples of GMRES with each of $b$'s entries being uniformly random in $[-1,1]$. In each of the plots in Figure 2.1, as well as in Figures 2.2 and 2.3 below, we fix $|\tau|$ and $|\xi|$, take $\lambda=1$ always, and then take $|\mu|=|\xi|/(2|\tau|)$, $\mu=\pm|\mu|$, and $\nu=|\nu|=1/(2|\tau||\xi|)$. Thus $\mu,\nu\in\mathbb R$, and in fact $\nu>0$ always. When $\mu>0$, $\xi=\sqrt{\mu/\nu}>0$ and $\tau=1/(2\sqrt{\mu\nu})>0$, but when $\mu<0$, both $\xi=\iota\sqrt{|\mu|/\nu}$ and

$\tau=-\iota/(2\sqrt{|\mu\nu|})$ are imaginary, where $\iota=\sqrt{-1}$ is the imaginary unit. Figure 2.1 indicates that GMRES converges much faster for $\mu<0$ than for $\mu>0$ in each of the plots. There is a simple explanation for this: the eigenvalues of $A$ are further away from the origin for a purely imaginary $\tau$ than for a real $\tau$ of the same magnitude.

Fig. 2.1. GMRES residuals for random $b$ with entries uniformly in $[-1,1]$, and their upper bounds (dashed lines) by (2.28), for $(|\tau|,|\xi|)=(0.8,0.7)$, $(1,0.7)$, $(1.2,0.7)$, $(1.4,0.7)$ and $\mu=\pm|\mu|$. All indicate that our upper bounds are tight, except for the last few steps. The upper bounds for the case $\mu>0$ in the top two plots are visually indistinguishable from the horizontal line $10^0$, suggesting slow convergence. Only $|\xi|<1$ is shown because of (2.27).

Our next main result, given in Theorem 2.2, tells the worst asymptotic speed of $\|r_k\|$.
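An experiment in the spirit of Figure 2.1 can be reproduced with a dense least-squares formulation of GMRES (an illustrative sketch, not the authors' experiments; the parameters and the random seed are made up):

```python
import numpy as np

# GMRES residual norms (x_0 = 0) by least squares over the Krylov basis,
# compared against the upper bound (2.28),
#   sqrt(2(k+1)) * [1/2 + Phi^2(tau, zeta)]^(-1/2).
rng = np.random.default_rng(0)
N, lam, mu, nu = 30, 4.0, 1.0, 2.0
A = (np.diag(np.full(N, lam))
     + np.diag(np.full(N - 1, mu), 1)
     + np.diag(np.full(N - 1, nu), -1))
b = rng.uniform(-1, 1, N)
tau = lam / (2 * np.sqrt(mu * nu))
zeta = min(np.sqrt(mu / nu), np.sqrt(nu / mu))

def residual(k):
    # min ||b - K c||, K = [Ab, A^2 b, ..., A^k b]; the minimizing
    # polynomial has constant term 1, as in (2.12).
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(1, k + 1)])
    c, *_ = np.linalg.lstsq(K, b, rcond=None)
    return np.linalg.norm(b - K @ c)

def bound(k):
    T = [1.0, tau]
    for _ in range(2, k + 1):
        T.append(2 * tau * T[-1] - T[-2])
    phi2 = 0.5 + sum((zeta**j * abs(T[j])) ** 2 for j in range(1, k + 1))
    return np.sqrt(2 * (k + 1)) * (0.5 + phi2) ** (-0.5)

ks = range(1, 11)
res = [residual(k) / np.linalg.norm(b) for k in ks]
ub = [bound(k) for k in ks]
```

The explicit Krylov-basis least squares is adequate here only because $N$ and $k$ are tiny; it is numerically much worse than the Arnoldi-based GMRES recurrence, but it matches the residual characterization (2.12) exactly in exact arithmetic.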

Theorem 2.2. Under the conditions of Theorem 2.1,
\[
\lim_{k\to\infty}\inf_{N>k}\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}
=\lim_{k\to\infty}\sup_{N>k}\Bigl[\sup_{r_0}\frac{\|r_k\|}{\|r_0\|}\Bigr]^{1/k}
=\min\{(\zeta\rho)^{-1},1\}. \tag{2.30}
\]
Moreover, $\sup_{r_0}$ among all possible $r_0$ can be replaced by $\sup_{r_0\in\{e_1,e_N\}}$.

2.4 Special Right-hand Sides

We now consider three special right-hand sides: $b=e_1$, or $e_N$, or $b_{(1)}e_1+b_{(N)}e_N$. In particular, they show that the upper bound in Theorem 2.1 is within a factor of about at most $2(k+1)^{3/2}$ of the true residual for $b=e_1$ or $e_N$, depending on whether $|\xi|\ge1$ or $|\xi|\le1$. Because of the equivalence principle (2.27), only $b=e_1$ will be considered. The inequalities in Theorems 2.3 and 2.5, upon replacing $\xi$ by $\xi^{-1}$, lead to the related results for the case $b=e_N$.

Theorem 2.3. In Theorem 2.1, if $b=e_1$, then the $k$th GMRES residual $r_k$ satisfies, for $1\le k<N$,
\[
\frac12\Bigl[\sum_{j=0}^{\lceil(k+1)/2\rceil-1}|\xi|^{2j}\Bigr]^{-1}\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2}
\le\|r_k\|\le
\frac{1+|\xi|^2}2\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2}. \tag{2.31}
\]
In particular, for $|\xi|\le1$,
\[
\frac1{2(k+1)}\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2}
\le\|r_k\|\le
\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2}. \tag{2.32}
\]
The upper bound and the lower bound in (2.32) differ by a factor of roughly $2(k+1)$, and thus they are rather sharp; so are the bounds in (2.31) for $|\xi|\le1$. Comparing them to (2.28), we conclude that the upper bound in (2.28) is fairly sharp for the worst possible $b$. But the bounds in (2.31) differ by a factor $O(|\xi|^{k})$ for $|\xi|>1$, and thus at least one of them (the upper or the lower bound) is bad. Our numerical examples indicate that the upper bounds are rather good regardless of the magnitude of $|\xi|$ for $b=e_1$. See Figure 2.2.

Given Theorem 2.3 and the result for $b=e_N$ implied by it via (2.27), it would not be unreasonable to expect that the upper bound in Theorem 2.1 would be sharp, for very large or tiny $|\xi|$, within a factor

Fig. 2.2. GMRES residuals for $b=e_1$, sandwiched by their lower and upper bounds by (2.31), for $(|\tau|,|\xi|)=(0.8,0.7)$, $(0.8,1.2)$, $(1,0.7)$, $(1,1.2)$, $(1.2,0.7)$, $(1.2,1.2)$ and $\mu=\pm|\mu|$. All lower and upper bounds are very good for $|\xi|\le1$, as expected, but only the upper bounds are good when $|\xi|>1$.

Fig. 2.3. GMRES residuals for $b=e_1+e_N$, sandwiched by their lower and upper bounds by (2.33) and (2.34), for $(|\tau|,|\xi|)=(0.8,0.7)$, $(1,0.7)$, $(1.2,0.7)$, $(1.4,0.7)$ and $\mu=\pm|\mu|$. Strictly speaking, (2.33) is only proved for $k\le N/2-1$, but it seems to be very good for $k>N/2$ as well. We also ran GMRES for $b=e_1-e_N$ and obtained residual histories that are very much the same. Only $|\xi|<1$ is shown because of (2.27).

possibly about at most $2(k+1)^{3/2}$ for a right-hand side $b$ with $b_{(i)}=0$ for $2\le i\le N-1$ and $|b_{(1)}|=|b_{(N)}|>0$. The following theorem indeed confirms this, but only for $k\le N/2-1$. Our numerical examples even support that the lower bounds by (2.33) would be good for $k>N/2$ as well (see Figure 2.3), but we do not have a way to mathematically justify it yet.

Theorem 2.4. In Theorem 2.1, if $b_{(i)}=0$ for $2\le i\le N-1$, then the $k$th GMRES residual $r_k$ satisfies, for $1\le k\le N/2-1$,
\[
\frac{\|r_k\|}{\|r_0\|}\ge\frac{\min_{i\in\{1,N\}}|b_{(i)}|}{2\chi\|r_0\|}\Bigl[\Phi^2(\tau,\zeta)-\frac14\Bigr]^{-1/2}, \tag{2.33}
\]

and, for $1\le k<N$,
\[
\frac{\|r_k\|}{\|r_0\|}\le\sqrt3\,\Bigl[\frac12+\Phi^2(\tau,\zeta)\Bigr]^{-1/2}, \tag{2.34}
\]
where
\[
1<\chi=\sum_{j=0}^{\lceil(k+1)/2\rceil-1}\zeta^{2j}\le k+1.
\]
Figures 2.2 and 2.3 plot residual histories for several examples of GMRES with $b=e_1$ and $b=e_1+e_N$, respectively.

Finally, we have the following theorem about the asymptotic speeds of $\|r_k\|$ for $b=e_1$.

Theorem 2.5. Assume the conditions of Theorem 2.1 hold, and let $b=e_1$. Then
\[
\min\{(|\xi|^2\rho)^{-1},(|\xi|\rho)^{-1},1\}
\le\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}
\le\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}
\le\min\{(|\xi|\rho)^{-1},1\}, \tag{2.35}
\]
and
\[
\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}
=\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}
=\min\{(|\xi|\rho)^{-1},1\}\qquad\text{for }|\xi|\le1. \tag{2.36}
\]

Remark 1. As we commented before, our numerical examples indicate that the upper bounds in Theorem 2.3 are rather accurate regardless of the magnitude of $|\xi|$ for $b=e_1$ (see Figure 2.2). This leads us to conjecture that the following equations would hold:
\[
\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}
=\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}
=\min\{(|\xi|\rho)^{-1},1\}\qquad\text{for }b=e_1, \tag{2.37}
\]
\[
\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}
=\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}
=\min\{(|\xi|^{-1}\rho)^{-1},1\}\qquad\text{for }b=e_N. \tag{2.38}
\]
As before, if one of them is proven, the other one follows because of (2.27).
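The two-sided bounds of Theorem 2.3 can also be exercised numerically for $b=e_1$, again via the Krylov least-squares characterization of the GMRES residual. This is an illustrative sketch with made-up parameters, not the authors' experiments:

```python
import numpy as np

# Sanity check of the bounds (2.31) for b = e_1 (illustrative parameters).
N, lam, mu, nu = 20, 4.0, 1.0, 2.0
A = (np.diag(np.full(N, lam))
     + np.diag(np.full(N - 1, mu), 1)
     + np.diag(np.full(N - 1, nu), -1))
b = np.zeros(N); b[0] = 1.0                      # b = e_1, so ||r_0|| = 1
tau, xi = lam / (2 * np.sqrt(mu * nu)), np.sqrt(mu / nu)

def gmres_res(k):
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(1, k + 1)])
    c, *_ = np.linalg.lstsq(K, b, rcond=None)
    return np.linalg.norm(b - K @ c)

def phi2(k):  # Phi^2(tau, xi) of (2.29), with the first term halved
    T = [1.0, tau]
    for _ in range(2, k + 1):
        T.append(2 * tau * T[-1] - T[-2])
    return 0.5 + sum((xi**j * abs(T[j])) ** 2 for j in range(1, k + 1))

ok = []
for k in range(1, 9):
    core = (phi2(k) - 0.25) ** (-0.5)
    m1 = int(np.ceil((k + 1) / 2))
    lower = 0.5 * core / sum(xi ** (2 * j) for j in range(m1))
    upper = 0.5 * (1 + xi**2) * core
    r = gmres_res(k)
    ok.append(lower * (1 - 1e-9) <= r <= upper * (1 + 1e-9))
```

At $k=1$ the lower bound is attained exactly for this kind of system, which is why a tiny relative slack is left in the comparison.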

3 Proofs

In what follows, we will prove the theorems of the previous section in the order of their appearance, except that Theorem 2.2, whose proof requires Theorem 2.5, will be proved last.

In (2.24), there are two quantities to deal with:
\[
\min_{u_{(1)}=1}\|\widetilde SR_{k+1}^{-1}u\|\qquad\text{and}\qquad\|SM_{(:,1:k+1)}\widetilde S^{-1}\|. \tag{3.1}
\]
We shall now do so. In its present general form, the next lemma was proven in [13,16]. It was also implied by the proof of [1, Theorem 2.1]. See also [17].

Lemma 3.1. If $W$ has full column rank, then
\[
\min_{u_{(1)}=1}\|Wu\|=\bigl[e_1^{\rm T}(W^*W)^{-1}e_1\bigr]^{-1/2}. \tag{3.2}
\]
In particular, if $W$ is nonsingular, $\min_{u_{(1)}=1}\|Wu\|=\|W^{-*}e_1\|^{-1}$.

By this lemma, we have (note $a_{00}=1$)
\[
\min_{u_{(1)}=1}\|\widetilde SR_{k+1}^{-1}u\|=\|(\widetilde SR_{k+1}^{-1})^{-*}e_1\|^{-1}
=\Bigl[\frac12+\Phi^2(\tau,\xi)\Bigr]^{-1/2}. \tag{3.3}
\]
This gives the first quantity in (3.1). We now turn to the second one. It can be seen that $\|SM_{(:,1:k+1)}\widetilde S^{-1}\|=\|(SMS^{-1})_{(:,1:k+1)}\|$ since $S$ is diagonal. To compute $SMS^{-1}$, we shall investigate $M$ in (2.23) first:
\[
M=\sum_{l=1}^{N}Z\,{\rm diag}(Z^{\rm T}S^{-1}b_{(l)}e_l)\,\mathbb T_N^{\rm T}
=\sum_{l=1}^{N}b_{(l)}\xi^{l-1}Z\,{\rm diag}(Z^{\rm T}e_l)\,\mathbb T_N^{\rm T}
=\sum_{l=1}^{N}b_{(l)}\xi^{l-1}Z\,{\rm diag}(Z_{(:,l)})\,\mathbb T_N^{\rm T}. \tag{3.4}
\]
In Lemma 3.2 and in the proof of Lemma 3.3 below, without causing notational conflict, we will temporarily use $k$ as a running index, as opposed to the rest of the paper, where $k$ is reserved for the GMRES step index. The following lemma is probably well known, but an exact source is hard to find. For a proof, see [15, Lemma 4.1].
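The cosine sum in the next lemma is also easy to confirm numerically; a quick sketch:

```python
import numpy as np

# Numerical confirmation of the cosine-sum identity used below:
# sum_{k=1}^N cos(l*k*pi/(N+1)) equals N when l = 2m(N+1), 0 when l is odd,
# and -1 when l is even but not a multiple of 2(N+1).
def cos_sum(N, l):
    k = np.arange(1, N + 1)
    return np.cos(l * k * np.pi / (N + 1)).sum()

N = 9
odd = [cos_sum(N, l) for l in range(1, 2 * N, 2)]
even = [cos_sum(N, l) for l in range(2, 2 * N, 2) if l % (2 * (N + 1)) != 0]
full = cos_sum(N, 2 * (N + 1))
```
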

Lemma 3.2. For $\theta_j=\frac{j\pi}{N+1}$ and any integer $l$,
\[
\sum_{k=1}^{N}\cos l\theta_k=
\begin{cases}
N, & \text{if } l=2m(N+1) \text{ for some integer } m,\\
0, & \text{if } l \text{ is odd},\\
-1, & \text{if } l \text{ is even, but } l\ne2m(N+1) \text{ for any integer } m.
\end{cases} \tag{3.5}
\]

Lemma 3.3. Let $M_l\stackrel{\rm def}{=}Z\,{\rm diag}(Z_{(:,l)})\,\mathbb T_N^{\rm T}$ for $1\le l\le N$. Then the entries of $M_l$ are zeros, except at those positions $(i,j)$ graphically forming four straight lines:
\[
\text{(a) } i+j=l+1,\qquad \text{(b) } i-j=l-1,\qquad \text{(c) } j-i=l+1,\qquad \text{(d) } i+j=2(N+1)-l+1. \tag{3.6}
\]
$(M_l)_{(i,j)}=1/2$ for (a) and (b), except at their intersection $(l,1)$, for which $(M_l)_{(l,1)}=1$; $(M_l)_{(i,j)}=-1/2$ for (c) and (d). Notice there are no valid entries for (c) if $l\ge N-1$ and no valid entries for (d) if $l\le2$.

Proof: For $1\le i,j\le N$,
\[
2(N+1)(M_l)_{(i,j)}=4\sum_{k=1}^{N}\sin i\theta_k\,\sin l\theta_k\,\cos(j-1)\theta_k
\]
\[
=2\sum_{k=1}^{N}\bigl[\cos(i-l)\theta_k-\cos(i+l)\theta_k\bigr]\cos(j-1)\theta_k
\]
\[
=\sum_{k=1}^{N}\bigl[\cos(i+j-l-1)\theta_k+\cos(i-j-l+1)\theta_k-\cos(i+j+l-1)\theta_k-\cos(i-j+l+1)\theta_k\bigr].
\]
Since all of
\[
i_1=i+j-l-1,\qquad i_2=i-j-l+1,\qquad i_3=i+j+l-1,\qquad i_4=i-j+l+1
\]
are either even or odd at the same time, Lemma 3.2 implies $(M_l)_{(i,j)}=0$ unless one of them takes the form $2m(N+1)$ for some integer $m$. We now investigate all possible situations as such, keeping in mind that $1\le i,j,l\le N$.

1. $i_1=i+j-l-1=2m(N+1)$. This happens if and only if $m=0$, and thus $i+j=l+1$. Then
\[
i_2=-2j+2,\qquad i_3=2l,\qquad i_4=2l-2j+2.
\]
They are all even. Neither $i_3$ nor $i_4$ takes the form $2m(N+1)$ for some integer $m$: this is obvious for $i_3$, while $i_4=2m(N+1)$ implies $m=0$ and $j=l+1$, and thus $i=0$, which cannot happen. However, if $i_2=2m(N+1)$, then $m=0$ and $j=1$, and thus $i=l$. So Lemma 3.2 implies $(M_l)_{(i,j)}=1/2$ for $i+j=l+1$ and $i\ne l$, while $(M_l)_{(l,1)}=1$.

2. $i_2=i-j-l+1=2m(N+1)$. This happens if and only if $m=0$, and thus $i-j=l-1$. Then
\[
i_1=2j-2,\qquad i_3=2j+2l-2,\qquad i_4=2l.
\]
They are all even. Neither $i_3$ nor $i_4$ takes the form $2m(N+1)$ for some integer $m$: this is obvious for $i_4$, while $i_3=2m(N+1)$ implies $m=1$ and $j=N+2-l$, and thus $i=N+1$, which cannot happen. However, if $i_1=2m(N+1)$, then $m=0$ and thus $j=1$ and $i=l$, which has already been considered in Item 1. So Lemma 3.2 implies $(M_l)_{(i,j)}=1/2$ for $i-j=l-1$ and $i\ne l$.

3. $i_3=i+j+l-1=2m(N+1)$. This happens if and only if $m=1$, and thus $i+j=2(N+1)-l+1$. Then
\[
i_1=2(N+1)-2l,\qquad i_2=2(N+1)-2j-2l+2,\qquad i_4=2(N+1)-2j+2.
\]
They are all even. Neither $i_1$ nor $i_2$ takes the form $2m(N+1)$ for some integer $m$: this is obvious for $i_1$, while $i_2=2m(N+1)$ implies $m=0$ and $j=N+2-l$, and thus $i=N+1$, which cannot happen. However, if $i_4=2m(N+1)$, then $m=1$, and thus $j=1$ and $i=2(N+1)-l$, which exceeds $N$ and is not possible. So Lemma 3.2 implies $(M_l)_{(i,j)}=-1/2$ for $i+j=2(N+1)-l+1$.

4. $i_4=i-j+l+1=2m(N+1)$. This happens if and only if $m=0$, and thus $j-i=l+1$. Then
\[
i_1=2j-2l-2,\qquad i_2=-2l,\qquad i_3=2j-2.
\]
They are all even, and none of them takes the form $2m(N+1)$ for some integer $m$: this is obvious for $i_2$; $i_1=2m(N+1)$ implies $m=0$ and $j=l+1$, and thus $i=0$, which cannot happen; $i_3=2m(N+1)$ implies $m=0$ and thus $j=1$ and $i=-l$, which cannot happen either. So Lemma 3.2 implies $(M_l)_{(i,j)}=-1/2$ for $j-i=l+1$.

This completes the proof.

Now we know $M_l$. We still need to find out $SMS^{-1}$. Let us examine it for $N=5$ in order to get some idea of what it may look like. $SMS^{-1}$ for $N=5$ is
\[
\begin{pmatrix}
b_{(1)} & \frac12\xi^2b_{(2)} & -\frac12\xi^2b_{(1)}+\frac12\xi^4b_{(3)} & -\frac12\xi^4b_{(2)}+\frac12\xi^6b_{(4)} & -\frac12\xi^6b_{(3)}+\frac12\xi^8b_{(5)}\\
b_{(2)} & \frac12b_{(1)}+\frac12\xi^2b_{(3)} & \frac12\xi^4b_{(4)} & -\frac12\xi^2b_{(1)}+\frac12\xi^6b_{(5)} & -\frac12\xi^4b_{(2)}\\
b_{(3)} & \frac12b_{(2)}+\frac12\xi^2b_{(4)} & \frac12b_{(1)}+\frac12\xi^4b_{(5)} & 0 & -\frac12\xi^2b_{(1)}-\frac12\xi^6b_{(5)}\\
b_{(4)} & \frac12b_{(3)}+\frac12\xi^2b_{(5)} & \frac12b_{(2)} & \frac12b_{(1)}-\frac12\xi^4b_{(5)} & -\frac12\xi^4b_{(4)}\\
b_{(5)} & \frac12b_{(4)} & \frac12b_{(3)}-\frac12\xi^2b_{(5)} & \frac12b_{(2)}-\frac12\xi^2b_{(4)} & \frac12b_{(1)}-\frac12\xi^2b_{(3)}
\end{pmatrix}.
\]
We observe that for $N=5$ the entries of $SMS^{-1}$ are polynomials in $\xi$ with at most two terms. This turns out to be true for all $N$.

Lemma 3.4. The following statements hold.
1. The first column of $SMS^{-1}$ is $b$. The entries in every other column take one of the three forms $(\pm b_{(n_1)}\xi^{m_1}\pm b_{(n_2)}\xi^{m_2})/2$ with $n_1\ne n_2$, $\pm b_{(n_1)}\xi^{m_1}/2$, and $0$, where $1\le n_1,n_2\le N$ and the $m_i$ are nonnegative integers.
2. In each given column of $SMS^{-1}$, any particular entry of $b$ appears at most twice.
As a consequence, we have $\|SM_{(:,1:k+1)}\widetilde S^{-1}\|\le\sqrt{2(k+1)}\,\|b\|$ if $|\xi|\le1$.

Proof: Notice $M=\sum_{l=1}^{N}b_{(l)}\xi^{l-1}M_l$ and consider $M$'s $(i,j)$th entry, which collects contributions from all the $M_l$. But not all of the $M_l$ contribute, as most of them are zero at that position. Precisely, with the help of Lemma 3.3, those $M_l$ that contribute nontrivially to the $(i,j)$th position are the following ones, subject to the given inequalities:
(a) if $1\le i+j-1\le N$, or equivalently $i+j\le N+1$, then $M_{i+j-1}$ gives a $1/2$;
(b) if $1\le i-j+1\le N$, or equivalently $i\ge j$, then $M_{i-j+1}$ gives a $1/2$;
(c) if $1\le j-i-1\le N$, or equivalently $j\ge i+2$, then $M_{j-i-1}$ gives a $-1/2$;
(d) if $1\le2(N+1)-(i+j)+1\le N$, or equivalently $i+j\ge N+3$, then $M_{2(N+1)-(i+j)+1}$ gives a $-1/2$.
These inequalities, effectively four of them, divide the entries of $M$ into nine possible regions, as detailed in Figure 3.1. We shall examine each region one by one. Recall
\[
(SMS^{-1})_{(i,j)}=\xi^{-i+1}M_{(i,j)}\xi^{j-1}=\xi^{j-i}M_{(i,j)},
\]

Fig. 3.1. Computation of $M_{(i,j)}$. Left: regions of entries as divided by the inequalities in (a) and (d); middle: regions of entries as divided by the inequalities in (b) and (c); right: regions of entries as divided by all of the inequalities in (a), (b), (c), and (d).

and let
\[
\gamma_a=\frac12b_{(i+j-1)}\xi^{2j-2},\qquad
\gamma_b=\frac12b_{(i-j+1)},\qquad
\gamma_c=-\frac12b_{(j-i-1)}\xi^{2(j-i-1)},\qquad
\gamma_d=-\frac12b_{(2(N+1)-(i+j)+1)}\xi^{2(N+1-i)}.
\]
Each entry in the nine possible regions in the rightmost plot of Figure 3.1 is as follows.
1. (a) and (b): $(SMS^{-1})_{(i,j)}=\gamma_a+\gamma_b$.
2. (a) and (c): $(SMS^{-1})_{(i,j)}=\gamma_a+\gamma_c$.
3. (b) and (d): $(SMS^{-1})_{(i,j)}=\gamma_b+\gamma_d$.
4. (c) and (d): $(SMS^{-1})_{(i,j)}=\gamma_c+\gamma_d$.
5. (a) and $j-i=1$: $(SMS^{-1})_{(i,j)}=\gamma_a$.
6. (b) and $i+j=N+2$: $(SMS^{-1})_{(i,j)}=\gamma_b$.
7. (c) and $i+j=N+2$: $(SMS^{-1})_{(i,j)}=\gamma_c$.
8. (d) and $j-i=1$: $(SMS^{-1})_{(i,j)}=\gamma_d$.
9. $j-i=1$ and $i+j=N+2$: $(SMS^{-1})_{(i,j)}=0$. In this case, $i=(N+1)/2$ and $j=(N+3)/2$; so there is only one such entry when $N$ is odd, and none when $N$ is even.

With this profile of the entries of $SMS^{-1}$, we have Item 1 of the lemma immediately. Item 2 is a consequence of $M=\sum_{l=1}^{N}b_{(l)}\xi^{l-1}M_l$ and Lemma 3.3, which implies that there are at most two nonzero entries in each column of $M_l$. As a consequence of Item 1 and Item 2, each column of $SMS^{-1}$ can be expressed as the sum of two vectors $w$ and $v$ such that

$\|w\|,\|v\|\le\|b\|/\sqrt2$ when $|\xi|\le1$, and thus $\|(SMS^{-1})_{(:,j)}\|\le\sqrt2\,\|b\|$ for all $1\le j\le N$. Therefore
\[
\|SM_{(:,1:k+1)}\widetilde S^{-1}\|\le\Bigl[\sum_{j=1}^{k+1}\|(SMS^{-1})_{(:,j)}\|^2\Bigr]^{1/2}\le\sqrt{2(k+1)}\,\|b\|,
\]
as expected.

Proof of Theorem 2.1. We shall only prove
\[
\|r_k\|\le\sqrt{2(k+1)}\,\|b\|\Bigl[\frac12+\Phi^2(\tau,\xi)\Bigr]^{-1/2}\qquad\text{for }|\xi|\le1, \tag{3.7}
\]
since for the other case, $|\xi|>1$, we have, by (2.27) and (3.7) assumed proven true,
\[
\|r_k\|\le\sqrt{2(k+1)}\,\|\Pi^{\rm T}b\|\Bigl[\frac12+\Phi^2(\tau,\xi^{-1})\Bigr]^{-1/2}.
\]
The conclusion of the theorem then follows from the observation that $\zeta=\min\{|\xi|,|\xi|^{-1}\}$. Assume $|\xi|\le1$. Inequality (3.7) is a consequence of (2.24), (3.3), and Lemma 3.4.

Remark 2. The leftmost inequality in (2.24) gives a lower bound on $\|r_k\|$ in terms of $\sigma_{\min}(SM_{(:,1:k+1)}\widetilde S^{-1})$, which, however, is hard to bound from below because it can be as small as zero, unless we know more about $b$, such as $b=e_1$ as in Theorem 2.3.

Proof of Theorem 2.3. If $b=e_1$, then $M=M_1$ is upper triangular. More specifically,
\[
M=M_1=\begin{pmatrix}
1 & 0 & -\frac12 & & \\
 & \frac12 & 0 & \ddots & \\
 & & \frac12 & \ddots & -\frac12\\
 & & & \ddots & 0\\
 & & & & \frac12
\end{pmatrix}, \tag{3.8}
\]
and, by (2.21),
\[
YV_{k+1,N}^{\rm T}=\begin{pmatrix}\widetilde S\widetilde MR_{k+1}^{-1}\\0\end{pmatrix}
=\begin{pmatrix}(\widetilde S\widetilde M\widetilde S^{-1}D^{-1})(D\widetilde SR_{k+1}^{-1})\\0\end{pmatrix},
\]

where $\widetilde M=M_{(1:k+1,1:k+1)}$ and $D={\rm diag}(2,1,1,\dots,1)$. Therefore
\[
\sigma_{\min}(\widetilde S\widetilde M\widetilde S^{-1}D^{-1})\min_{u_{(1)}=1}\|D\widetilde SR_{k+1}^{-1}u\|
\le\min_{u_{(1)}=1}\|YV_{k+1,N}^{\rm T}u\|
\le\min_{u_{(1)}=1}\|D\widetilde SR_{k+1}^{-1}u\|\,\|\widetilde S\widetilde M\widetilde S^{-1}D^{-1}\|. \tag{3.9}
\]
Recall $\|r_k\|=\min_{u_{(1)}=1}\|YV_{k+1,N}^{\rm T}u\|$. Let
\[
P=(e_1,e_3,\dots,e_2,e_4,\dots)\in\mathbb R^{(k+1)\times(k+1)}
\]
be the permutation matrix that lists the odd-indexed columns first. It can be seen that
\[
P^{\rm T}(\widetilde S\widetilde M\widetilde S^{-1}D^{-1})P=\frac12\begin{pmatrix}E_1&\\&E_2\end{pmatrix},
\]
where
\[
E_i=\begin{pmatrix}
1 & -\xi^2 & & \\
 & 1 & \ddots & \\
 & & \ddots & -\xi^2\\
 & & & 1
\end{pmatrix},\qquad
E_i^{-1}=\begin{pmatrix}
1 & \xi^2 & \cdots & \xi^{2(m_i-1)}\\
 & 1 & \ddots & \vdots\\
 & & \ddots & \xi^2\\
 & & & 1
\end{pmatrix},
\]
with $m_i$ the dimension of $E_i$ ($m_1=\lceil(k+1)/2\rceil$, $m_2=\lfloor(k+1)/2\rfloor$). Hence $\|E_i\|\le\|E_i\|_1^{1/2}\|E_i\|_\infty^{1/2}\le1+|\xi|^2$. Therefore
\[
\|\widetilde S\widetilde M\widetilde S^{-1}D^{-1}\|=\frac12\max\{\|E_1\|,\|E_2\|\}\le\frac12(1+|\xi|^2).
\]
Similarly, use $\|E_i^{-1}\|\le\|E_i^{-1}\|_1^{1/2}\|E_i^{-1}\|_\infty^{1/2}$ to get
\[
\|E_i^{-1}\|\le\sum_{j=0}^{m_i-1}|\xi|^{2j}.
\]
Therefore
\[
\sigma_{\min}(\widetilde S\widetilde M\widetilde S^{-1}D^{-1})=\frac12\min\{\sigma_{\min}(E_1),\sigma_{\min}(E_2)\}
=\frac12\min\{\|E_1^{-1}\|^{-1},\|E_2^{-1}\|^{-1}\}
\ge\frac12\Bigl[\sum_{j=0}^{\lceil(k+1)/2\rceil-1}|\xi|^{2j}\Bigr]^{-1}.
\]

Finally, by Lemma 3.1, we have
\[
\min_{u_{(1)}=1}\|D\widetilde SR_{k+1}^{-1}u\|=\|(D\widetilde SR_{k+1}^{-1})^{-*}e_1\|^{-1}
=\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2}.
\]
This, together with (3.9), leads to (2.31).

Proof of Theorem 2.4. Now $b=b_{(1)}e_1+b_{(N)}e_N$. Notice the form of $M_1$ in (3.8), and that $M_N$ is $M_1$ with its rows reordered from the last to the first. We have $M=b_{(1)}M_1+\xi^{N-1}b_{(N)}M_N$, and Lemma 3.4 implies that only nonnegative powers of $\xi$ appear in the entries of $SMS^{-1}$. Therefore, when $|\xi|\le1$,
\[
\|SM_{(:,1:k+1)}\widetilde S^{-1}\|\le\|SMS^{-1}\|
\le\bigl\||b_{(1)}|\,|M_1|+|b_{(N)}|\,|M_N|\bigr\|
\le\bigl(|b_{(1)}|+|b_{(N)}|\bigr)\sqrt{3/2}\le\sqrt3\,\|b\|, \tag{3.10}
\]
where $|M_l|$ takes entrywise absolute values, and we have used
\[
\||M_N|\|=\||M_1|\|\le\bigl(\||M_1|\|_1\,\||M_1|\|_\infty\bigr)^{1/2}=\sqrt{3/2}.
\]
Inequality (2.34) for $|\xi|\le1$ is a consequence of (2.24), (3.3), and (3.10). Inequality (2.34) for $|\xi|\ge1$ follows from itself for $|\xi|\le1$ applied to the permuted system (2.25).

To prove (2.33), we use the line of argument in the proof of Theorem 2.3 and notice that, for $1\le k\le N/2-1$,
\[
YV_{k+1,N}^{\rm T}=\begin{pmatrix}W_1\\0_{(N-2(k+1))\times(k+1)}\\W_2\end{pmatrix}.
\]
It can be seen from the proof of Theorem 2.3 that
\[
\min_{u_{(1)}=1}\|W_1u\|\ge|b_{(1)}|\,\frac12\Bigl[\sum_{j=0}^{\lceil(k+1)/2\rceil-1}|\xi|^{2j}\Bigr]^{-1}\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/2},
\]
\[
\min_{u_{(1)}=1}\|W_2u\|\ge|b_{(N)}|\,\frac12\Bigl[\sum_{j=0}^{\lceil(k+1)/2\rceil-1}|\xi|^{-2j}\Bigr]^{-1}\Bigl[\Phi^2(\tau,\xi^{-1})-\frac14\Bigr]^{-1/2}.
\]
Finally, use
\[
\min_{u_{(1)}=1}\|YV_{k+1,N}^{\rm T}u\|\ge\max\Bigl\{\min_{u_{(1)}=1}\|W_1u\|,\ \min_{u_{(1)}=1}\|W_2u\|\Bigr\}
\]

to complete the proof.

Proof of Theorem 2.5. We note that $\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}\le1$ because $\|r_k\|$ is non-increasing. Suppose $b=e_1$. Consider first $\rho>1$. Then $|T_j(\tau)|\approx\frac12\rho^{\,j}$, and thus
\[
\Phi^2(\tau,\xi)\approx\frac14\sum_{j=0}^{k}(|\xi|\rho)^{2j}
=\frac14\,\frac{(|\xi|\rho)^{2(k+1)}-1}{(|\xi|\rho)^2-1}. \tag{3.11}
\]
If $|\xi|\rho>1$, then (3.11) and Theorem 2.3 imply
\[
\limsup_{k\to\infty}\sup_{N>k}\|r_k\|^{1/k}
\le\lim_{k\to\infty}\Bigl[\frac{1+|\xi|^2}2\Bigr]^{1/k}\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/(2k)}
=(|\xi|\rho)^{-1}, \tag{3.12}
\]
\[
\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}
\ge\lim_{k\to\infty}\Bigl[\frac12\Bigr]^{1/k}\Bigl[\sum_{j=0}^{\lceil(k+1)/2\rceil-1}|\xi|^{2j}\Bigr]^{-1/k}\Bigl[\Phi^2(\tau,\xi)-\frac14\Bigr]^{-1/(2k)} \tag{3.13}
\]
\[
=\begin{cases}(|\xi|\rho)^{-1}&\text{for }|\xi|\le1,\\ (|\xi|^2\rho)^{-1}&\text{for }|\xi|>1.\end{cases}
\]
Together they give (2.35) for the case $|\xi|\rho>1$. If $|\xi|\rho\le1$, then necessarily $|\xi|<1$ and
\[
\min\{(|\xi|\rho)^{-1},1\}=1,\qquad \min\{(|\xi|^2\rho)^{-1},(|\xi|\rho)^{-1},1\}=1,
\]
and $\liminf_{k\to\infty}\inf_{N>k}\|r_k\|^{1/k}\ge1$ by (3.13), because $\Phi^2(\tau,\xi)-\frac14$ is approximately bounded by $(k+1)/4$ by (3.11). So (2.35) holds for the case $|\xi|\rho\le1$, too.

Now consider $\rho=1$. Then $\tau+\sqrt{\tau^2-1}=e^{\iota\theta}$ for some $0\le\theta\le\pi$, where $\iota=\sqrt{-1}$ is the imaginary unit. Thus $\tau\in[-1,1]$, and in fact
\[
\tau=\tfrac12\bigl[(\tau+\sqrt{\tau^2-1})+(\tau-\sqrt{\tau^2-1})\bigr]=\cos\theta,\qquad T_j(\tau)=\cos j\theta.
\]

Therefore Φ_k(τ,ξ) = 1/4 + Σ_{j=1}^{k} |ξ|^{2j} (cos jθ)². Instead of (3.12) and (3.13), we have

    limsup_{k→∞} sup_{N>2k} ‖r_k‖^{1/k} ≤ η₂^{-1},    liminf_{k→∞} inf_{N>2k} ‖r_k‖^{1/k} ≥ max{1, |ξ|}^{-1} η₁^{-1},    (3.14)

where

    η₁ = liminf_{k→∞} [1/4 + Σ_{j=1}^{k} |ξ|^{2j} (cos jθ)²]^{1/(2k)},
    η₂ = limsup_{k→∞} [1/4 + Σ_{j=1}^{k} |ξ|^{2j} (cos jθ)²]^{1/(2k)}.

Now if |ξ| ≤ 1, then Σ_{j=1}^{k} |ξ|^{2j} (cos jθ)² ≤ k, and thus

    η₁ = η₂ = 1 if |ξ| ≤ 1.    (3.15)

If, however, |ξ| > 1, we claim that

    α |ξ|^{2(k−1)} ≤ 1/4 + Σ_{j=1}^{k} |ξ|^{2j} (cos jθ)² ≤ (k + 1) |ξ|^{2k}    (3.16)

for some constant α > 0, independent of θ and k. The second inequality in (3.16) is quite obvious. To see the first inequality, we notice

    Σ_{j=k−1}^{k} |ξ|^{2j} (cos jθ)² ≥ |ξ|^{2(k−1)} [(cos kθ)² + (cos (k−1)θ)²]
        = |ξ|^{2(k−1)} [1 + (1/2) cos 2kθ + (1/2) cos 2(k−1)θ]
        = |ξ|^{2(k−1)} [1 + cos (2k−1)θ cos θ].

It suffices to show that there is a positive constant α, independent of k and θ, such that 1 + cos (2k−1)θ cos θ ≥ α. Assume to the contrary that

    inf_{k,θ} [1 + cos (2k−1)θ cos θ] = 0,    (3.17)

which means there are sequences {k_i} and {θ_i} such that 1 + cos (2k_i−1)θ_i cos θ_i → 0 as i → ∞. Since |cos(·)| ≤ 1, this implies that, as i → ∞, either cos (2k_i−1)θ_i → 1 and cos θ_i → −1, or cos (2k_i−1)θ_i → −1 and cos θ_i → 1. Neither is possible. Therefore (3.17) cannot hold. This proves (3.16), which yields

    η₁ = η₂ = |ξ| if |ξ| > 1.    (3.18)
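The product-to-sum step above, as reconstructed here, is an exact trigonometric identity and can be sanity-checked on a grid of k and θ values:

```python
import math

# Check of the step used for (3.16):
#   (cos kθ)^2 + (cos (k-1)θ)^2 = 1 + cos((2k-1)θ) cos θ.
# It follows from cos^2 A = (1 + cos 2A)/2 and the sum-to-product formula.
for k in range(1, 12):
    for i in range(50):
        theta = math.pi * i / 49
        lhs = math.cos(k * theta) ** 2 + math.cos((k - 1) * theta) ** 2
        rhs = 1.0 + math.cos((2 * k - 1) * theta) * math.cos(theta)
        assert abs(lhs - rhs) < 1e-12
print("identity holds on the sampled grid")
```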

(2.35) for ρ = 1 is the consequence of (3.14), (3.15), and (3.18). Finally, if |ξ| ≤ 1, then both the leftmost side and the rightmost side of (2.35) are equal to min{(|ξ|ρ)^{-1}, 1}. This proves (2.36).

Proof of Theorem 2.2: Note that

    limsup_{k→∞} sup_{N>2k} sup_{r₀} [‖r_k‖/‖r₀‖]^{1/k} ≤ 1.

Because of the equivalence principle (2.7), it suffices to consider only |ξ| ≤ 1. Then ζ = |ξ|. First we prove

    limsup_{k→∞} sup_{N>2k} sup_{r₀} [‖r_k‖/‖r₀‖]^{1/k} ≤ min{(|ξ|ρ)^{-1}, 1}.    (3.19)

If ρ = 1, then min{(|ξ|ρ)^{-1}, 1} = 1 because |ξ|^{-1} ≥ 1; no proof is needed. If ρ > 1, then |T_j(τ)| ≤ ρ^j, and thus (3.11). Now if |ξ|ρ > 1, then (3.11) and Theorem 2.1 imply

    limsup_{k→∞} sup_{N>2k} sup_{r₀} [‖r_k‖/‖r₀‖]^{1/k} ≤ (|ξ|ρ)^{-1},

which also holds if |ξ|ρ ≤ 1 because then (|ξ|ρ)^{-1} ≥ 1. Next we notice

    lim_{k→∞} inf_{N>2k} [‖r_k‖/‖r₀‖ with r₀ = e₁]^{1/k} ≤ liminf_{k→∞} sup_{N>2k} sup_{r₀} [‖r_k‖/‖r₀‖]^{1/k}
        ≤ limsup_{k→∞} sup_{N>2k} sup_{r₀} [‖r_k‖/‖r₀‖]^{1/k},    (3.20)

where the first limit has been proven to exist and to equal min{(|ξ|ρ)^{-1}, 1} in Theorem 2.5. The proof is now completed by combining (3.19), (3.20), and (2.36).

4 Exact Residual Norms for b = e₁ and e_N

In this section we present two theorems in which exact formulas for ‖r_k‖ for b = e₁ and b = e_N are established. Let S and Ŝ have their assignments as in Section 3.
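The asymptotic speed min{(|ξ|ρ)^{-1}, 1} established above can be observed on a small example. The sketch below computes GMRES residual norms directly from the definition r_k = min_{y∈K_k} ‖b − Ay‖; placing μ on the superdiagonal and ν on the subdiagonal, and the particular values of λ, μ, ν, are assumptions of this illustration:

```python
import numpy as np

def gmres_residuals(A, b, kmax):
    # ||r_k|| = dist(b, span{Ab, ..., A^k b}), straight from the GMRES definition.
    cols, out = [], []
    v = b.copy()
    for _ in range(kmax):
        v = A @ v
        cols.append(v)
        Q, _ = np.linalg.qr(np.column_stack(cols))
        out.append(np.linalg.norm(b - Q @ (Q.T @ b)))
    return out

# Tridiagonal Toeplitz A with sample (assumed) parameter values.
N = 40
lam, mu, nu = 4.0, 1.0, 2.0
A = lam * np.eye(N) + mu * np.diag(np.ones(N - 1), 1) + nu * np.diag(np.ones(N - 1), -1)
b = np.zeros(N)
b[0] = 1.0  # b = e_1

res = gmres_residuals(A, b, 15)

xi = np.sqrt(mu * nu) / nu            # xi  = sqrt(mu*nu)/nu
tau = lam / (2.0 * np.sqrt(mu * nu))  # tau = lam/(2 sqrt(mu*nu))
rho = abs(tau + np.sqrt(tau * tau - 1 + 0j))
rho = max(rho, 1.0 / rho)             # take the branch with rho >= 1
print("residuals:", [f"{r:.2e}" for r in res])
print("predicted asymptotic speed:", min(1.0 / (abs(xi) * rho), 1.0))
```

The printed residual sequence is nonincreasing, and its step-to-step decay can be compared qualitatively against the predicted speed.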

Theorem 4.6 In Theorem 2.1, if b = e₁, then the kth GMRES residual r_k satisfies, for 1 ≤ k < N,

    ‖r_k‖ = ‖S y^{(1:2k+1)}‖^{-1},    (4.1)

where the entries of y are defined as

    y^{(2j−1)} = Σ_{i=1}^{j} T̄_{2i−2}(τ),    y^{(2j)} = Σ_{i=1}^{j} T̄_{2i−1}(τ)    for j = 1, 2, ..., ⌈N/2⌉,

and T̄_j(τ) is the complex conjugate of T_j(τ).

Proof: We still have (3.8), and

    Y V_{2k,N}^T = ( S M R₁ )
                   (    0   ),

where M = M₁^{(1:2k+1, 1:2k+1)} as in the proof of Theorem 2.3. Let D = diag(1/2, 1, 1, ..., 1). Noticing that S M R₁ = (S M D^{-1})(D R₁) is nonsingular, we have by Lemma 3.1

    min_{u^{(1)}=1} ‖Y V_{2k,N}^T u‖ = ‖w‖^{-1},

where w = S (M D^{-1})^{-T} D^{-T} R₁^{-T} e₁, or equivalently

    (M D^{-1})^T S^{-1} w = D^{-T} R₁^{-T} e₁.

We shall now solve it for w. Let

    P = (e₁, e₃, ..., e_{2k+1}, e₂, e₄, ..., e_{2k}) ∈ R^{(2k+1)×(2k+1)}.

It can be verified that

    P^T (M D^{-1}) P = diag(G₁, G₂),

where G₁ ∈ R^{(k+1)×(k+1)} and G₂ ∈ R^{k×k} are bidiagonal. Solve

    (P^T (M D^{-1}) P)^T P^T S^{-1} w = P^T D^{-T} R₁^{-T} e₁ ≡ P^T z

for w to get

    w = S P diag(G₁^{-T}, G₂^{-T}) P^T z,

where z = ((1/2) T̄₀(τ), T̄₁(τ), T̄₂(τ), ..., T̄_{2k}(τ))^T. Finally, notice that w = S y^{(1:2k+1)} to complete the proof.
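The vector y is built from Chebyshev values T_j(τ) at a generally complex τ. As a quick sanity check, the closed form for T_j agrees with the three-term recurrence; the value of τ below is an arbitrary sample, not one arising from the theorem:

```python
import numpy as np

def cheb_T(j, tau):
    # Closed form T_j(tau) = ((tau + sqrt(tau^2-1))^j + (tau - sqrt(tau^2-1))^j) / 2,
    # valid for complex tau.
    s = np.sqrt(tau * tau - 1 + 0j)
    return ((tau + s) ** j + (tau - s) ** j) / 2

tau = 0.8 + 0.3j  # arbitrary sample value
t_prev, t_curr = 1.0 + 0j, tau  # T_0, T_1
for j in range(1, 15):
    assert abs(cheb_T(j, tau) - t_curr) < 1e-9 * max(1.0, abs(t_curr))
    t_prev, t_curr = t_curr, 2 * tau * t_curr - t_prev  # T_{j+1} = 2 tau T_j - T_{j-1}
print("closed form matches the recurrence")
```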

Remark 3 For b = e_N = Π^T e₁, apply Theorem 4.6 to the permuted system (2.5) to get that the kth GMRES residual r_k satisfies, for 1 ≤ k < N,

    ‖r_k‖ = ‖Ŝ y^{(1:2k+1)}‖^{-1},    (4.2)

where y is the same as the one in Theorem 4.6.

5 Concluding Remarks

There are a few GMRES error bounds with simplicity comparable to the well-known bound for the conjugate gradient method [3, 10, 20, 24]. In [6, Section 6], Eiermann and Ernst proved

    ‖r_k‖/‖r₀‖ ≤ [1 − γ(A) γ(A^{-1})]^{k/2},    (5.1)

where γ(A) = inf{|z*Az| : ‖z‖ = 1} is the distance from the origin to A's field of values. When A's Hermitian part, H = (A + A*)/2, is positive definite, it yields a bound by Elman [8] (see also [7]):

    ‖r_k‖/‖r₀‖ ≤ [1 − (1/(‖H^{-1}‖ ‖A‖))²]^{k/2}.    (5.2)

As observed in [1], this bound of Elman can be easily extended to cover the case when only γ(A) > 0:

    ‖r_k‖/‖r₀‖ ≤ (sin θ)^k,    θ = arccos(γ(A)/‖A‖).    (5.3)

Recently Beckermann, Goreinov, and Tyrtyshnikov [1] improved (5.3) to

    ‖r_k‖/‖r₀‖ ≤ (2 + 2/√3)(2 + δ) δ^k,    δ = 2 sin(θ/(4 − 2θ/π)).    (5.4)

All three bounds (5.1), (5.3), and (5.4) yield meaningful estimates only when γ(A) > 0, i.e., when A's field of values does not contain the origin.

In general, however, there are not many concrete quantitative results on the convergence rate of GMRES based on limited information about A and/or b. In part, this is a very difficult problem, and such a result most likely does not exist, thanks to the negative result of Greenbaum, Pták, and Strakoš [11], which says that "any nonincreasing convergence curve is possible for GMRES". A commonly used approach, as a step towards getting a feel for how fast GMRES may be, is to assume that A is diagonalizable and arrive at (2.9):

    ‖r_k‖/‖r₀‖ ≤ κ(X) min_{φ_k(0)=1} max_i |φ_k(λ_i)|,    (5.5)

and then to set aside the effect of κ(X) and study only the associated polynomial minimization problem. This approach does not always yield satisfactory results, especially when κ(X) ≫ 1, which occurs when A is highly nonnormal. Getting a fairly accurate quantitative estimate of the convergence rate of GMRES in a highly nonnormal case is likely to be very difficult. Trefethen [23] established residual bounds based on pseudospectra, which are sometimes more realistic than (5.5) but very expensive to compute. In [4], Driscoll, Toh, and Trefethen provided a nice explanation of this matter.

Our analysis here on tridiagonal Toeplitz A represents one of the few diagonalizable cases where one can analyze ‖r_k‖ directly to arrive at simple quantitative results such as (5.1)-(5.4). Previous results other than those in Ernst [9], while helpful in explaining and understanding various convergence behaviors, are more qualitative than quantitative. Two conjectures are made in Remark 1. There is also an unsolved question of whether (2.33), or a slightly modified version of it, holds for N/2 < k < N (see Figure 2.3).

Acknowledgements The authors wish to thank an anonymous referee for his constructive suggestions that improved the presentation considerably.

References

1. B. Beckermann, S. A. Goreinov, and E. E. Tyrtyshnikov, Some remarks on the Elman estimate for GMRES, SIAM J. Matrix Anal. Appl., 27 (2006).
2. B. Beckermann and A. B. J. Kuijlaars, Superlinear CG convergence for special right-hand sides, Electron. Trans. Numer. Anal., 14 (2002).
3. J. Demmel, Applied Numerical Linear Algebra (SIAM, Philadelphia, 1997).
4. T. A. Driscoll, K.-C. Toh, and L. N. Trefethen, From potential theory to matrix iterations in six steps, SIAM Rev., 40 (1998).
5. M. Eiermann, Fields of values and iterative methods, Linear Algebra Appl., 180 (1993).
6. M. Eiermann and O. G. Ernst, Geometric aspects in the theory of Krylov subspace methods, Acta Numer., 10 (2001).
7. S. C. Eisenstat, H. C. Elman, and M. H. Schultz, Variational iterative methods for nonsymmetric systems of linear equations, SIAM J. Numer. Anal., 20 (1983).

8. H. C. Elman, Iterative Methods for Large, Sparse Nonsymmetric Systems of Linear Equations (PhD thesis, Department of Computer Science, Yale University, 1982).
9. O. G. Ernst, Residual-minimizing Krylov subspace methods for stabilized discretizations of convection-diffusion equations, SIAM J. Matrix Anal. Appl., 21 (2000).
10. A. Greenbaum, Iterative Methods for Solving Linear Systems (SIAM, Philadelphia, 1997).
11. A. Greenbaum, V. Pták, and Z. Strakoš, Any nonincreasing convergence curve is possible for GMRES, SIAM J. Matrix Anal. Appl., 17 (1996).
12. I. C. F. Ipsen, Expressions and bounds for the GMRES residual, BIT, 40 (2000).
13. R.-C. Li, Sharpness in rates of convergence for CG and symmetric Lanczos methods, Technical Report (Department of Mathematics, University of Kentucky, 2005). Available at math/mareport/.
14. R.-C. Li, Convergence of CG and GMRES on a tridiagonal Toeplitz linear system, BIT, 47 (2007).
15. R.-C. Li, Hard cases for conjugate gradient method, Internat. J. Info. Sys. Sci., 4 (2008).
16. R.-C. Li, On Meinardus examples for the conjugate gradient method, Math. Comp., 77 (2008). Electronically published on September 17, 2007.
17. J. Liesen, M. Rozložník, and Z. Strakoš, Least squares residuals and minimal residual methods, SIAM J. Sci. Comput., 23 (2002).
18. J. Liesen and Z. Strakoš, Convergence of GMRES for tridiagonal Toeplitz matrices, SIAM J. Matrix Anal. Appl., 26 (2004).
19. J. Liesen and Z. Strakoš, GMRES convergence analysis for a convection-diffusion model problem, SIAM J. Sci. Comput., 26 (2005).
20. Y. Saad, Iterative Methods for Sparse Linear Systems (SIAM, Philadelphia, 2nd ed., 2003).
21. Y. Saad and M. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986).
22. G. D. Smith, Numerical Solution of Partial Differential Equations (Clarendon Press, Oxford, UK, 2nd ed., 1978).
23. L. N. Trefethen, Pseudospectra of matrices, in Numerical Analysis 1991: Proceedings of the 14th Dundee Conference, June 1991, D. F. Griffiths and G. A. Watson, eds. (Research Notes in Mathematics Series, Longman Press, 1992).
24. L. N. Trefethen and D. Bau, III, Numerical Linear Algebra (SIAM, Philadelphia, 1997).
25. I. Zavorin, D. P. O'Leary, and H. Elman, Complete stagnation of GMRES, Linear Algebra Appl., 367 (2003).
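Returning to the field-of-values bounds of Section 5: when the Hermitian part H is positive definite, γ(A) ≥ λ_min(H), so Elman's bound (5.2) can be checked numerically. A hedged sketch (the test matrix and sizes are arbitrary; residual norms are computed directly from the GMRES definition, not with a production solver):

```python
import numpy as np

def residuals(A, b, kmax):
    # ||r_k|| = dist(b, span{Ab, ..., A^k b}), directly from the GMRES definition.
    cols, out = [], []
    v = b.copy()
    for _ in range(kmax):
        v = A @ v
        cols.append(v)
        Q, _ = np.linalg.qr(np.column_stack(cols))
        out.append(np.linalg.norm(b - Q @ (Q.T @ b)))
    return out

# Arbitrary test matrix whose Hermitian part H is positive definite.
N = 30
A = 4.0 * np.eye(N) + np.diag(np.ones(N - 1), 1) + 2.0 * np.diag(np.ones(N - 1), -1)
b = np.ones(N) / np.sqrt(N)  # unit right-hand side, so ||r_0|| = 1

H = (A + A.T) / 2.0
# sin(theta) with cos(theta) = lambda_min(H)/||A||, i.e. the rate in bound (5.2).
sin_theta = np.sqrt(1.0 - (np.linalg.eigvalsh(H).min() / np.linalg.norm(A, 2)) ** 2)

res = residuals(A, b, 12)
for k, r in enumerate(res, start=1):
    assert r <= sin_theta ** k + 1e-10  # bound (5.2) holds at every step
print("sin(theta) =", round(sin_theta, 4))
```

In practice the bound is very pessimistic for this matrix: the observed residuals decay far faster than sin(theta)^k, which illustrates the paper's point that field-of-values bounds can badly overestimate actual GMRES residuals.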

Numerische Mathematik

Numer. Math. (2009) 112:67-93. DOI 10.1007/s00211-008-0206-2. The rate of convergence of GMRES on a tridiagonal Toeplitz linear system. Ren-Cang Li, Wei Zhang. Received: 21 November 2007 / Revised: April and August 2008.


More information

A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

On the solution of large Sylvester-observer equations

On the solution of large Sylvester-observer equations NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 200; 8: 6 [Version: 2000/03/22 v.0] On the solution of large Sylvester-observer equations D. Calvetti, B. Lewis 2, and L. Reichel

More information

Ritz Value Bounds That Exploit Quasi-Sparsity

Ritz Value Bounds That Exploit Quasi-Sparsity Banff p.1 Ritz Value Bounds That Exploit Quasi-Sparsity Ilse Ipsen North Carolina State University Banff p.2 Motivation Quantum Physics Small eigenvalues of large Hamiltonians Quasi-Sparse Eigenvector

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

Exponentials of Symmetric Matrices through Tridiagonal Reductions

Exponentials of Symmetric Matrices through Tridiagonal Reductions Exponentials of Symmetric Matrices through Tridiagonal Reductions Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A simple and efficient numerical algorithm

More information

S.F. Xu (Department of Mathematics, Peking University, Beijing)

S.F. Xu (Department of Mathematics, Peking University, Beijing) Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

On Multivariate Newton Interpolation at Discrete Leja Points

On Multivariate Newton Interpolation at Discrete Leja Points On Multivariate Newton Interpolation at Discrete Leja Points L. Bos 1, S. De Marchi 2, A. Sommariva 2, M. Vianello 2 September 25, 2011 Abstract The basic LU factorization with row pivoting, applied to

More information

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Lecture 2 INF-MAT 4350 2008: A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University

More information

= W z1 + W z2 and W z1 z 2

= W z1 + W z2 and W z1 z 2 Math 44 Fall 06 homework page Math 44 Fall 06 Darij Grinberg: homework set 8 due: Wed, 4 Dec 06 [Thanks to Hannah Brand for parts of the solutions] Exercise Recall that we defined the multiplication of

More information

Eigenvalues of the Redheffer Matrix and Their Relation to the Mertens Function

Eigenvalues of the Redheffer Matrix and Their Relation to the Mertens Function Eigenvalues of the Redheffer Matrix and Their Relation to the Mertens Function Will Dana June 7, 205 Contents Introduction. Notations................................ 2 2 Number-Theoretic Background 2 2.

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15]) LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES

RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES MEGAN DAILEY, FROILÁN M. DOPICO, AND QIANG YE Abstract. In this paper, strong relative perturbation bounds are developed for a number of linear

More information

Homework 1 Elena Davidson (B) (C) (D) (E) (F) (G) (H) (I)

Homework 1 Elena Davidson (B) (C) (D) (E) (F) (G) (H) (I) CS 106 Spring 2004 Homework 1 Elena Davidson 8 April 2004 Problem 1.1 Let B be a 4 4 matrix to which we apply the following operations: 1. double column 1, 2. halve row 3, 3. add row 3 to row 1, 4. interchange

More information

Lines With Many Points On Both Sides

Lines With Many Points On Both Sides Lines With Many Points On Both Sides Rom Pinchasi Hebrew University of Jerusalem and Massachusetts Institute of Technology September 13, 2002 Abstract Let G be a finite set of points in the plane. A line

More information

Block Lanczos Tridiagonalization of Complex Symmetric Matrices

Block Lanczos Tridiagonalization of Complex Symmetric Matrices Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS HOMER F. WALKER Abstract. Recent results on residual smoothing are reviewed, and it is observed that certain of these are equivalent

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/ strakos Hong Kong, February

More information