CONVERGENCE ANALYSIS OF PROJECTION METHODS FOR THE NUMERICAL SOLUTION OF LARGE LYAPUNOV EQUATIONS


V. SIMONCINI AND V. DRUSKIN

Abstract. The numerical solution of large-scale continuous-time Lyapunov matrix equations is of great importance in many application areas. Assuming that the coefficient matrix is positive definite, but not necessarily symmetric, in this paper we analyze the convergence of projection type methods for approximating the solution matrix. Under suitable hypotheses on the coefficient matrix, we provide new asymptotic estimates for the error matrix when a Galerkin method is used in a Krylov subspace. Numerical experiments confirm the good behavior of our upper bounds when linear convergence of the solver is observed.

1. The problem. We are interested in the approximate solution of the following Lyapunov matrix equation:

    A X + X A^T = B B^T,    (1.1)

with A a real matrix of large dimension and B a real tall matrix. Here A^T indicates the transpose of A. We assume that the n x n matrix A is either symmetric and positive definite, or nonsymmetric with positive definite symmetric part, that is, (A + A^T)/2 is positive definite. In the following we mostly deal with the case of B having a single column, that is B = b, and we assume that b has unit Euclidean norm, ||b|| = 1. Nonetheless, our results can be extended to the multiple vector case.

This problem arises in a large variety of applications, such as signal processing and system and control theory. The symmetric solution X carries important information on the stability and energy of an associated dynamical linear system and on the feasibility of order reduction techniques [1], [3], [5]. The analytic solution of (1.1) can be written as

    X = ∫_0^∞ e^{−tA} B B^T e^{−tA^T} dt = ∫_0^∞ x x^T dt,    (1.2)

where we have set x = x(t) = e^{−tA} B. Let α_min be the smallest eigenvalue of the symmetric part of A, α_min = λ_min((A + A^T)/2) > 0. Then it can be shown that ||x|| ≤ exp(−t α_min) ||B||; see, e.g., [5].

Projection type methods seek an approximate solution X_m in a subspace of R^n, by requiring that the residual B B^T − (A X_m + X_m A^T) be orthogonal to this subspace. A particularly effective choice of approximation space is given by (here for B = b) the Krylov subspace K_m(A,b) = span{b, Ab, ..., A^{m−1} b}, of dimension m ≪ n [16], [17], [26]. Abundant experimental evidence over the years has shown that the use of this space often allows one to obtain a satisfactorily accurate approximation X_m in a space of much lower dimension than n. A particularly attractive feature is that X_m may be written as a low rank matrix, X_m = U_m U_m^T with U_m of low column rank, so that only the matrix U_m needs to be stored.

Version of August 6, 2007. Dipartimento di Matematica, Università di Bologna, Piazza di Porta S. Donato 5, I-40127 Bologna, Italy, and CIRSA, Ravenna, Italy (valeria@dm.unibo.it). Scientific Advisor, Ph.D., 1 Hampshire St., Cambridge, MA 02139, USA (druskin1@boston.oilfield.slb.com).
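To make the projection just described concrete, the following sketch (our own Python/SciPy illustration under the paper's assumptions, not the authors' code) builds an orthonormal basis V_m of K_m(A,b) with the Arnoldi process and solves the projected small Lyapunov equation; the helper name lyap_galerkin is ours, and scipy.linalg.solve_continuous_lyapunov solves T Y + Y T^H = Q.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def lyap_galerkin(A, b, m):
        """Galerkin (Krylov projection) approximation X_m = V_m Y_m V_m^T
        to A X + X A^T = b b^T. Minimal sketch: no breakdown handling,
        no reorthogonalization."""
        n = b.size
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))           # Hessenberg matrix of the Arnoldi recursion
        V[:, 0] = b / np.linalg.norm(b)
        for j in range(m):                 # Arnoldi with modified Gram-Schmidt
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        Vm, Tm = V[:, :m], H[:m, :m]
        e1 = np.zeros(m); e1[0] = np.linalg.norm(b)
        Ym = solve_continuous_lyapunov(Tm, np.outer(e1, e1))  # T_m Y + Y T_m^T = e1 e1^T
        return Vm, Ym                      # X_m = Vm @ Ym @ Vm.T, kept in low-rank form

Only V_m and the small Y_m need to be stored, in line with the low rank representation X_m = U_m U_m^T mentioned above.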

To the best of our knowledge, no asymptotic convergence analysis of this Galerkin method is available in the literature. The aim of this paper is to fill this gap. We also refer to [25] for a-priori estimates on the residual norm when solving the Sylvester equation with projection type methods. To provide our error estimates, we shall use the integral representation (1.2) for both X and X_m, and explicitly bound the norm of the error matrix X − X_m; we refer to [26] for early considerations in this direction. Our approach is highly inspired by, and fully relies on, the papers [8], [18], where general estimates for the error in approximating matrix operators by polynomial methods are derived. We provide explicit estimates when A is symmetric, and when A is nonsymmetric with its field of values (or spectrum) contained in certain not necessarily convex sets of C_+. We also show that the convergence of the Galerkin method is closely related to that of Galerkin methods for solving the linear system (A + α_min I)d = b. Our estimates are asymptotic, and thus linear; that is, they do not capture the possibly superlinear convergence behavior of the method that is sometimes observed [24]. In the linear system setting, the superlinear behavior is due to the fact that Krylov based methods tend to adapt to the (discrete) spectrum of A, accelerating convergence as spectral information is gained while enlarging the space; see, e.g., [29] for a discussion and more references. Throughout the paper we assume exact arithmetic.

2. Numerical solution and preliminary considerations. Given the Krylov subspace K_m(A,b) and a matrix V_m whose orthonormal columns span K_m(A,b), with b = V_m e_1, we seek an approximation of the form X_m = V_m Y_m V_m^T. Here and in the following, e_i denotes the ith column of the identity matrix of given dimension. Imposing that the residual R_m = b b^T − (A X_m + X_m A^T) be orthogonal to the given space (the Galerkin condition) yields V_m^T R_m V_m = 0, that is,

    T_m Y + Y T_m^T = e_1 e_1^T,

where T_m = V_m^T A V_m; see, e.g., [16], [26]. The m x m matrix Y_m can thus be computed by solving the resulting small size Lyapunov equation. The matrix X_m can be equivalently written in integral form. Indeed, let x_m = x_m(t) = V_m e^{−t T_m} e_1 be the so-called Krylov approximation to x = x(t) in K_m(A,b). Then X_m can be written as

    X_m = V_m ( ∫_0^∞ e^{−t T_m} e_1 e_1^T e^{−t T_m^T} dt ) V_m^T = ∫_0^∞ x_m x_m^T dt.

We are interested in finding a-priori bounds for the 2-norm of the error matrix, that is, for ||X − X_m||, where the 2-norm is the matrix norm induced by the vector Euclidean norm. We start by observing that X − X_m = ∫_0^∞ (x x^T − x_m x_m^T) dt, and that

    ||x x^T − x_m x_m^T|| = ||x (x − x_m)^T + (x − x_m) x_m^T|| ≤ (||x|| + ||x_m||) ||x − x_m||.

It holds that λ_min((T_m + T_m^T)/2) ≥ α_min. Using ||x_m|| ≤ exp(−t λ_min((T_m + T_m^T)/2)) ≤ exp(−t α_min), we have

    ||X − X_m|| ≤ ∫_0^∞ ||x x^T − x_m x_m^T|| dt ≤ ∫_0^∞ (||x|| + ||x_m||) ||x − x_m|| dt ≤ 2 ∫_0^∞ e^{−t α_min} ||x − x_m|| dt.    (2.1)
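As a quick numerical sanity check of the Galerkin condition above, one can verify that V_m^T R_m V_m vanishes for the computed Y_m. A hypothetical experiment built on the lyap_galerkin sketch from section 1; the test matrix is our own arbitrary choice with positive definite symmetric part:

    import numpy as np
    from numpy.random import default_rng

    rng = default_rng(0)
    n, m = 200, 15
    G = rng.standard_normal((n, n))
    A = np.diag(np.linspace(1.0, 100.0, n)) + 0.1 * (G - G.T)  # symmetric part is positive definite
    b = np.ones(n) / np.sqrt(n)

    Vm, Ym = lyap_galerkin(A, b, m)
    Xm = Vm @ Ym @ Vm.T
    Rm = np.outer(b, b) - (A @ Xm + Xm @ A.T)   # residual of (1.1)
    print(np.linalg.norm(Vm.T @ Rm @ Vm))       # ~ machine precision: V_m^T R_m V_m = 0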

We notice that

    e^{−t α_min} ||x − x_m|| = ||exp(−t(A + α_min I)) b − V_m exp(−t(T_m + α_min I)) e_1|| =: ||x̂ − x̂_m||,

which is the error in the approximation of the exponential of the shifted matrix A + α_min I with the Krylov subspace solution. Therefore,

    ||X − X_m|| ≤ 2 ∫_0^∞ ||x̂ − x̂_m|| dt.    (2.2)

In the following we will bound ||X − X_m|| by judiciously integrating an upper bound of the integrand function.

The matrix V_m = [v_1, ..., v_m] can be generated one vector at a time, by means of the following Arnoldi recursion:

    A V_m = V_m T_m + v_{m+1} t_{m+1,m} e_m^T,    v_1 = b/||b||,    (2.3)

where V_{m+1} = [V_m, v_{m+1}] has orthonormal columns and spans K_{m+1}(A,b). In general, T_m is upper Hessenberg; it is symmetric, and thus tridiagonal, when A is itself symmetric.

We conclude this section with a technical lemma, whose proof is included for completeness; see, e.g., [18] for a similar result in finite precision arithmetic. More details on the admissibility of series expansions in terms of Faber polynomials are given in section 4.

Lemma 2.1. Let Φ_k be a Faber polynomial of degree at most k. Let f(z) = Σ_{k=0}^∞ f_k Φ_k(z) be a series expansion of the analytic function f, and assume that the expansions of f(A) and f(T_m) are well defined. Then

    ||f(A) b − V_m f(T_m) e_1|| ≤ Σ_{k=m}^∞ |f_k| ( ||Φ_k(A)|| + ||Φ_k(T_m)|| ).

Proof. We have

    f(A) b − V_m f(T_m) e_1 = Σ_{k=0}^{m−1} f_k (Φ_k(A) b − V_m Φ_k(T_m) e_1) + Σ_{k=m}^∞ f_k (Φ_k(A) b − V_m Φ_k(T_m) e_1).

Using the Arnoldi relation and the fact that T_m is upper Hessenberg, A^k V_m e_1 = V_m T_m^k e_1 for k = 1, ..., m−1, and thus Φ_k(A) b = Φ_k(A) V_m e_1 = V_m Φ_k(T_m) e_1, k = 0, 1, ..., m−1, so that

    f(A) b − V_m f(T_m) e_1 = Σ_{k=m}^∞ f_k (Φ_k(A) b − V_m Φ_k(T_m) e_1).

Taking norms, the result follows.
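The exactness property at the heart of the proof, A^k V_m e_1 = V_m T_m^k e_1 for k < m, is easy to verify numerically. The sketch below is our own check; monomials are used in place of Faber polynomials, which is enough since both sides are linear in the polynomial:

    import numpy as np

    def krylov_exactness_gap(A, Vm, Tm, k):
        """|| A^k b - V_m T_m^k e_1 || for b = V_m e_1; zero up to rounding for k < m."""
        m = Tm.shape[0]
        e1 = np.zeros(m); e1[0] = 1.0
        lhs = np.linalg.matrix_power(A, k) @ Vm[:, 0]
        rhs = Vm @ (np.linalg.matrix_power(Tm, k) @ e1)
        return np.linalg.norm(lhs - rhs)

    # for k = 0, ..., m-1 the gap is at rounding level; at k = m it becomes O(1)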

3. The symmetric case. In the symmetric case, we show that the asymptotic convergence rate of the Krylov subspace solver is the same as that of the Conjugate Gradient method applied to the shifted system (A + α_min I)x = b, where α_min = λ_min, the smallest eigenvalue of the positive definite matrix A [12]; see also section 5.

Proposition 3.1. Let A be symmetric and positive definite, and let λ_min be the smallest eigenvalue of A. Let λ̂_min, λ̂_max be the extreme eigenvalues of A + λ_min I, and κ̂ = λ̂_max/λ̂_min. Then

    ||X − X_m|| ≤ (4(√κ̂ + 1))/(λ̂_min √κ̂) · ((√κ̂ − 1)/(√κ̂ + 1))^m.    (3.1)

Proof. Using (2.1) we are left to estimate 2 ∫_0^∞ e^{−t α_min} ||x − x_m|| dt. Let λ_max be the largest eigenvalue of A. From [7] it follows that

    ||x − x_m|| ≤ 4 Σ_{k=m}^∞ exp(−t(λ_max + λ_min)/2) I_k(t(λ_max − λ_min)/2),

where I_k is the Bessel function of an imaginary argument. Therefore, setting p = (3λ_min + λ_max)/(λ_max − λ_min) = (κ̂ + 1)/(κ̂ − 1) and ρ = p + √(p² − 1), we have

    ||X − X_m|| ≤ 2 ∫_0^∞ ||x̂ − x̂_m|| dt ≤ 8 Σ_{k=m}^∞ ∫_0^∞ exp(−t(3λ_min + λ_max)/2) I_k(t(λ_max − λ_min)/2) dt
               = (8/(λ̂_min √κ̂)) Σ_{k=m}^∞ (p + √(p² − 1))^{−k} = (8/(λ̂_min √κ̂)) (ρ/(ρ − 1)) ρ^{−m}.    (3.2)

To get (3.2) we used the following integral formula for Bessel functions in [13],

    ∫_0^∞ e^{−αt} I_ν(βt) dt = β^ν / ( √(α² − β²) (α + √(α² − β²))^ν ),

for Re ν > −1 and Re α > |Re β|, here with α = (3λ_min + λ_max)/2 and β = (λ_max − λ_min)/2, so that √(α² − β²) = √(λ̂_min λ̂_max) = λ̂_min √κ̂. Standard algebraic manipulations give

    ρ = (√κ̂ + 1)/(√κ̂ − 1),    ρ/(ρ − 1) = (√κ̂ + 1)/2,

from which (3.1) follows.

In Figure 3.1 we report the behavior of the bound of Proposition 3.1 for a 400 x 400 diagonal matrix A having uniformly distributed eigenvalues between 1 and 100. Here α_min = λ_min = 1. The vector b is the normalized vector of all ones. We explicitly observe that the linearity of the convergence rate is exactly reproduced by the upper bound of Proposition 3.1.
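The experiment of Figure 3.1 can be reproduced along the following lines. This is a sketch under our reading of the example (400 x 400 diagonal matrix, eigenvalues uniformly spaced in [1, 100]); solve_continuous_lyapunov provides the reference X at this moderate size, and lyap_galerkin is the sketch from section 1:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n = 400
    lam = np.linspace(1.0, 100.0, n)
    A = np.diag(lam)
    b = np.ones(n) / np.sqrt(n)
    X = solve_continuous_lyapunov(A, np.outer(b, b))   # reference solution of (1.1)

    lmin = lam[0]
    k_hat = (lam[-1] + lmin) / (2.0 * lmin)            # condition number of A + lmin*I
    rate = (np.sqrt(k_hat) - 1.0) / (np.sqrt(k_hat) + 1.0)
    for m in range(5, 41, 5):
        Vm, Ym = lyap_galerkin(A, b, m)
        err = np.linalg.norm(X - Vm @ Ym @ Vm.T, 2)
        bound = 4.0 * (np.sqrt(k_hat) + 1.0) / (2.0 * lmin * np.sqrt(k_hat)) * rate**m
        print(m, err, bound)                           # same linear slope, as in Figure 3.1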

Fig. 3.1. Example of section 3: diagonal matrix with uniformly distributed eigenvalues in [1, 100]. True error norm ||X − X_m|| and its estimate of Proposition 3.1 for the Krylov subspace approximation of the Lyapunov solution, plotted against the dimension of the Krylov subspace.

4. The nonsymmetric case. For A nonsymmetric, the result of the previous section can be generalized whenever the field of values of A is contained in a well-behaved set of the complex plane. The following results make use of the theory of Faber polynomials, and of recently obtained results that have been used in the context of linear systems.

To this end, we need some definitions on conformal mappings. Let C̄ = C ∪ {∞}, and let C(0,ϱ) = {|τ| ≤ ϱ} be the closed disk centered at zero with radius ϱ; define ψ : C̄ ∖ C(0,ϱ) → C̄ as a conformal mapping that maps the exterior of C(0,ϱ) onto the complement of a simply connected domain, and such that ψ(∞) = ∞ and ψ'(∞) > 0; see, e.g., [30]. Let φ denote the inverse of ψ. The principal (polynomial) part of the Laurent series of φ^k is the Faber polynomial Φ_k, of exact degree k. Let Ω be the bounded set of the complex plane whose complement is the image through ψ of the domain C̄ ∖ C(0,ϱ), so that the complement of Ω is simply connected. Let

    f(λ) ≡ exp(−λt) = Σ_{k=0}^∞ f_k Φ_k(λ)

be the expansion of exp(−λt) in Faber series in Ω, with ϱ < ρ̄ ≤ ∞. For ϱ < ρ < ρ̄, the expansion coefficients are given as

    f_k = (1/(2πi)) ∫_{|τ|=ρ} exp(−t ψ(τ)) / τ^{k+1} dτ,    |f_k| ≤ (1/ρ^k) sup_{|τ|=ρ} |exp(−t ψ(τ))|;

see, e.g., [30, sec. 2.1.3], [31]. Note that f_k = f_k(t). We start by recalling the following result in [30, Theorem 4, sec. 2.1.3].
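For a concrete map ψ, the coefficients f_k(t) can be approximated by discretizing the contour integral above with the trapezoidal rule, that is, with an FFT. The sketch below (our own, written for the elliptic Joukowski-type map that appears in Corollary 4.2 below) also makes the decay |f_k| ≤ ρ^{−k} sup |exp(−tψ)| easy to observe:

    import numpy as np

    def faber_coeffs(t, c, d, rho, K=64, npts=2048):
        """First K coefficients f_k(t) of the Faber expansion of exp(-t z)
        for psi(tau) = c + (d/2)(tau + 1/tau), via the contour integral
        over |tau| = rho discretized with the FFT."""
        tau = rho * np.exp(2j * np.pi * np.arange(npts) / npts)
        vals = np.exp(-t * (c + 0.5 * d * (tau + 1.0 / tau)))
        fk = np.fft.fft(vals)[:K] / npts / rho ** np.arange(K)
        return fk

    # |faber_coeffs(t, c, d, rho)| decays like rho**(-k), scaled by max |exp(-t psi)| on the circle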

Theorem 4.1. Let Ω be a bounded set such that its complement is simply connected. Let f be regular in ψ(C(0,ρ̄)), ϱ < ρ̄ ≤ ∞, and

    (1/(2π)) ∫_0^{2π} |f(ψ(r e^{iθ}))| dθ ≤ M

for all r, ϱ < r < ρ̄, where M is a constant. If the closure Ω̄ is such that |Φ_k(z)| ≤ a ϱ^k, z ∈ Ω̄, k = 0, 1, ..., where a is a constant, then for r such that ϱ ≤ r < ρ̄,

    | f(z) − Σ_{k=0}^m f_k Φ_k(z) | ≤ Σ_{k=m+1}^∞ |f_k| |Φ_k(z)| ≤ M a (r/(ρ̄ − r)) (r/ρ̄)^m,    z ∈ ψ(C(0,r)).

We next discuss a few known examples of computable mappings ψ for which Theorem 4.1 can be effectively applied. We first consider the case when Ω coincides with an ellipse. In such a setting, Faber polynomials correspond to scaled Chebyshev polynomials. Moreover, it has been recently shown that their norm is bounded by 2 when Ω is convex [2], allowing us to derive a simple error estimate. We recall that the field of values of a real matrix A in the Euclidean inner product is defined as F(A) = {x* A x, x ∈ C^n, ||x|| = 1}. The location of the field of values plays a crucial role in the behavior and analysis of polynomial type methods for the solution of linear systems; see, e.g., [10], [2].

Corollary 4.2. Assume the field of values of the real matrix A is contained in an ellipse E ⊂ C_+ of center (c,0), foci (c ± d, 0) and major semi-axis a. Let α_min = λ_min((A + A^T)/2). Then

    ||X − X_m|| ≤ (4/α_min) (r/(ρ̄ − r)) (r/ρ̄)^m,

where

    r = a/d + √((a/d)² − 1),    ρ̄ = (c + α_min)/d + √( ((c + α_min)/d)² − 1 ).

Proof. Let α = α_min and let Ê be the ellipse containing the field of values of A + αI. We consider the usual mapping whose boundary image is an elliptic curve in C_+ containing the spectrum of A + αI, ψ(τ) = c + α + (d/2)(τ + 1/τ), with τ = σ e^{iθ} ∈ C(0,σ). For σ = r > 1 = ϱ, ψ({|τ| = r}) = ∂Ê, and it holds that max_{|τ|=r} |exp(−t ψ(τ))| ≤ exp(−tα), so that

    (1/(2π)) ∫_0^{2π} |f(ψ(r e^{iθ}))| dθ ≤ exp(−tα) =: M(t).

Moreover, the value ρ̄ corresponds to the reciprocal of the asymptotic convergence factor of the Faber expansion, namely ρ̄ = ψ^{−1}(0). Using the fact that Ê is convex, it also follows that ||Φ_k(A + αI)|| ≤ 2 for k = 0, 1, ...; see [2]. The same holds for ||Φ_k(T_m + αI)||, since the field of values of T_m + αI is included in that of A + αI. Therefore, Lemma 2.1 and Theorem 4.1 ensure that

    ||x̂ − x̂_m|| ≤ Σ_{k=m}^∞ |f_k| ( ||Φ_k(A + αI)|| + ||Φ_k(T_m + αI)|| ) ≤ 2 exp(−tα) (r/(ρ̄ − r)) (r/ρ̄)^m,

for r satisfying ϱ ≤ r < ρ̄. Finally,

    ||X − X_m|| ≤ 2 ∫_0^∞ ||x̂ − x̂_m|| dt ≤ (4/α_min) (r/(ρ̄ − r)) (r/ρ̄)^m,

which completes the proof.

In the case of an ellipse, Knizhnerman suggested that one could simplify the derivation and proceed as in Proposition 3.1, where an additional multiplier of the type const · k would arise in (3.2) in the proof [19].

Example 4.3. We consider a 400 x 400 (normal) diagonal matrix A whose eigenvalues are λ = c + a_1 cos θ + i a_2 sin θ, with θ uniformly distributed in [0, 2π], c = 20, a_1 = 10 and a_2 = 2, so that the eigenvalues lie on an elliptic curve with center c and focal distance d = √(a_1² − a_2²) = √96. Here α_min ≈ 10.1, yielding r = (a_1 + a_2)/d ≈ 1.2247 and, from the formula in Corollary 4.2, ρ̄ ≈ 5.98. The vector b is the vector of all ones, normalized to have unit norm. In Figure 4.1 we report the error associated with the Krylov subspace approximation of the Lyapunov solution, and the estimate of Corollary 4.2. The agreement is impressive, as should be expected, since the spectrum exactly lies on the elliptic curve and the matrix is normal, so that the field of values coincides with the associated convex hull.

Fig. 4.1. Example 4.3. True error and its estimate of Corollary 4.2 for the Krylov subspace solver of the Lyapunov equation.

Example 4.4. We next consider the 400 x 400 matrix A stemming from the centered finite difference discretization of the operator L(u) = −Δu + 4(x + y)u_x + u in the unit square, with Dirichlet boundary conditions. The spectrum of A, together with its field of values (computed with the Matlab function fv.m in [14]) and a surrounding ellipse, are shown in the left plot of Figure 4.2. The surrounding ellipse has center (c, 0), semi-axes a_1 = c − α_min and a_2 = 3.7, and focal distance d = √(a_1² − a_2²), all determined numerically, yielding the corresponding values of r and ρ̄ in Corollary 4.2. The right plot of Figure 4.2 shows the convergence history of the Krylov solver, together with the asymptotic factor (r/ρ̄)^m in the estimate of Corollary 4.2. The initial asymptotic convergence rate is reasonably well captured by the estimate.
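The quantities r and ρ̄ of Corollary 4.2 are elementary functions of the ellipse parameters; a small helper (ours, not from the paper) that also returns the asymptotic factor:

    import numpy as np

    def ellipse_factors(c, d, a, alpha_min):
        """r, rho_bar, and the convergence factor r/rho_bar of Corollary 4.2,
        for an ellipse with center (c,0), foci (c +- d, 0), major semi-axis a."""
        r = a / d + np.sqrt((a / d) ** 2 - 1.0)
        s = (c + alpha_min) / d
        rho_bar = s + np.sqrt(s ** 2 - 1.0)
        return r, rho_bar, r / rho_bar

    # Example 4.3 (as reconstructed above): c = 20, a1 = 10, a2 = 2, d = sqrt(96)
    print(ellipse_factors(20.0, np.sqrt(96.0), 10.0, 10.1))   # r ~ 1.22, rho_bar ~ 5.98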

Fig. 4.2. Example 4.4. Left plot: spectrum of A, field of values (thin solid curve), and smallest computed elliptic curve including the field of values (thick solid curve). Right plot: true error and its asymptotic factor in the estimate of Corollary 4.2 for the Krylov subspace solver of the Lyapunov equation.

Fig. 4.3. Example 4.5. Left plot: real spectrum, field of values (thin solid curve), and smallest computed elliptic curve including the field of values (thick solid curve). Right plot: true error and its estimate of Corollary 4.2 for the Krylov subspace solver of the Lyapunov equation.

Example 4.5. We consider the 400 x 400 bidiagonal matrix A with uniformly distributed diagonal elements in the interval [1, 11] and unit upper diagonal. The vector b is the normalized vector of all ones. Our numerical computation, reported in the left plot of Figure 4.3, showed that the field of values of A (computed once again with fv.m [14]) is contained in an ellipse with center c = 6, major semi-axis a_1 = 5.8 and focal distance d = √(a_1² − a_2²) ≈ 5.6, yielding, together with the computed α_min, the corresponding values of r and ρ̄ in Corollary 4.2. The right plot of Figure 4.3 shows the convergence history of the Krylov solver, together with the asymptotic factor in the estimate of Corollary 4.2. Once again, the asymptotic rate is a good estimate of the actual convergence rate. Even more accurate bounds for this example might be obtained by using more appropriate conformal mappings than the ellipse. It may be possible to include the field of values in a rectangle, for which the mapping ψ could be numerically estimated [9], [11].

The following mapping is a modified version of the exterior mapping used for instance in [15]:

    ψ(τ) = γ_1 + γ_2 (1 − 1/τ)^θ τ,    τ = σ e^{iω},    (4.1)

for 0 < θ < 1 and γ_1, γ_2 ∈ R_+. The function ψ maps the exterior of the disk C(0,σ) onto a wedge-shaped convex set Ω in C_+. The following result holds.

Corollary 4.6. Let Ω̂ ⊂ C_+ be the wedge-shaped set which is the image through ψ̂ of the disk C(0,r), where ψ̂ is as in (4.1). Let ρ̄ = ψ̂^{−1}(0). Assume the field of values of the matrix A + α_min I, with α_min = λ_min((A + A^T)/2), is contained in Ω̂. Then

    ||X − X_m|| ≤ (4/α_min) (r/(ρ̄ − r)) (r/ρ̄)^m.

Proof. The proof follows the same steps as that of Corollary 4.2.

Example 4.7. We consider a 400 x 400 (normal) diagonal matrix A whose eigenvalues lie on the curve ψ(τ) = (1 − 1/τ)^ω τ, for τ ∈ C(0,r) with r = 1.1 and ω = 0.3 (see the left plot of Fig. 4.4). The image of the mapping ψ̂(τ) = α_min + ψ(τ), τ ∈ C(0,r), thus contains the spectrum of A + α_min I. Numerical computation yields the corresponding value ρ̄ = ψ̂^{−1}(0). The vector b is the normalized vector of all ones. The right plot of Figure 4.4 shows the convergence history of the Krylov solver, together with the asymptotic factor (r/ρ̄)^m in the estimate of Corollary 4.6. The linear asymptotic convergence is fully captured by the estimate.

Fig. 4.4. Example 4.7. Left plot: spectrum of A. Right plot: true error and its asymptotic factor in the estimate of Corollary 4.6 for the Krylov subspace solver of the Lyapunov equation.
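Since ψ̂ is available in closed form, ρ̄ = ψ̂^{−1}(0) can be computed by one-dimensional root-finding. A sketch (our own, under the reconstructed form of the map in (4.1) with γ_1 = α_min and γ_2 = 1; for a map symmetric with respect to the real axis the preimage of 0 lies on the negative real τ axis):

    import numpy as np
    from scipy.optimize import brentq

    def rho_bar_wedge(alpha_min, omega):
        """|psi_hat^{-1}(0)| for psi_hat(tau) = alpha_min + (1 - 1/tau)**omega * tau."""
        f = lambda tau: alpha_min + (1.0 - 1.0 / tau) ** omega * tau
        tau0 = brentq(f, -50.0, -1e-3)     # bracket containing the sign change
        return abs(tau0)

    print(rho_bar_wedge(0.1, 0.3))          # omega = 0.3, as in Example 4.7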

The following mapping was analyzed in [20] and it is associated with a non-convex domain; the specialized case of an annular sector is discussed for instance in [4]. Given a set Ω, assume that ∂Ω is an analytic Jordan curve. If ∂Ω is of bounded (or finite) boundary rotation, then

    max_{z ∈ Ω} |Φ_k(z)| ≤ V(Ω)/π,

where V(Ω) is the boundary rotation of ∂Ω, defined as the total variation of the angle between the positive real axis and the tangent of ∂Ω. In particular, this bound is scale-invariant, so that it also holds that V(sΩ) = V(Ω) [21]. These important properties ensure that for a diagonalizable matrix A, ||Φ_k(A + α_min I)|| is bounded independently of k, on a non-convex set with bounded boundary rotation. Indeed, letting A = Q Λ Q^{−1} be the spectral decomposition of A, then ||Φ_k(A + α_min I)|| ≤ κ(Q) ||Φ_k(Λ + α_min I)||, where κ(Q) = ||Q|| ||Q^{−1}||, and the estimate above can be applied.

Corollary 4.8. Assume that A is diagonalizable, and let A = Q Λ Q^{−1} be its spectral decomposition. Let ϱ = 1. Assume the spectrum of A + α_min I is contained in the set sΩ ⊂ C_+, with s > 0, whose boundary is the bratwurst image of

    ψ(τ) = (τ − λN)(τ − λM) / ( (N − M)τ + λ(NM − 1) ),

where N, M and λ are given, τ ∈ C(0,r), ϱ ≤ r < ρ̄ = ψ^{−1}(0), with r such that ψ(C(0,r)) ⊂ C_+. Then

    ||X − X_m|| ≤ (8 V(Ω) κ(Q) / (π ψ_min)) (r/(ρ̄ − r)) (r/ρ̄)^m,

where ψ_min = min_{|τ|=r} Re(ψ(τ)).

Proof. We have ρ̄ = ψ^{−1}(0) = N [20], [21]. Proceeding as in Corollary 4.2 we have

    ||x̂ − x̂_m|| ≤ Σ_{k=m}^∞ |f_k| ( ||Φ_k(A + αI)|| + ||Φ_k(T_m + αI)|| ) ≤ 4 M(t) (V(Ω) κ(Q)/π) (r/(ρ̄ − r)) (r/ρ̄)^m,

for r satisfying ϱ ≤ r < ρ̄. Here M(t) = exp(−t ψ_min). Finally,

    ||X − X_m|| ≤ 2 ∫_0^∞ ||x̂ − x̂_m|| dt ≤ (8 V(Ω) κ(Q)/π) ∫_0^∞ M(t) dt (r/(ρ̄ − r)) (r/ρ̄)^m,

from which the result follows.

Example 4.9. This example is taken from [20]; see also [21] for more details. In this case, A is the 225 x 225 matrix pde225 of the Matrix Market repository [23], and it is such that α_min ≈ 0.849. The spectrum of A + α_min I is included in the set Ω whose boundary is the bratwurst image of ψ as in Corollary 4.8, with λ = 1, N = 1.457, M = 0.6639 (accurate to the digits shown). The left plot of Figure 4.5 shows the spectrum of A + α_min I; the solid curve corresponds to the boundary of ψ(C(0,r)) with r = 0.98 ρ̄, enclosing the whole spectrum, while the dashed curve is the boundary of ψ(C(0,ρ̄)) with ρ̄ = N. The right plot of Figure 4.5 shows the convergence curve of the Krylov subspace solver, together with the asymptotic quantity (r/(ρ̄ − r)) (r/ρ̄)^m, m = 1, 2, ..., from Corollary 4.8.
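The bratwurst boundary curves in Figure 4.5 can be traced by simply evaluating the rational map on circles |τ| = r; a short sketch (ours, using the map as reconstructed in Corollary 4.8):

    import numpy as np

    def bratwurst_boundary(r, lam, N, M, npts=400):
        """Image of the circle |tau| = r under the rational map of Corollary 4.8."""
        tau = r * np.exp(2j * np.pi * np.arange(npts) / npts)
        return (tau - lam * N) * (tau - lam * M) / ((N - M) * tau + lam * (N * M - 1.0))

    # Example 4.9: lam = 1, N = 1.457, M = 0.6639; solid and dashed curves
    solid = bratwurst_boundary(0.98 * 1.457, 1.0, 1.457, 0.6639)
    dashed = bratwurst_boundary(1.457, 1.0, 1.457, 0.6639)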

Fig. 4.5. Example 4.9. Left plot: spectrum and bratwurst curves associated with disks of different radius (r = 0.98 ρ̄, solid; ρ̄ = N, dashed). Right plot: true error and the asymptotic factor of its estimate in Corollary 4.8 for the Krylov subspace solver of the Lyapunov equation.

We observe that the initial convergence phase is well captured by the estimate. As expected, the estimate cannot reproduce the superlinear convergence of the solver at later stages. We conclude this section by observing that other sets can be analyzed within our framework; indeed, more general mappings can be obtained and numerically approximated within the class of Schwarz-Christoffel conformal mappings [6].

5. Connections to linear system solvers and further considerations. The relation z^{−1} = ∫_0^∞ e^{−tz} dt can be used to show a close connection between our estimates and the solution of the linear system (A + α_min I)d = b in the Krylov subspace. Let V_m (T_m + α_min I)^{−1} e_1 be the Galerkin approximation to the linear system solution d in the Krylov subspace K_m(A,b) = K_m(A + α_min I, b). Then the system error can be written as

    (A + α_min I)^{−1} b − V_m (T_m + α_min I)^{−1} e_1 = ∫_0^∞ ( exp(−t(A + α_min I)) b − V_m exp(−t(T_m + α_min I)) e_1 ) dt.

Comparing the last integral with the error bound in (2.2) shows that the error norm ||(A + α_min I)^{−1} b − V_m (T_m + α_min I)^{−1} e_1|| may be bounded by exactly the same tools we have used for the Lyapunov error, and that the two initial integral bounds only differ by a factor of two. Indeed, the estimates of Proposition 3.1 (symmetric case) and of Corollary 4.2 (spectrum contained in an ellipse) employ the same asymptotic factors that characterize the convergence rate of methods such as the Conjugate Gradients in the symmetric case, and FOM or GMRES in the nonsymmetric case, when applied to the system (A + α_min I)d = b; see, e.g., [27]. Therefore, we have shown that the convergence of a Galerkin procedure in the Krylov subspace for solving (1.1) has the same convergence factor as a corresponding Krylov subspace method for the shifted (single vector) linear system.
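The correspondence with the shifted linear system is also easy to observe experimentally: with the same basis V_m, the Lyapunov error and the Galerkin (FOM) error for (A + α_min I)d = b decay with the same factor. A sketch under the same assumptions and helpers as in the previous sections:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n = 400
    lam = np.linspace(1.0, 100.0, n)
    A = np.diag(lam)
    b = np.ones(n) / np.sqrt(n)
    alpha = lam[0]                                   # alpha_min = lambda_min for symmetric A
    X = solve_continuous_lyapunov(A, np.outer(b, b))
    d = np.linalg.solve(A + alpha * np.eye(n), b)    # shifted system solution

    for m in range(5, 31, 5):
        Vm, Ym = lyap_galerkin(A, b, m)              # sketch from section 1
        Tm = Vm.T @ A @ Vm
        e1 = np.zeros(m); e1[0] = 1.0
        dm = Vm @ np.linalg.solve(Tm + alpha * np.eye(m), e1)   # Galerkin (FOM) iterate
        print(m, np.linalg.norm(X - Vm @ Ym @ Vm.T, 2),
              np.linalg.norm(d - dm))                # the two errors decay with parallel slopes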

As a natural consequence of the discussion above, the previous results can be generalized to the case when b is replaced by a matrix B with more than one column. A Galerkin approximation may be obtained by first generating the block Krylov subspace K_m(A,B) = span{B, AB, ..., A^{m−1}B} and then proceeding as described in section 2; see, e.g., [16]. Let B = [b_1, ..., b_s]. Setting Z = exp(−tA)B and letting Z_m ∈ K_m(A,B) be the associated Krylov approximation to the exponential, we can bound ||Z Z^T − Z_m Z_m^T|| for instance as

    ||Z Z^T − Z_m Z_m^T|| ≤ Σ_{k=1}^s || z^{(k)} (z^{(k)})^T − z_m^{(k)} (z_m^{(k)})^T ||,

where Z = [z^{(1)}, ..., z^{(s)}] and Z_m = [z_m^{(1)}, ..., z_m^{(s)}]. The results of the previous sections can thus be applied to each term in the sum. Refined bounds may possibly be obtained by using the theory of matrix polynomials, but this is beyond the scope of this work; see, e.g., [7]. We also observe that our convergence results can be generalized to the case of accelerated methods, such as that described in [28], by using the theoretical matrix function framework described in [8].

Acknowledgments. We are deeply indebted to Leonid Knizhnerman for several insightful comments which helped improve a previous version of this manuscript.

REFERENCES

[1] A. C. Antoulas. Approximation of Large-Scale Dynamical Systems. Advances in Design and Control. SIAM, Philadelphia, 2005.
[2] B. Beckermann. Image numérique, GMRES et polynômes de Faber. C. R. Acad. Sci. Paris, Ser. I, 340:855-860, 2005.
[3] P. Benner. Control Theory, chapter 57 of Handbook of Linear Algebra. Chapman & Hall/CRC, 2006.
[4] J. P. Coleman and N. J. Myers. The Faber polynomials for annular sectors. Math. Comp., 64(209):181-203, 1995.
[5] M. J. Corless and A. E. Frazho. Linear Systems and Control - An Operator Perspective. Pure and Applied Mathematics. Marcel Dekker, New York - Basel, 2003.
[6] T. A. Driscoll and L. N. Trefethen. Schwarz-Christoffel Mapping. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2002.
[7] V. Druskin and L. Knizhnerman. Two polynomial methods of calculating functions of symmetric matrices. U.S.S.R. Comput. Math. Math. Phys., 29:112-121, 1989.
[8] V. Druskin and L. Knizhnerman. Extended Krylov subspaces: approximation of the matrix square root and related functions. SIAM J. Matrix Anal. Appl., 19(3):755-771, 1998.
[9] M. Eiermann. On semiiterative methods generated by Faber polynomials. Numer. Math., 56:139-156, 1989.
[10] M. Eiermann. Fields of values and iterative methods. Lin. Alg. Appl., 180:167-197, 1993.
[11] S. W. Ellacott. Computation of Faber series with application to numerical polynomial approximation in the complex plane. Mathematics of Computation, 40(162):575-587, 1983.
[12] G. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[13] I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series, and Products. Corrected and enlarged edition. Academic Press, San Diego, California, 1980.
[14] N. J. Higham. The Matrix Computation Toolbox.
[15] M. Hochbruck and C. Lubich. On Krylov subspace approximations to the matrix exponential operator. SIAM J. Numer. Anal., 34(5):1911-1925, 1997.
[16] I. M. Jaimoukha and E. M. Kasenally. Krylov subspace methods for solving large Lyapunov equations. SIAM J. Numer. Anal., 31(1):227-251, Feb. 1994.
[17] K. Jbilou and A. J. Riquet. Projection methods for large Lyapunov matrix equations. Linear Algebra and its Applications, 415(2-3):344-358, 2006.

[18] L. Knizhnerman. Calculus of functions of unsymmetric matrices using Arnoldi's method. Comput. Math. Math. Phys., 31:1-9, 1991.
[19] L. Knizhnerman. Private communication, 2007.
[20] T. Koch and J. Liesen. The conformal bratwurst maps and associated Faber polynomials. Numer. Math., 86:173-191, 2000.
[21] J. Liesen. Construction and analysis of polynomial iterative methods for non-Hermitian systems of linear equations. PhD thesis, Fakultät für Mathematik der Universität Bielefeld, Nov. 1998.
[22] T. A. Manteuffel. The Tchebychev iteration for nonsymmetric linear systems. Numer. Math., 28:307-327, 1977.
[23] Matrix Market. A visual repository of test data for use in comparative studies of algorithms for numerical linear algebra. Mathematical and Computational Sciences Division, National Institute of Standards and Technology. Online at http://math.nist.gov/MatrixMarket.
[24] O. Nevanlinna. Convergence of Iterations for Linear Equations. Birkhäuser, Basel, 1993.
[25] M. Robbé and M. Sadkane. A convergence analysis of GMRES and FOM for Sylvester equations. Numerical Algorithms, 30:71-89, 2002.
[26] Y. Saad. Numerical solution of large Lyapunov equations. In M. A. Kaashoek, J. H. van Schuppen, and A. C. Ran, editors, Signal Processing, Scattering, Operator Theory, and Numerical Methods. Proceedings of the International Symposium MTNS-89, vol. III, pages 503-511, Boston, 1990. Birkhäuser.
[27] Y. Saad. Iterative Methods for Sparse Linear Systems. The PWS Publishing Company, 1996.
[28] V. Simoncini. A new iterative method for solving large-scale Lyapunov matrix equations. SIAM J. Sci. Comput., 29(3):1268-1288, 2007.
[29] V. Simoncini and D. B. Szyld. On the occurrence of superlinear convergence of exact and inexact Krylov subspace methods. SIAM Review, 47:247-272, 2005.
[30] V. I. Smirnov and N. A. Lebedev. Functions of a Complex Variable: Constructive Theory. M.I.T. Press, Cambridge, Massachusetts, 1968.
[31] P. K. Suetin. Fundamental properties of Faber polynomials. Russ. Math. Surv., 19(4):121-149, 1964.


More information

ON BEST APPROXIMATIONS OF POLYNOMIALS IN MATRICES IN THE MATRIX 2-NORM

ON BEST APPROXIMATIONS OF POLYNOMIALS IN MATRICES IN THE MATRIX 2-NORM ON BEST APPROXIMATIONS OF POLYNOMIALS IN MATRICES IN THE MATRIX 2-NORM JÖRG LIESEN AND PETR TICHÝ Abstract. We show that certain matrix approximation problems in the matrix 2-norm have uniquely defined

More information

Preconditioning for Nonsymmetry and Time-dependence

Preconditioning for Nonsymmetry and Time-dependence Preconditioning for Nonsymmetry and Time-dependence Andy Wathen Oxford University, UK joint work with Jen Pestana and Elle McDonald Jeju, Korea, 2015 p.1/24 Iterative methods For self-adjoint problems/symmetric

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Block Bidiagonal Decomposition and Least Squares Problems

Block Bidiagonal Decomposition and Least Squares Problems Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS

RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS RESIDUAL SMOOTHING AND PEAK/PLATEAU BEHAVIOR IN KRYLOV SUBSPACE METHODS HOMER F. WALKER Abstract. Recent results on residual smoothing are reviewed, and it is observed that certain of these are equivalent

More information

JURJEN DUINTJER TEBBENS

JURJEN DUINTJER TEBBENS Proceedings of ALGORITMY 005 pp. 40 49 GMRES ACCELERATION ANALYSIS FOR A CONVECTION DIFFUSION MODEL PROBLEM JURJEN DUINTJER TEBBENS Abstract. When we apply the GMRES method [4] to linear systems arising

More information

Bergman shift operators for Jordan domains

Bergman shift operators for Jordan domains [ 1 ] University of Cyprus Bergman shift operators for Jordan domains Nikos Stylianopoulos University of Cyprus Num An 2012 Fifth Conference in Numerical Analysis University of Ioannina September 2012

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Iterative solvers for saddle point algebraic linear systems: tools of the trade. V. Simoncini

Iterative solvers for saddle point algebraic linear systems: tools of the trade. V. Simoncini Iterative solvers for saddle point algebraic linear systems: tools of the trade V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The problem

More information