© 2004 Society for Industrial and Applied Mathematics

SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 2, pp. 377–389

SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS

VALERIA SIMONCINI AND MICHELE BENZI

Abstract. In this paper we derive bounds on the eigenvalues of the preconditioned matrix that arises in the solution of saddle point problems when the Hermitian and skew-Hermitian splitting preconditioner is employed. We also give sufficient conditions for the eigenvalues to be real. A few numerical experiments are used to illustrate the quality of the bounds.

Key words. saddle point problems, iterative methods, preconditioning, eigenvalues

AMS subject classifications. 65F10, 65N22, 65F50, 15A06

DOI. 10.1137/S

1. Introduction. We are given the saddle point problem

(1.1)    [ A   B^T ] [u]   [f]
         [ -B   0  ] [v] = [g],      or      𝒜x = b,

with A ∈ R^{n×n} symmetric positive semidefinite and B ∈ R^{m×n} with rank(B) = m ≤ n. We assume that the null spaces of A and B have trivial intersection, which implies that 𝒜 is nonsingular. We set

         H = [ A  0 ]        S = [  0   B^T ]
             [ 0  0 ],           [ -B    0  ],

so that 𝒜 = H + S. We consider the preconditioner P = (1/(2α))(H + αI)(S + αI), with real α > 0, and we study the eigenvalue problem associated with the preconditioned matrix, that is,

(1.2)    (H + S)x = (η/(2α)) (H + αI)(S + αI) x.

This preconditioner has been studied in a somewhat more general setting in [4], motivated by the paper [1]. Letting D(1,1) := {z ∈ C : |z − 1| < 1}, it was shown in [4] that the spectrum of the preconditioned matrix satisfies σ(P^{-1}𝒜) ⊂ D̄(1,1)\{0}. Furthermore, σ(P^{-1}𝒜) ⊂ D(1,1) if A is positive definite. Some rather special cases (including the case A = I) have been studied in [2, 3]. The purpose of this paper is to provide more refined inclusion regions for the spectrum of P^{-1}𝒜 for saddle point problems of the form (1.1). Most of our bounds are in terms of the extreme eigenvalues and singular values of the blocks A and B, respectively.
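The inclusion region quoted from [4] can be checked directly in a few lines. The following sketch is an illustration added here, not part of the paper; the matrix sizes, the random data, and the value of α are arbitrary choices. It assembles 𝒜, H, S, and P as defined above and verifies that the eigenvalues of P^{-1}𝒜 are nonzero and lie in the disk D(1,1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 0.5                       # arbitrary sizes and alpha > 0

M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                       # symmetric positive definite
B = rng.standard_normal((m, n))               # full row rank, m <= n
N = n + m

Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])   # the matrix of (1.1)
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = Amat - H                                  # skew-symmetric part
I = np.eye(N)

# HSS preconditioner P = (1/(2*alpha)) (H + alpha I)(S + alpha I)
P = (H + alpha * I) @ (S + alpha * I) / (2 * alpha)
eta = np.linalg.eigvals(np.linalg.solve(P, Amat))

assert np.all(np.abs(eta - 1) <= 1 + 1e-8)    # spectrum in the closed disk D(1,1)
assert np.all(np.abs(eta) > 1e-10)            # 0 is excluded
print(np.max(np.abs(eta - 1)))
```

Because A is taken positive definite here, the eigenvalues stay strictly inside the disk; with A only semidefinite the theory allows them to touch the boundary.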
Although these quantities may be difficult to estimate, our results can be used to explain why small values of α usually give the best results in terms of convergence rates. For instance, we show

---
Received by the editors September 7, 2003; accepted for publication (in revised form) by D. Szyld December 9, 2003; published electronically November 7, 2004.
Dipartimento di Matematica, Università di Bologna, P.zza di Porta S. Donato 5, I-40127 Bologna, Italy, and IMATI-CNR, Pavia, Italy (valeria@dm.unibo.it).
Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322 (benzi@mathcs.emory.edu). The work of this author was supported in part by National Science Foundation grant DMS.

378      VALERIA SIMONCINI AND MICHELE BENZI

that sufficiently small values of α always result in preconditioned matrices having a real spectrum consisting of two tight clusters.

Throughout the paper, we write M^T for the transpose of a matrix M and u* for the conjugate transpose of a complex vector u. Also, A > 0 (A ≥ 0) means that the matrix A is symmetric positive definite (respectively, semidefinite).

2. Spectral bounds. In this section we provide bounds for the eigenvalues of the preconditioned matrix. In the following we shall use the fact that A is symmetric positive semidefinite, so that

(2.1)    0 ≤ λ_n ≤ (u*Au)/(u*u) ≤ λ_1,      u ∈ C^n, u ≠ 0,

where λ_n, λ_1 are the smallest and largest eigenvalues of A, respectively. Moreover, we denote by σ_1 ≥ ... ≥ σ_m the decreasingly ordered singular values of B.

The spectrum of the preconditioned matrix can be more easily analyzed by means of a particular spectral mapping, which we introduce next. We shall then derive estimates for the location of the eigenvalues of (1.2). We first observe that

         (H + αI)(S + αI) = HS + α(H + S) + α²I.

By collecting the terms with H + S we can write the eigenvalue problem (1.2) as

(2.2)    (1 − η/2)(H + S)x = (η/2)(αI + (1/α)HS)x.

If η = 0, then (H + S)x = 0 and hence x = 0; therefore η ≠ 0. For η ≠ 0, 2 we set

(2.3)    θ := η/(2 − η),      from which      η = 2θ/(1 + θ).

Therefore, (2.2) can be written as

         (H + S)x = θ (αI + (1/α)HS) x.

By explicitly writing the term HS, the eigenproblem above becomes

         [ A   B^T ] x = θ α [ I   (1/α²)AB^T ] x,    or    𝒜x = θGx,    where    G := α [ I   (1/α²)AB^T ]
         [ -B   0  ]         [ 0        I     ]                                          [ 0        I     ].

The equivalent eigenproblem G^{-1}𝒜x = θx can be explicitly written as

(2.4)    (1/α) [ A + (1/α²)AB^TB   B^T ] x = θx.
               [        -B          0  ]

Therefore, the two eigenproblems (1.2) and (2.4) have the same eigenvectors, while the eigenvalues are related by (2.3). Our spectral analysis aims at describing the behavior of the spectrum of G^{-1}𝒜, from which considerations on the spectrum of (1.2) can be derived. In the following, Im(θ) and Re(θ) denote the imaginary and real part of θ, respectively.

Lemma 2.1. Assume A is symmetric and positive semidefinite. Let K := I + (1/α²)B^TB. For each eigenpair (η, [u; v]) of (1.2), η can be written as η = 2θ/(1 + θ), where θ ≠ 0 satisfies the following:
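The spectral mapping (2.3) can be verified numerically. The sketch below is our illustration, with arbitrary random data: it checks that the eigenvalues η of P^{-1}𝒜 and the eigenvalues θ of the pencil (H + S)x = θ(αI + (1/α)HS)x, i.e., of G^{-1}𝒜, match through θ = η/(2 − η):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, alpha = 7, 3, 1.0                       # arbitrary data

M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
B = rng.standard_normal((m, n))
N = n + m

Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = Amat - H
I = np.eye(N)

P = (H + alpha * I) @ (S + alpha * I) / (2 * alpha)
eta = np.linalg.eigvals(np.linalg.solve(P, Amat))   # eigenvalues of (1.2)

G = alpha * I + (H @ S) / alpha               # G^{-1} * Amat is exactly (2.4)
theta_pencil = np.linalg.eigvals(np.linalg.solve(G, Amat))

theta = eta / (2 - eta)                       # the mapping (2.3)
rel = [np.min(np.abs(t - theta_pencil)) / (1 + abs(t)) for t in theta]
assert max(rel) < 1e-6                        # same spectrum, up to roundoff
print(max(rel))
```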

PRECONDITIONING SADDLE POINT PROBLEMS      379

1. If Im(θ) ≠ 0, then

(2.5)    Re(θ) = u*KAKu/(2α u*Ku),      |θ|² = u*KB^TBu/(α² u*Ku).

2. If Im(θ) = 0, then

         min{ λ_n/α , ασ_m²/(λ_1(α² + σ_m²)) } ≤ θ ≤ ρ,      where ρ := (λ_1/α)(1 + σ_1²/α²).

Proof. The first statement of the lemma was already shown by means of the mapping in (2.3). We are thus left with proving the estimates for θ. First of all, note that θ ≠ 0, or else η = 0, which is not possible since P^{-1}𝒜 is nonsingular. Let x = [u; v] ≠ 0 be the complex eigenvector associated with θ. We explicitly observe that K = I + (1/α²)B^TB is symmetric positive definite and that KB^TB is symmetric. We shall make use of the following properties of K:

(2.6)    λ_max(K) = 1 + σ_1²/α²,      λ_min(K) ≥ 1,

where the inequality becomes an equality whenever B is not square. In addition,

(2.7)    λ_n ≤ u*KAKu/(u*K²u) ≤ λ_1,

and, using KB^TB = α²(K² − K),

(2.8)    0 ≤ (1/α²) u*KB^TBu/(u*K²u) = 1 − u*Ku/(u*K²u) < 1.

The two matrix equations in (2.4) are given by

(2.9)    (1/α)(A + (1/α²)AB^TB)u + (1/α)B^Tv = θu,
(2.10)   −(1/α)Bu = θv.

It must be u ≠ 0; otherwise (2.10) would imply θ = 0 or v = 0, neither of which can be satisfied. For u ≠ 0 and v = 0, from (2.9), θ must satisfy (1/α)AKu = θu and Bu = 0. Since K is symmetric and positive definite, we can write

         (1/α) K^{1/2}AK^{1/2} û = θû,      û = K^{1/2}u,

from which it follows that θ is real and satisfies

         0 < θ ≤ (1/α)λ_1 λ_max(K) = (1/α)λ_1 λ_max(I + (1/α²)B^TB) = (1/α)λ_1(1 + σ_1²/α²) = ρ.

We now assume u ≠ 0 ≠ v. Using (2.10), we write v = −(1/(αθ))Bu, which, substituted into (2.9), yields

         θA(I + (1/α²)B^TB)u − (1/α)B^TBu = αθ²u.

By multiplying this equation from the left by u*K we obtain

(2.11)   θ u*KAKu − (1/α) u*KB^TBu = αθ² u*Ku.

380      VALERIA SIMONCINI AND MICHELE BENZI

Let θ = θ₁ + iθ₂. For A symmetric, the quadratic equation (2.11) has real coefficients, so that its roots are given by

(2.12)   θ± = u*KAKu/(2α u*Ku) ± sqrt( (u*KAKu/(2α u*Ku))² − u*KB^TBu/(α² u*Ku) ).

Eigenvalues with nonzero imaginary part arise if the discriminant is negative.

Case θ₂ ≠ 0. It must be

(2.13)   (u*KAKu)² − 4 u*Ku u*KB^TBu < 0,

and from (2.12) we get Re(θ) = θ₁ = u*KAKu/(2α u*Ku). By substituting θ₁ in (2.11), we obtain |θ|² = θ₁² + θ₂² = u*KB^TBu/(α² u*Ku).

Case θ₂ = 0. In this case, from (2.12) it follows that θ = θ₁ > 0. For Bu = 0, from (2.10) it follows that v = 0 (θ ≠ 0), and the reasoning for v = 0 applies. We now assume that Bu ≠ 0. We have

         −αθ² u*Ku + θ u*KAKu = (1/α) u*KB^TBu > 0,

where the last inequality follows from (2.8). Since θ > 0, the inequality −αθ² u*Ku + θ u*KAKu > 0 implies u*KAKu − αθ u*Ku > 0, hence θ < (1/α)λ_1 λ_max(K) = ρ.

To prove the lower bound on θ, write the equation (2.9) as ((1/α)AK − θI)u = −(1/α)B^Tv. If θ is an eigenvalue of (1/α)AK, then θ ≥ (1/α)λ_n λ_min(K) ≥ λ_n/α. Otherwise, (1/α)AK − θI is invertible, so that u = −((1/α)AK − θI)^{-1}(1/α)B^Tv, which, substituted into (2.10), yields

(2.14)   (1/α²) B((1/α)AK − θI)^{-1} B^Tv = θv   ⟺   (1/α) BK^{-1}(A − αθK^{-1})^{-1} B^Tv = θv.

Let B^T = [W₁, W₂][Σ; 0]Q^T be the singular value decomposition of B^T, and note that

         K = [W₁, W₂] diag(I + (1/α²)Σ², I) [W₁, W₂]^T,
         BK^{-1} = QΣ(I + (1/α²)Σ²)^{-1}W₁^T = QD^{-1}ΣW₁^T,      where D := I + (1/α²)Σ².

Problem (2.14) can be thus written as

         (1/α) QD^{-1}ΣW₁^T (A − αθK^{-1})^{-1} W₁ΣQ^T v = θv,

or, equivalently,

         (1/α) ΣW₁^T(A − αθK^{-1})^{-1}W₁Σ w = θDw,      w = Q^Tv,

(2.15)   (1/α) W₁^T(A − αθK^{-1})^{-1}W₁ ŵ = θ Σ^{-1}DΣ^{-1} ŵ,      ŵ = Σw.

We multiply both sides from the left by ŵ* and we notice that the left-hand side is positive for any ŵ ≠ 0. If θ ≥ (1/α)λ_min(AK) ≥ λ_n/α, then λ_n/α is the sought-after lower bound. Assume now that θ < (1/α)λ_min(AK). Then, the matrix A − αθK^{-1} is symmetric and positive definite. Therefore,

(2.16)   ŵ* W₁^T(A − αθK^{-1})^{-1}W₁ ŵ ≥ λ_min((A − αθK^{-1})^{-1}) ‖W₁ŵ‖² = λ_min((A − αθK^{-1})^{-1}) ‖ŵ‖²,

PRECONDITIONING SADDLE POINT PROBLEMS      381

and we have

         λ_min((A − αθK^{-1})^{-1}) = 1/λ_max(A − αθK^{-1}) ≥ 1/(λ_1 − αθ λ_min(K^{-1})) = 1/(λ_1 − αθ/λ_max(K)) = 1/(λ_1 − αθ/τ),

where τ := λ_max(K) = 1 + σ_1²/α². This, together with (2.16), provides a lower bound for the left-hand side of (2.15). Using

         θ ŵ*Σ^{-1}DΣ^{-1}ŵ = θ ŵ*(Σ^{-2} + (1/α²)I)ŵ ≤ θ (σ_m^{-2} + α^{-2}) ŵ*ŵ

and recalling that λ_1 − αθ/τ > 0, from (2.15) we obtain

         1/(α(λ_1 − αθ/τ)) ≤ θ (σ_m² + α²)/(α²σ_m²),      i.e.,      ασ_m² ≤ θ (σ_m² + α²)(λ_1 − αθ/τ).

Since θ > 0, we get ασ_m²/((σ_m² + α²)λ_1) ≤ θ, and the final bound follows.

The quantities in part 1 of the lemma can also be bounded with techniques similar to those for the real case. However, in the next theorem, we derive sharper bounds for complex η than those one would obtain by using estimates for complex θ.

Theorem 2.2. Under the hypotheses and notation of Lemma 2.1, the eigenvalues of problem (1.2) are such that the following hold:
1. If Im(η) ≠ 0, then

(2.17)   (2α + λ_n)λ_n/(6α²) < Re(η) < 4α/(2α + λ_n),

(2.18)   λ_n²/(3α²) < |η|² ≤ 4α/(α + λ_n).

2. If Im(η) = 0, then η > 0 and

(2.19)   min{ 2λ_n/(α + λ_n) , 2ασ_m²/(ϱ + ασ_m²) } ≤ η ≤ 2ρ/(1 + ρ) < 2,

where ϱ := λ_1(α² + σ_m²) and ρ := (λ_1/α)(1 + σ_1²/α²).

Proof. We have that η is real if and only if θ is real. Assume Im(η) ≠ 0 and write θ = θ₁ + iθ₂. Recall that τ = 1 + σ_1²/α². Using the definition of θ in (2.3) we obtain

         Re(η) = 2(θ₁ + |θ|²)/((1 + θ₁)² + θ₂²),      that is,      ((1 + θ₁)² + θ₂²) Re(η) = 2(θ₁ + |θ|²).

We substitute the quantities in (2.5) to get

         (u*Ku + (1/α) u*KAKu + (1/α²) u*KB^TBu) Re(η) = (1/α) u*KAKu + (2/α²) u*KB^TBu.

Note that u*Ku + (1/α²) u*KB^TBu = u*K²u. We divide by u*K²u > 0 to obtain

         (1 + (1/α) u*KAKu/u*K²u) Re(η) = (1/α) u*KAKu/u*K²u + (2/α²) u*KB^TBu/u*K²u.

382      VALERIA SIMONCINI AND MICHELE BENZI

We recall that for Im(η) ≠ 0 relation (2.13) holds, which implies, by (2.6) and (2.8),

(2.20)   (u*KAKu/u*K²u)² < 4 (u*Ku/u*K²u)(u*KB^TBu/u*K²u) ≤ 4α²

and

(2.21)   (1/α²) u*KB^TBu/u*K²u > (1/(4α²)) (u*KAKu/u*K²u)² (u*K²u/u*Ku) ≥ λ_n²/(4α²).

Therefore, by applying (2.7), (2.20), and (2.8), we obtain

         (1 + λ_n/(2α)) Re(η) < 2      ⟹      Re(η) < 4α/(2α + λ_n).

By once more applying (2.20), (2.7), and (2.21), we also get

         3 Re(η) > λ_n/α + λ_n²/(2α²)      ⟹      Re(η) > (2α + λ_n)λ_n/(6α²),

which provide the upper and lower bounds for Re(η). To complete the proof of the first statement, we write |η|² using (2.3) to obtain

         ((1 + θ₁)² + θ₂²) |η|² = 4|θ|².

Substituting (2.5) as before and dividing by u*K²u, it yields

         (u*Ku/u*K²u + (1/α) u*KAKu/u*K²u) |η|² = (4 − |η|²) (1/α²) u*KB^TBu/u*K²u.

Note that 4 − |η|² > 0. As before, we bound |η|² from both sides, keeping in mind (2.6), (2.7), (2.8), (2.21), and (2.20), to get

         (1 + λ_n/α) |η|² < 4      ⟹      |η|² ≤ 4α/(α + λ_n),

and

         3 |η|² > λ_n²/α²      ⟹      |η|² > λ_n²/(3α²).

This completes the proof of the first part. Assume now that η is real. Then, from the corresponding bound for real θ in Lemma 2.1 and the fact that η = φ(θ) = 2θ/(1 + θ) is a strictly increasing function of its argument, we obtain the desired bounds on η.

A few comments are in order. We start by noticing that, in general, real eigenvalues η may well cover the whole open interval (0, 2), depending on the parameter α. Our numerical experiments show that these bounds are indeed sharp for several values of α (cf. section 4). Although much less sharp in general, we also found the bounds for eigenvalues with nonzero imaginary part of interest. The lower estimate for |η| indicates that nonreal eigenvalues are not close to the origin, especially for small α. In addition, they are located in a section of an annulus as in Figure 2.1. We will see in Theorem 3.1

PRECONDITIONING SADDLE POINT PROBLEMS      383

Fig. 2.1. Inclusion region for the typical spectrum of the preconditioned matrix.

that complex eigenvalues cannot arise for values of α smaller than one half the smallest eigenvalue of A.

Remark 2.1. We note that when A is positive definite, selecting α = λ_n provides constant bounds for the cluster of eigenvalues with nonzero imaginary part. Indeed, substituting α = λ_n in (2.17) and (2.18) we obtain 1/2 < Re(η) < 4/3 and 1/3 < |η|² ≤ 2. For α ≈ λ_n we expect to obtain similar bounds. This complex clustering seems to be relevant in the performance of the preconditioned iteration; cf. section 4.

3. Conditions for a real spectrum and clustering properties. We next show that, under suitable conditions, the spectrum of the nonsymmetric preconditioned matrix P^{-1}𝒜 is real. We stress the fact that a real spectrum is a welcome property, because it enables the efficient use of short-recurrence Krylov subspace methods such as Bi-CGSTAB; see, e.g., [11, p. 39].

Theorem 3.1. Assume the hypotheses and notation of Lemma 2.1 hold, and assume in addition that A is symmetric positive definite. If 2α ≤ λ_n, then all eigenvalues η are real.

Proof. We prove our assertion for the eigenvalues θ, from which the statement for η will follow. Let x = [u; v] be an eigenvector associated with θ. For u ≠ 0, v = 0 we already showed that the spectrum is real, while u = 0 implies v = 0, a contradiction. We now assume u ≠ 0 ≠ v. The eigenvalues θ of (2.4) are the roots of equation (2.11), which can be expressed as in (2.12). These are all real if the discriminant is nonnegative. Equivalently, θ ∈ R if

         (u*KAKu)² − 4 u*Ku u*KB^TBu ≥ 0      for all u ≠ 0.

Since u*K²u > 0 for u ≠ 0, we write the problem above as θ ∈ R if

         (u*KAKu/u*K²u)² − 4 (u*Ku/u*K²u)(u*KB^TBu/u*K²u) ≥ 0      for all u ≠ 0.

384      VALERIA SIMONCINI AND MICHELE BENZI

We have u*KAKu/u*K²u ≥ λ_n and u*Ku/u*K²u ≤ 1/λ_min(K) ≤ 1; see (2.6). Therefore, using (2.8), if 2α ≤ λ_n we have

(3.1)    (u*KAKu/u*K²u)² ≥ λ_n² ≥ 4α² ≥ 4 (u*Ku/u*K²u)(u*KB^TBu/u*K²u)      for all u ≠ 0.

The discriminant is nonnegative, therefore all roots of (2.11) are real, and so are the eigenvalues θ.

The smallest eigenvalue of A can be increased by suitable scalings, thus enlarging the interval of values of α leading to a real spectrum. Note, however, that multiplying (1.1) by a positive constant ω is equivalent to applying the Hermitian/skew-Hermitian splitting preconditioner with parameter α̂ := α/ω to the original, unscaled system.

Under additional assumptions on the spectrum of the block matrices, it is possible to provide a less strict condition on α. This is stated in the following corollary.

Corollary 3.2. Under the hypotheses and notation of Theorem 3.1, assume that 4σ_1² − λ_n² > 0. If α ≤ λ_nσ_1/sqrt(4σ_1² − λ_n²), then all eigenvalues η are real.

Proof. Using (2.8), we can write

         (1/α²) u*KB^TBu/u*K²u ≤ σ_1²/(α² + σ_1²).

Therefore, if λ_n² ≥ 4α²σ_1²/(α² + σ_1²), the bound equivalent to (3.1) follows. Moreover, we note that under the assumption that 4σ_1² − λ_n² > 0,

         λ_n² ≥ 4α²σ_1²/(α² + σ_1²)      ⟺      α ≤ λ_nσ_1/sqrt(4σ_1² − λ_n²).

It is interesting to observe that if σ_1 = λ_1, the condition 4σ_1² − λ_n² > 0 corresponds to the inequality λ_1 > λ_n/2, which is easily satisfied, since usually λ_n is small and λ_1 is much bigger than λ_n. Note that such a setting is very common in the Stokes problem, where A is a discretization of a (vector) Laplacian and BB^T can also be regarded as a discrete Laplacian.

The following result shows that the eigenvalues form two tight clusters as α → 0. This is an important property from the point of view of convergence of preconditioned Krylov subspace methods. This result extends and sharpens the clustering result obtained in [3], using different tools, for the special case of Poisson's equation in saddle point form.

Proposition 3.3. Assume A is symmetric and positive definite. For sufficiently small α > 0, the eigenvalues of P^{-1}𝒜 cluster near zero and near two.
More precisely, for small α > 0, η ∈ (0, ε₁) ∪ (2 − ε₂, 2), with ε₁, ε₂ > 0 and ε₁, ε₂ → 0 as α → 0.

Proof. We assume α is small, and in particular 2α ≤ λ_n; therefore all eigenvalues are real. Let [u; v] be an eigenvector of (2.4) and let θ± be the roots of equation (2.11). These are given by (2.12). Collecting u*K²u and dividing and multiplying (2.12) by u*K²u > 0, we obtain

         θ± = (u*K²u/(2α u*Ku)) [ u*KAKu/u*K²u ± sqrt( (u*KAKu/u*K²u)² − 4 (u*Ku/u*K²u)(u*KB^TBu/u*K²u) ) ] =: (u*K²u/(2α u*Ku)) ν±.

PRECONDITIONING SADDLE POINT PROBLEMS      385

We recall the bounds in (2.7) and (2.8), while 1 ≤ u*K²u/u*Ku ≤ 1 + σ_1²/α² for any u ≠ 0, with α²(1 + σ_1²/α²) = α² + σ_1² = O(1) as α → 0. Moreover, 0 ≤ (u*Ku/u*K²u)(u*KB^TBu/u*K²u) ≤ α², so that

         (u*Ku/u*K²u)(u*KB^TBu/u*K²u) → 0      as α → 0.

We thus have ν₊ → 2 u*KAKu/u*K²u as α → 0. Since u*KAKu/u*K²u is bounded independently of α, we also obtain

         ν₋ = O( (u*Ku/u*K²u)(u*KB^TBu/u*K²u) )      for α → 0.

Therefore, θ₊ = O( u*K²u/(2α u*Ku) ) → ∞ as α → 0, whereas θ₋ = O( (1/(2α)) u*KB^TBu/u*K²u ) = O(α) as α → 0. It thus follows that

         η₊ = 2θ₊/(1 + θ₊) → 2      and      η₋ = 2θ₋/(1 + θ₋) → 0      for α → 0.

We mention that the dependency of the optimal value of α on the mesh size h has been discussed, using Fourier analysis, in [3] for the case of Poisson's equation in first order system form, and in [5] for the case of the Stokes problem. In the first case one can choose α so as to have h-independent convergence, whereas in the second case there is a moderate growth in the number of iterations as h → 0.

It is important to remark that the occurrence of a gap in the spectrum for small α can be deduced from known results for overdamped systems. Indeed, equation (2.11) stems from the quadratic eigenvalue problem

         αθ² Ku − θ KAKu + (1/α) KB^TBu = 0.

The eigenproblem above has 2n eigenvalues, n − m of which are zero, corresponding to the dimension of the null space of KB^TB. The remaining n + m eigenvalues coincide with the eigenvalues of our problem (2.4). By introducing θ̂ = −θ, we obtain the quadratic symmetric eigenproblem (see [6])

         αθ̂² Ku + θ̂ KAKu + (1/α) KB^TBu = 0,      K > 0, KAK > 0, KB^TB ≥ 0.

It can be shown (see, e.g., [6, Theorem 3.1]) that if the discriminant is positive, that is, if (u*KAKu)² − 4 u*Ku u*KB^TBu > 0 for any u ≠ 0, then all eigenvalues θ̂ are real and nonpositive. Moreover, the spectrum is split in two parts, each of which contains n eigenvalues. In our context, and in light of Proposition 3.3, the result above implies that m eigenvalues η will cluster towards zero, while n eigenvalues η will cluster around two, for α sufficiently small.

4. Numerical experiments.
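Theorem 3.1 and Proposition 3.3 can both be observed numerically. The sketch below is an added illustration with arbitrary random data; the helper `hss_eigs` is defined here and is not part of the paper. It checks that 2α ≤ λ_n yields a real spectrum, and that for small α the m smallest eigenvalues of P^{-1}𝒜 approach 0 while the remaining n approach 2:

```python
import numpy as np

def hss_eigs(A, B, alpha):
    """Eigenvalues of the HSS-preconditioned saddle point matrix P^{-1}(H+S)."""
    n, m = A.shape[0], B.shape[0]
    I = np.eye(n + m)
    Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])
    H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
    P = (H + alpha * I) @ ((Amat - H) + alpha * I) / (2 * alpha)
    return np.linalg.eigvals(np.linalg.solve(P, Amat))

rng = np.random.default_rng(2)
n, m = 10, 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                       # ensures lambda_n >= 1
B = rng.standard_normal((m, n))
lam_n = np.linalg.eigvalsh(A)[0]

# Theorem 3.1: 2*alpha <= lambda_n forces a real spectrum
assert np.max(np.abs(hss_eigs(A, B, 0.45 * lam_n).imag)) < 1e-6

# Proposition 3.3: for small alpha, m eigenvalues near 0 and n near 2
eta = np.sort(hss_eigs(A, B, 1e-3).real)
assert np.all(eta[:m] < 0.5) and np.all(eta[m:] > 1.5)
print(eta[m - 1], eta[m])                     # the gap between the two clusters
```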
In this section we present the results of a few numerical tests aimed at assessing the tightness of our bounds. The first problem we consider is a saddle point system arising from a finite element discretization of a model Stokes problem (leaky-lid driven cavity). This problem was generated using the IFISS software written by Howard Elman, Alison Ramage, and David Silvester [9]. Here n = 578 and m = 254. Note that the B matrices (discrete divergence operators) generated by this software are rank deficient; we obtained a full rank matrix by dropping the first two rows of B.

---
Note that in the statement of Theorem 3.1 in [6], the matrix KB^TB is required to be positive definite rather than just semidefinite. However, the result is still true under the weaker assumption KB^TB ≥ 0; see also the treatment in [10] and references therein.

386      VALERIA SIMONCINI AND MICHELE BENZI

Table 4.1
Real bounds in (2.19) vs. actual eigenvalues, Stokes problem.
α | Lower bound | η_min | η_max | Upper bound

Table 4.2
Bounds in (2.19) vs. actual real eigenvalues, groundwater flow problem.
α | Lower bound | η_min | η_max | Upper bound

In Table 4.1 we compare the lower and upper bounds given in Theorem 2.2 with the actual values of the smallest and largest eigenvalues of P^{-1}𝒜, which in this case are all real. One can see that the upper bound is always very tight and that the lower bound is fairly tight, especially for small values of α. For α = 0.01 or smaller, the eigenvalues form two tight clusters near 0 and 2, containing m and n eigenvalues, respectively, as predicted by Proposition 3.3.

Next, we consider a saddle point system arising from the discretization of a groundwater flow problem using mixed-hybrid finite elements [7]. In the example at hand, n = 270, m = 207, and n + m = 477. In this case there are nonreal eigenvalues, except for very small α. In Table 4.2 we compare the lower and upper bounds given in Theorem 2.2 with the actual values of the smallest and largest real eigenvalues of P^{-1}𝒜, while in Tables 4.3 and 4.4 we provide the analogous results for the real part and modulus of the nonreal eigenvalues. One can see that the location of the real eigenvalues is well detected by our bounds. In particular, the lower bound is very sharp, whereas the upper bound gets looser when the whole spectrum becomes complex (α ≥ 0.05), providing again good estimates for large values of α. The lower bounds suggest that the leftmost cluster will not be too close to zero, particularly for α between 10⁻³ and 10⁻¹, and it turns out that these values of α yield the best results (see below).

PRECONDITIONING SADDLE POINT PROBLEMS      387

Table 4.3
Bounds in (2.17) vs. actual real part of nonreal eigenvalues, groundwater flow problem.
α | Lower bound | min Re(η) | max Re(η) | Upper bound

Table 4.4
Bounds in (2.18) vs. actual modulus of nonreal eigenvalues, groundwater flow problem.
α | Lower bound | min |η| | max |η| | Upper bound

Concerning nonreal eigenvalues, we observe that our bounds are generally not very sharp. The real part of the eigenvalues changes considerably as α varies, clustering on different regions of the interval (0, 2). Our lower bounds on Re(η) are rather loose, although they get better for larger values of α; conversely, the upper bounds are tight for small α and loose for large α.

We conclude this section with the results of a few experiments that illustrate the convergence behavior of full GMRES [8] with Hermitian/skew-Hermitian splitting preconditioning; we refer to [4] for more extensive experimental results. The purpose of these experiments is to investigate the influence of the eigenvalue distribution, and in particular of the clustering that occurs as α → 0, on the convergence of GMRES. We also monitor the conditioning of the eigenvectors of the preconditioned matrix for different values of α. In Table 4.5 we report a sample of results for both the Stokes and the groundwater flow problem, for different values of α, from tiny to fairly large. Here κ(V) := σ_max(V)/σ_min(V) denotes the spectral condition number of the matrix of normalized eigenvectors of P^{-1}𝒜, and "Its" denotes the corresponding number of preconditioned GMRES iterations (matrix-vector products) needed to reduce the initial residual by at least six orders of magnitude. For the Stokes problem, the condition number of the eigenvector matrix of the unpreconditioned 𝒜 is κ(V) = 6.94. Without preconditioning, full GMRES converges in 99 iterations. For the unpreconditioned groundwater flow problem, GMRES stagnates.
Note that for both problems, the best results in terms of GMRES iterations are obtained for α = 0.005, with generally good convergence behavior for α between 10⁻⁶ and 10⁻¹. Good performance is observed in particular for α ≈ λ_n, for which nonreal eigenvalues, when they occur, lie in a small region in the disk D(1,1); cf. Remark 2.1.
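The qualitative behavior described in this section can be reproduced on a toy problem. The sketch below is our illustration, not the IFISS or groundwater flow setup of the paper: it runs SciPy's GMRES on a small random saddle point system with and without the HSS preconditioner and counts inner iterations; the sizes, the seed, and α = 0.05 are arbitrary choices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n, m, alpha = 30, 10, 0.05                    # arbitrary toy problem
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
B = rng.standard_normal((m, n))
N = n + m

Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
P = (H + alpha * np.eye(N)) @ ((Amat - H) + alpha * np.eye(N)) / (2 * alpha)
Pinv = np.linalg.inv(P)                       # explicit inverse is fine at toy size

b = rng.standard_normal(N)
counts = {}

def run(label, Mop):
    it = [0]
    def cb(_):
        it[0] += 1
    x, info = gmres(Amat, b, M=Mop, restart=N, maxiter=N,
                    callback=cb, callback_type='pr_norm')
    assert info == 0                          # full GMRES converges within N steps
    counts[label] = it[0]
    return x

run('hss', LinearOperator((N, N), matvec=lambda v: Pinv @ v))
run('none', None)
print(counts)
assert counts['hss'] <= counts['none']        # clustering pays off
```

On such clustered spectra the preconditioned iteration typically needs far fewer steps; the exact counts depend on the random data.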

388      VALERIA SIMONCINI AND MICHELE BENZI

Table 4.5
Conditioning of the eigenvectors and iteration count.
       |      Stokes       |  Groundwater flow
α      |  κ(V)      Its    |  κ(V)      Its

The convergence rate remains fairly stable even for smaller values of α, but eventually it starts deteriorating as α approaches zero. It is likely that this is due to the fact that the preconditioner (and with it, the preconditioned matrix) becomes singular as α → 0. On the other hand, as α → ∞ the preconditioned matrix tends to the unpreconditioned one, and the preconditioner becomes ineffective. Note that somewhat better results can be obtained by a suitable diagonal scaling of A (see [4]); however, no scaling was used here. For both problems, κ(V) appears to be very sensitive to changes in α, at least when α is small. This is in stark contrast with the rather smooth variation in the number of GMRES iterations. Overall, the condition number of the eigenvector matrix does not seem to have much influence on the convergence of GMRES.

5. Conclusions. In this paper we have provided bounds and clustering results for the spectra of preconditioned matrices arising from the application of the Hermitian/skew-Hermitian splitting preconditioner to saddle point problems. Numerical experiments have been used to illustrate the capability of our estimates to locate the actual spectral region. We have also shown that for small α, all the eigenvalues are real and fall in two clusters, one near 0 and the other near 2. Our bounds are especially sharp precisely for these values of α, which are those of practical interest. Indeed, our analysis suggests that the best value of α should be small enough that the spectrum is clustered, but not so small that the preconditioned matrix is close to being singular. Numerical experiments confirm this, and it appears that when A is positive definite, α ≈ λ_n(A) is generally a good choice.
Finally, we found a connection with the quadratic eigenvalue problems arising in the theory of overdamped systems; it is possible that exploiting this connection may lead to further insight into the spectral properties of preconditioned saddle point problems.

Acknowledgment. We would like to thank Martin Gander for useful comments on an earlier draft of the paper.

PRECONDITIONING SADDLE POINT PROBLEMS      389

REFERENCES

[1] Z.-Z. Bai, G. H. Golub, and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 603–626.
[2] Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Linear Systems, Technical Report SCCM-02-12, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA, 2002.
[3] M. Benzi, M. J. Gander, and G. H. Golub, Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems, BIT, 43 (2003), pp. 881–900.
[4] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004).
[5] M. Gander, Optimization of a Preconditioner for Its Performance with a Krylov Method, talk delivered at the Dagstuhl Seminar on Theoretical and Computational Properties of Matrix Algorithms, Dagstuhl, Germany, 2003.
[6] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.
[7] J. Maryška, M. Rozložník, and M. Tůma, Mixed-hybrid finite element approximation of the potential fluid flow problem, J. Comput. Appl. Math., 63 (1995), pp. 383–392.
[8] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[9] D. Silvester, Private communication, 2002.
[10] F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001), pp. 235–286.
[11] H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge Monogr. Appl. Comput. Math. 13, Cambridge University Press, Cambridge, UK, 2003.


More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure:

1. Introduction. We consider the solution of systems of linear equations with the following block 2 2 structure: SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 1, pp. 20 41 c 2004 Society for Industrial and Applied Mathematics A PRECONDITIONER FOR GENERALIZED SADDLE POINT PROBLEMS MICHELE BENZI AND GENE H. GOLUB Abstract.

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2010-026 On Preconditioned MHSS Iteration Methods for Complex Symmetric Linear Systems by Zhong-Zhi Bai, Michele Benzi, Fang Chen Mathematics and Computer Science EMORY UNIVERSITY On

More information

MINIMAL NORMAL AND COMMUTING COMPLETIONS

MINIMAL NORMAL AND COMMUTING COMPLETIONS INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND

More information

Preconditioners for the incompressible Navier Stokes equations

Preconditioners for the incompressible Navier Stokes equations Preconditioners for the incompressible Navier Stokes equations C. Vuik M. ur Rehman A. Segal Delft Institute of Applied Mathematics, TU Delft, The Netherlands SIAM Conference on Computational Science and

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Regularized HSS iteration methods for saddle-point linear systems

Regularized HSS iteration methods for saddle-point linear systems BIT Numer Math DOI 10.1007/s10543-016-0636-7 Regularized HSS iteration methods for saddle-point linear systems Zhong-Zhi Bai 1 Michele Benzi 2 Received: 29 January 2016 / Accepted: 20 October 2016 Springer

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Zhong-Zhi Bai State Key Laboratory of Scientific/Engineering Computing Institute of Computational Mathematics

More information

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction Journal of Computational Mathematics Vol.xx, No.x, 200x, 1 9. http://www.global-sci.org/jcm doi:10.4208/jcm.1401-cr7 ON AUGMENED LAGRANGIAN MEHODS FOR SADDLE-POIN LINEAR SYSEMS WIH SINGULAR OR SEMIDEFINIE

More information

Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices

Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices Zhong-Zhi Bai 1 Department of Mathematics, Fudan University Shanghai

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Fast solvers for steady incompressible flow

Fast solvers for steady incompressible flow ICFD 25 p.1/21 Fast solvers for steady incompressible flow Andy Wathen Oxford University wathen@comlab.ox.ac.uk http://web.comlab.ox.ac.uk/~wathen/ Joint work with: Howard Elman (University of Maryland,

More information

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan April 28, 2011 T.M. Huang (Taiwan Normal Univ.)

More information

A Note on Eigenvalues of Perturbed Hermitian Matrices

A Note on Eigenvalues of Perturbed Hermitian Matrices A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

arxiv: v1 [math.na] 26 Dec 2013

arxiv: v1 [math.na] 26 Dec 2013 General constraint preconditioning iteration method for singular saddle-point problems Ai-Li Yang a,, Guo-Feng Zhang a, Yu-Jiang Wu a,b a School of Mathematics and Statistics, Lanzhou University, Lanzhou

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

When is the hermitian/skew-hermitian part of a matrix a potent matrix?

When is the hermitian/skew-hermitian part of a matrix a potent matrix? Electronic Journal of Linear Algebra Volume 24 Volume 24 (2012/2013) Article 9 2012 When is the hermitian/skew-hermitian part of a matrix a potent matrix? Dijana Ilisevic Nestor Thome njthome@mat.upv.es

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers.. EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and

More information

ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS

ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS J. Appl. Math. & Informatics Vol. 36(208, No. 5-6, pp. 459-474 https://doi.org/0.437/jami.208.459 ON A SPLITTING PRECONDITIONER FOR SADDLE POINT PROBLEMS DAVOD KHOJASTEH SALKUYEH, MARYAM ABDOLMALEKI, SAEED

More information

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS

ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS MICHELE BENZI AND ZHEN WANG Abstract. We analyze a class of modified augmented Lagrangian-based

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

Block triangular preconditioner for static Maxwell equations*

Block triangular preconditioner for static Maxwell equations* Volume 3, N. 3, pp. 589 61, 11 Copyright 11 SBMAC ISSN 11-85 www.scielo.br/cam Block triangular preconditioner for static Maxwell equations* SHI-LIANG WU 1, TING-ZHU HUANG and LIANG LI 1 School of Mathematics

More information

Combination Preconditioning of saddle-point systems for positive definiteness

Combination Preconditioning of saddle-point systems for positive definiteness Combination Preconditioning of saddle-point systems for positive definiteness Andy Wathen Oxford University, UK joint work with Jen Pestana Eindhoven, 2012 p.1/30 compute iterates with residuals Krylov

More information

Preconditioning for Nonsymmetry and Time-dependence

Preconditioning for Nonsymmetry and Time-dependence Preconditioning for Nonsymmetry and Time-dependence Andy Wathen Oxford University, UK joint work with Jen Pestana and Elle McDonald Jeju, Korea, 2015 p.1/24 Iterative methods For self-adjoint problems/symmetric

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

On the accuracy of saddle point solvers

On the accuracy of saddle point solvers On the accuracy of saddle point solvers Miro Rozložník joint results with Valeria Simoncini and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic Seminar at

More information

FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS

FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS Department of Mathematics University of Maryland, College Park Advisor: Dr. Howard Elman Department of Computer

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

PRECONDITIONED ITERATIVE METHODS FOR LINEAR SYSTEMS, EIGENVALUE AND SINGULAR VALUE PROBLEMS. Eugene Vecharynski. M.S., Belarus State University, 2006

PRECONDITIONED ITERATIVE METHODS FOR LINEAR SYSTEMS, EIGENVALUE AND SINGULAR VALUE PROBLEMS. Eugene Vecharynski. M.S., Belarus State University, 2006 PRECONDITIONED ITERATIVE METHODS FOR LINEAR SYSTEMS, EIGENVALUE AND SINGULAR VALUE PROBLEMS by Eugene Vecharynski M.S., Belarus State University, 2006 A thesis submitted to the University of Colorado Denver

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

A Residual Inverse Power Method

A Residual Inverse Power Method University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR 2007 09 TR 4854 A Residual Inverse Power Method G. W. Stewart February 2007 ABSTRACT The inverse

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group

More information

SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER

SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER LARS ELDÉN AND VALERIA SIMONCINI Abstract. Almost singular linear systems arise in discrete ill-posed problems. Either because

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 27, No. 2, pp. 305 32 c 2005 Society for Industrial and Applied Mathematics JORDAN CANONICAL FORM OF THE GOOGLE MATRIX: A POTENTIAL CONTRIBUTION TO THE PAGERANK COMPUTATION

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012 University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR-202-07 April 202 LYAPUNOV INVERSE ITERATION FOR COMPUTING A FEW RIGHTMOST

More information

Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization problems

Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization problems Zeng et al. Journal of Inequalities and Applications 205 205:355 DOI 0.86/s3660-05-0879-x RESEARCH Open Access Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization

More information

On the Hermitian solutions of the

On the Hermitian solutions of the Journal of Applied Mathematics & Bioinformatics vol.1 no.2 2011 109-129 ISSN: 1792-7625 (print) 1792-8850 (online) International Scientific Press 2011 On the Hermitian solutions of the matrix equation

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

ENERGY NORM A POSTERIORI ERROR ESTIMATES FOR MIXED FINITE ELEMENT METHODS

ENERGY NORM A POSTERIORI ERROR ESTIMATES FOR MIXED FINITE ELEMENT METHODS ENERGY NORM A POSTERIORI ERROR ESTIMATES FOR MIXED FINITE ELEMENT METHODS CARLO LOVADINA AND ROLF STENBERG Abstract The paper deals with the a-posteriori error analysis of mixed finite element methods

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Key words. GMRES method, convergence bounds, worst-case GMRES, ideal GMRES, field of values

Key words. GMRES method, convergence bounds, worst-case GMRES, ideal GMRES, field of values THE FIELD OF VALUES BOUNDS ON IDEAL GMRES JÖRG LIESEN AND PETR TICHÝ 27.03.2018) Abstract. A widely known result of Elman, and its improvements due to Starke, Eiermann and Ernst, gives a bound on the worst-case

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

Characterization of half-radial matrices

Characterization of half-radial matrices Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the

More information

Key words. inf-sup constant, iterative solvers, preconditioning, saddle point problems

Key words. inf-sup constant, iterative solvers, preconditioning, saddle point problems NATURAL PRECONDITIONING AND ITERATIVE METHODS FOR SADDLE POINT SYSTEMS JENNIFER PESTANA AND ANDREW J. WATHEN Abstract. The solution of quadratic or locally quadratic extremum problems subject to linear(ized)

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Efficient iterative algorithms for linear stability analysis of incompressible flows

Efficient iterative algorithms for linear stability analysis of incompressible flows IMA Journal of Numerical Analysis Advance Access published February 27, 215 IMA Journal of Numerical Analysis (215) Page 1 of 21 doi:1.193/imanum/drv3 Efficient iterative algorithms for linear stability

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

AN ITERATIVE METHOD WITH ERROR ESTIMATORS

AN ITERATIVE METHOD WITH ERROR ESTIMATORS AN ITERATIVE METHOD WITH ERROR ESTIMATORS D. CALVETTI, S. MORIGI, L. REICHEL, AND F. SGALLARI Abstract. Iterative methods for the solution of linear systems of equations produce a sequence of approximate

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Electronic Journal of Linear Algebra Volume 34 Volume 34 08) Article 39 08 Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Christian Mehl Technische Universitaet

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information