Gene H. Golub and Gérard Meurant: Matrices, Moments and Quadrature with Applications
Gene H. Golub and Gérard Meurant, Matrices, Moments and Quadrature with Applications, Princeton University Press, 2010, ix pp., ISBN.

Zdeněk Strakoš, Charles University in Prague, Faculty of Mathematics and Physics, Sokolovská 83, Prague, Czech Republic, strakos@karlin.mff.cuni.cz

1 Introduction

As the title suggests, this book has its center of mass(1) in applications of the classical concepts of moments and quadrature to matrix computations. With their deeply rooted mathematical background and interconnections, Gauss quadrature, moments and matrices belong to the backbone of computational methods for solving a very large variety of problems. Presenting this in a comprehensive way is the main goal of the book by Golub and Meurant. The book has two parts: Part 1 recalls the underlying theory, while Part 2 describes various applications.

Moments, quadrature and matrices have a remarkable history. The concept of moments is linked with the work of Chebyshev, Markov and Stieltjes in the second half of the 19th century. The quadrature considered in the book is even older: it started with Gauss (1814), with further founding contributions due to Jacobi, Christoffel, Markov, Stieltjes and others. The related concept of continued fractions can essentially be traced back to Euclid and other ancient mathematicians (see, e.g., [5, 51]), and the more recent concept of orthogonal polynomials to several mathematicians of the 18th and 19th centuries. The impact of the related results can be observed in the formation of the foundations of functional analysis by Hilbert, as well as in the mathematical foundations of quantum mechanics by von Neumann. Continued fractions and Jacobi matrices are still important in the spectral theory of operators in mathematical physics and are used in physics and computational chemistry.
In modern computational mathematics, sciences and engineering, Krylov subspace methods and matching-moments model reduction (in the approximation of large-scale dynamical systems and elsewhere) can be viewed as nothing but a translation of the classical concepts mentioned above into the language of large-scale matrix computations. In order to see how the classical topics of 19th century mathematics found such widespread applications in solving modern computational problems formulated via matrices, it is useful to present a brief historical review.

(1) Here we use the terminology corresponding to the mechanical analogies present in the seminal works on moments in the 19th century.
2 Historical perspective

The history of ideas behind using moments and quadrature for computations with matrices is so rich that an attempt to give a brief summary must inevitably refrain from being close to complete. Our selection aims at matching the topics covered in the book.

2.1 From approximation of numbers to the analytic theory of continued fractions and the problem of moments

The beginning of the theory of continued fractions is linked with the 17th century mathematicians Brouncker and Wallis. The latter published in the year 1655 the book Arithmetica infinitorum, in which he exposed the earlier results of Brouncker (never published by Brouncker himself), and possibly invented the name continued fraction (for expansion or approximation of numbers, not functions); see [5, Section 3.1, in particular pp ], [51, Section 1.2, in particular Theorem 1.4]. Among the results, Wallis presented the three-term recurrences for the numerators and denominators of the convergents of continued fractions, now known as the Brouncker-Wallis formulas. About one hundred years later Euler showed (among his many other fundamental contributions) how to expand infinite series into continued fractions (see [14, Chapter XVIII], [51, Section 4.1, Theorem 4.2]). The idea of expanding formal power series into continued fractions was developed further by Chebyshev in the paper [8], published in Russian in 1855, then translated into French and republished in 1858 by Bienaymé, who significantly influenced some of Chebyshev's later works. Chebyshev showed that the denominators associated with a continued fraction form a sequence of orthogonal polynomials, and presented the three-term recurrence relations for the numerators and denominators of the continued fraction convergents; see also the later paper by Christoffel from 1877 [11]. These recurrences for polynomials are formally the same as the Brouncker-Wallis recurrences for numbers.
The orthogonality of polynomials (without giving it a name) was, however, used as a basic concept already by Jacobi in his 1826 paper [45], which reformulated the new quadrature method invented by Gauss in 1814 [20].(2) Orthogonal polynomials were extensively studied under the name reciprocal functions by Murphy in the second of his several memoirs published in the Transactions of the Cambridge Philosophical Society in 1835; see [66, Introduction, pp , Part IV. Inverse Method for Definite Integrals which vanish; and Theory of Reciprocal Functions, pp ]. According to Gautschi [21, Section 1.3], the term orthogonal probably appeared first in E. Schmidt's dissertation in 1905, and its use with polynomials is attributed to the early works of Szegö from .

(2) In order to determine the quadrature nodes which would maximize the algebraic degree of the quadrature, Gauss used in his discovery continued fractions associated with the hypergeometric series. That led him (for the standard Riemann integral without weight) to Legendre polynomials. The approaches of Gauss and Jacobi, as well as further generalizations and developments up to Christoffel, were summarized by Heine in [35, Part I, Mechanische Quadratur, 1-16, pp. 1-31], and the developments up to modern times were beautifully exposed by Gautschi in his survey paper [21]; see also [24, 25].

In order to get an integral expression for continued fractions, Stieltjes extended in his monumental paper [82], published in 1894,(3) the concept of the Riemann integral to an arbitrary nondecreasing distribution function ω(λ) defined on [0, +∞), giving

    \int_0^{\infty} f(\lambda) \, d\omega(\lambda),

now called the Riemann-Stieltjes integral (ω(0) = 0). The name distribution function was first used by Chebyshev and later by Stieltjes in analogy with the distribution of mass on the real line. The moment problem came out as a byproduct of the Stieltjes convergence theory of continued fractions. The summary is given in [82, Chapter VIII, pp (in the English translation)]. Stieltjes considered the asymptotic expansion (here we use the notation standard in other literature and replace the denominator λ + μ, used by Stieltjes, by λ − μ)

    \int_0^{\infty} \frac{d\omega(\mu)}{\lambda - \mu}
        = \frac{\xi_0}{\lambda} + \frac{\xi_1}{\lambda^2} + \cdots + \frac{\xi_{n-1}}{\lambda^n}
          + O\!\left( \frac{\xi_n}{\lambda^{n+1}} \right),                (1)

where

    \xi_j = \int_0^{\infty} \lambda^j \, d\omega(\lambda), \quad j = 0, 1, \dots.                (2)

In this way, the integral on the left-hand side (and the corresponding continued fraction) is approximated by the first n terms of the power expansion, where the coefficients are given by the moments of the distribution function ω(λ). In the recent context of the approximation of large-scale dynamical systems (see, e.g., [2]), the equality (1) with (2) is nothing but the minimal partial realization, described by Kalman in 1979 [50]; see also the expository paper by Gragg and Lindquist [29] and the purely algebraic interpretation via projections by Gallivan, Grimme and Van Dooren [16]. Before Stieltjes, the formula (1) can be found in the paper from 1858 by Christoffel [10, Section 1, p. 63] and, for the sum

    \sum_{i=1}^{N} \frac{\omega_i}{\lambda - \lambda_i},                (3)

which is nothing but the discrete analogue of the integral on the left-hand side of (1), in the paper on interpolation from 1859 by Chebyshev [9, IV].
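As a small numerical illustration of (1)-(2) (not taken from the book; the nodes and weights below are arbitrary illustrative values), consider a discrete distribution function, for which the integral on the left-hand side of (1) reduces to a finite sum of the form (3):

```python
import numpy as np

# Discrete distribution function: points of increase lambda_i with
# jumps omega_i (arbitrary illustrative values, not from the book).
nodes = np.array([0.5, 1.0, 2.0, 3.5])
weights = np.array([0.2, 0.3, 0.1, 0.4])

lam = 100.0  # expansion point, taken far outside the support of omega

# Left-hand side of (1): here the integral reduces to the sum (3)
lhs = np.sum(weights / (lam - nodes))

# Right-hand side of (1): the first n terms, with moments xi_j from (2)
n = 6
xi = [np.sum(weights * nodes**j) for j in range(n)]
rhs = sum(xi_j / lam**(j + 1) for j, xi_j in enumerate(xi))

# The truncation error behaves like xi_n / lam^(n+1), i.e., it is tiny
# when lam is large compared to the support of omega
err = abs(lhs - rhs)
```

For λ well outside the support of ω, the n-term truncation already matches the integral up to a remainder of the order ξ_n/λ^{n+1}, in agreement with (1).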
One can also point out that in 1737 Euler studied the number e using its continued fraction expansion. He introduced a variable, and set up and solved a Riccati equation. In the English translation of Euler's paper [13], published in 1985, the editor points out the relationship of these early results to partial realizations in systems theory.

(3) In the same year Stieltjes died at the age of 38. Chebyshev died also in 1894, at the age of .
For lack of space, we cannot recall here other remarkable contributions of Bienaymé, Chebyshev, Markov, Heine, Christoffel, Stieltjes, and many others; in particular the Chebyshev and Markov approach to moments as a tool for proving the so-called (Bienaymé-)Chebyshev inequalities, further developments and extensions of the Gauss quadrature theory and of the moment problem after Stieltjes, as well as the development of Padé approximations. For an extensive coverage of these topics we refer to the works of Van Assche [89], Kjeldsen [53], Butzer and Jongmans [7], Gautschi [21, 24], Szegö [85], Shohat and Tamarkin [76], Akhiezer [1, pp and Chapter 3, Section 3, in particular pp ], and Brezinski [5]. The related work of Chebyshev and Markov, together with an explanation of the overlap and distinction with the work of Stieltjes, is thoroughly covered by Krein in [54]. In the modern model-reduction literature the moments (2), present as the numerators in the expansion (1), are called Markov parameters. Except for the mathematically oriented exposition by Gragg and Lindquist [29], who refer to the results of Chebyshev and Stieltjes, the connection to moments and quadrature seems not to be mentioned in the engineering model-reduction literature.

2.2 Towards spectral theory of operators and matrices

Developments towards operators and matrices started early in the 20th century with several lines of thought.(4) In the seminal paper [39] published in 1906, Hilbert outlined concepts that later developed into the theory of Hilbert spaces. He acknowledged the influence of the Stieltjes paper [82]; see also [40]. In 1909 F. Riesz extended the Riemann-Stieltjes integral to distribution functions of bounded variation and used the integral for the representation of linear functionals (see [53, p. 32] for a comment on a possible unpublished use of the integral before Stieltjes). Using determinants of matrices in relation to moments goes back at least to Jacobi.
In a short paper [47] published in 1850 (but presented to the Berlin Academy of Sciences two years earlier) Jacobi outlined his method for (translated into matrix language) tridiagonalizing a symmetric matrix. He, however, formulated everything in these papers in terms of quadratic forms (and used integer coefficients). Six years after Jacobi's death, his student Borchardt published in 1857 another of Jacobi's papers [48], which presented a reduction of the quadratic form into diagonal form and related the coefficients in the diagonal representation to the denominators of the corresponding continued fractions, expressed in terms of determinants; see also [12, Chapter VII, p. 154], [5, pp ]. Following the 1858 paper of Painvin [69], Heine expressed 20 years later the three-term recurrences for the numerators and denominators associated with continued fractions in terms of determinants of real tridiagonal matrices with positive products of the corresponding subdiagonal entries; see [34, Part I, Chapter 15, 64-68, pp ].

Inspired by Hilbert and his student Hellinger, Toeplitz published in 1910 a paper on the reduction of the Jacobi quadratic forms [86], and together with Hellinger in 1914 a paper which links the spectral decomposition of Jacobi matrices with the Stieltjes analytic theory of continued fractions and their integral representation given on the left-hand side of (1); see [36]. That is most probably where the Jacobi matrix appeared in print for the first time. Many relations which are now considered well known, without any reference to their origin, are presented in this paper. The Riesz representation theorem led to the Riemann-Stieltjes representation of self-adjoint operators in Hilbert spaces, published by von Neumann [92, 93] and Wintner [95] between 1927 and . The so-called resolution of unity was, however, rightly attributed by von Neumann to Hilbert; see [92, p. 33] and [12, 39, 40]. The motivations of Hilbert, von Neumann and Wintner were theoretical. The related topics, including moment problems, spectral properties of Jacobi matrices and applications to the spectral theory of operators in mathematical physics, remain the subject of intensive research; see, e.g., [79, 58, 52, 78]. As described next, the same combination of functional-analytic, approximation-theoretic, classical-analytic, and algebraic views turned at about the same time into a major development in the area of computational methods.

2.3 Krylov subspace methods and the method of moments

A. N. Krylov, a student of Lyapunov and Korkin and a distinguished member of the mathematical school founded by Chebyshev,(5) published in 1931 a groundbreaking paper on the computation of the minimal polynomial via transformation of the secular equation [55]. Krylov used linear differential equations in his description, and referred to the work of Lagrange, Laplace, Leverrier and Jacobi.

(4) We will not cover the development starting from Markov and leading to the Perron-Frobenius theorem; an interested reader can consult, e.g., [33], where, however, the impact of Stieltjes (acknowledged by Perron; see [70]) was omitted. Developments related to structured matrices are, together with new results, surveyed by Holtz and Tyaglov in [42]. These lines are out of the scope of our exposition inspired by the book of Golub and Meurant.
Following the Jacobi method for iterative diagonalization of a homogeneous symmetric system of linear algebraic equations (see [46, Section 6, pp ]), he used what is now called Givens rotations [55, p , relations (48)-(54)].(6) Krylov's method was immediately reformulated algebraically by Luzin (in 1931) and, in particular, by Gantmacher in 1934 [17], who started, for an arbitrary vector v and the given square matrix A, with the sequence

    v, Av, A^2 v, \dots,                (4)

now called the Krylov sequence; see also [18].

A decade later, two seminal papers of Lanczos from 1950 and 1952 [56, 57] and a paper of Hestenes and Stiefel from 1952 [38] changed the history of iterative methods by inventing the Lanczos method and the method of conjugate gradients (CG) for solving partial eigenvalue problems and systems of linear algebraic equations, respectively. To be precise, Lanczos invented a family of methods, including the method mathematically equivalent to the bi-conjugate gradient method (Bi-CG) described about 20 years later by Fletcher. The methods of Lanczos, Hestenes and Stiefel orthogonalise (in principle, not in practical computations) the sequence (4), and the corresponding recurrences are mathematically equivalent to a three-term recurrence for orthogonal polynomials. Moreover, the optimality properties of these methods, which nowadays are often expressed in terms of projections, can be related to Gauss quadrature and moment matching. Lanczos, Hestenes and Stiefel were well aware of these connections. In Sections of their paper, Hestenes and Stiefel described them in detail and with full clarity. They present the link to the Gauss quadrature of the Riemann-Stieltjes integral defined by the symmetric positive definite matrix A and the given initial vector v, as well as the link with continued fractions and with their partial fraction representation (3), determined by the eigenvalues of A and the size of the projections of v onto the corresponding invariant subspaces; see also the work of Ljusternik published in 1956 [61]. A seminal work of Gantmacher and Krein [19] on oscillating systems is deeply related to moments, continued fractions and the work of Krylov. The book starts with the results of Sturm from and gives an ingenious synthetic exposition of theory and various applications. With the advent of computers, numerous methods for solving systems of linear algebraic equations and for computing eigenvalues of linear operators appeared in the late 1940s and early 1950s, and they were applied to various practical problems, including discretized integral and partial differential equations.

(5) Not to be taken for N. M. Krylov; see [80, pp and 177].

(6) The idea was also formulated by Grassmann as a method of circular change (Circuläre Aenderung) in 1862; see [30, Part 1, Chapter 4, Section 2, Point 153] and the translation [31]. A recent summary of Grassmann's pioneering contributions towards the development of linear algebra can be found in Liesen's paper [59].
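The orthogonalization of the Krylov sequence (4) behind the methods of Lanczos, Hestenes and Stiefel can be sketched in a few lines (a plain textbook three-term Lanczos recurrence on randomly generated data, without the reorthogonalization a practical implementation needs; an illustration only, not the algorithms as given in the cited papers):

```python
import numpy as np

def lanczos(A, v, k):
    # Plain Lanczos three-term recurrence (no reorthogonalization):
    # it orthogonalizes the Krylov sequence v, Av, A^2 v, ... and
    # collects the recurrence coefficients in a k-by-k Jacobi matrix T.
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    q_prev = np.zeros_like(v, dtype=float)
    q = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j > 0:
            w = w - beta[j - 1] * q_prev
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# With k equal to the dimension, the eigenvalues of the Jacobi matrix T
# reproduce the eigenvalues of A (in exact arithmetic)
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T                      # symmetric test matrix
v = rng.standard_normal(6)
T = lanczos(A, v, 6)
```

The entries alpha and beta are exactly the coefficients of the three-term recurrence for the orthogonal polynomials of the underlying Riemann-Stieltjes integral; the eigenvalues of the leading k-by-k part of T are the Ritz values, i.e., the nodes of the associated Gauss quadrature.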
Unlike many of his contemporaries in the fast-growing field of computational mathematics, the Russian mathematician Vorobyev did not focus on algorithmic development. His book [94], originally published (in Russian) in 1958, aimed at presenting a family of methods, including the Lanczos method and CG, as a single method of moments, with the setting generalized to Hilbert spaces. He realized that in the case of a self-adjoint (Hermitian) operator the scalar problem of moments presented in the work of Chebyshev, Heine, Markov and Stieltjes is completely equivalent to his operator moment problem formulation, and presented a unified theory, including the minimal partial realization (1)-(2). He presented several examples and further generalizations. The work of Vorobyev did not receive any significant recognition. Without the points made more than three decades later by Brezinski [6], Vorobyev and his method of moments would essentially have been forgotten. Recently, the ideas of Vorobyev have been extended towards non-Hermitian moment matching model reduction; see [83] and [84]. The contributions of Krylov, Lanczos, Hestenes and Stiefel, and Vorobyev were truly revolutionary. They were followed by works of Rutishauser [73], Henrici [37], Stiefel [81] and Householder [43, 44]. Still, Krylov subspace methods (like the Lanczos method or CG) had not been genuinely accepted by mathematicians as competitive computational tools until the 1970s; for an excellent
source concerning the early history up to 1976, see [27]. The change came with the seminal Ph.D. thesis of Paige from 1971 [68], showing that the effects of rounding errors in the Lanczos method (and consequently also in CG) can be understood; for a detailed description see the review published in Acta Numerica [65] and the book of Meurant [64]. At the same time Reid [71] pointed out that CG should be used as an iterative method, a fact which was clearly stated by Lanczos, Hestenes and Stiefel in the original papers, but which somehow did not get enough attention in the mathematical literature. The situation was somewhat different in engineering and science; see, e.g., [74, Part II, Introduction and Section 5]. Developments in computational physics and chemistry can be documented, e.g., by the papers of Friedrichs and Horvay [15], Gordon [28], Schlesinger and Schwartz [75], Reinhard [72], and by the references given there. The practical application of CG and other Krylov subspace methods relies upon preconditioning. The term itself is attributed to Turing [88], but the concept can be traced as far back as Gauss and Jacobi. The credit for its popularization during the 1970s should be given to Meijerink and van der Vorst, Axelsson and others; see, e.g., the thorough surveys in [4] and [63, Chapter 8, in particular Section 8.22].

2.4 Golub and his influence

The work of Gene Golub became highly visible in the 1960s, and soon he played an integrating role in the field of scientific computing. In his view, analysis and algebra, as well as theory, algorithms and applications, were always mixed together; they formed a single entity. This fact has been precisely documented by Gautschi in his paper [23] and in his commentary published in [26, p ]; see also that selection of Golub's work as a whole. Throughout his career Gene Golub emphasized the role of moments and the fact that the roots of many modern methods can be found in the works on moments from the 19th century.
Through his views he deeply influenced many of his friends and collaborators; among them, Gérard Meurant was one of the closest. Their friendship has also brought us the book which is the topic of this review.

3 Book: Part 1 Theory

A significant part of the text summarizes the theoretical background and relationships. The book is primarily computationally oriented, which is also reflected by the order of the topics in the title, starting from matrices and considering moments and quadrature as tools (from the analytic point of view the order could be reversed). That is why the exposition does not follow a systematic textbook approach of giving all proofs. Instead it offers, together with recalling the main results (with extensive references to the original literature and recommendations for further reading), numerous valuable comments on the interconnections between individual developments and approaches.
In Chapter 2 on orthogonal polynomials the authors present, in addition to the basic background including matrices of moments and Jacobi matrices, also extensions to matrix orthogonal polynomials. The Jacobi matrices are then described in more detail in Chapter 3, including the connection to the QD algorithm. The authors mention the problem of finding the poles of a function from its Taylor expansion, with the solution attributed to Hadamard (1892); this can also be related to the results on the moment problem published by Stieltjes (1894), with the power expansions studied earlier by Chebyshev, Christoffel and others; see Section 2.1 of this review. An interesting complementary exposition can be found in [1, Chapter 1].

Chapter 4 links the previous material with the Lanczos and CG algorithms. It recalls the three-term version of CG, which makes it easy to point out later the difference between the Chebyshev semi-iterative method and CG. The block Lanczos algorithm is explained at the same time. Here it is perhaps worth mentioning, as a little addition, the direct derivation of the Lanczos and CG algorithms from the matrix (operator) moment problem, given by Vorobyev in [94, Chapter 3, in particular Section 4, pp ].

Chapter 5 builds upon the material collected in the previous chapters and deals with the problem of how to compute the coefficients of the three-term recurrences for orthogonal polynomials or, equivalently, how to compute Jacobi matrices, using either the Riemann-Stieltjes distribution function, or moments, or the nodes and weights of the Gauss quadrature formula. There is a vast amount of literature on the problem, with the book by Gautschi [24] giving a thorough overview written from the orthogonal polynomial perspective; see also the literature review in [67, Section 2]. The exposition in the book includes the generalization of the modified Chebyshev algorithm to indefinite weight functions (Section 5.4).
It uses the material of Chapter 4 for comparing CG with the Chebyshev semi-iterative method, estimating the extreme eigenvalues using modified moments, as suggested by Golub and Kent. While the Chebyshev semi-iterative method offers more flexibility on parallel computers, CG has the principal advantage of taking into account the distribution of all eigenvalues. Briefly (and omitting, for simplicity, the influence of the right-hand side): if the eigenvalues are far from being uniformly distributed, then CG does capitalize upon the dominance of some part of the spectrum and the convergence is strongly superlinear. This point is fundamental. The superlinear convergence is a consequence of the relationship of CG with moments and Gauss quadrature, and this makes the method principally different from linear iterative methods. The fact that this difference is still not fully accepted in the literature can be documented by a frequent identification of the CG rate of convergence with the linear convergence bounds based on Chebyshev polynomials.

Section 5.6 returns to the reconstruction of Jacobi matrices from spectral data. It very nicely summarizes the related topics, starting from the three-term recurrences for orthogonal polynomials(7) and vectors (Lanczos algorithm), through the orthogonal similarity transformations (Gragg and Harrod), to the variants of the QD algorithm (Laurie). Section 5.7 presents in a comprehensive way approaches to modifications of weight functions, recalling important results of Kautský, Elhay, Golub, Fischer, Gautschi and others.

Chapter 6 reviews the Gauss quadrature-related rules with a focus on some recent descriptions and on the computational perspective. For the history (including the role of Jacobi and Christoffel) we recommend the paper by Gautschi [22] and the early summary by Heine [34, Part I, Chapter 15, 64-68, pp ]. An interesting comparison with the Clenshaw-Curtis quadrature can be found in the paper by Trefethen [87]. Chapter 6 offers, in particular, a valuable description of the block extensions introduced in the earlier works of Golub and Meurant.

Approaches for approximating (or bounding) the bilinear form

    I(f) = u^T f(A) v,                (5)

where A is a symmetric square matrix, u and v are given vectors, and f is a smooth function (possibly C^\infty) on a given interval of the real line, are reviewed in Chapter 7. The problem of efficient numerical approximation of (5) represents the main topic of the book; Part 2 demonstrates computational approaches to numerous variants and reformulations of (5). The extension to nonsymmetric matrices is briefly summarized in Chapter 8. In this context, it is useful to mention the connection to matching-moments model reduction in the approximation of large-scale linear dynamical systems; see, e.g., [2, Part IV], [83]. The recent paper [84] complements the approaches reviewed in the book by suggesting mathematically equivalent but (likely) numerically preferable estimates based on using the Bi-CG method.

Chapter 9 is devoted to secular equations, which represent another interesting tool used for centuries. Golub and Meurant explain the origin of the name secular equation by referring to the classical works of Cauchy and Sylvester on the movements of planets (see also [66]).

(7) This recurrence is named after Stieltjes, but it was known before him to Chebyshev and Christoffel, not counting the three-term recurrences (for numbers) given by Brouncker, Wallis and Euler; see Section 2.1 of this review.
In this context it is worth noticing that the founding paper of Krylov [55], mentioned in Section 2.3 of this review, deals with transformations of the secular equation (in the sense of the characteristic equation), with inspiration taken from the works of Lagrange, Laplace, Leverrier and Jacobi on celestial mechanics.

4 Book: Part 2 Applications

In the second part of the book the authors deal with various applications. In some chapters they solve test problems; in others they also develop further theoretical background specific to the given area. Chapter 10 presents examples of Gauss quadrature rules. Attention is rightly paid to the Golub and Welsch paper from . It is worth mentioning, in addition, the remarkable paper by Gordon [28], published almost simultaneously;
see the commentary by Gautschi in [26, pp ]. As a suggestion for a further possible extension, the material could be complemented by demonstrating the sensitivity of the Gauss quadrature to small changes of the distribution function; see [67]. When the support of the distribution function (i.e., the set of points of its increase) is enlarged, the results of the Gauss quadrature for a fixed number of nodes can change dramatically. This seemingly counterintuitive fact is intimately related to matching moments, and it can be seen from the following (equivalent) perspective. If the spectrum of a symmetric positive definite matrix is formed of t tight clusters, then, in general, t steps of CG applied to this matrix (with a generic right-hand side or initial residual) may not be sufficient for reaching any reasonable approximation to the (exact) solution. Here replacing a single eigenvalue with a tight cluster (of the same weight) corresponds to enlarging the support of the underlying distribution function. Such a replacement can very significantly affect the CG convergence behaviour;(8) see, e.g., the thorough investigation of the CG rate of convergence and the convergence of Ritz values in the presence of close eigenvalues by van der Sluis and van der Vorst [90, 91]. A full mathematical explanation and experimental evidence were presented in the work of Greenbaum [32] and her collaborators on the numerical stability analysis of the CG method. A comprehensive description can be found, e.g., in [65, Section 5.2]. Consequences for the so-called support preconditioning theory have been pointed out in [60, Section 4].

Section of the book deals with the numerical reconstruction of Jacobi matrices from spectral data; it refers to the theoretical background in Section 5.6. The authors report experimental results somewhat different from the literature.
It might be interesting to investigate the sources of such differences in order to eliminate a possible influence of the computer implementations.

The first seven sections of Chapter 11, on bounds and estimates for elements of functions of matrices, develop further the techniques briefly outlined in Chapter 7; Section 11.8 presents experimental results. Here it could be added that approximations of d^T A^{-1} c computed via the explicitly formed approximate solutions x_n of Ax = c and the subsequent inner product d^T x_n ≈ d^T A^{-1} c should be avoided due to poor numerical properties. An alternative, numerically more stable approach is proposed in [84].

Chapter 12 deals with estimates of norms in CG computations. This subject is also reviewed, with an extensive bibliography, in [65]. An important point on the relation to a priori error bounds in the finite element discretisation of the elliptic self-adjoint model problem is presented in Section 12.7, with references to the pioneering work of Arioli and his coauthors. In recent papers [62, 49, 3, 77], information on the algebraic part of the error is integrated into a posteriori error estimates (with references to related work in this fast developing area).

(8) It is interesting that the common belief, from time to time presented in the literature as something absolutely obvious, is not in agreement with this indisputable fact.
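The quadrature-based estimation of elements of functions of matrices can be illustrated, for the symmetric case u = v of (5), by the following sketch (hypothetical random data; plain Lanczos without reorthogonalization or stopping criteria, so a simplified illustration of the idea rather than the book's algorithms):

```python
import numpy as np

def gauss_estimate(A, u, k, f):
    # Gauss-quadrature idea for u^T f(A) u with symmetric A: run k
    # Lanczos steps from u/||u||, then evaluate ||u||^2 e_1^T f(T_k) e_1
    # via the eigendecomposition of the small Jacobi matrix T_k
    # (nodes = Ritz values, weights = squared first components
    # of the normalized eigenvectors of T_k).
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    q_prev = np.zeros_like(u, dtype=float)
    q = u / np.linalg.norm(u)
    for j in range(k):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j > 0:
            w = w - beta[j - 1] * q_prev
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)   # quadrature nodes ...
    omega = S[0, :] ** 2           # ... and weights
    return (u @ u) * np.sum(omega * f(theta))

# Hypothetical example: estimate u^T A^{-1} u for a small SPD matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8 * np.eye(8)       # shifted to be safely positive definite
u = rng.standard_normal(8)
exact = u @ np.linalg.solve(A, u)
approx = gauss_estimate(A, u, 8, lambda t: 1.0 / t)
```

In exact arithmetic, k Lanczos steps yield the k-node Gauss quadrature of the Riemann-Stieltjes integral defined by A and u; for f(λ) = 1/λ and A positive definite, the Gauss rule gives lower bounds on u^T A^{-1} u that improve as k grows.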
Chapters 13 and 14 are devoted to least squares (LS) and total least squares (TLS) problems, respectively. Most of the material in Chapter 13 deals with the classical problem of least squares data fitting by polynomials, including updating and downdating the least squares solution, with references to the work of Golub, Elhay and Kautský. In the numerical experiments for the backward error (Section 13.4) it is suggested to compute convenient estimates using the Golub-Kahan bidiagonalization algorithm.

The first two sections of Chapter 14 introduce the TLS problem. The nonexistence of the TLS solution is handled using the so-called core problem formulation suggested by Paige and the author of this review, which naturally links the TLS problem with the Golub-Kahan iterative bidiagonalization on the one hand, and with the secular equation on the other. TLS solvers using the secular equation approach (assuming existence of the TLS solution or preprocessing the TLS problem into its core form), with various techniques reviewed in Chapter 9, are presented in Section . Extensive experimental results finish the chapter.

The final chapter deals with discrete ill-posed problems. After an instructive introduction it turns to iterative methods based on the Golub-Kahan iterative bidiagonalization with subsequent regularization of the projected (small) bidiagonal problem. Here the various techniques used for determining the regularization parameter rely on secular equations, which nicely links the whole chapter to the rest of the book. Another use of the Golub-Kahan iterative bidiagonalization for solving discrete ill-posed problems has recently been presented in the paper [41]. Under generic assumptions it has been shown that the Golub-Kahan bidiagonalization reveals the level of the white noise present in the data. This result is very close to the spirit of the reviewed book by Golub and Meurant.
The justification in [41] indeed follows from the Gauss quadrature approximation of the Riemann-Stieltjes distribution function determined by the input data.

5 Conclusion

The book represents a unique collection of material scattered throughout papers published over more than three decades, with the unifying theme: using moments and quadrature for solving computational problems formulated through matrices. The text goes much beyond a presentation of collected materials: it emphasizes interconnections, points out interesting original references and offers perspectives. The experimental parts typically compare various approaches. One can always suggest some improvements. The exposition is in places too brief, and it requires some effort to understand all the details. Some experimental parts stop with presenting the results in tables and graphs; summarizing the message would in such cases be helpful. Several additional references could be included. These can possibly be considered in the preparation of future editions. One of the authors, Gene Golub, passed away unexpectedly in 2007, while the writing of the book was still in progress. The reader should feel greatly indebted to the second author, Gérard Meurant, for finishing the work. The resulting text is valuable in summarizing the theoretical background and presenting algorithmic
developments. The availability of most of the Matlab software used to produce the numerical results makes the book useful not only to specialists who would like to apply some of the techniques, but also to teachers and students who can further experiment and continue from any point where the authors stopped. Gene Golub could not take the final book into his hands. I believe that he has an opportunity to see the book in some other way. I am sure that he likes it very much.

6 Acknowledgment

This work was supported by the research project MSM and partially also by the GACR grant 201/09/0917. The author is indebted to Jörg Liesen for pointing out several important references, for insightful comments and help, and to Michael Todd for valuable grammatical corrections.

References

[1] N. I. Akhiezer, The Classical Moment Problem and Some Related Questions in Analysis, Translated by N. Kemmer, Hafner Publishing Co., New York,
[2] A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, vol. 6 of Advances in Design and Control, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, With a foreword by Jan C. Willems.
[3] M. Arioli, E. H. Georgoulis, and D. Loghin, Convergence of inexact adaptive finite element solvers for elliptic problems, RAL-TR , Rutherford Appleton Laboratory (RAL), Didcot, UK,
[4] M. Benzi, Preconditioning techniques for large linear systems: A survey, J. of Comp. Phys., 182 (2002), pp
[5] C. Brezinski, History of continued fractions and Padé approximants, vol. 12 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin,
[6], Projection Methods for Systems of Equations, vol. 7 of Studies in Computational Mathematics, North-Holland Publishing Co., Amsterdam,
[7] P. Butzer and F. Jongmans, P. L. Chebyshev ( ). A guide to his life and work, J. Approx. Theory, 96 (1999), pp
[8] P. Chebyshev, Sur les fractions continues, (1855). Reprinted in Oeuvres I, 11 (Chelsea, New York, 1962), pp
[9], Sur l'interpolation par la méthode des moindres carrés, (1859). Reprinted in Oeuvres I, 18 (Chelsea, New York, 1962), pp
[10] E. B. Christoffel, Über die Gaußische Quadratur und eine Verallgemeinerung derselben, J. Reine Angew. Math., 55 (1858), pp Reprinted in Gesammelte mathematische Abhandlungen I (B. G. Teubner, Leipzig, 1910), pp
[11], Sur une classe particulière de fonctions entières et de fractions continues, Annali di Matematica Pura ed Applicata, 8 (1877), pp Reprinted in Gesammelte mathematische Abhandlungen II (B. G. Teubner, Leipzig, 1910), pp
[12] J. Dieudonné, History of functional analysis, vol. 49 of North-Holland Mathematics Studies, North-Holland Publishing Co., Amsterdam,
[13] L. Euler, An essay on continued fractions, Math. Systems Theory, 18 (1985), pp Translated from the Latin by B. F. Wyman and M. F. Wyman.
[14], Introduction to analysis of the infinite. Book I, Springer-Verlag, New York, Translated from the Latin and with an introduction by John D. Blanton.
[15] K. Friedrichs and G. Horvay, The finite Stieltjes momentum problem, Proc. Nat. Acad. Sci. U. S. A., 25 (1939), pp
[16] K. Gallivan, E. Grimme, and P. Van Dooren, Asymptotic waveform evaluation via a Lanczos method, Appl. Math. Lett., 7 (1994), pp
[17] F. R. Gantmacher, On the algebraic analysis of Krylov's method of transforming the secular equation, Trans. Second Math. Congress, II (1934), pp (In Russian).
[18], The Theory of Matrices. Vols. 1, 2, Chelsea Publishing Co., New York,
[19] F. R. Gantmacher and M. G. Krein, Oscillation matrices and kernels and small vibrations of mechanical systems, AMS Chelsea Publishing, Providence, RI, revised ed., Translation based on the 1941 Russian original, Edited and with a preface by Alex Eremenko.
[20] C. F. Gauss, Methodus nova integralium valores per approximationem inveniendi, Commentationes Societatis Regiae Scientarium Gottingensis, (1814), pp Reprinted in Werke, Band III (Göttingen, 1876), pp
[21] W. Gautschi, A survey of Gauss-Christoffel quadrature formulae, in E. B. Christoffel (Aachen/Monschau, 1979), Birkhäuser, Basel, 1981, pp
[22], On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput., 3 (1982), pp
[23], The interplay between classical analysis and (numerical) linear algebra: a tribute to Gene H. Golub, Electron. Trans. Numer. Anal., 13 (2002), pp
[24], Orthogonal polynomials: computation and approximation, Numerical Mathematics and Scientific Computation, Oxford University Press, New York, Oxford Science Publications.
[25] H. H. Goldstine, A History of Numerical Analysis from the 16th through the 19th Century, Springer-Verlag, New York, Studies in the History of Mathematics and Physical Sciences, Vol. 2.
[26] G. H. Golub, Milestones in Matrix Computation: Selected Works of Gene H. Golub, with Commentaries, Oxford Science Publications, Oxford University Press, Oxford, Edited by Raymond H. Chan, Chen Greif and Dianne P. O'Leary.
[27] G. H. Golub and D. P. O'Leary, Some history of the conjugate gradient and Lanczos algorithms: , SIAM Rev., 31 (1989), pp
[28] R. G. Gordon, Error bounds in equilibrium statistical mechanics, J. Math. Phys., 9 (1968), pp
[29] W. B. Gragg and A. Lindquist, On the partial realization problem, Linear Algebra Appl., 50 (1983), pp
[30] H. Graßmann, Die Ausdehnungslehre. Vollständig und in strenger Form bearbeitet, Th. Chr. Fr. Enslin, Berlin, Reprinted with corrections and notes in Gesammelte Werke 1.2 (Teubner, Leipzig, 1896).
[31], Extension Theory, American Mathematical Society, Providence, RI, Translated by Lloyd C. Kannenberg.
[32] A. Greenbaum, Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences, Linear Algebra Appl., 113 (1989), pp
[33] T. Hawkins, Frobenius and the symbolical algebra of matrices, Arch. Hist. Exact Sci., 62 (2008), pp
[34] E. Heine, Handbuch der Kugelfunctionen. Theorie und Anwendungen. Erster Band, Zweite umgearbeitete und vermehrte Auflage, Reimer, Berlin,
[35], Handbuch der Kugelfunctionen. Theorie und Anwendungen. Zweiter Band, Zweite umgearbeitete und vermehrte Auflage, Reimer, Berlin,
[36] E. Hellinger and O. Toeplitz, Zur Einordnung der Kettenbruchtheorie in die Theorie der quadratischen Formen von unendlichvielen Veränderlichen, J. Reine Angew. Math., 144 (1914), pp , 318.
[37] P. Henrici, The quotient-difference algorithm, Nat. Bur. Standards Appl. Math. Ser. no., 49 (1958), pp
[38] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards, 49 (1952), pp (1953).
[39] D. Hilbert, Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen (Vierte Mitteilung), Nachr. Ges. Wiss. Göttingen, Math.-phys. Kl., (1906), pp
[40], Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen, Teubner, Leipzig und Berlin,
[41] I. Hnětynková, M. Plešinger, and Z. Strakoš, The regularizing effect of the Golub-Kahan iterative bidiagonalization and revealing the noise level in the data, BIT, 49 (2009), pp
[42] O. Holtz and M. Tyaglov, Structured matrices, continued fractions, and root localization of polynomials, Research report arxiv: v2,
[43] A. S. Householder, Principles of numerical analysis, Dover Publications Inc., New York, Unabridged, corrected version of the 1953 edition.
[44], The Theory of Matrices in Numerical Analysis, Dover Publications Inc., New York, Reprint of 1964 edition.
[45] C. G. J. Jacobi, Ueber Gauss neue Methode, die Werthe der Integrale näherungsweise zu finden, J. Reine Angew. Math., 1 (1826), pp Reprinted in Gesammelte Werke, 6. Band (Reimer, Berlin, 1891), pp
[46], Über ein leichtes Verfahren die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen, J. Reine Angew. Math., 30 (1846), pp Reprinted in Gesammelte Werke, 7. Band (Reimer, Berlin, 1891), pp
[47], Über die Reduction quadratischer Formen auf die kleinste Anzahl Glieder, Bericht über die zur Bekanntmachung geeigneten Verhandlungen der Königl. Preuss. Akad. Wiss. Berlin, (1848), pp
[48], Über eine elementare Transformation eines in Bezug auf jedes von zwei Variablen-Systemen linearen und homogenen Ausdrucks, J. Reine Angew. Math., 53 (1857), pp Aus den hinterlassenen Papieren von C. G. J. Jacobi mitgetheilt durch C. W. Borchardt. Reprinted in Gesammelte Werke, 3. Band (Reimer, Berlin, 1884), pp
[49] P. Jiránek, Z. Strakoš, and M. Vohralík, A posteriori error estimates including algebraic error: computable upper bounds and stopping criteria for iterative solvers, SIAM J. Sci. Comput., 32 (2010), pp
[50] R. E. Kalman, On partial realizations, transfer functions, and canonical forms, Acta Polytech. Scand. Math. Comput. Sci. Ser., (1979), pp Topics in systems theory.
[51] S. Khrushchev, Orthogonal polynomials and continued fractions, vol. 122 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, From Euler's point of view.
[52] R. Killip and B. Simon, Sum rules for Jacobi matrices and their applications to spectral theory, Ann. of Math. (2), 158 (2003), pp
[53] T. H. Kjeldsen, The early history of the moment problem, Historia Math., 20 (1993), pp
[54] M. G. Kreĭn, The ideas of P. L. Čebyšev and A. A. Markov in the theory of limiting values of integrals and their further development, Uspehi Matem. Nauk (N.S.), 6 (1951), pp English translation in Amer. Math. Soc. Transl., Series 2, 12 (1959), pp
[55] A. N. Krylov, On the numerical solution of the equation by which the frequency of small oscillations is determined in technical problems, Izv. Akad. Nauk SSSR Ser. Fiz.-Mat., 4 (1931), pp (Title translation as given in [18]).
[56] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Research Nat. Bur. Standards, 45 (1950), pp
[57], Solution of systems of linear equations by minimized iterations, J. Research Nat. Bur. Standards, 49 (1952), pp
[58] A. Laptev, S. Nabokov, and O. Safronov, On new relations between spectral properties of Jacobi matrices and their coefficients, Comm. Math. Phys., 241 (2003), pp
[59] J. Liesen, Hermann Grassmann's theory of linear transformations, in From past to future: Graßmann's work in context, Birkhäuser, Basel,
[60] J. Liesen and Z. Strakoš, On the computational cost of Krylov subspace methods for solving linear algebraic systems, Technical report (2010).
[61] L. A. Ljusternik, Solution of problems in linear algebra by the method of continued fractions, Trudy Voronezskovo Gosudarstvennovo Instituta, Voronezh, 2 (1956), pp (In Russian).
[62] D. Meidner, R. Rannacher, and J. Vihharev, Goal-oriented error control of the iterative solution of finite element equations, J. Numer. Math., 17 (2009), pp
[63] G. Meurant, Computer Solution of Large Linear Systems, vol. 28 of Studies in Mathematics and its Applications, North-Holland Publishing Co., Amsterdam,
[64] G. Meurant, The Lanczos and conjugate gradient algorithms, vol. 19 of Software, Environments, and Tools, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, From theory to finite precision computations.
[65] G. Meurant and Z. Strakoš, The Lanczos and conjugate gradient algorithms in finite precision arithmetic, Acta Numer., 15 (2006), pp
[66] R. Murphy, Second memoir on the inverse method of definite integrals, Trans. Cambridge Phil. Soc., 5 (1835), pp
[67] D. P. O'Leary, Z. Strakoš, and P. Tichý, On sensitivity of Gauss-Christoffel quadrature, Numer. Math., 107 (2007), pp
[68] C. C. Paige, The computation of eigenvalues and eigenvectors of very large and sparse matrices, PhD thesis, London University, London, England,
[69] L. Painvin, Sur un certain système d'équations linéaires, J. Math. Pures Appl., 3 (1858), pp
[70] O. Perron, Die Lehre von den Kettenbrüchen, Teubner, Leipzig und Berlin,
[71] J. K. Reid, On the method of conjugate gradients for the solution of large sparse systems of linear equations, in Large sparse sets of linear equations (Proc. Conf., St. Catherine's Coll., Oxford, 1970), Academic Press, London, 1971, pp
[72] W. P. Reinhardt, l^2 discretization of atomic and molecular electronic continua: Moment, quadrature and j-matrix techniques, Comp. Phys. Comm., 17 (1979), pp
[73] H. Rutishauser, Beiträge zur Kenntnis des Biorthogonalisierungs-Algorithmus von Lanczos, Z. Angew. Math. Physik, 4 (1953), pp
[74] V. K. Saul'yev, Integration of equations of parabolic type by the method of nets, Translated from the Russian by G. J. Tee. Translation edited and editorial introduction by K. L. Stewart. International Series of Monographs in Pure and Applied Mathematics, Vol. 54, Pergamon Press, London,
[75] L. Schlessinger and C. Schwartz, Analyticity as a useful computational tool, Phys. Rev. Lett., 16 (1966), pp
[76] J. A. Shohat and J. D. Tamarkin, The Problem of Moments, American Mathematical Society Mathematical surveys, vol. II, American Mathematical Society, New York,
[77] D. Silvester and V. Simoncini, An optimal iterative solver for symmetric indefinite systems stemming from mixed approximation, TOMS, to appear (2010).
[78] B. Simon, The classical moment problem as a self-adjoint finite difference operator, Adv. Math., 137 (1998), pp
[79] L. A. Steen, Highlights in the history of spectral theory, Amer. Math. Monthly, 80 (1973), pp
[80] K.-G. Steffens, The history of approximation theory, Birkhäuser Boston Inc., Boston, MA, From Euler to Bernstein.
[81] E. L. Stiefel, Kernel polynomials in linear algebra and their numerical applications, Nat. Bur. Standards Appl. Math. Ser., 1958 (1958), pp
[82] T. J. Stieltjes, Recherches sur les fractions continues, Ann. Fac. Sci. Toulouse Sci. Math. Sci. Phys., 8 (1894), pp. J Reprinted in Oeuvres II (P. Noordhoff, Groningen, 1918), pp English translation Investigations on continued fractions in Thomas Jan Stieltjes, Collected Papers, Vol. II (Springer-Verlag, Berlin, 1993), pp
[83] Z. Strakoš, Model reduction using the Vorobyev moment problem, Numer. Algorithms, 51 (2009), pp
[84] Z. Strakoš and P. Tichý, On efficient numerical approximation of the bilinear form c^T A^{-1} b, submitted to SIAM J. Sci. Comput., (2010).
[85] G. Szegö, Orthogonal Polynomials, American Mathematical Society, New York, American Mathematical Society Colloquium Publications, v. 23.
[86] O. Toeplitz, Zur Theorie der quadratischen Formen von unendlichvielen Veränderlichen, Nachr. Ges. Wiss. Göttingen, Math.-phys. Kl., (1910), pp
[87] L. N. Trefethen, Is Gauss quadrature better than Clenshaw-Curtis?, SIAM Rev., 50 (2008), pp
[88] A. M. Turing, Rounding-off errors in matrix processes, Quart. J. Mech. Appl. Math., 1 (1948), pp
[89] W. Van Assche, The impact of Stieltjes' work on continued fractions and orthogonal polynomials, in Thomas Jan Stieltjes, Collected Papers, Vol. I, G. van Dijk, ed., Springer-Verlag, Berlin, 1993, pp
[90] A. van der Sluis and H. A. van der Vorst, The rate of convergence of conjugate gradients, Numer. Math., 48 (1986), pp
[91], The convergence behavior of Ritz values in the presence of close eigenvalues, Linear Algebra Appl., 88/89 (1987), pp
[92] J. von Neumann, Mathematische Begründung der Quantenmechanik, Nachr. Ges. Wiss. Göttingen, Math.-phys. Kl., (1927), pp
[93] J. von Neumann, Mathematical foundations of quantum mechanics, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, Translated from the 1932 German original and with a preface by Robert T. Beyer.
[94] Y. V. Vorobyev, Methods of Moments in Applied Mathematics, Translated from the Russian by Bernard Seckler, Gordon and Breach Science Publishers, New York,
[95] A. Wintner, Spektraltheorie der unendlichen Matrizen. Einführung in den analytischen Apparat der Quantenmechanik, S. Hirzel, Leipzig,
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationFinding eigenvalues for matrices acting on subspaces
Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department
More informationNUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS
NUMERICS OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS Miro Rozložník Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic email: miro@cs.cas.cz joint results with Luc Giraud,
More informationPreface. Figures Figures appearing in the text were prepared using MATLAB R. For product information, please contact:
Linear algebra forms the basis for much of modern mathematics theoretical, applied, and computational. The purpose of this book is to provide a broad and solid foundation for the study of advanced mathematics.
More information1 Conjugate gradients
Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration
More informationSpecial Functions of Mathematical Physics
Arnold F. Nikiforov Vasilii B. Uvarov Special Functions of Mathematical Physics A Unified Introduction with Applications Translated from the Russian by Ralph P. Boas 1988 Birkhäuser Basel Boston Table
More informationTHE NUMBER OF ORTHOGONAL CONJUGATIONS
THE NUMBER OF ORTHOGONAL CONJUGATIONS Armin Uhlmann University of Leipzig, Institute for Theoretical Physics After a short introduction to anti-linearity, bounds for the number of orthogonal (skew) conjugations
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0
CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 0 GENE H GOLUB 1 What is Numerical Analysis? In the 1973 edition of the Webster s New Collegiate Dictionary, numerical analysis is defined to be the
More informationMATRIX AND LINEAR ALGEBR A Aided with MATLAB
Second Edition (Revised) MATRIX AND LINEAR ALGEBR A Aided with MATLAB Kanti Bhushan Datta Matrix and Linear Algebra Aided with MATLAB Second Edition KANTI BHUSHAN DATTA Former Professor Department of Electrical
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 41, pp. 159-166, 2014. Copyright 2014,. ISSN 1068-9613. ETNA MAX-MIN AND MIN-MAX APPROXIMATION PROBLEMS FOR NORMAL MATRICES REVISITED JÖRG LIESEN AND
More informationSingular Value Decomposition
The Bulletin of Society for Mathematical Services and Standards Online: 2014-09-01 ISSN: 2277-8020, Vol. 11, pp 13-20 doi:10.18052/www.scipress.com/bsmass.11.13 2014 SciPress Ltd., Switzerland Singular
More informationChapter 12 Solving secular equations
Chapter 12 Solving secular equations Gérard MEURANT January-February, 2012 1 Examples of Secular Equations 2 Secular equation solvers 3 Numerical experiments Examples of secular equations What are secular
More informationOn prescribing Ritz values and GMRES residual norms generated by Arnoldi processes
On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationPreface to Second Edition... vii. Preface to First Edition...
Contents Preface to Second Edition..................................... vii Preface to First Edition....................................... ix Part I Linear Algebra 1 Basic Vector/Matrix Structure and
More informationATLANTIS STUDIES IN MATHEMATICS VOLUME 3 SERIES EDITOR: J. VAN MILL
ATLANTIS STUDIES IN MATHEMATICS VOLUME 3 SERIES EDITOR: J. VAN MILL Atlantis Studies in Mathematics Series Editor: J. van Mill VU University Amsterdam, Amsterdam, the Netherlands (ISSN: 1875-7634) Aims
More informationAMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007.
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Krylov Minimization and Projection (KMP) Dianne P. O Leary c 2006, 2007 This unit: So far: A survey of iterative methods for solving linear
More informationHONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013
HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013 PROFESSOR HENRY C. PINKHAM 1. Prerequisites The only prerequisite is Calculus III (Math 1201) or the equivalent: the first semester of multivariable calculus.
More informationPreface to the Second Edition. Preface to the First Edition
n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................
More informationOn numerical stability in large scale linear algebraic computations
ZAMM Z. Angew. Math. Mech. 85, No. 5, 307 325 (2005) / DOI 10.1002/zamm.200410185 On numerical stability in large scale linear algebraic computations Plenary lecture presented at the 75th Annual GAMM Conference,
More informationLAKELAND COMMUNITY COLLEGE COURSE OUTLINE FORM
LAKELAND COMMUNITY COLLEGE COURSE OUTLINE FORM ORIGINATION DATE: 8/2/99 APPROVAL DATE: 3/22/12 LAST MODIFICATION DATE: 3/28/12 EFFECTIVE TERM/YEAR: FALL/ 12 COURSE ID: COURSE TITLE: MATH2800 Linear Algebra
More informationOn solving linear systems arising from Shishkin mesh discretizations
On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October
More informationPreface and Overview. vii
This book is designed as an advanced text on unbounded self-adjoint operators in Hilbert space and their spectral theory, with an emphasis on applications in mathematical physics and various fields of
More informationITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 1 : INTRODUCTION
ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 1 : INTRODUCTION Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation TUHH
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 435 (011) 1845 1856 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: wwwelseviercom/locate/laa Hurwitz rational functions
More informationSolving discrete ill posed problems with Tikhonov regularization and generalized cross validation
Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Gérard MEURANT November 2010 1 Introduction to ill posed problems 2 Examples of ill-posed problems 3 Tikhonov
More informationGram-Schmidt Orthogonalization: 100 Years and More
Gram-Schmidt Orthogonalization: 100 Years and More September 12, 2008 Outline of Talk Early History (1795 1907) Middle History 1. The work of Åke Björck Least squares, Stability, Loss of orthogonality
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large
More informationPositive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract
Computing the Logarithm of a Symmetric Positive Denite Matrix Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A numerical method for computing the logarithm
More informationIterative methods for Linear System
Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and
More informationMath 307 Learning Goals. March 23, 2010
Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent
More informationNP-hardness results for linear algebraic problems with interval data
1 NP-hardness results for linear algebraic problems with interval data Dedicated to my father, Mr. Robert Rohn, in memoriam J. Rohn a a Faculty of Mathematics and Physics, Charles University, Malostranské
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009 Preface The title of the book sounds a bit mysterious. Why should anyone read this
More informationON LINEAR COMBINATIONS OF
Física Teórica, Julio Abad, 1 7 (2008) ON LINEAR COMBINATIONS OF ORTHOGONAL POLYNOMIALS Manuel Alfaro, Ana Peña and María Luisa Rezola Departamento de Matemáticas and IUMA, Facultad de Ciencias, Universidad
More informationOn a residual-based a posteriori error estimator for the total error
On a residual-based a posteriori error estimator for the total error J. Papež Z. Strakoš December 28, 2016 Abstract A posteriori error analysis in numerical PDEs aims at providing sufficiently accurate
More informationA Note on Inverse Iteration
A Note on Inverse Iteration Klaus Neymeyr Universität Rostock, Fachbereich Mathematik, Universitätsplatz 1, 18051 Rostock, Germany; SUMMARY Inverse iteration, if applied to a symmetric positive definite
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More information