Electronic Transactions on Numerical Analysis. Volume 9, 1999, pp. 65-76. Copyright 1999, Kent State University.

ORTHOGONAL POLYNOMIALS AND QUADRATURE

WALTER GAUTSCHI

Abstract. Various concepts of orthogonality on the real line are reviewed that arise in connection with quadrature rules. Orthogonality relative to a positive measure and Gauss-type quadrature rules are classical. More recent types of orthogonality include orthogonality relative to a sign-variable measure, which arises in connection with Gauss-Kronrod quadrature, and power (or implicit) orthogonality encountered in Turán-type quadratures. Relevant questions of numerical computation are also considered.

Key words. orthogonal polynomials, Gauss-Lobatto, Gauss-Kronrod, and Gauss-Turán rules, computation of Gauss-type quadrature rules.

AMS subject classifications. 33C45, 65D32, 65F.

1. Introduction. Orthogonality concepts arise naturally in connection with numerical quadrature, when one tries to optimize the degree of precision. The classical example is the Gaussian quadrature rule, which maximizes the (polynomial) degree of exactness. Closely related quadrature rules are those of Radau and Lobatto, where one or two nodes are prescribed. In Gauss-Kronrod rules about half of the nodes are prescribed, all within the support of the integration measure, which gives rise to orthogonality relative to a sign-variable measure. Quadrature rules with multiple nodes lead naturally to power (or implicit) orthogonality. The classical example here is the Turán quadrature rule. In the following, these interrelations between orthogonal polynomials and quadrature rules, as well as relevant computational algorithms, are discussed in more detail.

2. Quadrature rules and orthogonality. We begin with the simplest kind of quadrature rule,

(2.1)   $\int f(t)\,d\lambda(t) = \sum_{\nu=1}^{n} \lambda_\nu f(\tau_\nu) + R_n(f)$,

where the integral of a function $f$ relative to some (in general positive) measure $d\lambda$ is approximated by a finite sum involving $n$ values of $f$ at suitably selected distinct nodes $\tau_\nu$.
The respective error is $R_n(f)$. The support of the measure is usually a finite interval, a half-infinite interval, or the whole real line, but could also be a finite or infinite collection of mutually distinct intervals or points. The formula (2.1) is said to have polynomial degree of exactness $d$ if

(2.2)   $R_n(f) = 0$ for all $f \in \mathbb{P}_d$,

where $\mathbb{P}_d$ denotes the set of polynomials of degree $\le d$. Interpolatory formulae (or Newton-Cotes formulae) are those having degree $d = n-1$. They are precisely the quadrature rules obtained by replacing $f$ in (2.1) by its Lagrange polynomial of degree $n-1$ interpolating $f$ at the nodes $\tau_\nu$. We denote by

(2.3)   $\omega_n(t) = \prod_{\nu=1}^{n} (t - \tau_\nu)$

* Received November 1; accepted for publication December 1. Recommended by F. Marcellán. Department of Computer Sciences, Purdue University, West Lafayette, IN (wxg@cs.purdue.edu).
the node polynomial associated with the rule (2.1), i.e., the monic polynomial of degree $n$ having the nodes $\tau_\nu$ as its zeros. We see that, given the $n$ nodes $\tau_\nu$, we can always achieve degree of exactness $n-1$. A natural question is how to select the nodes $\tau_\nu$ and weights $\lambda_\nu$ to do better. This is answered by the following theorem.

THEOREM 2.1. The quadrature rule (2.1) has degree of exactness $d = n-1+k$, $k \ge 0$, if and only if both of the following conditions are satisfied: (a) the formula (2.1) is interpolatory; (b) the node polynomial $\omega_n$ satisfies

(2.4)   $\int \omega_n(t)\,p(t)\,d\lambda(t) = 0$ for all $p \in \mathbb{P}_{k-1}$.

It is difficult to trace the origin of this theorem, but Jacobi [11] must have been aware of it. We remark that $k = 0$ corresponds to the Newton-Cotes formula, which requires no condition other than being interpolatory; the requirement (2.4), accordingly, is empty. If $k > 0$, the condition (2.4) involves only the nodes $\tau_\nu$ of (2.1); it imposes exactly $k$ nonlinear constraints on them, in fact, orthogonality (relative to the measure $d\lambda$) of the node polynomial $\omega_n$ to the space of all polynomials of degree $\le k-1$. Once a set of nodes $\tau_\nu$ has been determined that satisfies this constraint, the condition (a) then uniquely determines the weights $\lambda_\nu$ in (2.1), for example, as the solution of the linear (Vandermonde) system of equations

$\sum_{\nu=1}^{n} \lambda_\nu \tau_\nu^{\mu} = \int t^{\mu}\,d\lambda(t), \qquad \mu = 0, 1, \ldots, n-1.$

2.1. The Gaussian quadrature rule. If the measure $d\lambda$ is positive, then $k \le n$, since otherwise (2.4) would have to hold for $k = n+1$, implying that $\omega_n$ is orthogonal to all polynomials of degree $\le n$, hence, in particular, orthogonal to itself, which is impossible. Thus, $k = n$ is optimal, in which case $\omega_n$ is orthogonal to all polynomials of lower degree, i.e., $\omega_n$ is the (monic) orthogonal polynomial of degree $n$ relative to the measure $d\lambda$. We express this by writing

(2.5)   $\omega_n(t) = \pi_n(t;\,d\lambda)$.
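Condition (a) of Theorem 2.1 determines the weights from the nodes through the moment (Vandermonde) system displayed above. A minimal numerical sketch for the Lebesgue measure $d\lambda(t) = dt$ on $[-1,1]$, with the Chebyshev points used merely as an arbitrary admissible set of nodes:

```python
import numpy as np

def interpolatory_weights(nodes, moments):
    # Solve sum_nu lambda_nu * tau_nu^mu = int t^mu dlambda(t), mu = 0..n-1
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T   # V[mu, nu] = tau_nu^mu
    return np.linalg.solve(V, moments)

# dlambda(t) = dt on [-1, 1]: moments are 2/(mu+1) for even mu, 0 for odd mu
n = 5
nodes = np.cos((2*np.arange(1, n + 1) - 1) * np.pi / (2*n))   # Chebyshev points
moments = np.array([2.0/(mu + 1) if mu % 2 == 0 else 0.0 for mu in range(n)])
w = interpolatory_weights(nodes, moments)
```

By Theorem 2.1 with $k = 0$ the resulting rule is exact on $\mathbb{P}_{n-1}$; for the Gauss rule one would instead place the nodes at the zeros of $\pi_n(\cdot\,;d\lambda)$.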
The interpolatory quadrature rule

(2.6)   $\int f(t)\,d\lambda(t) = \sum_{\nu=1}^{n} \lambda_\nu^G f(\tau_\nu^G) + R_n^G(f)$

corresponding to this node polynomial is precisely the Gaussian quadrature rule for the measure $d\lambda$. It was discovered by Gauss in 1814 ([4]) in the special case of the Lebesgue measure $d\lambda(t) = dt$ on $[-1,1]$. For general $d\lambda$, its nodes are the zeros of the orthogonal polynomial $\pi_n(\cdot\,;d\lambda)$, and its weights $\lambda_\nu^G$, the so-called Christoffel numbers, are obtainable by interpolation as above. (See, however, Theorem 3.1 below.)

2.2. The Gauss-Radau quadrature rule. If the support of $d\lambda$ is bounded from one side, say from the left, it is sometimes convenient to take $a = \inf \operatorname{supp} d\lambda$ as one of the nodes, say $\tau_1 = a$. According to Theorem 2.1, this reduces the degree of freedom by one, and the maximum possible value of $k$ is $k = n-1$. If we write $\omega_n(t) = \omega_{n-1}(t)(t-a)$, the polynomial $\omega_{n-1}$ having the remaining nodes as its zeros must be orthogonal to all lower-degree polynomials relative to the measure $d\lambda_a(t) = (t-a)\,d\lambda(t)$, i.e.,

(2.7)   $\omega_{n-1}(t) = \pi_{n-1}(t;\,d\lambda_a)$.
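To illustrate (2.7): for $d\lambda(t) = dt$ on $[-1,1]$, $a = -1$, and $n = 2$, the free Radau node is the zero of $\pi_1(\cdot\,;(t+1)\,dt)$, and the weights then follow from the interpolatory condition. A small sketch; the closed-form values $\tau_2 = 1/3$, $\lambda_1 = 1/2$, $\lambda_2 = 3/2$ are classical:

```python
import numpy as np

def moment(j):                       # int_{-1}^{1} t^j dt
    return 2.0/(j + 1) if j % 2 == 0 else 0.0

def radau_moment(j):                 # int t^j dlambda_a(t) with dlambda_a = (t+1) dt
    return moment(j + 1) + moment(j)

# n = 2: omega_1(t) = t - c must be orthogonal to constants w.r.t. dlambda_a
c = radau_moment(1) / radau_moment(0)
nodes = np.array([-1.0, c])          # prescribed node a = -1 plus the free node

# interpolatory weights via the moment (Vandermonde) system
V = np.vander(nodes, 2, increasing=True).T
w = np.linalg.solve(V, np.array([moment(0), moment(1)]))
```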
The corresponding interpolatory quadrature rule,

(2.8)   $\int f(t)\,d\lambda(t) = \lambda_1^R f(a) + \sum_{\nu=2}^{n} \lambda_\nu^R f(\tau_\nu^R) + R_n^R(f)$,

is called the Gauss-Radau rule for $d\lambda$ (Radau [19]). There is an analogous rule in the case $b = \sup \operatorname{supp} d\lambda < \infty$, with, say, $\tau_n = b$. Indeed, both rules make sense also in the case $a < \inf \operatorname{supp} d\lambda$ resp. $b > \sup \operatorname{supp} d\lambda$. They are special cases of a quadrature rule already considered by Christoffel in 1858 ([3]).

2.3. The Gauss-Lobatto quadrature rule. If the support of $d\lambda$ is bounded from both sides, we may take $\tau_1 = a \le \inf \operatorname{supp} d\lambda$ and $\tau_n = b \ge \sup \operatorname{supp} d\lambda$, thereby restricting the degree of freedom once more. The optimal quadrature rule becomes the Gauss-Lobatto rule for $d\lambda$ ([15]),

(2.9)   $\int f(t)\,d\lambda(t) = \lambda_1^L f(a) + \sum_{\nu=2}^{n-1} \lambda_\nu^L f(\tau_\nu^L) + \lambda_n^L f(b) + R_n^L(f)$,

whose interior nodes $\tau_\nu^L$ are the zeros of

(2.10)   $\pi_{n-2}(\cdot\,;d\lambda_{a,b})$, $\quad d\lambda_{a,b} = (t-a)(b-t)\,d\lambda(t)$.

It, too, is a special case of the Christoffel quadrature rule, which has an arbitrary number of prescribed nodes outside or on the boundary of the support of $d\lambda$.

2.4. The Gauss-Kronrod rule. This quadrature rule also has prescribed nodes, but they all are in the interior of the support of $d\lambda$, and it therefore transcends the class of Christoffel quadrature rules. In trying to estimate the error of the Gauss quadrature rule, Kronrod in 1964 ([12], [13]) indeed constructed a $(2n+1)$-point quadrature rule of maximum algebraic degree of exactness that has $n$ prescribed nodes (the $n$ Gauss nodes $\tau_\nu^G$) in addition to $n+1$ free nodes. It thus has the form

(2.11)   $\int f(t)\,d\lambda(t) = \sum_{\nu=1}^{n} \lambda_\nu^K f(\tau_\nu^G) + \sum_{\mu=1}^{n+1} \lambda_\mu^{K*} f(\tau_\mu^K) + R_n^K(f)$.

Here, the node polynomial is

(2.12)   $\omega_{2n+1}(t) = \pi_n(t;\,d\lambda)\,\pi_{n+1}^{*}(t)$, $\quad \pi_{n+1}^{*}(t) = \prod_{\mu=1}^{n+1} (t - \tau_\mu^K)$,

and, according to Theorem 2.1 with $n$ replaced by $2n+1$, the quadrature rule (2.11) has degree of exactness $d = 2n+k$ if and only if (2.11) is interpolatory and

$\int \pi_{n+1}^{*}(t)\,p(t)\,\pi_n(t;\,d\lambda)\,d\lambda(t) = 0$ for all $p \in \mathbb{P}_{k-1}$.
The optimal value of $k$ is $k = n+1$, in which case $\pi_{n+1}^{*}$ is orthogonal to all polynomials of lower degree, i.e.,

(2.13)   $\pi_{n+1}^{*}(t) = \pi_{n+1}(t;\,\pi_n\,d\lambda)$.

That is, $\pi_{n+1}^{*}$ is the (monic) polynomial of degree $n+1$ orthogonal relative to the oscillating measure $d\lambda_n(t) = \pi_n(t;\,d\lambda)\,d\lambda(t)$. While $\pi_{n+1}^{*}$ always exists uniquely, there is no assurance that all its zeros are inside the support of $d\lambda$, or even real (cf. [5]).
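The construction (2.13) can be carried out numerically for small $n$. The sketch below takes the Legendre measure $d\lambda(t) = dt$ on $[-1,1]$ and $n = 2$ (so $\pi_2(t) = t^2 - 1/3$), and determines the monic $\pi_3^{*}$ orthogonal to $\mathbb{P}_2$ with respect to the oscillating measure $\pi_2(t)\,dt$ from its moments. Its zeros, $0$ and $\pm\sqrt{6/7}$, are the Kronrod nodes added to the two Gauss nodes $\pm 1/\sqrt{3}$; in this example they do all lie inside the support:

```python
import numpy as np

def moment(j):                      # int_{-1}^{1} t^j dt
    return 2.0/(j + 1) if j % 2 == 0 else 0.0

def osc_moment(j):                  # int t^j pi_2(t) dt with pi_2(t) = t^2 - 1/3
    return moment(j + 2) - moment(j)/3.0

# monic pi*_3(t) = t^3 + a2 t^2 + a1 t + a0, orthogonal to 1, t, t^2
# with respect to the sign-variable measure dlambda_2 = pi_2(t) dt
A = np.array([[osc_moment(i + k) for k in range(3)] for i in range(3)])
rhs = -np.array([osc_moment(i + 3) for i in range(3)])
a0, a1, a2 = np.linalg.solve(A, rhs)
kron_nodes = np.sort(np.roots([1.0, a2, a1, a0]).real)
```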
3. Computation of Gauss-type quadrature rules. All quadrature rules introduced in §2 can be computed via eigenvalues and eigenvectors of a symmetric tridiagonal matrix. Part or all of this matrix is made up of the coefficients in the three-term recurrence relation satisfied by the (monic) orthogonal polynomials $\pi_k(\cdot) = \pi_k(\cdot\,;d\lambda)$ relative to the (positive) measure $d\lambda$,

(3.1)   $\pi_{k+1}(t) = (t - \alpha_k)\pi_k(t) - \beta_k \pi_{k-1}(t)$, $\quad k = 0, 1, 2, \ldots, n-1$,
        $\pi_{-1}(t) = 0$, $\quad \pi_0(t) = 1$,

where $\alpha_k = \alpha_k(d\lambda)$, $\beta_k = \beta_k(d\lambda) > 0$ depend on $d\lambda$, and $\beta_0 = \int d\lambda(t)$ by convention. The tridiagonal matrices involved are, respectively, the Jacobi, the Jacobi-Radau, the Jacobi-Lobatto, and the Jacobi-Kronrod matrix.

3.1. The Gauss quadrature rule. The Jacobi matrix of order $n$ is defined by

(3.2)   $J_n^G = J_n^G(d\lambda) = \begin{bmatrix} \alpha_0 & \sqrt{\beta_1} & & \\ \sqrt{\beta_1} & \alpha_1 & \ddots & \\ & \ddots & \ddots & \sqrt{\beta_{n-1}} \\ & & \sqrt{\beta_{n-1}} & \alpha_{n-1} \end{bmatrix}$.

The Gauss formula (2.6) can be obtained in terms of the eigenvalues and eigenvectors of $J_n^G$ according to the following theorem.

THEOREM 3.1. (Golub and Welsch [10]) The Gauss nodes $\tau_\nu^G$ are the eigenvalues of $J_n^G$, and the Gauss weights $\lambda_\nu^G$ are given by

(3.3)   $\lambda_\nu^G = \beta_0\,[u_{\nu,1}^G]^2$, $\quad \nu = 1, 2, \ldots, n$,

where $u_\nu^G$ is the normalized eigenvector of $J_n^G$ corresponding to the eigenvalue $\tau_\nu^G$ (i.e., $[u_\nu^G]^T u_\nu^G = 1$) and $u_{\nu,1}^G$ its first component.

Thus, to compute the nodes and weights of the Gauss formula, it suffices to compute the eigenvalues and first components of the eigenvectors of the Jacobi matrix $J_n^G$. Efficient methods for this, such as the QR algorithm, are well known (see, e.g., Parlett [18]). Since the first components $u_{\nu,1}^G$ of $u_\nu^G$ are easily seen to be nonzero, the positivity of the Gauss rule can be read off from (3.3).

3.2. The Gauss-Radau formula. We replace $n$ by $n+1$ in (2.8) so as to have $n$ interior nodes,

(3.4)   $\int f(t)\,d\lambda(t) = \lambda_0^R f(a) + \sum_{\nu=1}^{n} \lambda_\nu^R f(\tau_\nu^R) + R_{n+1}^R(f)$, $\quad R_{n+1}^R(\mathbb{P}_{2n}) = 0$.
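Theorem 3.1 translates directly into a few lines of code: build the Jacobi matrix from the recurrence coefficients, diagonalize, and read off nodes and weights. A minimal sketch for the Legendre measure ($\alpha_k = 0$, $\beta_0 = 2$, $\beta_k = k^2/(4k^2-1)$); `numpy.linalg.eigh` stands in here for the QR algorithm, and the result is checked against NumPy's built-in Gauss-Legendre rule:

```python
import numpy as np

def gauss_from_jacobi(alpha, beta):
    """Golub-Welsch: Gauss nodes/weights from the recurrence coefficients
    alpha[0..n-1], beta[0..n-1], where beta[0] is the integral of dlambda."""
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta[1:]), 1)
         + np.diag(np.sqrt(beta[1:]), -1))
    tau, U = np.linalg.eigh(J)       # eigenvalues = Gauss nodes (Theorem 3.1)
    lam = beta[0] * U[0, :]**2       # weights from first eigenvector components
    return tau, lam

# Legendre measure dlambda(t) = dt on [-1,1]
n = 5
alpha = np.zeros(n)
beta = np.array([2.0] + [k*k/(4.0*k*k - 1.0) for k in range(1, n)])
tau, lam = gauss_from_jacobi(alpha, beta)
```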
The Jacobi-Radau matrix of order $n+1$ is then defined by

(3.5)   $J_{n+1}^R = J_{n+1}^R(d\lambda) = \begin{bmatrix} J_n^G(d\lambda) & \sqrt{\beta_n}\,e_n \\ \sqrt{\beta_n}\,e_n^T & \alpha_n^R \end{bmatrix}$,

where $J_n^G(d\lambda)$ is as in (3.2), $e_n^T = [0, 0, \ldots, 1]$ is the $n$th coordinate vector in $\mathbb{R}^n$, and

(3.6)   $\alpha_n^R = a - \beta_n\,\dfrac{\pi_{n-1}(a)}{\pi_n(a)}$,

with $\pi_k(\cdot) = \pi_k(\cdot\,;d\lambda)$ as before. One then has the following theorem analogous to Theorem 3.1.

THEOREM 3.2. (Golub [9]) The Gauss-Radau nodes $\tau_0^R = a, \tau_1^R, \ldots, \tau_n^R$ in (3.4) are the eigenvalues of $J_{n+1}^R$, and the Gauss-Radau weights $\lambda_\nu^R$ are given by

(3.7)   $\lambda_\nu^R = \beta_0\,[u_{\nu,1}^R]^2$, $\quad \nu = 0, 1, 2, \ldots, n$,

where $u_\nu^R$ is the normalized eigenvector of $J_{n+1}^R$ corresponding to the eigenvalue $\tau_\nu^R$ (i.e., $[u_\nu^R]^T u_\nu^R = 1$) and $u_{\nu,1}^R$ its first component.

As previously for the Gauss formula, the QR algorithm, now applied to $J_{n+1}^R$, is again the method of choice to compute Gauss-Radau formulae. Their positivity follows from (3.7). Theorem 3.2 remains in force if $a \le \inf \operatorname{supp} d\lambda$. If the right end point is the prescribed node, $\tau_{n+1}^R = b$, or if $\tau_{n+1}^R = b \ge \sup \operatorname{supp} d\lambda$, a theorem analogous to Theorem 3.2 holds with obvious changes.

3.3. The Gauss-Lobatto formula. We now replace $n$ by $n+2$ in (2.9) and write the Gauss-Lobatto formula in the form

(3.8)   $\int f(t)\,d\lambda(t) = \lambda_0^L f(a) + \sum_{\nu=1}^{n} \lambda_\nu^L f(\tau_\nu^L) + \lambda_{n+1}^L f(b) + R_{n+2}^L(f)$, $\quad R_{n+2}^L(\mathbb{P}_{2n+1}) = 0$.

The Jacobi-Lobatto matrix of order $n+2$ is defined by

(3.9)   $J_{n+2}^L = J_{n+2}^L(d\lambda) = \begin{bmatrix} J_n^G(d\lambda) & \sqrt{\beta_n}\,e_n & 0 \\ \sqrt{\beta_n}\,e_n^T & \alpha_n & \sqrt{\beta_{n+1}^L} \\ 0^T & \sqrt{\beta_{n+1}^L} & \alpha_{n+1}^L \end{bmatrix}$

with $J_n^G(d\lambda)$ and $e_n$ as before, and $\alpha_{n+1}^L$, $\beta_{n+1}^L$ the solution of the $2 \times 2$ linear system

(3.10)   $\begin{bmatrix} \pi_{n+1}(a) & \pi_n(a) \\ \pi_{n+1}(b) & \pi_n(b) \end{bmatrix} \begin{bmatrix} \alpha_{n+1}^L \\ \beta_{n+1}^L \end{bmatrix} = \begin{bmatrix} a\,\pi_{n+1}(a) \\ b\,\pi_{n+1}(b) \end{bmatrix}$.

We now have

THEOREM 3.3. (Golub [9]) The Gauss-Lobatto nodes $\tau_0^L = a, \tau_1^L, \ldots, \tau_n^L, \tau_{n+1}^L = b$ in (3.8) are the eigenvalues of $J_{n+2}^L$, and the Gauss-Lobatto weights $\lambda_\nu^L$ are given by

(3.11)   $\lambda_\nu^L = \beta_0\,[u_{\nu,1}^L]^2$, $\quad \nu = 0, 1, 2, \ldots, n, n+1$,

where $u_\nu^L$ is the normalized eigenvector of $J_{n+2}^L$ corresponding to the eigenvalue $\tau_\nu^L$ (i.e., $[u_\nu^L]^T u_\nu^L = 1$) and $u_{\nu,1}^L$ its first component.

Gauss-Lobatto formulae are therefore also computable by the QR algorithm, now applied to $J_{n+2}^L$, and by (3.11) we still have positivity of the quadrature rule. Theorem 3.3 holds without change for $a \le \inf \operatorname{supp} d\lambda$ and $b \ge \sup \operatorname{supp} d\lambda$.

3.4. The Gauss-Kronrod formula.
It has only recently been discovered by Laurie that an eigenvalue/eigenvector characterization similar to those in Theorems 3.1-3.3 holds also for the Gauss-Kronrod rule (2.11). The Jacobi-Kronrod matrix of order $2n+1$ now has the form

(3.12)   $J_{2n+1}^K = J_{2n+1}^K(d\lambda) = \begin{bmatrix} J_n^G(d\lambda) & \sqrt{\beta_n}\,e_n & \\ \sqrt{\beta_n}\,e_n^T & \alpha_n & \sqrt{\beta_{n+1}}\,e_1^T \\ & \sqrt{\beta_{n+1}}\,e_1 & J_n^{*} \end{bmatrix}$,
where $e_1^T = [1, 0, \ldots, 0] \in \mathbb{R}^n$ and $J_n^{*}$ is a real symmetric tridiagonal matrix of order $n$ which has different forms depending on whether $n$ is odd or even:

(3.13)   $J_n^{*} = \begin{bmatrix} J_{n+1:(3n-1)/2}^G & \sqrt{\beta_{(3n+1)/2}}\,e_{(n-1)/2} \\ \sqrt{\beta_{(3n+1)/2}}\,e_{(n-1)/2}^T & J_{(3n+1)/2:2n}^{*} \end{bmatrix}$ ($n$ odd),

(3.14)   $J_n^{*} = \begin{bmatrix} J_{n+1:3n/2}^G & \sqrt{\beta_{(3n+2)/2}^{*}}\,e_{n/2} \\ \sqrt{\beta_{(3n+2)/2}^{*}}\,e_{n/2}^T & J_{(3n+2)/2:2n}^{*} \end{bmatrix}$ ($n$ even).

Here, $J_{p:q}^G$ is the principal minor matrix of the (infinite) Jacobi matrix $J^G$ having diagonal elements $\alpha_p, \alpha_{p+1}, \ldots, \alpha_q$, while $J_{(3n+1)/2:2n}^{*}$ resp. $J_{(3n+2)/2:2n}^{*}$ are symmetric tridiagonal matrices, and $\beta_{(3n+2)/2}^{*}$ an element, yet to be determined.

THEOREM 3.4. (Laurie [14]) Let $\lambda_\nu^K > 0$ and $\lambda_\mu^{K*} > 0$ in (2.11). Then $J_n^{*}$ in (3.12) has the same eigenvalues as $J_n^G$. Moreover, the nodes $\tau_\nu^G$ and $\tau_\mu^K$ are the eigenvalues of $J_{2n+1}^K$, and the Gauss-Kronrod weights are given by

(3.15)   $\lambda_\nu^K = \beta_0\,[u_{\nu,1}^K]^2$, $\nu = 1, \ldots, n$; $\quad \lambda_\mu^{K*} = \beta_0\,[u_{n+\mu,1}^K]^2$, $\mu = 1, \ldots, n, n+1$,

where $u_1^K, u_2^K, \ldots, u_{2n+1}^K$ are the normalized eigenvectors of $J_{2n+1}^K$ corresponding to the eigenvalues $\tau_1^G, \ldots, \tau_n^G, \tau_1^K, \ldots, \tau_{n+1}^K$, and $u_{1,1}^K, u_{2,1}^K, \ldots, u_{2n+1,1}^K$ their first components. Conversely, if the eigenvalues of $J_n^G$ and $J_n^{*}$ are the same, then (2.11) exists with real nodes and positive weights.

We remark that according to a result of Monegato [16] the positivity of $\lambda_\mu^{K*}$ implies the reality of the Kronrod nodes $\tau_\mu^K$ and their interlacing with the Gauss nodes $\tau_\nu^G$. Once the trailing tridiagonal blocks in (3.13), (3.14), and $\beta_{(3n+2)/2}^{*}$ if $n$ is even, are known, the Gauss-Kronrod formula can be computed in much the same way as the Gauss formula, in terms of eigenvalues and eigenvectors of the symmetric tridiagonal matrix $J_{2n+1}^K$. This in fact is the way Laurie's algorithm proceeds. Note, however, that when the nodes $\tau_\nu^G$ are already known, there is a certain redundancy in this algorithm inasmuch as these Gauss nodes are recomputed along with the Kronrod nodes $\tau_\mu^K$.
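For small $n$ the conclusions of Theorem 3.4 are easy to check without any eigenvalue machinery. For $n = 2$ and the Legendre measure, the sign-variable orthogonality (2.13) gives $\pi_3^{*}(t) = t^3 - (6/7)t$, so the Kronrod nodes are $0, \pm\sqrt{6/7}$. The sketch below forms the interpolatory $5$-point rule on these nodes together with the Gauss nodes $\pm 1/\sqrt{3}$ and verifies positivity of the weights and the degree of exactness $d = 2n + k = 7$:

```python
import numpy as np

def moment(j):                      # int_{-1}^{1} t^j dt
    return 2.0/(j + 1) if j % 2 == 0 else 0.0

# n = 2: Gauss nodes +-1/sqrt(3), Kronrod nodes 0, +-sqrt(6/7)
nodes = np.sort(np.array([-np.sqrt(6.0/7.0), -1/np.sqrt(3.0), 0.0,
                          1/np.sqrt(3.0), np.sqrt(6.0/7.0)]))

# interpolatory weights from the moment (Vandermonde) system
V = np.vander(nodes, 5, increasing=True).T
w = np.linalg.solve(V, np.array([moment(j) for j in range(5)]))

# worst-case error over monomials up to degree 7 (exactness d = 7)
err = max(abs(w @ nodes**j - moment(j)) for j in range(8))
```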
This redundancy is eliminated in a more recent algorithm of Calvetti, Golub, Gragg, and Reichel, which bypasses the computation of the trailing block in (3.12) and focuses directly on the Kronrod nodes and weights of the Gauss-Kronrod formula.

3.5. Laurie's algorithm. ([14]) Assume that the hypotheses of Theorem 3.4 are fulfilled. Let, as before, $\{\pi_k\}_{k=0}^{n}$ denote the (monic) polynomials belonging to the Jacobi matrix $J_n^G(d\lambda)$, and thus orthogonal with respect to the measure $d\lambda$, and let $\{\pi_k^{*}\}_{k=0}^{n}$ be the (monic) orthogonal polynomials belonging to the matrix $J_n^{*}$ in (3.12) and measure $d\mu$ (in general unknown). Define mixed moments by

(3.16)   $\sigma_{k,l} = (\pi_k^{*}, \pi_l)_{d\mu}$,

where $(\cdot\,,\cdot)_{d\mu}$ is the inner product relative to the measure $d\mu$. Although the measure $d\mu$ is unknown, a few things about the moments $\sigma_{k,l}$ are known. For example,

(3.17)   $\sigma_{k,l} = 0$ for $l < k$,

which is an immediate consequence of orthogonality. Also,

(3.18)   $\sigma_{k,n} = 0$ for $k < n$,
again by orthogonality, since $\pi_n = \pi_n^{*}$ by Theorem 3.4. It is easy, moreover, to derive the recurrence relation

(3.19)   $\sigma_{k,l+1} - \sigma_{k+1,l} - (\alpha_k^{*} - \alpha_l)\,\sigma_{k,l} - \beta_k^{*}\,\sigma_{k-1,l} + \beta_l\,\sigma_{k,l-1} = 0$,

where $\{\alpha_k^{*}\}_{k=0}^{n-1}$ are the diagonal elements of $J_n^{*}$ and $\{\beta_k^{*}\}_{k=1}^{n-1}$ the elements on the side diagonals, while $\{\alpha_l\}_{l=0}^{n-1}$, $\{\beta_l\}_{l=1}^{n-1}$ are the analogous elements of $J_n^G$. Some of the elements $\alpha_k^{*}$, $\beta_k^{*}$ are known according to the structure of the matrices in (3.13) and (3.14). Indeed, if we assume for definiteness that $n$ is odd, then

(3.20)   $\alpha_k^{*} = \alpha_{n+1+k}$ for $k \le (n-3)/2$, $\quad \beta_k^{*} = \beta_{n+1+k}$ for $k \le (n-1)/2$.

Using the facts that $\sigma_{0,0} = \int d\mu(t) = \beta_0^{*} = \beta_{n+1}$ and $\sigma_{-1,l} = 0$ for $l = 0, 1, \ldots, n-1$, $\sigma_{0,-1} = 0$, and $\sigma_{k,k-2} = \sigma_{k,k-1} = 0$ for $k = 1, 2, \ldots, (n-1)/2$, one can solve (3.19) for $\sigma_{k,l+1}$ and compute the entries $\sigma_{k,l}$ in the triangular array indicated by black dots in Fig. 3.1 (drawn for $n = 7$), since the $\alpha_k^{*}$ and $\beta_k^{*}$ required are known by (3.20), except for the top element in the triangle. For this element the coefficient $\alpha_k^{*}$ for $k = (n-1)/2$ is not yet known, but, by good fortune, it multiplies the element $\sigma_{(n-1)/2,(n-3)/2}$, which is zero by (3.17). This mode of recursion from left to right has already been used by Salzer [21] in another context.

FIG. 3.1. Computation of the mixed moments (drawn for $n = 7$).

At this point, one can switch to a recursion from bottom up, using the recurrence relation (3.19) solved for $\sigma_{k+1,l}$. This computes all entries indicated by a cross in Fig. 3.1, proceeding from the very bottom to the very top of the array. For each $k$ with $(n-1)/2 \le k \le n-1$, the entries in Fig. 3.1 surrounded by boxes are those used to compute the as yet unknown $\alpha_k^{*}$, $\beta_k^{*}$ according to

(3.21)   $\alpha_k^{*} = \alpha_k + \dfrac{\sigma_{k,k+1}}{\sigma_{k,k}} - \dfrac{\sigma_{k-1,k}}{\sigma_{k-1,k-1}}$, $\quad \beta_k^{*} = \dfrac{\sigma_{k,k}}{\sigma_{k-1,k-1}}$.

This is the way Sack and Donovan [20] proceeded to generate modified moments in the modified Chebyshev algorithm (cf. [6, §5.2]). In the present case, crucial use is made of the
property (3.18) of mixed moments, which provides the zero boundary values at the right edge of the $\sigma$-tableau. Once in possession of all the $\alpha_k^{*}$, $\beta_k^{*}$ required to compute $J_n^{*}$, one proceeds as described earlier.

3.6. The algorithm of Calvetti, Golub, Gragg, and Reichel. ([2]) Here we assume for simplicity that $n$ is even. There is a similar, though slightly more complicated, algorithm when $n$ is odd. The symmetric tridiagonal matrix $J_n^{*}$ in (3.12) determines its own set of orthogonal polynomials, in particular, an $n$-point Gauss quadrature rule relative to some (unknown) measure $d\mu$. Since by Theorem 3.4 the eigenvalues of $J_n^{*}$ are the same as those of $J_n^G$, the Gauss rule in question has nodes $\tau_\nu^G$ and certain positive weights $\mu_\nu$, $\nu = 1, 2, \ldots, n$. Likewise, the (known) matrix $J_{n+1:3n/2}^G$ of order $n/2$ in (3.14) determines another Gauss rule whose nodes and weights we denote respectively by $\hat{\tau}_\kappa$ and $\hat{\lambda}_\kappa$, $\kappa = 1, 2, \ldots, n/2$. Let $v = [v_1, v_2, \ldots, v_n]$ and $v^{*} = [v_1^{*}, v_2^{*}, \ldots, v_n^{*}]$ be the matrices of normalized eigenvectors of $J_n^G$ and $J_n^{*}$, respectively. The algorithm requires the last components $v_{1,n}, v_{2,n}, \ldots, v_{n,n}$ of the eigenvectors $v_1, v_2, \ldots, v_n$ and the first components $v_{1,1}^{*}, v_{2,1}^{*}, \ldots, v_{n,1}^{*}$ of the eigenvectors $v_1^{*}, v_2^{*}, \ldots, v_n^{*}$. The latter, according to Theorem 3.1, are related to the Gauss weights $\mu_\nu$ by means of the relation

(3.22)   $\beta_0^{*}\,[v_{\nu,1}^{*}]^2 = \mu_\nu$, $\quad \nu = 1, 2, \ldots, n$,

and can therefore be computed as the positive square roots of $\mu_\nu/\beta_0^{*}$. It can be easily shown, on the other hand, that the weights $\mu_\nu$ are computable with the help of the second Gauss rule above as

(3.23)   $\mu_\nu = \sum_{\kappa=1}^{n/2} \ell_\nu(\hat{\tau}_\kappa)\,\hat{\lambda}_\kappa$, $\quad \nu = 1, 2, \ldots, n$,

where $\ell_\nu$ are the elementary Lagrange polynomials associated with the nodes $\tau_1^G, \tau_2^G, \ldots, \tau_n^G$. With these auxiliary quantities computed, the remainder of the algorithm consists of the consolidation phase of a divide-and-conquer algorithm due to Borges and Gragg ([1]).
The spectral decomposition of $J_n^{*}$ is

(3.24)   $J_n^{*} = v^{*} D_\tau [v^{*}]^T$, $\quad D_\tau = \operatorname{diag}(\tau_1^G, \tau_2^G, \ldots, \tau_n^G)$.

Define

(3.25)   $V := \begin{bmatrix} v & & \\ & 1 & \\ & & v^{*} \end{bmatrix}$,

a matrix of order $2n+1$. We then have from (3.12) that

(3.26)   $V^T (J_{2n+1}^K - \lambda I) V = \begin{bmatrix} D_\tau - \lambda I & \sqrt{\beta_n}\,v^T e_n & \\ \sqrt{\beta_n}\,e_n^T v & \alpha_n - \lambda & \sqrt{\beta_{n+1}}\,e_1^T v^{*} \\ & \sqrt{\beta_{n+1}}\,[v^{*}]^T e_1 & D_\tau - \lambda I \end{bmatrix}$,

where the matrix on the right is a diagonal matrix plus a "Swiss cross" containing the elements of $\sqrt{\beta_n}\,e_n^T v$ and $\sqrt{\beta_{n+1}}\,e_1^T v^{*}$ previously computed. By orthogonal similarity transformations involving
a permutation and a sequence of Givens rotations, the matrix in (3.26) can be reduced to the form

(3.27)   $\tilde{V}^T (J_{2n+1}^K - \lambda I) \tilde{V} = \begin{bmatrix} D_\tau - \lambda I & & \\ & D_\tau - \lambda I & c \\ & c^T & \alpha_n - \lambda \end{bmatrix}$,

where $\tilde{V}$ is the transformed matrix $V$ and $c$ a vector containing the entries in positions $n+1$ to $2n$ of the transformed vector $[\sqrt{\beta_n}\,e_n^T v, \sqrt{\beta_{n+1}}\,e_1^T v^{*}, \alpha_n]$. It is now evident from (3.27) that one set of eigenvalues of $J_{2n+1}^K$ is $\{\tau_1^G, \tau_2^G, \ldots, \tau_n^G\}$. The remaining eigenvalues are those of the trailing block in (3.27). To compute them, observe that

$\begin{bmatrix} D_\tau - \lambda I & c \\ c^T & \alpha_n - \lambda \end{bmatrix} = \begin{bmatrix} I & 0 \\ c^T (D_\tau - \lambda I)^{-1} & 1 \end{bmatrix} \begin{bmatrix} D_\tau - \lambda I & c \\ 0^T & -f(\lambda) \end{bmatrix}$,

where, in terms of the components $c_\nu$ of $c$,

(3.28)   $f(\lambda) = \lambda - \alpha_n + \sum_{\nu=1}^{n} \dfrac{c_\nu^2}{\tau_\nu^G - \lambda}$.

Thus, the remaining eigenvalues of $J_{2n+1}^K$ are the zeros of $f(\lambda)$, which can be seen to interlace with the nodes $\tau_\nu^G$. The first components of the normalized eigenvectors $u_1^K, u_2^K, \ldots, u_{2n+1}^K$ needed according to Theorem 3.4 are also computable from the columns of $V$ by keeping track of the orthogonal transformations.

4. Quadrature rules with multiple nodes. We now consider quadrature rules of the form

(4.1)   $\int f(t)\,d\lambda(t) = \sum_{\nu=1}^{n} \sum_{\rho=0}^{r-1} \lambda_\nu^{(\rho)} f^{(\rho)}(\tau_\nu) + R_n(f)$,

where each $\tau_\nu$ is a node of multiplicity $r$. The underlying interpolation process is the one of Hermite, which is exact for polynomials of degree $rn - 1$. We therefore call (4.1) interpolatory if it has degree of exactness $d = rn - 1$. As before, the node polynomial is defined by

(4.2)   $\omega_n(t) = \prod_{\nu=1}^{n} (t - \tau_\nu)$.

In analogy to Theorem 2.1, we now have the following theorem.

THEOREM 4.1. The quadrature rule (4.1) has degree of exactness $d = rn - 1 + k$, $k \ge 0$, if and only if both of the following conditions are satisfied: (a) the formula (4.1) is interpolatory; (b) the node polynomial $\omega_n$ satisfies

(4.3)   $\int [\omega_n(t)]^r\,p(t)\,d\lambda(t) = 0$ for all $p \in \mathbb{P}_{k-1}$.

The case $k = 0$ corresponds to the Newton-Cotes-Hermite formula, which requires no extra condition beyond that of being interpolatory.
If $k > 0$, the condition (b) is again a condition involving only the nodes $\tau_\nu$, namely orthogonality of the $r$th power of $\omega_n$ to all polynomials of degree $\le k-1$. It is immediately clear from (4.3) that for positive measures $d\lambda$ the multiplicity $r$ must be an odd integer, since otherwise we could not have orthogonality of $\omega_n^r$ to a constant, let alone to $\mathbb{P}_{k-1}$. It is customary, therefore, to write

(4.4)   $r = 2s + 1$, $\quad s \ge 0$.

It then follows that $k \le n$, since otherwise $k = n+1$ would imply orthogonality of $\omega_n^r$ to all polynomials of degree $\le n$, in particular, orthogonality to $\omega_n$, which, $r$ being odd, is impossible. Thus, $k = n$ is optimal; the corresponding interpolatory quadrature rule is called the Gauss-Turán rule for the measure $d\lambda$ ([22]). For it, the polynomial $\omega_n^{2s+1}$ is orthogonal to all polynomials of degree $\le n-1$; the polynomial $\omega_n$ is thus the so-called $s$-orthogonal polynomial of degree $n$. There is an extensive theory of these polynomials, for which we refer to the book of Ghizzetti and Ossicini [8, §§3.9, 4.13]. We mention here only the elegant result of Turán [22], according to which $\omega_n$ is the extremal polynomial of

(4.5)   $\int [\omega(t)]^{2s+2}\,d\lambda(t) = \min$,

where the minimum is sought among all monic polynomials $\omega$ of degree $n$. From this, there follows in particular the existence and uniqueness of real Gauss-Turán formulae, but not necessarily their positivity. Nevertheless, it has been shown by Ossicini and Rosati [17] that $\lambda_\nu^{(\rho)} > 0$, $\nu = 1, 2, \ldots, n$, if $\rho$ is even.

5. Computation of Gauss-Turán quadrature rules. (Gautschi and Milovanović [7]) The computation of the Gauss-Turán formula

(5.1)   $\int f(t)\,d\lambda(t) = \sum_{\nu=1}^{n} \sum_{\sigma=0}^{2s} \lambda_\nu^{(\sigma)T} f^{(\sigma)}(\tau_\nu^T) + R_n^T(f)$, $\quad R_n^T(\mathbb{P}_{2(s+1)n-1}) = 0$,

involves two stages: (i) The generation of the $s$-orthogonal polynomial $\pi_n = \pi_{n,s}$, i.e., the polynomial $\pi_n$ satisfying the orthogonality relation

(5.2)   $\int [\pi_n(t)]^{2s+1}\,p(t)\,d\lambda(t) = 0$ for all $p \in \mathbb{P}_{n-1}$;

the zeros of $\pi_n$ are the desired nodes $\tau_\nu^T$ in (5.1). (ii) The computation of the weights $\lambda_\nu^{(\sigma)T}$. Since the latter stage requires knowledge of the nodes $\tau_\nu^T$, we begin with the computation of the $s$-orthogonal polynomial $\pi_{n,s}$.
We reinterpret the power orthogonality in (5.2) as ordinary orthogonality with respect to the measure

(5.3)   $d\lambda_{n,s}(t) = [\pi_n(t)]^{2s}\,d\lambda(t)$,

which, like $d\lambda$, is also positive,

(5.4)   $\int \pi_n(t)\,p(t)\,d\lambda_{n,s}(t) = 0$ for all $p \in \mathbb{P}_{n-1}$.

Since the measure $d\lambda_{n,s}$ involves the unknown $\pi_n$, one also talks about implicit orthogonality.
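A classical illustration of (5.2): with respect to the Chebyshev measure $d\lambda(t) = (1-t^2)^{-1/2}\,dt$, the monic Chebyshev polynomial of degree $n$ is $s$-orthogonal for every $s$, since it minimizes (4.5) for all $s$. The sketch below checks (5.2) for $n = 3$, $s = 2$, using an $m$-point Gauss-Chebyshev rule, which is exact for polynomial integrands of degree up to $2m-1$:

```python
import numpy as np

def pi3(t):                         # monic Chebyshev: pi_3(t) = t^3 - (3/4) t = T_3(t)/4
    return t**3 - 0.75*t

# m-point Gauss-Chebyshev rule: nodes cos((2i-1)pi/(2m)), equal weights pi/m
m = 20
t = np.cos((2*np.arange(1, m + 1) - 1) * np.pi / (2*m))
w = np.full(m, np.pi/m)

s = 2
# s-orthogonality (5.2): int pi_3^{2s+1} t^j dlambda = 0 for j = 0, 1, 2
# (integrand degree (2s+1)*3 + 2 = 17 <= 2m-1 = 39, so the rule is exact)
residuals = [abs(w @ (pi3(t)**(2*s + 1) * t**j)) for j in range(3)]
```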
The measure $d\lambda_{n,s}$ being positive, it defines a sequence $\pi_k(t) = \pi_k(t;\,d\lambda_{n,s})$, $k = 0, 1, \ldots, n$, of orthogonal polynomials, of which only the last one, for $k = n$, is of interest to us. They satisfy a three-term recurrence relation (3.1), with coefficients $\alpha_k$, $\beta_k$ given by the well-known inner product formulae

(5.5)   $\alpha_k = \dfrac{\int t\,\pi_k^2(t)\,d\lambda_{n,s}(t)}{\int \pi_k^2(t)\,d\lambda_{n,s}(t)}$, $\quad k = 0, 1, \ldots, n-1$,
        $\beta_0 = \int d\lambda_{n,s}(t)$, $\quad \beta_k = \dfrac{\int \pi_k^2(t)\,d\lambda_{n,s}(t)}{\int \pi_{k-1}^2(t)\,d\lambda_{n,s}(t)}$, $\quad k = 1, \ldots, n-1$.

This constitutes a system of $2n$ nonlinear equations for the $2n$ coefficients $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$; $\beta_0, \beta_1, \ldots, \beta_{n-1}$, which can be written in the form

(5.6)   $\phi = 0$, $\quad \phi^T = [\phi_0, \phi_1, \ldots, \phi_{2n-1}]$,

where, in view of (5.3),

(5.7)   $\phi_0 := \beta_0 - \int \pi_n^{2s}(t)\,d\lambda(t)$,
        $\phi_{2\nu} := \int [\beta_\nu\,\pi_{\nu-1}^2(t) - \pi_\nu^2(t)]\,\pi_n^{2s}(t)\,d\lambda(t)$, $\quad \nu = 1, \ldots, n-1$,
        $\phi_{2\nu+1} := \int (\alpha_\nu - t)\,\pi_\nu^2(t)\,\pi_n^{2s}(t)\,d\lambda(t)$, $\quad \nu = 0, 1, \ldots, n-1$.

Here, each $\pi_k$ is to be considered a function of $\alpha_0, \ldots, \alpha_{k-1}$; $\beta_1, \ldots, \beta_{k-1}$ by virtue of the three-term recurrence relation (3.1) which it satisfies. Since all integrands in (5.7) are polynomials of degree $\le 2(s+1)n - 1$, the integrals in (5.7) can be computed exactly by an $(s+1)n$-point Gaussian quadrature rule relative to the measure $d\lambda$. Any method for solving systems of nonlinear equations, such as Newton's method or quasi-Newton methods, can now be applied to solve (5.6). (For Newton's method, in this context, see [7].)

We now turn to the computation of the quadrature weights. Here we note that

(5.8)   $\omega_{\rho,\nu}(t) = (t - \tau_\nu^T)^{\rho} \prod_{\mu \ne \nu} (t - \tau_\mu^T)^{2s+1}$, $\quad \rho = 0, 1, \ldots, 2s$; $\nu = 1, \ldots, n$,

are polynomials of degree $\le (2s+1)n - 1$, hence, by (5.1), $R_n^T(\omega_{\rho,\nu}) = 0$. This can be written as

(5.9)   $\sum_{\kappa=1}^{n} \sum_{\sigma=0}^{2s} \lambda_\kappa^{(\sigma)T}\,\omega_{\rho,\nu}^{(\sigma)}(\tau_\kappa^T) = b_{\rho,\nu}$,

with

(5.10)   $b_{\rho,\nu} = \int \omega_{\rho,\nu}(t)\,d\lambda(t)$.

By virtue of the definition (5.8) of $\omega_{\rho,\nu}$, we have $\omega_{\rho,\nu}^{(\sigma)}(\tau_\kappa^T) = 0$ for $\kappa \ne \nu$, so that the system (5.9) breaks up into $n$ separate linear systems for the weights $\lambda_\nu^{(\sigma)T}$, $\sigma = 0, 1, \ldots, 2s$,

(5.11)   $\sum_{\sigma=0}^{2s} \lambda_\nu^{(\sigma)T}\,\omega_{\rho,\nu}^{(\sigma)}(\tau_\nu^T) = b_{\rho,\nu}$, $\quad \rho = 0, 1, \ldots, 2s$.
Each of these systems further simplifies, since $\omega_{\rho,\nu}^{(\sigma)}(\tau_\nu^T) = 0$ for $\rho > \sigma$. That is, for each $\nu$, we obtain the upper triangular system

(5.12)   $\begin{bmatrix} \omega_{0,\nu}(\tau_\nu^T) & \omega_{0,\nu}'(\tau_\nu^T) & \cdots & \omega_{0,\nu}^{(2s)}(\tau_\nu^T) \\ & \omega_{1,\nu}'(\tau_\nu^T) & \cdots & \omega_{1,\nu}^{(2s)}(\tau_\nu^T) \\ & & \ddots & \vdots \\ & & & \omega_{2s,\nu}^{(2s)}(\tau_\nu^T) \end{bmatrix} \begin{bmatrix} \lambda_\nu^{(0)T} \\ \lambda_\nu^{(1)T} \\ \vdots \\ \lambda_\nu^{(2s)T} \end{bmatrix} = \begin{bmatrix} b_{0,\nu} \\ b_{1,\nu} \\ \vdots \\ b_{2s,\nu} \end{bmatrix}$.

The matrix elements can easily be computed by linear recursions (cf. [7]) and the elements of the right-hand vector by $(s+1)n$-point Gauss quadrature as before.

REFERENCES

[1] BORGES, C.F. AND GRAGG, W.B. 1993. A parallel divide and conquer algorithm for the generalized real symmetric definite tridiagonal eigenproblem. In Numerical Linear Algebra (L. Reichel, A. Ruttan, and R.S. Varga, eds.), de Gruyter, Berlin.
[2] CALVETTI, D., GOLUB, G.H., GRAGG, W.B., AND REICHEL, L. Computation of Gauss-Kronrod quadrature rules, to appear.
[3] CHRISTOFFEL, E.B. 1858. Über die Gaußische Quadratur und eine Verallgemeinerung derselben. J. Reine Angew. Math. 55. [Ges. Math. Abhandlungen I.]
[4] GAUSS, C.F. 1814. Methodus nova integralium valores per approximationem inveniendi. Commentationes Societatis Regiae Scientarium Gottingensis Recentiores 2. [Werke III.]
[5] GAUTSCHI, W. 1988. Gauss-Kronrod quadrature: a survey. In Numerical Methods and Approximation Theory III (G.V. Milovanović, ed.), Faculty of Electronic Engineering, Univ. of Niš, Niš.
[6] GAUTSCHI, W. 1996. Orthogonal polynomials: applications and computation. In Acta Numerica 1996 (A. Iserles, ed.), Cambridge University Press, Cambridge.
[7] GAUTSCHI, W. AND MILOVANOVIĆ, G.V. 1997. s-orthogonality and construction of Gauss-Turán-type quadrature formulae. J. Comput. Appl. Math. 86.
[8] GHIZZETTI, A. AND OSSICINI, A. 1970. Quadrature formulae, Academic Press, New York.
[9] GOLUB, G.H. 1973. Some modified matrix eigenvalue problems. SIAM Rev. 15.
[10] GOLUB, G.H. AND WELSCH, J.H. 1969. Calculation of Gauss quadrature rules. Math. Comp. 23. Loose microfiche suppl.
[11] JACOBI, C.G.J. 1826. Ueber Gaußs neue Methode, die Werthe der Integrale näherungsweise zu finden. J. Reine Angew. Math. 1.
[12] KRONROD, A.S. 1964. Integration with control of accuracy (Russian). Dokl. Akad. Nauk SSSR 154.
[13] KRONROD, A.S. 1964. Nodes and weights for quadrature formulae. Sixteen-place tables (Russian). Izdat. Nauka, Moscow. [Engl. transl.: Consultants Bureau, New York, 1965.]
[14] LAURIE, D.P. 1997. Calculation of Gauss-Kronrod quadrature rules. Math. Comp. 66.
[15] LOBATTO, R. 1852. Lessen over de Differentiaal- en Integraal-Rekening. Part II. Integraal-Rekening, Van Cleef, The Hague.
[16] MONEGATO, G. 1976. A note on extended Gaussian quadrature rules. Math. Comp. 30.
[17] OSSICINI, A. AND ROSATI, F. 1978. Sulla convergenza dei funzionali ipergaussiani. Rend. Mat. (6) 11.
[18] PARLETT, B.N. 1980. The symmetric eigenvalue problem, Prentice-Hall Ser. Comput. Math., Prentice-Hall, Englewood Cliffs, NJ. [Corr. repr. in Classics in Applied Mathematics 20, SIAM, Philadelphia, PA, 1998.]
[19] RADAU, R. 1880. Étude sur les formules d'approximation qui servent à calculer la valeur numérique d'une intégrale définie. J. Math. Pures Appl. (3) 6.
[20] SACK, R.A. AND DONOVAN, A.F. 1971/72. An algorithm for Gaussian quadrature given modified moments. Numer. Math. 18.
[21] SALZER, H.E. 1973. A recurrence scheme for converting from one orthogonal expansion into another. Comm. ACM 16.
[22] TURÁN, P. 1950. On the theory of the mechanical quadrature. Acta Sci. Math. Szeged 12. [Collected Papers 1.]
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationOrthogonal Polynomials, Quadrature, and Approximation: Computational Methods and Software (in Matlab)
OPQA p. 1/9 Orthogonal Polynomials, Quadrature, and Approximation: Computational Methods and Software (in Matlab) Walter Gautschi wxg@cs.purdue.edu Purdue University Orthogonal Polynomials OPQA p. 2/9
More informationOn Gauss-type quadrature formulas with prescribed nodes anywhere on the real line
On Gauss-type quadrature formulas with prescribed nodes anywhere on the real line Adhemar Bultheel, Ruymán Cruz-Barroso,, Marc Van Barel Department of Computer Science, K.U.Leuven, Celestijnenlaan 2 A,
More informationSummation of Series and Gaussian Quadratures
Summation of Series Gaussian Quadratures GRADIMIR V. MILOVANOVIĆ Dedicated to Walter Gautschi on the occasion of his 65th birthday Abstract. In 985, Gautschi the author constructed Gaussian quadrature
More informationNUMERICAL MATHEMATICS AND SCIENTIFIC COMPUTATION. Series Editors G. H. GOLUB Ch. SCHWAB
NUMEICAL MATHEMATICS AND SCIENTIFIC COMPUTATION Series Editors G. H. GOLUB Ch. SCHWAB E. SÜLI NUMEICAL MATHEMATICS AND SCIENTIFIC COMPUTATION Books in the series Monographs marked with an asterix ( ) appeared
More informationORTHOGONAL POLYNOMIALS FOR THE OSCILLATORY-GEGENBAUER WEIGHT. Gradimir V. Milovanović, Aleksandar S. Cvetković, and Zvezdan M.
PUBLICATIONS DE L INSTITUT MATHÉMATIQUE Nouvelle série, tome 8498 2008, 49 60 DOI: 02298/PIM0898049M ORTHOGONAL POLYNOMIALS FOR THE OSCILLATORY-GEGENBAUER WEIGHT Gradimir V Milovanović, Aleksandar S Cvetković,
More informationOn the remainder term of Gauss Radau quadratures for analytic functions
Journal of Computational and Applied Mathematics 218 2008) 281 289 www.elsevier.com/locate/cam On the remainder term of Gauss Radau quadratures for analytic functions Gradimir V. Milovanović a,1, Miodrag
More informationLanczos tridiagonalization, Krylov subspaces and the problem of moments
Lanczos tridiagonalization, Krylov subspaces and the problem of moments Zdeněk Strakoš Institute of Computer Science AS CR, Prague http://www.cs.cas.cz/ strakos Numerical Linear Algebra in Signals and
More informationGauss Hermite interval quadrature rule
Computers and Mathematics with Applications 54 (2007) 544 555 www.elsevier.com/locate/camwa Gauss Hermite interval quadrature rule Gradimir V. Milovanović, Alesandar S. Cvetović Department of Mathematics,
More informationCONSTRUCTION OF A COMPLEX JACOBI MATRIX FROM TWO-SPECTRA
Hacettepe Journal of Mathematics and Statistics Volume 40(2) (2011), 297 303 CONSTRUCTION OF A COMPLEX JACOBI MATRIX FROM TWO-SPECTRA Gusein Sh Guseinov Received 21:06 :2010 : Accepted 25 :04 :2011 Abstract
More informationZERO DISTRIBUTION OF POLYNOMIALS ORTHOGONAL ON THE RADIAL RAYS IN THE COMPLEX PLANE* G. V. Milovanović, P. M. Rajković and Z. M.
FACTA UNIVERSITATIS (NIŠ) Ser. Math. Inform. 12 (1997), 127 142 ZERO DISTRIBUTION OF POLYNOMIALS ORTHOGONAL ON THE RADIAL RAYS IN THE COMPLEX PLANE* G. V. Milovanović, P. M. Rajković and Z. M. Marjanović
More informationRecurrence Relations and Fast Algorithms
Recurrence Relations and Fast Algorithms Mark Tygert Research Report YALEU/DCS/RR-343 December 29, 2005 Abstract We construct fast algorithms for decomposing into and reconstructing from linear combinations
More informationON LINEAR COMBINATIONS OF
Física Teórica, Julio Abad, 1 7 (2008) ON LINEAR COMBINATIONS OF ORTHOGONAL POLYNOMIALS Manuel Alfaro, Ana Peña and María Luisa Rezola Departamento de Matemáticas and IUMA, Facultad de Ciencias, Universidad
More informationERROR BOUNDS FOR GAUSS-TURÁN QUADRATURE FORMULAE OF ANALYTIC FUNCTIONS
MATHEMATICS OF COMPUTATION Volume, Number, Pages S 5-57(XX)- ERROR BOUNDS FOR GAUSS-TURÁN QUADRATURE FORMULAE OF ANALYTIC FUNCTIONS GRADIMIR V. MILOVANOVIĆ AND MIODRAG M. SPALEVIĆ This paper is dedicated
More informationGAUSS-KRONROD QUADRATURE FORMULAE A SURVEY OF FIFTY YEARS OF RESEARCH
Electronic Transactions on Numerical Analysis. Volume 45, pp. 371 404, 2016. Copyright c 2016,. ISSN 1068 9613. ETNA GAUSS-KRONROD QUADRATURE FORMULAE A SURVEY OF FIFTY YEARS OF RESEARCH SOTIRIOS E. NOTARIS
More informationVariable-precision recurrence coefficients for nonstandard orthogonal polynomials
Numer Algor (29) 52:49 418 DOI 1.17/s1175-9-9283-2 ORIGINAL RESEARCH Variable-precision recurrence coefficients for nonstandard orthogonal polynomials Walter Gautschi Received: 6 January 29 / Accepted:
More informationPositive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract
Computing the Logarithm of a Symmetric Positive Denite Matrix Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A numerical method for computing the logarithm
More informationSOLUTIONS: ASSIGNMENT Use Gaussian elimination to find the determinant of the matrix. = det. = det = 1 ( 2) 3 6 = 36. v 4.
SOLUTIONS: ASSIGNMENT 9 66 Use Gaussian elimination to find the determinant of the matrix det 1 1 4 4 1 1 1 1 8 8 = det = det 0 7 9 0 0 0 6 = 1 ( ) 3 6 = 36 = det = det 0 0 6 1 0 0 0 6 61 Consider a 4
More information1. Introduction. Let µ(t) be a distribution function with infinitely many points of increase in the interval [ π, π] and let
SENSITIVITY ANALYSIS OR SZEGŐ POLYNOMIALS SUN-MI KIM AND LOTHAR REICHEL Abstract Szegő polynomials are orthogonal with respect to an inner product on the unit circle Numerical methods for weighted least-squares
More informationBulletin T.CXLV de l Académie serbe des sciences et des arts 2013 Classe des Sciences mathématiques et naturelles Sciences mathématiques, No 38
Bulletin T.CXLV de l Académie serbe des sciences et des arts 213 Classe des Sciences mathématiques et naturelles Sciences mathématiques, No 38 FAMILIES OF EULER-MACLAURIN FORMULAE FOR COMPOSITE GAUSS-LEGENDRE
More informationQuadrature rules with multiple nodes
Quadrature rules with multiple nodes Gradimir V. Milovanović and Marija P. Stanić Abstract In this paper a brief historical survey of the development of quadrature rules with multiple nodes and the maximal
More informationFinding eigenvalues for matrices acting on subspaces
Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More information(1-2) fx\-*f(x)dx = ^^ Z f(x[n)) + R (f), a > - 1,
MATHEMATICS OF COMPUTATION, VOLUME 29, NUMBER 129 JANUARY 1975, PAGES 93-99 Nonexistence of Chebyshev-Type Quadratures on Infinite Intervals* By Walter Gautschi Dedicated to D. H. Lehmer on his 10th birthday
More informationThe restarted QR-algorithm for eigenvalue computation of structured matrices
Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi
More informationOrthogonal Polynomials, Quadratures & Sparse-Grid Methods for Probability Integrals
1/31 Orthogonal Polynomials, Quadratures & Sparse-Grid Methods for Probability Integrals Dr. Abebe Geletu May, 2010 Technische Universität Ilmenau, Institut für Automatisierungs- und Systemtechnik Fachgebiet
More informationCS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation
Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80
More informationA trigonometric orthogonality with respect to a nonnegative Borel measure
Filomat 6:4 01), 689 696 DOI 10.98/FIL104689M Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat A trigonometric orthogonality with
More informationOn orthogonal polynomials for certain non-definite linear functionals
On orthogonal polynomials for certain non-definite linear functionals Sven Ehrich a a GSF Research Center, Institute of Biomathematics and Biometry, Ingolstädter Landstr. 1, 85764 Neuherberg, Germany Abstract
More informationOn the Vorobyev method of moments
On the Vorobyev method of moments Zdeněk Strakoš Charles University in Prague and Czech Academy of Sciences http://www.karlin.mff.cuni.cz/ strakos Conference in honor of Volker Mehrmann Berlin, May 2015
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationSimultaneous Gaussian quadrature for Angelesco systems
for Angelesco systems 1 KU Leuven, Belgium SANUM March 22, 2016 1 Joint work with Doron Lubinsky Introduced by C.F. Borges in 1994 Introduced by C.F. Borges in 1994 (goes back to Angelesco 1918). Introduced
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationDeterminant evaluations for binary circulant matrices
Spec Matrices 014; :187 199 Research Article Open Access Christos Kravvaritis* Determinant evaluations for binary circulant matrices Abstract: Determinant formulas for special binary circulant matrices
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 35, pp. 148-163, 2009. Copyright 2009,. ISSN 1068-9613. ETNA SPHERICAL QUADRATURE FORMULAS WITH EQUALLY SPACED NODES ON LATITUDINAL CIRCLES DANIELA
More informationSensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data
Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationThe Lanczos and conjugate gradient algorithms
The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization
More informationQuadrature Rules With an Even Number of Multiple Nodes and a Maximal Trigonometric Degree of Exactness
Filomat 9:10 015), 39 55 DOI 10.98/FIL151039T Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Quadrature Rules With an Even Number
More informationA Note on the Recursive Calculation of Incomplete Gamma Functions
A Note on the Recursive Calculation of Incomplete Gamma Functions WALTER GAUTSCHI Purdue University It is known that the recurrence relation for incomplete gamma functions a n, x, 0 a 1, n 0,1,2,..., when
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationNumerical integration formulas of degree two
Applied Numerical Mathematics 58 (2008) 1515 1520 www.elsevier.com/locate/apnum Numerical integration formulas of degree two ongbin Xiu epartment of Mathematics, Purdue University, West Lafayette, IN 47907,
More informationLinear Algebra Primer
Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................
More informationMoments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems
Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems Zdeněk Strakoš Charles University, Prague http://www.karlin.mff.cuni.cz/ strakos 16th ILAS Meeting, Pisa, June 2010. Thanks
More informationA Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant
More informationNUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING
NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING C. Pozrikidis University of California, San Diego New York Oxford OXFORD UNIVERSITY PRESS 1998 CONTENTS Preface ix Pseudocode Language Commands xi 1 Numerical
More informationIntroduction to Matrices
POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationSplit Algorithms and ZW-Factorization for Toeplitz and Toeplitz-plus-Hankel Matrices
Split Algorithms and ZW-Factorization for Toeplitz and Toeplitz-plus-Hanel Matrices Georg Heinig Dept of Math& CompSci Kuwait University POBox 5969 Safat 13060, Kuwait Karla Rost Faultät für Mathemati
More informationRITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY
RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationTHE RELATION BETWEEN THE QR AND LR ALGORITHMS
SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian
More informationComparison of Clenshaw-Curtis and Gauss Quadrature
WDS'11 Proceedings of Contributed Papers, Part I, 67 71, 211. ISB 978-8-7378-184-2 MATFYZPRESS Comparison of Clenshaw-Curtis and Gauss Quadrature M. ovelinková Charles University, Faculty of Mathematics
More informationThe Hardy-Littlewood Function: An Exercise in Slowly Convergent Series
The Hardy-Littlewood Function: An Exercise in Slowly Convergent Series Walter Gautschi Department of Computer Sciences Purdue University West Lafayette, IN 4797-66 U.S.A. Dedicated to Olav Njåstad on the
More informationBlock-tridiagonal matrices
Block-tridiagonal matrices. p.1/31 Block-tridiagonal matrices - where do these arise? - as a result of a particular mesh-point ordering - as a part of a factorization procedure, for example when we compute
More informationOn the influence of eigenvalues on Bi-CG residual norms
On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue
More informationa 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12
24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2
More informationMATHEMATICAL METHODS AND APPLIED COMPUTING
Numerical Approximation to Multivariate Functions Using Fluctuationlessness Theorem with a Trigonometric Basis Function to Deal with Highly Oscillatory Functions N.A. BAYKARA Marmara University Department
More informationCOMPUTATION OF BESSEL AND AIRY FUNCTIONS AND OF RELATED GAUSSIAN QUADRATURE FORMULAE
BIT 6-85//41-11 $16., Vol. 4, No. 1, pp. 11 118 c Swets & Zeitlinger COMPUTATION OF BESSEL AND AIRY FUNCTIONS AND OF RELATED GAUSSIAN QUADRATURE FORMULAE WALTER GAUTSCHI Department of Computer Sciences,
More informationGauss-type Quadrature Rules for Rational Functions
Gauss-type Quadrature Rules for Rational Functions arxiv:math/9307223v1 [math.ca] 20 Jul 1993 Walter Gautschi Abstract. When integrating functions that have poles outside the interval of integration, but
More informationPOLYNOMIAL INTERPOLATION IN NONDIVISION ALGEBRAS
Electronic Transactions on Numerical Analysis. Volume 44, pp. 660 670, 2015. Copyright c 2015,. ISSN 1068 9613. ETNA POLYNOMIAL INTERPOLATION IN NONDIVISION ALGEBRAS GERHARD OPFER Abstract. Algorithms
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationPADÉ INTERPOLATION TABLE AND BIORTHOGONAL RATIONAL FUNCTIONS
ELLIPTIC INTEGRABLE SYSTEMS PADÉ INTERPOLATION TABLE AND BIORTHOGONAL RATIONAL FUNCTIONS A.S. ZHEDANOV Abstract. We study recurrence relations and biorthogonality properties for polynomials and rational
More informationNumerical Integration (Quadrature) Another application for our interpolation tools!
Numerical Integration (Quadrature) Another application for our interpolation tools! Integration: Area under a curve Curve = data or function Integrating data Finite number of data points spacing specified
More informationETNA Kent State University
F N Electronic Transactions on Numerical Analysis Volume 8, 999, pp 5-20 Copyright 999, ISSN 068-963 ON GERŠGORIN-TYPE PROBLEMS AND OVALS OF CASSINI RICHARD S VARGA AND ALAN RAUTSTENGL Abstract Recently,
More informationFor δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])
LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and
More informationON A PROBLEM OF GEVORKYAN FOR THE FRANKLIN SYSTEM. Zygmunt Wronicz
Opuscula Math. 36, no. 5 (2016), 681 687 http://dx.doi.org/10.7494/opmath.2016.36.5.681 Opuscula Mathematica ON A PROBLEM OF GEVORKYAN FOR THE FRANKLIN SYSTEM Zygmunt Wronicz Communicated by P.A. Cojuhari
More informationCharacteristic Polynomial
Linear Algebra Massoud Malek Characteristic Polynomial Preleminary Results Let A = (a ij ) be an n n matrix If Au = λu, then λ and u are called the eigenvalue and eigenvector of A, respectively The eigenvalues
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationRETRACTED On construction of a complex finite Jacobi matrix from two spectra
Electronic Journal of Linear Algebra Volume 26 Volume 26 (203) Article 8 203 On construction of a complex finite Jacobi matrix from two spectra Gusein Sh. Guseinov guseinov@ati.edu.tr Follow this and additional
More informationPreface. Figures Figures appearing in the text were prepared using MATLAB R. For product information, please contact:
Linear algebra forms the basis for much of modern mathematics theoretical, applied, and computational. The purpose of this book is to provide a broad and solid foundation for the study of advanced mathematics.
More informationMatrices, Moments and Quadrature, cont d
Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that
More informationThe Solution of Linear Systems AX = B
Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has
More informationPart IB Numerical Analysis
Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationINTERLACING PROPERTIES FOR HERMITIAN MATRICES WHOSE GRAPH IS A GIVEN TREE
INTERLACING PROPERTIES FOR HERMITIAN MATRICES WHOSE GRAPH IS A GIVEN TREE C M DA FONSECA Abstract We extend some interlacing properties of the eigenvalues of tridiagonal matrices to Hermitian matrices
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationPart II NUMERICAL MATHEMATICS
Part II NUMERICAL MATHEMATICS BIT 31 (1991). 438-446. QUADRATURE FORMULAE ON HALF-INFINITE INTERVALS* WALTER GAUTSCHI Department of Computer Sciences, Purdue University, West Lafayette, IN 47907, USA Abstract.
More informationNumerische MathemalJk
Numer. Math. 44, 53-6 (1984) Numerische MathemalJk 9 Springer-Verlag 1984 Discrete Approximations to Spherically Symmetric Distributions* Dedicated to Fritz Bauer on the occasion of his 6th birthday Walter
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i
More informationthat determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that
Chapter 3 Newton methods 3. Linear and nonlinear eigenvalue problems When solving linear eigenvalue problems we want to find values λ C such that λi A is singular. Here A F n n is a given real or complex
More informationDeterminant formulas for multidimensional hypergeometric period matrices
Advances in Applied Mathematics 29 (2002 137 151 www.academicpress.com Determinant formulas for multidimensional hypergeometric period matrices Donald Richards a,b,,1 andqifuzheng c,2 a School of Mathematics,
More information