Computing the Logarithm of a Symmetric Positive Definite Matrix

Ya Yan Lu
Department of Mathematics
City University of Hong Kong
Kowloon, Hong Kong

Abstract

A numerical method for computing the logarithm of a symmetric positive definite matrix is developed in this paper. It is based on reducing the original matrix to a tridiagonal matrix by orthogonal similarity transformations and applying Padé approximations to the logarithm of the tridiagonal matrix. Theoretical studies and numerical experiments indicate that the method is quite efficient when the matrix is not very ill-conditioned.

1 Introduction

In this paper, we describe a new method for computing the logarithm of a real symmetric positive definite matrix. The standard method for computing a matrix function (logarithm included) of a symmetric matrix is to use the spectral decomposition

$$A = V \Lambda V^T, \qquad (1)$$

where $\Lambda$ is the diagonal matrix of the eigenvalues and $V$ is the orthogonal matrix of the corresponding eigenvectors. The matrix function, say $\ln(A)$, is then given by

$$\ln(A) = V \ln(\Lambda)\, V^T. \qquad (2)$$

The computation of the eigenvalues and eigenvectors is a relatively expensive step. The number of required arithmetic operations is around $9n^3$ [9]. After calculating the spectral decomposition (1), about $n^3$ additional operations are required to compute $\ln(A)$ by Equation (2). This takes into account the fact that the matrix function $\ln(A)$ is also symmetric, so that it is only necessary to calculate its lower or upper triangular part. Therefore, the total number of arithmetic operations is around $10n^3$. The situation is more complicated for a non-symmetric matrix. A number of methods have been developed [7, 14]. Typically, the computation is based on the Schur decomposition of the matrix [7]. Padé approximations can also be used [14]. The purpose of this paper is to develop a special method for symmetric positive definite matrices that avoids the computation of the spectral decomposition.

(This research was partially supported by the CityU research grant #.)
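In NumPy terms, the baseline spectral method of Equations (1)-(2) is only a few lines; a minimal sketch (an illustration of the standard approach, not the LAPACK-based implementation discussed later in this paper):

```python
import numpy as np

def log_spd_spectral(A):
    """Baseline: ln(A) via the spectral decomposition (1)-(2).

    A is assumed symmetric positive definite, so its eigenvalues are
    positive and the logarithm below is well defined.
    """
    lam, V = np.linalg.eigh(A)      # A = V diag(lam) V^T, Equation (1)
    return (V * np.log(lam)) @ V.T  # V ln(Lambda) V^T, Equation (2)
```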
Our method for computing $\ln(A)$ requires about $10n^3/3 + 5n^2 m/2$ operations, where $m$ is an integer that depends on the desired accuracy and the spectral condition number $\kappa$ of the matrix $A$. An upper bound for $m$ is $\frac{\sqrt[4]{\kappa}}{4}\ln(\sqrt{\kappa}/\epsilon)$. When the matrix $A$ is not very ill-conditioned, the new method is more efficient than the spectral decomposition method above. It turns out that the new method is also much easier for parallel implementation.

Our method is based on the Padé approximation of $\ln(1+x)$. It is, however, quite different from a straightforward evaluation of the rational function (the Padé approximant) of the matrix. We first use orthogonal similarity transformations to reduce the matrix $A$ to a tridiagonal matrix $T$, then use the prime fraction form of a diagonal Padé approximant of $\ln(1+x)$ to approximate $\ln(T)$. The $[m/m]$ diagonal Padé approximant of $\ln(1+x)$ can be written as

$$\ln(1+x) \approx \sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x}, \qquad (3)$$

where $a_j^{(m)}$, $b_j^{(m)}$ are the weights and nodes of the Gauss-Legendre quadrature formula on $(0,1)$. For a symmetric tridiagonal matrix $X$, the matrix $R_m = \sum_{j=1}^{m} a_j^{(m)} (I + b_j^{(m)} X)^{-1} X$ can be evaluated in about $5n^2 m/2$ operations. We also scale the matrix $T$ by $\alpha$ and write $T/\alpha$ as $I + X$. It turns out that $\alpha = \sqrt{\lambda_1 \lambda_n}$ (where $\lambda_1$, $\lambda_n$ are the largest and smallest eigenvalues of $A$) is the best choice for minimizing the error. Fortunately, $\lambda_1$ and $\lambda_n$ can be easily calculated when $T$ is available. For any given error tolerance $\epsilon$, we also find the minimum integer $m$ such that the $[m/m]$ diagonal Padé approximant satisfies the accuracy requirement.

The details of our method will be presented in Section 3. In the next section, we discuss some useful properties of the diagonal Padé approximants of $\ln(1+x)$. An upper bound for $m$ is derived in Section 4, and numerical examples are presented in Section 5. We end our paper with a short discussion in the last section.

2 Diagonal Padé approximants of ln(1+x)

The function $\ln(1+x)$ has the following continued fraction expansion [1, 3]:

$$\ln(1+x) = \cfrac{c_1 x}{1 + \cfrac{c_2 x}{1 + \cfrac{c_3 x}{1 + \cfrac{c_4 x}{1 + \cdots}}}}, \qquad (4)$$

where

$$c_1 = 1, \qquad c_{2\nu} = \frac{\nu}{2(2\nu - 1)}, \qquad c_{2\nu+1} = \frac{\nu}{2(2\nu + 1)}, \qquad \text{for } \nu = 1, 2, 3, \ldots$$

The expansion is valid for any $x$ in the complex plane, except for real $x$ satisfying $x \le -1$. The $k$-th approximant

$$f_k(x) = \cfrac{c_1 x}{1 + \cfrac{c_2 x}{1 + \cdots + c_k x}}$$
matches the first $k$ derivatives of $\ln(1+x)$ at $x = 0$, that is,

$$\frac{d^\nu f_k(x)}{dx^\nu}\bigg|_{x=0} = \frac{d^\nu \ln(1+x)}{dx^\nu}\bigg|_{x=0} \qquad \text{for } \nu = 1, 2, \ldots, k.$$

If $k$ is an even integer, say $k = 2m$, it is clear that $f_k(x) = p_m(x)/q_m(x)$, where $p_m(x)$, $q_m(x)$ are polynomials of degree $m$. This is exactly the $[m/m]$ diagonal Padé approximant of $\ln(1+x)$. Gauss [8] used the continued fraction of $\ln[(1+x)/(1-x)]$ to derive the quadrature formulas on the interval $(-1,1)$. For the unit interval $(0,1)$, the corresponding function is $\ln(1+x)$. We have

Lemma 1. The $[m/m]$ diagonal Padé approximant $f_{2m}(x)$ has the following prime fraction form

$$f_{2m}(x) = \sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x},$$

where $a_j^{(m)}$, $b_j^{(m)}$ are the weights and nodes of the $m$-point Gauss-Legendre quadrature formula on $(0,1)$.

In [7], a proof is given for the above result that links the approximant $f_{2m}$ with the application of the Gauss-Legendre quadrature to the integral representation of $\ln(1+x)$:

$$\int_0^1 \frac{x}{1 + xt}\, dt = \ln(1+x).$$

Since the $m$-point Gauss-Legendre quadrature formula on $(0,1)$,

$$\int_0^1 F(t)\, dt \approx \sum_{j=1}^{m} a_j^{(m)} F\big(b_j^{(m)}\big),$$

is exact for polynomials of degree less than $2m$, we have

$$\sum_{j=1}^{m} a_j^{(m)} \big(b_j^{(m)}\big)^{i-1} = \int_0^1 t^{i-1}\, dt = \frac{1}{i}, \qquad \text{for } i = 1, 2, \ldots, 2m.$$

This gives rise to the expansion

$$\sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x} = -\sum_{i=1}^{2m} \frac{(-x)^i}{i} + \cdots,$$

which matches the Taylor expansion of $\ln(1+x)$ for the first $2m$ terms.

Let $r_j^{(m)}$ be the zeros of the Legendre polynomial of degree $m$; we have $b_j^{(m)} = (r_j^{(m)} + 1)/2$. The zeros of Legendre polynomials are pairs of real numbers with opposite signs within the interval $(-1,1)$, except the zero $0$ when the degree is an odd integer. If we order $\{b_j^{(m)}\}$ as an increasing sequence, we have

$$b_j^{(m)} + b_{m-j+1}^{(m)} = 1, \qquad a_j^{(m)} = a_{m-j+1}^{(m)}, \qquad \text{for } j = 1, 2, \ldots, \lfloor m/2 \rfloor.$$

If $m$ is odd, then $b_{(m+1)/2}^{(m)} = 1/2$.
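Lemma 1 and the symmetry relations above are easy to check numerically; a small sketch (illustrative, not part of the paper's implementation) that maps the standard Gauss-Legendre rule on $(-1,1)$ to $(0,1)$ via $b_j^{(m)} = (r_j^{(m)}+1)/2$ and $a_j^{(m)} = w_j/2$:

```python
import numpy as np

def gauss_legendre_01(m):
    """Weights a_j and nodes b_j of the m-point Gauss-Legendre rule on (0,1)."""
    r, w = np.polynomial.legendre.leggauss(m)  # rule on (-1,1)
    return w / 2, (r + 1) / 2                  # a_j = w_j/2, b_j = (r_j+1)/2

m, x = 8, 3.0
a, b = gauss_legendre_01(m)
f2m = np.sum(a * x / (1 + b * x))       # prime fraction form of f_{2m}(x)
print(abs(f2m - np.log1p(x)))           # small: error of the [8/8] approximant
print(np.max(np.abs(b + b[::-1] - 1)))  # ~0: node symmetry b_j + b_{m-j+1} = 1
```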
With these properties, if we group the two terms with indexes $j$ and $m-j+1$ together in the expression $f_{2m}(x) = \sum_{j=1}^{m} a_j^{(m)} x/(1 + b_j^{(m)} x)$, it is easy to verify that the following lemma is true.

Lemma 2. The diagonal Padé approximant of $\ln(1+x)$ preserves the logarithmic identity $\ln y = -\ln(1/y)$ for any $y > 0$. Namely,

$$f_{2m}(y - 1) = -f_{2m}\!\left(\frac{1}{y} - 1\right).$$

Let the error of the $m$-th diagonal Padé approximant be

$$E_m(x) = \ln(1+x) - f_{2m}(x) = \ln(1+x) - \sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x}.$$

We establish the following result.

Lemma 3. The error $E_m(x)$ is an increasing function of $x$ satisfying

$$E_m(x) \begin{cases} < 0 & \text{if } -1 < x < 0, \\ = 0 & \text{if } x = 0, \\ > 0 & \text{if } x > 0. \end{cases}$$

Proof: An exact formula for the error has been found by Kenney and Laub [14]:

$$E_m(x) = -\prod_{j=1}^{m} \frac{1 + b_j^{(m)}}{1 + b_j^{(m)} x} \sum_{n=2m}^{\infty} \frac{[(n-m)!]^2}{(n+1)!\,(n-2m)!}\,(-x)^{n+1}.$$

Since $b_j^{(m)} \in (0,1)$, it is clear that for $-1 < x < 0$, $E_m(x)$ is a negative and increasing function of $x$. For $x > 0$, we let $x = y - 1$ for $y > 1$ and observe from Lemma 2 that

$$E_m(y - 1) = -E_m\!\left(\frac{1}{y} - 1\right).$$

Since $1/y - 1$ is negative, we obtain that $E_m(x)$ is a positive increasing function of $x$ for $x > 0$.

Related to the approximation of $\ln(A)$, where $A$ is a symmetric positive definite matrix, we consider the approximation of $\ln \lambda$ for $0 < \lambda_n \le \lambda \le \lambda_1$, where $\lambda_1$, $\lambda_n$ are two positive constants. For a given small positive constant $\epsilon$, we determine the smallest integer $m$ such that

$$\ln(1 + \xi) - f_{2m}(\xi) \le \epsilon, \qquad \text{for } \xi = \sqrt{\lambda_1/\lambda_n} - 1 > 0, \qquad (5)$$

then approximate $\ln \lambda$ by

$$\ln \lambda = \ln\!\left(\sqrt{\lambda_1 \lambda_n}\,\frac{\lambda}{\sqrt{\lambda_1 \lambda_n}}\right) = \frac{1}{2}\ln(\lambda_1 \lambda_n) + \ln(1 + x) \approx \frac{1}{2}\ln(\lambda_1 \lambda_n) + \sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x}, \qquad \text{for } x = \frac{\lambda}{\sqrt{\lambda_1 \lambda_n}} - 1.$$
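A quick scalar check of this scaled approximation (with assumed test values $\lambda_1 = 100$, $\lambda_n = 0.04$ and $m = 12$, reusing gauss_legendre_01 from the sketch above) confirms that the error behaves as Lemma 3 predicts:

```python
import numpy as np

lam1, lam_n, m = 100.0, 0.04, 12   # assumed test values, kappa = 2500
alpha = np.sqrt(lam1 * lam_n)      # scaling factor sqrt(lam_1 * lam_n)
a, b = gauss_legendre_01(m)        # weights and nodes on (0,1)

for lam in (lam_n, 1.0, lam1):     # the two end points and an interior point
    x = lam / alpha - 1
    approx = 0.5 * np.log(lam1 * lam_n) + np.sum(a * x / (1 + b * x))
    print(lam, abs(approx - np.log(lam)))  # largest error at the end points
```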
We have the following theorem.

Theorem 1. For given $\lambda_1 > \lambda_n > 0$ and $\epsilon > 0$, if condition (5) is satisfied for some positive integer $m$, then

$$\left| \ln \lambda - \frac{1}{2}\ln(\lambda_1 \lambda_n) - \sum_{j=1}^{m} \frac{a_j^{(m)} x}{1 + b_j^{(m)} x} \right| \le \epsilon, \qquad \text{for } \lambda_n \le \lambda \le \lambda_1,$$

where $a_j^{(m)}$ and $b_j^{(m)}$ are the weights and nodes of the $m$-point Gauss-Legendre quadrature formula on $(0,1)$, and $x = \lambda/\sqrt{\lambda_1 \lambda_n} - 1$.

Proof: We have introduced a scaling factor $\alpha = \sqrt{\lambda_1 \lambda_n}$, so that the approximation of $\ln \lambda$ for $\lambda \in [\lambda_n, \lambda_1]$ becomes the approximation of $\ln(1+x)$ for $\sqrt{\lambda_n/\lambda_1} - 1 \le x \le \sqrt{\lambda_1/\lambda_n} - 1$. From Lemma 3, it is clear that the maximum of $|E_m(x)|$ on this interval is reached at the two end points, and it is less than or equal to $\epsilon$ because of condition (5) and Lemma 2. This concludes our proof.

To determine the minimum integer $m$ such that (5) is satisfied, we use the forward recurrence formula [3] for evaluating continued fraction approximants. We have $f_k(\xi) = u_k / v_k$, where

$$\begin{bmatrix} u_0 \\ v_0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} u_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} c_1 \xi \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} u_\nu \\ v_\nu \end{bmatrix} = \begin{bmatrix} u_{\nu-1} \\ v_{\nu-1} \end{bmatrix} + c_\nu \xi \begin{bmatrix} u_{\nu-2} \\ v_{\nu-2} \end{bmatrix} \quad \text{for } \nu \ge 2, \qquad (6)$$

where the continued fraction expansion coefficients are given in (4).

3 The new method

The new method in this section for computing $\ln(A)$ is quite similar to our method [16] for $\sqrt{A}$, where $A$ is a symmetric positive definite matrix. It starts with an orthogonal reduction to a tridiagonal matrix $T$, then applies Padé approximations. The matrix $T$ is written as $T = \alpha(I + X)$ for some $\alpha$, and $\ln(I + X)$ is approximated by $R_m = \sum_{j=1}^{m} a_j^{(m)} (I + b_j^{(m)} X)^{-1} X$. The main differences are related to the scaling parameter $\alpha$, the integer $m$ and the coefficients $\{a_j^{(m)}, b_j^{(m)}\}$. For $\sqrt{A}$, the coefficients have simple explicit formulas, while for $\ln(A)$, these coefficients are the weights and nodes of the $m$-point Gauss-Legendre quadrature rule. With a given error tolerance $\epsilon$, the scaling parameter $\alpha$ and the integer $m$ are determined from $\epsilon$ and the extreme eigenvalues of $A$ (also of $T$). For $\ln(A)$, we have the exact formula $\alpha = \sqrt{\lambda_1 \lambda_n}$, and $m$ can be calculated by a forward recurrence formula. For $\sqrt{A}$, both $\alpha$ and $m$ must be computed numerically. However, they can be determined from the solution of a nonlinear equation in one variable.

Our method calculates the approximation $S_m$ to $\ln(A)$ for a given error tolerance $\epsilon$, such that

$$\| \ln(A) - S_m \|_2 \le \epsilon, \qquad (7)$$

where $\|\cdot\|_2$ is the matrix 2-norm. It involves the following steps:

1. Reduce the matrix $A$ to a tridiagonal matrix $T$ by orthogonal similarity transformations, i.e., $A = Q T Q^T$, where $Q$ is an orthogonal matrix.
2. Calculate the largest and smallest eigenvalues of $T$, say $\lambda_1 > \lambda_n > 0$, where $n$ is the size of the matrix $A$.

3. Determine the minimum integer $m$ such that

$$\left| \frac{1}{2} \ln\!\left(\frac{\lambda_1}{\lambda_n}\right) - \frac{u_{2m}}{v_{2m}} \right| \le \epsilon, \qquad (8)$$

where $\{u_{2m}, v_{2m}\}$ are calculated from (6), with $\xi = \sqrt{\lambda_1/\lambda_n} - 1$.

4. Compute the weights $\{a_j^{(m)}\}$ and nodes $\{b_j^{(m)}\}$ of the $m$-point Gauss-Legendre quadrature formula on $(0,1)$:

$$\int_0^1 f(x)\, dx \approx \sum_{j=1}^{m} a_j^{(m)} f\big(b_j^{(m)}\big).$$

5. Evaluate $R_m$ given by

$$R_m = \sum_{j=1}^{m} a_j^{(m)} (I + b_j^{(m)} X)^{-1} X, \qquad \text{for } X = \frac{1}{\sqrt{\lambda_1 \lambda_n}}\, T - I.$$

6. Evaluate $S_m$ for the approximation of $\ln(A)$:

$$S_m = \frac{1}{2} \ln(\lambda_1 \lambda_n)\, I + Q R_m Q^T. \qquad (9)$$

The first step is a standard procedure for calculating all the eigenvalues (and eigenvectors) of a symmetric matrix [9]. When the matrix $A$ is dense, Householder reflectors can be used. The number of operations required in this step is about $4n^3/3$. In our implementation, this step is a simple call to the LAPACK [2] routine xSYTRD.

The second step involves the calculation of the two extreme eigenvalues of $T$. It requires $O(n)$ operations if the bisection method is used. We call the LAPACK routine xSTEBZ in our program.

The third step is to find the minimum integer $m$ satisfying the condition (5), i.e. (8), based on the forward recurrence formula (6). The number of operations required in this step is $O(m)$. The following scaling has been used to avoid overflow:

$$u_\nu := u_\nu / v_\nu, \qquad u_{\nu-1} := u_{\nu-1} / v_\nu, \qquad v_{\nu-1} := v_{\nu-1} / v_\nu, \qquad v_\nu := 1.$$

This procedure appears to be numerically stable. Theoretically, the sequence $\{u_\nu / v_\nu\}$ converges to $\ln(1 + \xi)$. In practice, we observe that the calculated values are always close to $\ln(1 + \xi)$ even when $\xi$ is large. However, when $\xi$ is large and $\epsilon$ is small, the small error in the numerically computed value of $u_\nu / v_\nu$ could lead to a different $m$. For example, a single precision algorithm for Step 3 gives $m = 107$ for $\kappa = 10^6$ and $\epsilon = 10^{-5}$. Actually, $m$ should be 105.
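Step 3 can be coded directly from the recurrence (6) with the rescaling above; a sketch in double precision (the stopping test implements condition (8)):

```python
import numpy as np

def min_pade_order(kappa, eps, m_max=1000):
    """Smallest m with ln(1 + xi) - f_{2m}(xi) <= eps for xi = sqrt(kappa) - 1.

    Forward recurrence (6) for the continued fraction (4); both pairs are
    divided by the current denominator after each step to avoid overflow.
    """
    xi = np.sqrt(kappa) - 1
    target = 0.5 * np.log(kappa)           # equals ln(1 + xi)
    u_prev, v_prev = 0.0, 1.0              # (u_0, v_0)
    u, v = xi, 1.0                         # (u_1, v_1), since c_1 = 1
    for nu in range(2, 2 * m_max + 1):
        k = nu // 2                        # c_{2k} = k/(2(2k-1)), c_{2k+1} = k/(2(2k+1))
        c = k / (2 * (2 * k - 1)) if nu % 2 == 0 else k / (2 * (2 * k + 1))
        u, u_prev = u + c * xi * u_prev, u
        v, v_prev = v + c * xi * v_prev, v
        u, u_prev, v_prev, v = u / v, u_prev / v, v_prev / v, 1.0  # rescale
        if nu % 2 == 0 and abs(target - u / v) <= eps:
            return nu // 2                 # even approximant f_{2m} reached eps
    raise RuntimeError("m_max too small")

print(min_pade_order(1e6, 1e-5))  # the text above reports m = 105 for this case
```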
In Step 4, we determine the weights and nodes of the $m$-point Gauss-Legendre quadrature formula on $(0,1)$. This can be achieved in $O(m^2)$ operations [6]. An efficient routine can be found in [17]. We assume that the nodes are ordered as

$$0 < b_1^{(m)} < b_2^{(m)} < \cdots < b_m^{(m)} < 1.$$

Step 5 involves the summation of the matrices $Y_j$ satisfying

$$(I + b_j^{(m)} X)\, Y_j = a_j^{(m)} X, \qquad \text{for } j = 1, 2, \ldots, m.$$

Since the matrix $Y_j$ is symmetric and the right hand side above is a tridiagonal matrix, it is possible to calculate $Y_j$ in about $2n^2$ operations. The total number of operations required in this step is $\frac{5}{2} n^2 m$, because there are $m$ such matrices and one has to add them together. Since $b_j^{(m)} \in (0,1)$ and $T$ is positive definite, it is easy to see that $I + b_j^{(m)} X$ is also positive definite.
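In SciPy terms, each system $(I + b_j^{(m)} X) Y_j = a_j^{(m)} X$ in Step 5 is a tridiagonal solve with $n$ right hand sides; a sketch using a general banded solver in place of the $O(n^2)$ symmetric tridiagonal solve described above:

```python
import numpy as np
from scipy.linalg import solve_banded

def eval_Rm(T, alpha, a, b):
    """Step 5: R_m = sum_j a_j (I + b_j X)^(-1) X, with X = T/alpha - I."""
    n = T.shape[0]
    X = T / alpha - np.eye(n)
    d = np.diag(X)                 # diagonal of the tridiagonal X
    e = np.diag(X, 1)              # superdiagonal (= subdiagonal, X symmetric)
    Rm = np.zeros((n, n))
    for aj, bj in zip(a, b):
        ab = np.zeros((3, n))      # banded storage of I + bj*X
        ab[0, 1:] = bj * e         # superdiagonal
        ab[1, :] = 1 + bj * d      # main diagonal
        ab[2, :-1] = bj * e        # subdiagonal
        Rm += aj * solve_banded((1, 1), ab, X)
    return Rm
```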
In fact,

$$I + b_j^{(m)} X = (1 - b_j^{(m)})\, I + \frac{b_j^{(m)}}{\sqrt{\lambda_1 \lambda_n}}\, T = b_{m-j+1}^{(m)}\, I + \frac{b_j^{(m)}}{\sqrt{\lambda_1 \lambda_n}}\, T.$$

The spectral condition number of $I + b_j^{(m)} X$ is

$$\kappa_j = \frac{b_{m-j+1}^{(m)} + b_j^{(m)} \sqrt{\kappa}}{b_{m-j+1}^{(m)} + b_j^{(m)} / \sqrt{\kappa}}, \qquad \text{where } \kappa = \lambda_1 / \lambda_n.$$

It is clear that $\kappa_j$ is always less than $\kappa$ and it increases when $j$ increases. If $j$ is not close to $m$, we have $\kappa_j = O(\sqrt{\kappa})$. For large $\kappa$ and large $m$ (so that the error is less than a given tolerance), we have $b_1^{(m)} \approx c/m^2$, $b_m^{(m)} \approx 1 - c/m^2$ and $\kappa_m \approx \kappa/(1 + c\sqrt{\kappa}/m^2)$, where $c$ is some positive constant. In Section 4, an upper bound for $m$ is derived which is $O(\sqrt[4]{\kappa}\, \ln(\sqrt{\kappa}/\epsilon))$ for large $\kappa$. It seems from numerical experiments that it is also the correct asymptotic formula for $m$. If that is the case, we have $\kappa_m = O(\kappa)$. This indicates that when the matrix $A$ is ill-conditioned, we will not be able to calculate $Y_j$ accurately if $j$ is close or equal to $m$. However, the magnitude of $Y_j$ is related to the coefficient $a_j^{(m)}$, which is small when $j$ is close or equal to $m$. Therefore, it is more natural to consider the absolute errors in $Y_j$ and compare the sum of these errors with the sum of the $Y_j$ (i.e. $R_m$). For this purpose, we consider a small perturbation to the right hand side of the equation of $Y_j$. That is,

$$(I + b_j^{(m)} X)(Y_j + \tilde Y_j) = a_j^{(m)} (X + \tilde X).$$

We have

$$\tilde R_m = \sum_{j=1}^{m} \tilde Y_j = \sum_{j=1}^{m} a_j^{(m)} (I + b_j^{(m)} X)^{-1} \tilde X.$$

This gives rise to

$$\|\tilde R_m\|_2 \le \sigma(m, \kappa)\, \|\tilde X\|_2, \qquad \text{where } \sigma(m, \kappa) = \sum_{j=1}^{m} \frac{a_j^{(m)}}{b_{m-j+1}^{(m)} + b_j^{(m)} / \sqrt{\kappa}}.$$

The number $\sigma$ is quite small. As an example, we consider $\kappa = 10^6$ and $\epsilon = 10^{-5}$. This leads to $m = 105$ and $\sigma = 6.9466$. Therefore, the error $\tilde R_m$ is still quite small if $\tilde X$ is small. Let us assume that $\tilde X$ is $O(u \|X\|_2) = O(u \sqrt{\kappa})$, where $u$ is the unit roundoff. Then we can at best hope that $\|\tilde R_m\| = O(u \sqrt{\kappa})$. Since $R_m \approx \ln(I + X)$, we have $\|R_m\|_2 = O(\ln \sqrt{\kappa})$. Therefore, $\tilde R_m$ could be large compared with $R_m$. This suggests that we may not be able to obtain good relative accuracy when the matrix $A$ is ill-conditioned, even though the absolute error could be $O(u \sqrt{\kappa})$. However, this also applies to the standard method based on the spectral decomposition. If the small eigenvalues are not calculated to high relative accuracy, we cannot expect the obtained result for $\ln(A)$ to be accurate. Consider the special case where the matrix $A$ has $\lambda_1 = \sqrt{\kappa}$ and $\lambda_n = 1/\sqrt{\kappa}$. If an error of $O(u \sqrt{\kappa}) = O(u \|A\|_2)$ is introduced to $\lambda_n$, then the smallest eigenvalue becomes $\lambda_n (1 + \kappa \delta)$, where $\delta = O(u)$. Assuming $\kappa u$ is still small, this gives rise to the error term $\ln(1 + \kappa \delta)$. Therefore, we might have $O(\kappa u)$ errors in the approximation of $\ln(A)$, while the 2-norm of $\ln(A)$ is just $\ln \sqrt{\kappa}$.

When the matrix $A$ is reduced to the tridiagonal matrix $T$ in Step 1, the matrix $Q$ is obtained as a sequence of Householder reflectors. To compute $Q R_m Q^T$, we multiply the reflectors from both the left and the right sides following the reversed sequence. Using similar techniques as in the reduction step, the matrix $Q R_m Q^T$ can be calculated in $2n^3$ operations. The matrix $S_m$ is obtained by adding $\ln \alpha = \frac{1}{2} \ln(\lambda_1 \lambda_n)$ to the diagonal. Our method is dominated by Steps 1, 5 and 6. The leading terms of the total number of operations are $\frac{10}{3} n^3 + \frac{5}{2} n^2 m$. The time spent on the other steps is negligible. Since the eigenvalues of the symmetric matrix $X$ lie in the interval $[\sqrt{\lambda_n/\lambda_1} - 1, \sqrt{\lambda_1/\lambda_n} - 1]$, from Theorem 1 we have

$$\left\| \ln(I + X) - \sum_{j=1}^{m} a_j^{(m)} (I + b_j^{(m)} X)^{-1} X \right\|_2 \le \epsilon.$$

This gives rise to Equation (7).
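Putting the six steps together, with SciPy stand-ins for the LAPACK routines named above (hessenberg for xSYTRD, eigvalsh_tridiagonal for xSTEBZ) and reusing min_pade_order, gauss_legendre_01 and eval_Rm from the earlier sketches:

```python
import numpy as np
from scipy.linalg import hessenberg, eigvalsh_tridiagonal

def log_spd_pade(A, eps):
    """Approximate ln(A) by S_m with ||ln(A) - S_m||_2 <= eps, Steps 1-6."""
    # Step 1: A = Q T Q^T; the Hessenberg form of a symmetric A is tridiagonal.
    T, Q = hessenberg(A, calc_q=True)
    # Step 2: extreme eigenvalues of T.
    lam = eigvalsh_tridiagonal(np.diag(T), np.diag(T, 1))
    lam_n, lam1 = lam[0], lam[-1]
    # Step 3: minimum Pade order m for this condition number and tolerance.
    m = min_pade_order(lam1 / lam_n, eps)
    # Step 4: Gauss-Legendre weights and nodes on (0,1).
    a, b = gauss_legendre_01(m)
    # Step 5: R_m approximating ln(I + X) for X = T/alpha - I.
    alpha = np.sqrt(lam1 * lam_n)
    Rm = eval_Rm(T, alpha, a, b)
    # Step 6: S_m = (1/2) ln(lam_1 lam_n) I + Q R_m Q^T, Equation (9).
    return 0.5 * np.log(lam1 * lam_n) * np.eye(len(A)) + Q @ Rm @ Q.T
```

This sketch forms $Q$ explicitly, whereas the implementation described above applies the Householder reflectors directly, so the operation counts differ; mathematically the result is the same.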
4 An upper bound

For a given error tolerance $\epsilon$, the third step of our method determines the minimum integer $m$ such that condition (5), i.e.,

$$\ln(1 + \xi) - f_{2m}(\xi) \le \epsilon,$$

is satisfied, where $\xi = \sqrt{\lambda_1/\lambda_n} - 1$ and $f_{2m}(\xi)$ is the $[m/m]$ diagonal Padé approximant of $\ln(1 + \xi)$. In the actual implementation, we use the forward recurrence formula [3] of the continued fraction of $\ln(1 + \xi)$ and calculate $f_{2\nu}(\xi)$ for $\nu = 1, 2, \ldots$. The minimum integer $m$ satisfying (5) can be easily found.

When the integer $m$ is calculated, we can decide whether to use the method developed in this paper instead of the standard method based on the spectral decomposition, by comparing the required numbers of operations of the two methods. If $m$ is too large for the new method to have any advantage, we can continue with the spectral decomposition method. The first step of orthogonal reduction to a tridiagonal matrix is not wasted, since it is needed in both cases.

An estimate for the minimum integer $m$ for the given $\epsilon$ and $\kappa = \lambda_1/\lambda_n$ is useful: it allows us to determine the number of arithmetic operations required by this method without going through the calculation of $f_{2m}(\xi)$. It also gives us a general idea on when the new method is suitable if the size of the matrix $n$, the error tolerance $\epsilon$ and the spectral condition number $\kappa$ are known. In this section, we derive an upper bound for the minimum $m$ satisfying (5). Unfortunately, the upper bound is still not very tight.

As a special case of the general error estimate established by Gragg [10] for g-fractions, the continued fraction of $\ln(1+x)/x$ satisfies

$$\left| \frac{\ln(1+x)}{x} - F_n(x) \right| \le \frac{x}{1+x} \left( \frac{\sqrt{1+x} - 1}{\sqrt{1+x} + 1} \right)^{n-1} \qquad \text{for } x > 0,$$

where

$$F_n(x) = \cfrac{c_1}{1 + \cfrac{c_2 x}{1 + \cfrac{c_3 x}{1 + \cdots + c_n x}}}.$$

Notice that $F_{2m}(x) = f_{2m}(x)/x$. Now, for $x = \xi = \sqrt{\kappa} - 1$, we have

$$\ln(1 + \xi) - f_{2m}(\xi) \le \frac{(\sqrt{\kappa} - 1)^2}{\sqrt{\kappa}} \left( \frac{\sqrt[4]{\kappa} - 1}{\sqrt[4]{\kappa} + 1} \right)^{2m - 1}.$$

An upper bound $m^+$ for the minimum $m$ is established if we let

$$\frac{(\sqrt{\kappa} - 1)^2}{\sqrt{\kappa}} \left( \frac{\sqrt[4]{\kappa} - 1}{\sqrt[4]{\kappa} + 1} \right)^{2m^+ - 1} = \epsilon.$$

This gives rise to

$$m^+ = \frac{1}{2} \left[ \frac{\ln\dfrac{(\sqrt{\kappa} - 1)^2}{\epsilon \sqrt{\kappa}}}{\ln\dfrac{\sqrt[4]{\kappa} + 1}{\sqrt[4]{\kappa} - 1}} + 1 \right].$$

For large $\kappa$, we have

$$m^+ \approx \frac{\sqrt[4]{\kappa}}{4} \ln\!\left( \frac{\sqrt{\kappa}}{\epsilon} \right). \qquad (10)$$

In Figure 1, we plot the actual minimum integer $m$ and the upper bound $m^+$ for $\kappa$ up to $10^8$ and $\epsilon = 10^{-5}$. We realize that $m^+$ over-estimates the minimum $m$ by more than 50% for large $\kappa$. Therefore, the upper bound $m^+$ can only be used for a rough estimate of the total number of required operations.
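Both the closed form for $m^+$ and its large-$\kappa$ simplification (10) are immediate to evaluate; a sketch comparing them with the actual minimum $m$ from the forward recurrence:

```python
import numpy as np

def m_upper(kappa, eps):
    """The upper bound m+ obtained from Gragg's estimate above."""
    r4 = kappa ** 0.25
    return 0.5 * (np.log((np.sqrt(kappa) - 1) ** 2 / (eps * np.sqrt(kappa)))
                  / np.log((r4 + 1) / (r4 - 1)) + 1)

def m_upper_large(kappa, eps):
    """The large-kappa form (10): (kappa^(1/4)/4) ln(sqrt(kappa)/eps)."""
    return kappa ** 0.25 / 4 * np.log(np.sqrt(kappa) / eps)

for kappa in (1e2, 1e4, 1e6, 1e8):
    print(kappa, min_pade_order(kappa, 1e-5),   # actual minimum m
          m_upper(kappa, 1e-5), m_upper_large(kappa, 1e-5))
```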
Figure 1: The minimum $m$ and the upper bound $m^+$ for $\epsilon = 10^{-5}$.

5 Numerical examples

In this section, we demonstrate the accuracy and efficiency of the new method by numerical examples. We choose $\epsilon = \delta \|\ln(A)\|_2$, where $\delta = 10^{-5}$ is the relative error tolerance in the matrix 2-norm. Since the extreme eigenvalues of $A$ are calculated in Step 2, the 2-norm of $\ln(A)$ can be obtained without any extra work. The integer $m$ calculated in Step 3 depends on $\kappa = \lambda_1/\lambda_n$ and $\epsilon$. However, its values are typically smaller than the size of the matrix $n$. This is consistent with the theoretical result in Section 4 that its dependence on $\kappa$ is mainly $\sqrt[4]{\kappa}$. Our numerical results are calculated in single precision, then compared with the double precision "exact" solutions obtained from the standard spectral decomposition method. The following notations are used for the relative errors in various matrix norms:

$$e_F = \frac{\|\ln(A) - S_m\|_F}{\|\ln(A)\|_F}, \qquad e_\infty = \frac{\|\ln(A) - S_m\|_\infty}{\|\ln(A)\|_\infty}, \qquad e_2 = \frac{\|\ln(A) - S_m\|_2}{\|\ln(A)\|_2}.$$

We also calculate the relative errors for the single precision solutions obtained from the spectral decomposition method (based on the same double precision "exact" solutions). These relative errors are denoted by $s_F$, $s_\infty$ and $s_2$. The numerical experiments are carried out on a SUN Ultra 1 (model 170) workstation.
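The three relative errors are straightforward to evaluate against a higher precision reference; a sketch that uses scipy.linalg.logm in place of the double precision spectral solutions used in the experiments:

```python
import numpy as np
from scipy.linalg import logm

def relative_errors(A, Sm):
    """Return (e_F, e_inf, e_2) of an approximation Sm to ln(A)."""
    L = logm(A)  # reference "exact" logarithm
    E = L - Sm
    return (np.linalg.norm(E, 'fro') / np.linalg.norm(L, 'fro'),
            np.linalg.norm(E, np.inf) / np.linalg.norm(L, np.inf),
            np.linalg.norm(E, 2) / np.linalg.norm(L, 2))
```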
Example 1: The $n \times n$ matrix is $A = (a_{ij})$, where

$$a_{ij} = \frac{1}{2 + (i - j)^2}.$$

As we can see from Table 1, the integer $m$ is quite small for $\delta = 10^{-5}$. The relative errors in different matrix norms are listed in the table for various values of $n$.

Table 1: Example 1 relative errors for approximating $\ln(A)$ by the Padé method with $\delta = 10^{-5}$ and by the standard method, both calculated in single precision.

Theoretically, the entries in the $e_2$ column should be less than or equal to $\delta$. This has not been the case for $n \ge 300$, due to roundoff errors. Nevertheless, our numerical results have the same level of accuracy as the single precision solutions calculated by the standard spectral decomposition method when $n$ is not small. In fact, our numerical approximations are often more accurate than the solutions obtained by the standard method. For small $n$, we can easily increase the accuracy of our solutions by decreasing the relative error tolerance. As an example, for $n = 10$ and $\delta = 10^{-6}$, we have

$$m = 8, \qquad e_F = 3.2\text{E-}07, \qquad e_\infty = 6.4\text{E-}07, \qquad e_2 = 4.1\text{E-}07.$$

Example 2: The $n \times n$ matrix $A$ is obtained from a second order finite difference discretization of the negative Laplacian on a unit square with Dirichlet boundary conditions. The $(i,j)$ entry is given by

$$a_{ij} = \begin{cases} 4 & \text{if } i = j, \\ -1 & \text{if } |i - j| = p, \\ -1 & \text{if } |i - j| = 1 \text{ and } (i + j) \bmod 2p \ne 1, \\ 0 & \text{otherwise}, \end{cases}$$

where $n = p^2$.
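The two test matrices are easy to generate; a sketch (Example 1 as defined above, Example 2 with 0-based indexing so that the couplings across grid row boundaries are skipped):

```python
import numpy as np

def example1(n):
    """Example 1: a_ij = 1/(2 + (i - j)^2)."""
    i = np.arange(n)
    return 1.0 / (2 + (i[:, None] - i[None, :]) ** 2)

def example2(p):
    """Example 2: 2D negative Laplacian on a p-by-p grid, n = p^2."""
    n = p * p
    A = 4.0 * np.eye(n)
    for i in range(n - 1):
        if (i + 1) % p != 0:              # no coupling across a grid row end
            A[i, i + 1] = A[i + 1, i] = -1.0
    for i in range(n - p):
        A[i, i + p] = A[i + p, i] = -1.0  # coupling to the next grid row
    return A
```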
From the studies in Section 4, we know that $m$ is mainly proportional to $\sqrt[4]{\kappa}$. Since the condition number of this matrix is known to be proportional to $n^2$, we have $\sqrt[4]{\kappa} \propto \sqrt{n}$. It is clear from Table 2 that the integer $m$ is indeed quite small compared with $n$, or even with $\sqrt{n} = p$. The relative errors of the two methods are also listed in that table for the given relative error tolerance $\delta = 10^{-5}$.

Table 2: Example 2 relative errors for approximating $\ln(A)$.

We observe that the entries in the $e_2$ column are less than $\delta$, consistent with the theoretical result (7). From Table 2, it is clear that with this moderate choice of $\delta$, our numerical solutions have already achieved the same level of accuracy as the full (but single) precision solutions calculated by the standard method. For small $n$, the accuracy of our solutions can be improved by choosing a smaller $\delta$. Here are the first two cases for $\delta = 10^{-6}$:

$$n = 4, \quad m = 4, \quad e_F = 1.3\text{E-}07, \quad e_\infty = 1.9\text{E-}07, \quad e_2 = 1.8\text{E-}07;$$
$$n = 16, \quad m = 6, \quad e_F = 3.6\text{E-}07, \quad e_\infty = 7.0\text{E-}07, \quad e_2 = 4.8\text{E-}07.$$

Example 3: We consider the matrix $A = B B^T$, where the entries of $B$ are random numbers from the uniform distribution on $[-1/2, 1/2]$. In Table 3, the relative errors for the approximate solutions of $\ln(A)$ are listed for eleven $100 \times 100$ random matrices. These eleven cases are ordered according to their spectral condition numbers. The integer $m$ is related to $\kappa$ and $\epsilon$, but $\epsilon$ (which equals $\delta \|\ln(A)\|_2$) also depends on the extreme eigenvalues. Therefore, for the given $\delta = 10^{-5}$, two matrices with the same spectral condition number could have different values of $m$. From Table 3, we observe that our method gives more accurate solutions for 6 matrices out of the total of 11. In some cases, the numerical solutions by both methods do not have a good relative accuracy.
Table 3: Example 3 relative errors for random matrices.

In all these calculations, our method is faster than the standard procedure (in single precision). The latter is implemented with a call to the LAPACK routine xSYEV for the spectral decomposition (1) and a routine for evaluating $\ln(A)$ from Equation (2), which requires $n^3$ operations. A significant reduction in the total execution time is observed for the first two examples. In Table 4, we list the total execution time in seconds required by the two methods.

Table 4: Execution time in seconds for Examples 1 and 2 by the new method ($T_{\text{new}}$) and the spectral decomposition method ($T_{\text{old}}$).

For the matrices in Example 3, the standard method requires about 0.3 seconds. Our method requires 0.1 to 0.01 seconds, depending on the value of $m$. These results are obtained on a SUN Ultra 1 (model 170) workstation using the compiler f77 (version 4.0) from SUN Microsystems. All programs, including LAPACK, are compiled with the option "-fast".

6 Conclusion

We have developed and implemented a new method for calculating the logarithm of a symmetric positive definite matrix in this paper. When the matrix is not very ill-conditioned, the new method is more efficient than the standard approach based on the spectral decomposition. The total number of operations required by the new method is $\frac{10}{3} n^3 + \frac{5}{2} m n^2$, where $m$ depends on the spectral condition number $\kappa$ and the desired accuracy $\epsilon$. An upper bound for $m$ is $\frac{\sqrt[4]{\kappa}}{4} \ln(\sqrt{\kappa}/\epsilon)$. This compares favorably with the spectral decomposition method, which requires about $10n^3$ operations, when the condition number is not very large. Theoretically, the approximation $S_m$ satisfies the condition $\|\ln(A) - S_m\|_2 \le \epsilon$.
References

[1] Lambert, J. H., "Mémoire sur quelques propriétés remarquables des quantités transcendantes circulaires et logarithmiques," Mémoires de l'Acad. de Berlin, Année 1761, 265-322, 1768.

[2] Anderson, E., Bai, Z., Bischof, C., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Ostrouchov, S. & Sorensen, D., LAPACK Users' Guide, SIAM, 1992.

[3] Baker, Jr., G. A. & Graves-Morris, P., Padé Approximants, 2nd edition, Cambridge University Press, 1996.

[4] Björck, Å. & Hammarling, S., "A Schur method for the square root of a matrix", Linear Algebra Appl., 52/53, 127-140, 1983.

[5] Dieci, L., "Considerations on computing real logarithms of matrices, Hamiltonian logarithms, and skew-symmetric logarithms", Linear Algebra Appl., 244, 35-54, 1996.

[6] Davis, P. J. & Rabinowitz, P., Methods of Numerical Integration, 2nd edition, Academic Press, New York, 1984.

[7] Dieci, L., Morini, B. & Papini, A., "Computational techniques for real logarithms of matrices", SIAM J. Matrix Anal. Appl., 17, 570-593, 1996.

[8] Gauss, C. F., "Methodus nova integralium valores per approximationem inveniendi" (1814); Werke, Vol. 3, Göttingen, 165-196, 1876.

[9] Golub, G. H. & Van Loan, C. F., Matrix Computations, 2nd edition, The Johns Hopkins University Press, 1989.

[10] Gragg, W. B., "Truncation error bounds for g-fractions", Numer. Math., 11, 370-379, 1968.

[11] Higham, N. J., "Newton's method for the matrix square root", Math. Comp., 46, 537-549, 1986.

[12] Higham, N. J., "Computing real square roots of a real matrix", Linear Algebra Appl., 88/89, 405-430, 1987.

[13] Jones, W. B. & Thron, W. J., Continued Fractions: Analytic Theory and Applications, Addison-Wesley Publishing Company, 1980.
[14] Kenney, C. & Laub, A. J., "Padé error estimates for the logarithm of a matrix", Int. J. Control, 50, 707-730, 1989.

[15] Kenney, C. & Laub, A. J., "Condition estimates for matrix functions", SIAM J. Matrix Anal. Appl., 10, 191-209, 1989.

[16] Lu, Y. Y., "A Padé approximation method for square roots of symmetric positive definite matrices", submitted to SIAM J. Matrix Anal. Appl.

[17] Press, W. H., Flannery, B. P., Teukolsky, S. A. & Vetterling, W. T., Numerical Recipes in C, Cambridge University Press, 1988.

[18] Stickel, E. U., "Fast computation of matrix exponential and logarithm", Analysis, 5, 163-173, 1985.

[19] Stickel, E. U., "An algorithm for fast high precision computation of matrix exponential and logarithm", Analysis, 10, 85-95, 1990.