Analysis of Block LDL^T Factorizations for Symmetric Indefinite Matrices
Haw-ren Fang

August 24, 2007

Abstract

We consider block LDL^T factorizations for symmetric indefinite matrices in the form LBL^T, where L is unit lower triangular and B is block diagonal with each diagonal block of dimension 1 or 2. The stability of this factorization and its application to solving linear systems have been well studied in the literature. In this paper we give a condition under which the LBL^T factorization is guaranteed to run to completion in inexact arithmetic with inertia preserved. We also analyze the stability of rank estimation for symmetric indefinite matrices by LBL^T factorization using the Bunch-Parlett (complete pivoting), fast Bunch-Parlett, or bounded Bunch-Kaufman (rook pivoting) strategy.

1 Introduction

A symmetric, possibly indefinite matrix A ∈ R^{n×n} can be factored into LBL^T, where L is unit lower triangular and B is block diagonal with each block of order 1 or 2. This is a generalization of the Cholesky factorization, which requires positive semidefiniteness. The process is as follows. Assuming A is nonzero, there exists a permutation matrix Π such that

    Π A Π^T = [ A_11  A_21^T ]
              [ A_21  A_22   ],

where the pivot A_11 is nonsingular and of order s, with s = 1 or 2. The decomposition in outer-product form is

    [ A_11  A_21^T ]   [ I_s             0       ] [ A_11  0 ] [ I_s  A_11^{-1} A_21^T ]
    [ A_21  A_22   ] = [ A_21 A_11^{-1}  I_{n-s} ] [ 0     Ā ] [ 0    I_{n-s}          ],   (1)

(This work was supported by the National Science Foundation under Grant CCR. Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA.)
where Ā = A_22 − A_21 A_11^{-1} A_21^T ∈ R^{(n−s)×(n−s)} is the Schur complement. Iteratively applying the reduction to the Schur complement, we obtain a factorization of the form P A P^T = L B L^T, where P is a permutation matrix, L is unit lower triangular, and B is block diagonal with each block of order 1 or 2.

In LBL^T factorization, choosing the permutation matrix Π and the pivot size s at each step is called diagonal pivoting. In the literature there are four major pivoting methods for general symmetric matrices: Bunch-Parlett (complete pivoting) [7], Bunch-Kaufman (partial pivoting) [4], and fast Bunch-Parlett and bounded Bunch-Kaufman (rook pivoting) [1]. In addition, there are two pivoting strategies specifically for symmetric tridiagonal matrices: one by Bunch [3] and the other by Bunch and Marcia [5]. Slapničar analyzed the Bunch-Parlett pivoting [15, Section 7]. Higham proved the stability of the Bunch-Kaufman pivoting strategy and of Bunch's method in [12] and [13], respectively. Bunch and Marcia [5] showed the normwise backward stability of their method. In this paper, we give a sufficient condition under which the LBL^T factorization of a symmetric matrix is guaranteed to run to completion numerically and preserve inertia.

With complete pivoting, the Cholesky factorization can be applied to rank estimation; its stability was analyzed by Higham [11]. Given A ∈ R^{n×n} of rank r < n, the LU factorization needs 2(3n^2 r − 3nr^2 + r^3)/3 flops, whereas the Cholesky factorization needs (3n^2 r − 3nr^2 + r^3)/3 flops but requires symmetric positive semidefiniteness.(1) To estimate the rank of a symmetric indefinite matrix, we can instead use LBL^T factorization with the Bunch-Parlett (complete pivoting), fast Bunch-Parlett, or bounded Bunch-Kaufman (rook pivoting) strategy. The number of required flops is still (3n^2 r − 3nr^2 + r^3)/3. We analyze the stability in this paper.

The organization of this paper is as follows.
Section 2 gives the basics of rounding error analysis. Section 3 discusses the conditions under which the LBL^T factorization is guaranteed to run to completion with inertia preserved. Section 4 analyzes rank estimation for symmetric indefinite matrices by LBL^T factorization with proper pivoting strategies. Concluding remarks are given in Section 5.

Throughout the paper, without loss of generality, we assume the interchanges required by any diagonal pivoting are done prior to the factorization, so that A := P A P^T, where P is the permutation matrix for pivoting. We denote the column vector of ones by e, and the 2×2 identity matrix by I_2.

2 Basics of rounding error analysis

We use fl(·) to denote the computed value of a given expression, and follow the standard model

    fl(x op y) = (x op y)(1 + δ),  |δ| ≤ u,  op = +, −, ×, /,   (2)

(1) In this paper we consider only the highest-order terms when counting flops.
where u is the unit roundoff. This model holds on most computers, including those using IEEE standard arithmetic. Lemma 1 gives a basic tool for rounding error analysis [14, Lemma 3.1].

Lemma 1 If |δ_i| ≤ u and σ_i = ±1 for i = 1, ..., k, and ku < 1, then

    Π_{i=1}^{k} (1 + δ_i)^{σ_i} = 1 + θ_k,  |θ_k| ≤ ku/(1 − ku) =: ε_k.

The function ε_k defined in Lemma 1 has two useful properties:(2)

    ε_m + ε_n + 2 ε_m ε_n ≤ ε_{m+n},  m, n ≥ 0,
    c ε_n ≤ ε_{cn},  c ≥ 1.

Since we assume ku < 1 for all practical k, ε_k = ku + ku ε_k = ku + O(u^2). These properties are used frequently to derive inequalities in this paper. Lemma 2 is helpful for our backward error analysis of LBL^T factorizations.

Lemma 2 Let y = (s − Σ_{k=1}^{n−1} a_k b_k c_k)/d. No matter what the order of evaluation, the computed ŷ satisfies

    ŷ d + Σ_{k=1}^{n−1} a_k b_k c_k = s + Δs,  |Δs| ≤ ε_n (|ŷ d| + Σ_{k=1}^{n−1} |a_k b_k c_k|).

Proof The proof is similar to the derivation leading to [14, Lemma 8.4]. Let r = s − Σ_{k=1}^{n−1} a_k b_k c_k and assume that it is computed in the order (((s − a_1 b_1 c_1) − a_2 b_2 c_2) − ... − a_{n−1} b_{n−1} c_{n−1}). Following the model (2), the computed r̂ satisfies

    r̂ = s(1 + δ_1) ··· (1 + δ_{n−1}) − Σ_{i=1}^{n−1} a_i b_i c_i (1 + ξ_i)(1 + η_i)(1 + δ_i) ··· (1 + δ_{n−1}),

where |δ_i|, |ξ_i|, |η_i| ≤ u. Then the computed ŷ = fl(r̂/d) = (r̂/d)(1 + δ_n), |δ_n| ≤ u. Therefore, ŷ d = r̂(1 + δ_n). Dividing both sides by (1 + δ_1) ··· (1 + δ_n), we obtain

    ŷ d / ((1 + δ_1) ··· (1 + δ_n)) = s − Σ_{i=1}^{n−1} (1 + ξ_i)(1 + η_i) a_i b_i c_i / ((1 + δ_1) ··· (1 + δ_{i−1})).

By Lemma 1,

    ŷ d (1 + θ_n^(0)) = s − Σ_{k=1}^{n−1} a_k b_k c_k (1 + θ_n^(k)),

where |θ_n^(k)| ≤ ε_n for k = 0, 1, ..., n−1. A little thought shows that the last equation holds for any order of evaluation. The result follows immediately.

(2) The two properties were listed in [12] and [14, Lemma 3.3], but with 2ε_m ε_n replaced by ε_m ε_n. Here we give a slightly tighter inequality.
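The model (2) and the bound of Lemma 1 are easy to check numerically. The following sketch (in Python with NumPy, not part of the original paper) draws random perturbations with |δ_i| ≤ u and random exponents σ_i = ±1, and verifies that the product stays within ε_k = ku/(1 − ku) of 1.

```python
import numpy as np

# Empirical check of Lemma 1: with |delta_i| <= u and sigma_i = +-1,
# the product prod (1 + delta_i)^{sigma_i} equals 1 + theta_k with
# |theta_k| <= eps_k := k*u / (1 - k*u).
u = np.finfo(np.float64).eps / 2      # unit roundoff for IEEE double precision
rng = np.random.default_rng(0)
k = 50
deltas = rng.uniform(-u, u, size=k)   # |delta_i| <= u
sigmas = rng.choice([-1.0, 1.0], size=k)
theta_k = np.prod((1.0 + deltas) ** sigmas) - 1.0
eps_k = k * u / (1.0 - k * u)
assert abs(theta_k) <= eps_k
```

For random perturbations the observed |θ_k| is typically only a few multiples of u, far below the worst-case ε_k, which illustrates why such bounds are pessimistic but safe.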
3 Conditions for inertia preservation

We use A to denote a symmetric matrix consisting of floating-point numbers. Due to potential rounding errors in forming or storing A, A may not be exactly the same as the matrix under consideration. Hence we write A = Ã + ΔÃ, where Ã is the matrix under consideration. If each element of A is rounded to the closest floating-point number, then |a_ij − ã_ij| ≤ u|ã_ij| with u the unit roundoff. Therefore,

    |A − Ã| ≤ u|Ã| ≤ (u/(1 − u)) |A| = ε_1 |A|.

Here and throughout this paper, we use a hat to denote computed quantities; inequalities and absolute values of matrices are defined elementwise. The overall backward error of the LBL^T factorization is

    |Ã − L̂ B̂ L̂^T| ≤ |Ã − A| + |A − L̂ B̂ L̂^T| ≤ ε_1 |A| + |A − L̂ B̂ L̂^T|.

In general, we assume A is close to Ã and therefore |Ã − A| is relatively negligible. Hence we consider only |A − L̂ B̂ L̂^T| for simplicity.

3.1 Componentwise analysis

Assuming the LBL^T factorization of A ∈ R^{n×n} runs to completion, we denote

    B = diag(B_1, B_2, ..., B_m) ∈ R^{n×n},

    L = [ L_11                ]
        [ L_21  L_22          ]
        [  ...        ...     ]
        [ L_m1  L_m2 ... L_mm ] ∈ R^{n×n}.

Each B_i is a 1×1 or 2×2 block, with L_ii = 1 or L_ii = I_2, respectively. The rest of L is partitioned accordingly. Algorithm 1 gives the LBL^T factorization in inner-product form.

Algorithm 1 LBL^T factorization of A ∈ R^{n×n} in inner-product form.
    for i = 1, ..., m do
        for j = 1, ..., i−1 do
            L_ij = (A_ij − Σ_{k=1}^{j−1} L_ik B_k L_jk^T) B_j^{-1}   (*)
        end for
        B_i = A_ii − Σ_{k=1}^{i−1} L_ik B_k L_ik^T   (**)
    end for

Two remarks on Algorithm 1 are worth noting. First, in practice we store L_ik B_k in an array for the computations in (*). Hence the factorization requires only n^3/3 flops, the same as the Cholesky factorization. Second, in outer-product form, the Schur complement Ā in (1) can be computed by either Ā = A_22 − A_21 A_11^{-1} A_21^T or Ā = A_22 − L A_11 L^T, where L = A_21 A_11^{-1}. The latter is computationally equivalent (including equivalence of rounding errors) to
the inner-product formulation in Algorithm 1, on which our stability analysis is based. If the Schur complement is computed by Ā = A_22 − A_21 A_11^{-1} A_21^T, it involves fewer arithmetic operations, and our analysis remains valid. These two remarks also apply to Algorithm 2.

In Algorithm 1, each multiplication by B_j^{-1} in (*) with B_j ∈ R^{2×2} can be computed by solving a 2×2 linear system, denoted by Ey = z (i.e., E is B_j). We assume the linear system is solved successfully with the computed ŷ satisfying

    (E + ΔE) ŷ = z,  |ΔE| ≤ ε_c |E|,   (3)

for some constant ε_c. Higham [12] proved that using Bunch-Kaufman pivoting with pivoting argument α = (1+√17)/8, if the system is solved by GEPP, then ε_c = ε_12; if it is solved by the explicit inverse of E with scaling (as implemented in both LAPACK and LINPACK), ε_c = ε_180. Since the Bunch-Parlett, fast Bunch-Parlett, and bounded Bunch-Kaufman pivoting strategies satisfy stronger conditions than the Bunch-Kaufman strategy, condition (3) still holds for them. This remark remains valid with the pivoting argument suggested for symmetric triadic matrices [10]. Here a matrix is called triadic if it has at most two nonzero off-diagonal elements in each column. In addition, Higham [13] showed that with Bunch's pivoting strategy for symmetric tridiagonal matrices [3], if a 2×2 linear system is solved by GEPP, then ε_c = ε_{6.5}. For a 2×2 linear system solved by the explicit inverse with scaling, a constant ε_c can also be obtained. For the Bunch-Marcia algorithm, ε_c = ε_500 with the 2×2 linear systems solved by the explicit inverse with scaling [5]. We conclude that all pivoting strategies in the literature [1, 3, 4, 5, 7, 10] satisfy condition (3).

Lemma 4 gives a componentwise backward error analysis of LBL^T factorization, with a proof invoking Lemma 3.

Lemma 3 Let Y = (S − Σ_{k=1}^{m−1} A_k B_k C_k) E^{-1}, where E and each B_k are either 1×1 or 2×2, such that the matrix operations are well-defined. If E is a 2×2 matrix, we assume condition (3) holds.
Then the computed Ŷ satisfies

    Ŷ E + Σ_{k=1}^{m−1} A_k B_k C_k = S + ΔS,

where

    |ΔS| ≤ max{ε_c, ε_{4m−3}} (|S| + |Ŷ||E| + Σ_{k=1}^{m−1} |A_k||B_k||C_k|).

If E is an identity, then max{ε_c, ε_{4m−3}} can be replaced by ε_{4m−3}, since ε_c = 0.

Proof Let Z = S − Σ_{k=1}^{m−1} A_k B_k C_k, and hence Y = Z E^{-1}. If B_k is a 2×2 matrix, then each element of A_k B_k C_k can be represented in the form Σ_{i=1}^{4} a_i b_i c_i. Therefore, each element of Σ_{k=1}^{m−1} A_k B_k C_k sums at most 4(m−1) terms. By
Lemma 2 (applied with 4(m−1) products followed by a division, so that 4(m−1)+1 = 4m−3),

    Ẑ = Z + ΔZ,  |ΔZ| ≤ ε_{4m−3} (|S| + Σ_{k=1}^{m−1} |A_k||B_k||C_k|).   (4)

We use X^(i) to denote the row vector formed by the ith row of a matrix X. If E is a 2×2 matrix, then applying (3) with y = Ŷ^(i) and z = Ẑ^(i), we obtain

    Ŷ^(i) (E + ΔE_i) = Ẑ^(i),  |ΔE_i| ≤ ε_c |E|   (5)

for i = 1 and i = 2 (if any). By (4) and (5),

    ΔS = Ŷ E − (S − Σ_{k=1}^{m−1} A_k B_k C_k) = Ŷ E − Ẑ + Ẑ − Z = Ŷ E − Ẑ + ΔZ,

and then

    |ΔS^(i)| = |−Ŷ^(i) ΔE_i + ΔZ^(i)| ≤ |Ŷ^(i)||ΔE_i| + |ΔZ^(i)|
             ≤ ε_c |Ŷ^(i)||E| + ε_{4m−3} (|S^(i)| + Σ_{k=1}^{m−1} |A_k^(i)||B_k||C_k|)
             ≤ max{ε_c, ε_{4m−3}} (|S^(i)| + |Ŷ^(i)||E| + Σ_{k=1}^{m−1} |A_k^(i)||B_k||C_k|).

Combining the cases i = 1, 2 (if any), we obtain the result for E being 2×2. If E is 1×1, we apply Lemma 2 directly and obtain

    |ΔS| ≤ ε_{4m−3} (|Ŷ E| + Σ_{k=1}^{m−1} |A_k||B_k||C_k|).

Lemma 4 If the LBL^T factorization of a symmetric matrix A ∈ R^{n×n} runs to completion, then the computed factors L̂, B̂ satisfy

    |A − L̂ B̂ L̂^T| ≤ max{ε_c, ε_{4m−3}} (|A| + |L̂||B̂||L̂^T|),

where we assume condition (3) holds for all linear systems involving 2×2 pivots, and m ≤ n is the number of blocks in B.

Proof Applying Lemma 3 to (*) and (**) in Algorithm 1, we obtain

    |A_ij − L̂_ij B̂_j − Σ_{k=1}^{j−1} L̂_ik B̂_k L̂_jk^T| ≤ max{ε_c, ε_{4j−3}} (|A_ij| + |L̂_ij||B̂_j| + Σ_{k=1}^{j−1} |L̂_ik||B̂_k||L̂_jk^T|).

Therefore,

    |A_ij − Σ_{k=1}^{j} L̂_ik B̂_k L̂_jk^T| ≤ max{ε_c, ε_{4j−3}} (|A_ij| + Σ_{k=1}^{j} |L̂_ik||B̂_k||L̂_jk^T|)   (6)
for 1 ≤ j ≤ i ≤ m, where L̂_jj = 1 or I_2, depending on whether B_j is 1×1 or 2×2. The result is obtained by collecting the terms in (6) into one matrix representation.

Lemma 4 coincides with [12, Eqn. (4.9)] by Higham, but with max{ε_c, ε_{4m−3}} replaced by p(n)u + O(u^2), where p(n) is a linear polynomial. From [12, Eqn. (4.9)] Higham obtained [12, Theorem 4.1], a componentwise backward error bound for solving linear systems by LBL^T factorization. Similarly, from Lemma 4 we can obtain a corresponding bound, as presented in [9, Theorem 3.4].

3.2 Normwise analysis

Lemma 4 calls for a bound on |L̂||B̂||L̂^T| relative to |A|. For simplicity, we consider |L||B||L^T| instead of |L̂||B̂||L̂^T|, since

    ∥ |L̂||B̂||L̂^T| ∥ = ∥ |L||B||L^T| ∥ + O(u).   (7)

Therefore, the objective is to find a modest c(n) such that

    ∥ |L||B||L^T| ∥ ≤ c(n) ∥A∥   (8)

in some proper norm. Higham [12] showed that using the Bunch-Kaufman pivoting strategy with the pivoting argument α = (1+√17)/8, the LBL^T factorization of a symmetric matrix A ∈ R^{n×n} satisfies

    ∥ |L||B||L^T| ∥_M ≤ max{ 1/(1−α), (3+α^2)(3+α)/(1−α^2)^2 } n ρ_n ∥A∥_M ≤ 36 n ρ_n ∥A∥_M,   (9)

where ρ_n is the growth factor and ∥·∥_M denotes the largest magnitude of the elements of the given matrix. These bounds also hold for the pivoting argument suggested for triadic matrices [10]. The Bunch-Parlett, fast Bunch-Parlett, and bounded Bunch-Kaufman pivoting strategies satisfy a stronger condition than the Bunch-Kaufman strategy, so they also satisfy (9). Also, Higham [13] proved that using Bunch's pivoting strategy with its pivoting argument α,

    ∥ |L||B||L^T| ∥_M ≤ (7α + 11)(α + 2) ∥A∥_M ≤ 42 ∥A∥_M,   (10)

where A is symmetric tridiagonal. Bunch and Marcia [5] showed that the normwise backward error bound (10) is preserved with their pivoting method. We conclude that all pivoting strategies for LBL^T factorization in the literature [1, 3, 4, 5, 7, 10] satisfy condition (8).

Wilkinson showed that the Cholesky factorization of a symmetric positive definite matrix A ∈ R^{n×n} is guaranteed to run to completion if 20 n^{3/2} κ_2(A) u ≤ 1 [18].
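As an illustration of the inner-product formulation, the following Python/NumPy sketch (not from the paper) implements Algorithm 1 restricted to 1×1 pivots with no pivot search, so it assumes every leading pivot is nonzero; a production code would select 1×1 or 2×2 pivots by one of the strategies above. The residual A − L̂B̂L̂^T can then be compared against the componentwise bound of Lemma 4.

```python
import numpy as np

def lblt_1x1_no_pivot(A):
    """Algorithm 1 with every pivot taken 1x1 and no pivot search
    (assumes each leading pivot is nonzero; real implementations use
    Bunch-Kaufman-type diagonal pivoting and allow 2x2 blocks)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)                      # B = diag(d), all blocks 1x1
    for i in range(n):
        # (**): B_i = A_ii - sum_k L_ik B_k L_ik^T
        d[i] = A[i, i] - np.sum(L[i, :i] ** 2 * d[:i])
        for j in range(i + 1, n):
            # (*): L_ji = (A_ji - sum_k L_jk B_k L_ik^T) / B_i
            L[j, i] = (A[j, i] - np.sum(L[j, :i] * L[i, :i] * d[:i])) / d[i]
    return L, d

A = np.array([[4.0, 2.0, 1.0],
              [2.0, -3.0, 0.5],
              [1.0, 0.5, 2.0]])          # indefinite, nonzero leading pivots
L, d = lblt_1x1_no_pivot(A)
residual = np.abs(A - L @ np.diag(d) @ L.T)
assert residual.max() < 1e-14            # tiny, consistent with Lemma 4
```

By Sylvester's law of inertia, the signs of the entries of d reproduce the inertia of A; here one pivot is negative, matching one negative eigenvalue of A.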
We give a sufficient condition for the success of LBL^T factorization with inertia preserved in Theorem 2, with a proof invoking a theorem of Weyl [17, Corollary 4.9]. For symmetric tridiagonal matrices the corresponding condition is given in Theorem 3.
Theorem 1 (Weyl) Let A, B be two n×n Hermitian matrices and λ_k(A), λ_k(B), λ_k(A+B) be the eigenvalues of A, B, and A+B arranged in increasing order for k = 1, ..., n. Then for k = 1, ..., n, we have

    λ_k(A) + λ_1(B) ≤ λ_k(A+B) ≤ λ_k(A) + λ_n(B).

Theorem 2 With the Bunch-Parlett, Bunch-Kaufman, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting strategy, the LBL^T factorization of a symmetric matrix A ∈ R^{n×n} succeeds with inertia preserved if

    36 n^2 max{c, 4n−3} ρ_n κ_2(A) u < 1,

where the constant c is from (3) and ρ_n is the growth factor.

Proof The proof is by finite induction. Consider Algorithm 1. Let A + ΔA = L̂ B̂ L̂^T and Â_k = A_k + ΔA_k, where A_k = A(1:k, 1:k) and ΔA_k = ΔA(1:k, 1:k). The process of LBL^T factorization iteratively factors A_k, increasing k from 1 to n. Obviously, the first stage succeeds for a nonsingular matrix. Suppose the factorization of A_{k−s} is successfully completed with inertia preserved (i.e., the inertia of Â_{k−s} is the same as that of A_{k−s}), where s = 1 or 2 denotes whether the next pivot is 1×1 or 2×2. Since the inertia of Â_{k−s} is preserved, all the pivots in Â_{k−s} have full rank, so the factorization of A_k succeeds (i.e., with no division by zero). The rest of the proof shows that the inertia of Â_k is also preserved, so that the induction can be continued.

By Lemma 4, (7), and (9), the backward error satisfies

    ∥ΔA_k∥_2 ≤ 35.925 k^2 max{c, 4k−3} ρ_k u ∥A_k∥_2 + O(u^2)

for all 1 < k ≤ n. Assuming the sum of the O(u^2) terms is smaller than 0.075 ρ_k max{c, 4k−3} u ∥A_k∥_2, we have

    ∥ΔA_k∥_2 ≤ 36 k^2 max{c, 4k−3} ρ_k u ∥A_k∥_2 =: f(k) ∥A_k∥_2.

Now let λ_min(A_k) = min_{1≤i≤k} |λ_i(A_k)|. By Theorem 1, if λ_i(A_k) > 0,

    λ_i(A_k + ΔA_k) ≥ λ_i(A_k) − ∥ΔA_k∥_2 ≥ λ_min(A_k) − f(k)∥A_k∥_2
        = λ_min(A_k)(1 − f(k) κ_2(A_k)) ≥ λ_min(A_n)(1 − f(n) κ_2(A)) > 0,

where we have assumed f(n) κ_2(A) < 1 for the last inequality. Under the same assumption, if λ_i(A_k) < 0,

    λ_i(A_k + ΔA_k) ≤ λ_i(A_k) + ∥ΔA_k∥_2 ≤ −λ_min(A_k) + f(k)∥A_k∥_2 < 0.
We conclude that if f(n) κ_2(A) < 1, then λ_i(A_k + ΔA_k) and λ_i(A_k) have the same sign for i = 1, ..., k. In other words, the inertia of Â_k is preserved. By induction, the factorization is guaranteed to run to completion with inertia preserved.

In Theorem 2, c = 12 if all 2×2 systems are solved by GEPP, and c = 180 if all 2×2 systems are solved by the explicit inverse with scaling, as shown in [12]. Table 1 displays the bounds on the growth factor ρ_n collected from the literature. The bounds for general symmetric matrices are from [2] and [4]; the bounds for symmetric triadic matrices are given in [10].

Table 1: Bounds on growth factor ρ_n.

    Pivoting strategy                            General           Triadic
    Bunch-Kaufman                                2.57^{n−1}        O(1.28^n)
    Bounded Bunch-Kaufman/Fast Bunch-Parlett     n^{1+(ln n)/4}    O(n^{1.7})
    Bunch-Parlett                                5.4n

Sorensen and Van Loan [8, Section 5.3.2] suggested a variant of the Bunch-Kaufman pivoting strategy. As discussed in [12], their variant preserves properties (3) and (9). Hence Theorem 2 remains valid for this variant.

Theorem 3 With Bunch's pivoting method or the Bunch-Marcia pivoting strategy, the LBL^T factorization of a symmetric tridiagonal matrix A ∈ R^{n×n} succeeds with inertia preserved if

    42 n max{c, 4n−3} κ_2(A) u < 1,

where the constant c is from (3).

Proof The proof is much the same as that of Theorem 2, but invoking (10) instead of (9).

In Theorem 3, using Bunch's pivoting method with the 2×2 systems solved by GEPP, c = 6.5 [13]. Using the Bunch-Marcia pivoting method with the 2×2 systems solved by the explicit inverse with scaling, c = 500 [5]. In [6], Bunch and Marcia suggested a variant of their method that does not affect properties (3) and (10). Hence Theorem 3 remains valid for this variant.

Theorems 2 and 3 say that the LBL^T factorization is guaranteed to run to completion with inertia preserved if the matrix is not too ill-conditioned.
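The sufficient condition of Theorem 2 is simple to evaluate when an estimate of the growth factor is available. Below is a minimal sketch (not from the paper) assuming ρ_n is supplied by the caller, with c = 12 as the default (2×2 systems solved by GEPP).

```python
import numpy as np

def theorem2_lhs(A, rho_n, c=12.0):
    """Left-hand side of the Theorem 2 condition
         36 n^2 max(c, 4n-3) rho_n kappa_2(A) u;
    the LBL^T factorization runs to completion with inertia preserved
    when this quantity is < 1."""
    n = A.shape[0]
    u = np.finfo(float).eps / 2            # unit roundoff
    kappa2 = np.linalg.cond(A, 2)
    return 36.0 * n**2 * max(c, 4 * n - 3) * rho_n * kappa2 * u

A = np.diag([1.0, -2.0, 3.0])              # well-conditioned indefinite matrix
assert theorem2_lhs(A, rho_n=1.0) < 1.0    # condition easily satisfied
```

For any matrix of moderate size and condition number the left-hand side is many orders of magnitude below 1, which reflects how weak the sufficient condition is in practice.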
4 Rank estimation

Cholesky factorization with complete pivoting can be used for rank estimation of symmetric positive semidefinite matrices. Its stability was studied
by Higham [11]. In this section, we discuss the stability of the Bunch-Parlett, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting strategy applied to the rank estimation of symmetric indefinite matrices.

4.1 Componentwise analysis

Recall that we use A ∈ R^{n×n} to denote a symmetric matrix of floating-point numbers. It is unrealistic to assume A is singular because of the potential rounding errors. Let Ã be the exact singular matrix of rank r < n under consideration and A = Ã + ΔÃ be its stored matrix of floating-point numbers. The overall backward error is |Ã − L̂ B̂ L̂^T|, where L̂ ∈ R^{n×r} and B̂ ∈ R^{r×r}. For simplicity of discussion we consider only |A − L̂ B̂ L̂^T| and say that A has rank r, assuming A is close to Ã. Without loss of generality, we assume that the necessary interchanges are done so that A(1:r, 1:r) has rank r, as they would be with the Bunch-Parlett, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting. The factorization is denoted by A = L B L^T, where

    B = diag(B_1, B_2, ..., B_{m−1}) ∈ R^{r×r},

    L = [ L_{1,1}                              ]
        [ L_{2,1}    L_{2,2}                   ]
        [   ...                  ...           ]
        [ L_{m−1,1}  L_{m−1,2} ... L_{m−1,m−1} ]
        [ L_{m,1}    L_{m,2}   ... L_{m,m−1}   ] ∈ R^{n×r}.

Each B_i is either 1×1 or 2×2, with L_ii = 1 or L_ii = I_2, respectively. This factorization is guaranteed if A(1:r, 1:r) satisfies the condition in Theorem 2. The pseudo-code is given in Algorithm 2.

Algorithm 2 LBL^T factorization of A ∈ R^{n×n} of rank r < n.
    for j = 1, ..., m−1 do
        B_j = A_jj − Σ_{k=1}^{j−1} L_jk B_k L_jk^T   (*)
        for i = j+1, ..., m do
            L_ij = (A_ij − Σ_{k=1}^{j−1} L_ik B_k L_jk^T) B_j^{-1}   (**)
        end for
    end for

Algorithm 2 can be regarded as the incomplete LBL^T factorization after processing r rows/columns. Theorem 4 gives a bound on its componentwise backward error. In rank estimation we are concerned with a symmetric matrix A ∈ R^{n×n} of rank r < n.

Theorem 4 Consider the incomplete LBL^T factorization of a symmetric matrix A ∈ R^{n×n} after processing r rows/columns (r < n), assume condition (3) holds for all linear systems involving 2×2 pivots, and define the backward error ΔA by

    A + ΔA = L̂ B̂ L̂^T + Â^(r+1),   (11)
where

    Â^(r+1) = [ 0   0       ]
              [ 0   Ŝ_r(A)  ],

with Ŝ_r(A) ∈ R^{(n−r)×(n−r)} the computed Schur complement. Then

    |ΔA| ≤ max{ε_c, ε_{4r+1}} (|A| + |L̂||B̂||L̂^T| + |Â^(r+1)|).   (12)

Proof To simplify the notation, we let L̂_ii = 1 or I_2 depending on whether B_i is 1×1 or 2×2, for i = 1, ..., m−1. By Lemma 3 with the assumption that condition (3) holds,

    |A_ij − Σ_{k=1}^{j} L̂_ik B̂_k L̂_jk^T| ≤ max{ε_c, ε_{4j−3}} (|A_ij| + Σ_{k=1}^{j} |L̂_ik||B̂_k||L̂_jk^T|)   (13)

for j = 1, ..., m−1 and i = j, ..., m. Note that though A_mm can be larger than 2×2, the error analysis of Lemma 3 for Ŝ_r(A) is still valid. Therefore,

    |A_mm − Ŝ_r(A) − Σ_{k=1}^{m−1} L̂_mk B̂_k L̂_mk^T| ≤ ε_{4m−3} (|A_mm| + |Ŝ_r(A)| + Σ_{k=1}^{m−1} |L̂_mk||B̂_k||L̂_mk^T|).   (14)

The result is obtained by collecting the terms in (13)-(14) into one matrix representation.

4.2 Normwise analysis

Now we bound ∥A − L̂ B̂ L̂^T∥_F to analyze the stability of the Bunch-Parlett, fast Bunch-Parlett, and bounded Bunch-Kaufman pivoting strategies applied to rank estimation for symmetric indefinite matrices. We begin with a perturbation theorem for LBL^T factorization.

Theorem 5 Let S_k(A) be the Schur complement appearing in an LBL^T factorization of a symmetric matrix A ∈ R^{n×n} after processing the first k rows/columns, k < n. Suppose there is a symmetric perturbation of A, denoted by E. Partition A as

    A = [ A_11  A_21^T ]
        [ A_21  A_22   ],

where A_11 ∈ R^{k×k}, and partition E accordingly. Assume that both A_11 and A_11 + E_11 are nonsingular. Then

    ∥S_k(A + E) − S_k(A)∥ ≤ ∥E∥ (1 + ∥W∥)^2 + O(∥E∥^2),

where W = A_11^{-1} A_21^T and ∥·∥ is a p-norm or the Frobenius norm.

Theorem 5 is from [10, Theorem 5.1], a generalization of a lemma of Higham [11, Lemma 2.1], [14, Lemma 10.10] for the Cholesky factorization.
Theorem 6 With the Bunch-Parlett, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting on a symmetric indefinite matrix A of rank r,

    ∥A − L̂ B̂ L̂^T∥_F ≤ max{c, 4r+1} (τ(A) + 1) ((∥W∥_F + 1)^2 + 1) u ∥A∥_F + O(u^2),

where c is from condition (3), τ(A) = ∥ |L||B||L^T| ∥_F / ∥A∥_F, and W = A(1:r, 1:r)^{-1} A(1:r, r+1:n). Note that we have assumed the interchanges of rows and columns by pivoting are done, so that A(1:r, 1:r) has full rank.

Proof Since (3) holds, so does (12). The growth factor and τ(A) are well controlled, so that ∥ΔA∥_F = O(u), where ΔA is defined in (11). Here and later in (16), we use property (7) in the derivation. Note that in exact arithmetic, L̂ B̂ L̂^T is the partial LBL^T factorization of A + ΔA with Â^(r+1) the Schur complement. By Theorem 5, we obtain the perturbation bound

    ∥Â^(r+1)∥_F ≤ (∥W∥_F + 1)^2 ∥ΔA∥_F + O(u^2),   (15)

where W = A(1:r, 1:r)^{-1} A(1:r, r+1:n). By (11), (15), and ∥ΔA∥_F = O(u), we have L̂ B̂ L̂^T = A + O(u). By Theorem 4,

    ∥ΔA∥_F ≤ max{ε_c, ε_{4r+1}} (∥L̂ B̂ L̂^T∥_F + ∥A∥_F + ∥Â^(r+1)∥_F)
           = max{c, 4r+1} (τ(A) + 1) u ∥A∥_F + O(u^2).   (16)

Substituting (16) into (15), we obtain

    ∥Â^(r+1)∥_F ≤ max{c, 4r+1} (τ(A) + 1) (∥W∥_F + 1)^2 u ∥A∥_F + O(u^2).   (17)

The result follows from (11), (16), and (17).

By Theorem 6, the bound on ∥A − L̂ B̂ L̂^T∥_F / ∥A∥_F is governed by ∥W∥_F and τ(A). A bound on ∥W∥_F is given in [10, Lemma 5.3]:

    ∥W∥_F ≤ sqrt( (γ/(γ+2)) (n−r) ((1+γ)^{2r} − 1) ),   (18)

where γ = max{1/α, 1/(1−α)} is the elementwise bound on |L|. With the pivoting argument α = (1+√17)/8, γ ≈ 2.78. To bound τ(A), we apply Higham's analysis leading to (9) and obtain

    τ(A) ≤ 36 n (r+1) ρ_{r+1}   (19)

for symmetric A ∈ R^{n×n} of rank r < n, where ρ_{r+1} is the growth factor, whose bounds are displayed in Table 1. We conclude that with Bunch-Parlett, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting, LBL^T factorization is normwise backward stable for rank-deficient symmetric matrices.
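The perturbation bound of Theorem 5 is easy to exercise numerically. The following sketch (hypothetical random data, not from the paper) compares the change in the Schur complement under a small symmetric perturbation E against the first-order bound ∥E∥_F (1 + ∥W∥_F)^2.

```python
import numpy as np

def schur_complement(A, k):
    """S_k(A) = A22 - A21 A11^{-1} A21^T after processing k rows/columns."""
    A11, A21, A22 = A[:k, :k], A[k:, :k], A[k:, k:]
    return A22 - A21 @ np.linalg.solve(A11, A21.T)

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n)); A = A + A.T            # symmetric test matrix
E = rng.standard_normal((n, n)); E = 1e-8 * (E + E.T)   # small symmetric perturbation
W = np.linalg.solve(A[:k, :k], A[k:, :k].T)             # W = A11^{-1} A21^T
change = np.linalg.norm(schur_complement(A + E, k) - schur_complement(A, k), 'fro')
bound = np.linalg.norm(E, 'fro') * (1.0 + np.linalg.norm(W, 'fro')) ** 2
assert change <= 1.1 * bound     # small slack absorbs the O(||E||^2) term
```

The observed change is usually well below the bound; the bound becomes descriptive mainly when ∥W∥ is large, i.e., when A_11 is ill-conditioned relative to A_21.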
4.3 Numerical experiments

Our experiments were run on a laptop with machine epsilon 2^{-52} ≈ 2.2 × 10^{-16}, with the implementation based on the newmat library.(3) For rank estimation, the important practical issue is when to stop the factorization. In (17), the bound on ∥Â^(r+1)∥_F / ∥A∥_F is governed by ∥W∥_F and τ(A). However, both bounds (18) and (19) are pessimistic. To investigate the typical ranges of ∥W∥_F and τ(A) in practice, we used random matrices described as follows. Each indefinite matrix was constructed as Q Λ Q^T ∈ R^{n×n}, where Q, different for each matrix, is a random orthogonal matrix generated by the method of G. W. Stewart [16], and Λ = diag(λ_i) has rank r. The following three test sets were used in our experiments:

    λ_1 = λ_2 = ... = λ_{r−1} = 1,  λ_r = σ,
    λ_1 = λ_2 = ... = λ_{r−1} = σ,  λ_r = 1,
    λ_i = β^i, i = 1, ..., r−1,  λ_r = 1,

where 0 < σ ≤ 1 and β^{r−1} = σ. We assigned the sign of λ_i randomly for i = 1, ..., r−1. Let t denote the number of negative eigenvalues. For each test set, we experimented with all combinations of n = 10, 20, ..., 100, r = 2, 3, ..., n, t = 1, 2, ..., r−1, and σ = 1, 10^{-3}, ..., 10^{-12}, for a total of 94,875 indefinite matrices in each set.

In addition to ∥W∥_F and τ(A), we also measured the growth factor ρ_{r+1}, the relative backward error ξ_r = ∥A − L̂ B̂ L̂^T∥_F / (u ∥A∥_F), and the relative Schur residual η_r = ∥Â^(r+1)∥_F / (u ∥A∥_F). Their maximum values for the Bunch-Parlett, fast Bunch-Parlett, and bounded Bunch-Kaufman pivoting strategies are displayed in Table 2. All these numbers are modest, except that the bounded Bunch-Kaufman pivoting strategy introduced relatively large backward errors. The assessment showed that rank estimation by LBL^T factorization with the Bunch-Parlett and fast Bunch-Parlett pivoting strategies is potentially stable in practice.

Table 2: Experimental maximum ρ_{r+1}, ∥W∥_F, τ(A), ξ_r, η_r for assessment of stability of rank estimation.

    Algorithm                ρ_{r+1}    ∥W∥_F    τ(A)    ξ_r    η_r
    Bunch-Parlett
    Fast Bunch-Parlett
    Bounded Bunch-Kaufman

Note that the bounds (18) and (19) depend on r more than on n, as does (17). Experimenting with several different stopping criteria, we suggest

    ∥Â^(k+1)∥_F ≤ (k+1)^{3/2} u ∥A∥_F,   (20)

(3) newmat is a library of matrix operations written in C++ by Robert Davies.
or the less expensive

    ∥B̂_i∥_F ≤ (k+1)^{3/2} u ∥B_1∥_F,   (21)

where B̂_i is the block pivot chosen in Â^(k+1). With the Bunch-Parlett pivoting strategy, ∥A∥_M and ∥B_1∥_M are comparable, as are ∥Â^(k+1)∥_M and ∥B_i∥_M. Therefore, (20) and (21) are related.

One potential problem is that continuing the factorization on Â^(r+1) could be unstable for rank estimation. However, the element growth is well controlled by pivoting, the dimensions of the Schur complements decrease, and the right-hand sides in (20) and (21) increase during the decomposition. These properties safeguard the stability of rank estimation.

We experimented on the three sets of matrices described above, a total of 284,625 matrices. The results are as follows. Using the Bunch-Parlett or fast Bunch-Parlett pivoting strategy with stopping criterion (20), the estimated ranks were all correct. Using the Bunch-Parlett and fast Bunch-Parlett pivoting strategies with stopping criterion (21), there were 26 (0.009%) and 53 (0.019%) errors, respectively. Using the bounded Bunch-Kaufman pivoting strategy with stopping criteria (20) and (21), there were 15 (0.005%) and 1,067 (0.375%) errors, respectively.

A further experiment with a smaller σ and stopping criterion (21) exhibited minor instability, since the conditioning of the nonsingular part of a matrix affects the stability of rank estimation. Using the Bunch-Parlett, fast Bunch-Parlett, and bounded Bunch-Kaufman pivoting strategies, there were 503 (0.884%), 488 (0.857%), and 939 (1.651%) errors, respectively.

Forcing all the nonzero eigenvalues to be positive in the three test sets, we also experimented with rank estimation of positive semidefinite matrices by LDL^T factorization. For these positive semidefinite matrices, the Bunch-Parlett and fast Bunch-Parlett pivoting strategies are equivalent to complete pivoting.
With stopping criteria (20) and (21), all the estimated ranks were accurate for σ = 1, 10^{-3}, ..., 10^{-12}. Minor instability (less than 5% errors) occurred with the smaller σ of the further experiment. The stopping criteria suggested by Higham [11] for rank estimation of positive semidefinite matrices by Cholesky factorization have the same form as (20) and (21), but with (k+1)^{3/2} replaced by n. Using his stopping criteria with complete pivoting, all the estimated ranks of the semidefinite matrices were accurate for σ = 1, 10^{-3}, ..., 10^{-12}, and also for the smaller σ. However, his stopping criteria resulted in less accuracy than (20) and (21) for indefinite matrices.

We summarize that with σ = 1, 10^{-3}, ..., 10^{-12} (i.e., when the rank was not ambiguous), both stopping criteria (20) and (21) worked well with the Bunch-Parlett and fast Bunch-Parlett pivoting strategies. However, they may not be the best for all matrices. A priori information about the matrix, such as the size of ∥W∥, the growth factor, and the distribution of the nonzero eigenvalues, may help adjust the stopping criterion.
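To make the procedure concrete, here is a Python/NumPy sketch (not from the paper) of rank estimation by partial factorization with stopping criterion (20). For simplicity it uses only 1×1 pivots chosen by the largest-magnitude diagonal entry, a simplification of Bunch-Parlett (which also examines off-diagonal elements and may take 2×2 pivots); it is adequate for the semidefinite case and serves only to illustrate the stopping test.

```python
import numpy as np

def estimate_rank(A):
    """Partial factorization with 1x1 diagonal pivoting, stopping by
    criterion (20): ||S^(k+1)||_F <= (k+1)^{3/2} u ||A||_F.
    Caution: a genuinely indefinite matrix may require 2x2 pivots,
    which this simplified sketch does not implement."""
    S = np.array(A, dtype=float)
    u = np.finfo(float).eps / 2
    normA = np.linalg.norm(A, 'fro')
    n, k = S.shape[0], 0
    while k < n:
        if np.linalg.norm(S, 'fro') <= (k + 1) ** 1.5 * u * normA:
            break                                  # Schur complement negligible
        p = int(np.argmax(np.abs(np.diag(S))))
        S[[0, p]] = S[[p, 0]]                      # symmetric interchange
        S[:, [0, p]] = S[:, [p, 0]]
        S = S[1:, 1:] - np.outer(S[1:, 0], S[1:, 0]) / S[0, 0]
        k += 1
    return k

# Rank-2 positive semidefinite example: A = x x^T + y y^T
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 1.0])
A = np.outer(x, x) + np.outer(y, y)
assert estimate_rank(A) == 2
```

For this semidefinite example diagonal pivoting coincides with complete pivoting, so the trailing Schur complement collapses to rounding level after exactly two steps and criterion (20) fires.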
5 Concluding remarks

We have studied the stability analysis of LBL^T factorizations. Our concluding remarks are as follows.

1. We gave in Theorem 2 a sufficient condition under which an LBL^T factorization of a symmetric matrix is guaranteed to run to completion with inertia preserved. The corresponding result for symmetric tridiagonal matrices was given in Theorem 3.

2. We also analyzed the numerical stability of rank estimation for symmetric indefinite matrices using LBL^T factorization with Bunch-Parlett, fast Bunch-Parlett, or bounded Bunch-Kaufman pivoting. The results are presented in Theorems 4 and 6.

3. Based on our analysis and experiments, we suggest the stopping criteria (20) and (21) for rank estimation by LBL^T factorization. In our experiments, both of them worked well with the Bunch-Parlett and fast Bunch-Parlett pivoting strategies when the rank was not ambiguous. We recommend (21) for its low cost, and the fast Bunch-Parlett pivoting strategy for its efficiency without losing much accuracy.

Acknowledgment

This paper is from Chapter 3 of the author's doctoral thesis [9]. Thanks to Dianne P. O'Leary (thesis advisor) for constructive discussions and her help in editing an early manuscript, to Howard Elman for his careful reading, and to Nick Higham for his valuable comments and for providing the Wilkinson reference [18]. It is also a pleasure to thank Roger Lee for the discussions on eigenvalue problems.

References

[1] C. Ashcraft, R. G. Grimes, and J. G. Lewis. Accurate symmetric indefinite linear equation solvers. SIAM J. Matrix Anal. Appl., 20(2).
[2] J. R. Bunch. Analysis of the diagonal pivoting method. SIAM J. Numer. Anal., 8(4).
[3] J. R. Bunch. Partial pivoting strategies for symmetric matrices. SIAM J. Numer. Anal., 11(3).
[4] J. R. Bunch and L. Kaufman. Some stable methods for calculating inertia and solving symmetric linear systems. Math. Comp., 31.
[5] J. R. Bunch and R. F. Marcia. A pivoting strategy for symmetric tridiagonal matrices. Numer. Linear Algebra Appl., 12(9).
[6] J. R. Bunch and R. F. Marcia. A simplified pivoting strategy for symmetric tridiagonal matrices. Numer. Linear Algebra Appl., 13(10).
[7] J. R. Bunch and B. N. Parlett. Direct methods for solving symmetric indefinite systems of linear equations. SIAM J. Numer. Anal., 8(4).
[8] J. J. Dongarra, I. S. Duff, D. C. Sorensen, and H. A. van der Vorst. Solving Linear Systems on Vector and Shared Memory Computers. SIAM.
[9] H.-r. Fang. Matrix factorizations, triadic matrices, and modified Cholesky factorizations for optimization. PhD thesis, Univ. of Maryland.
[10] H.-r. Fang and D. P. O'Leary. Stable factorizations of symmetric tridiagonal and triadic matrices. SIAM J. Matrix Anal. Appl., 28(2).
[11] N. J. Higham. Analysis of the Cholesky decomposition of a semi-definite matrix. In M. G. Cox and S. J. Hammarling, editors, Reliable Numerical Computation. Oxford University Press, New York.
[12] N. J. Higham. Stability of the diagonal pivoting method with partial pivoting. SIAM J. Matrix Anal. Appl., 18(1):52-65.
[13] N. J. Higham. Stability of block LDL^T factorization of a symmetric tridiagonal matrix. Linear Algebra Appl., 287.
[14] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, second edition.
[15] I. Slapničar. Componentwise analysis of direct factorization of real symmetric and Hermitian matrices. Linear Algebra Appl., 272.
[16] G. W. Stewart. The efficient generation of random orthogonal matrices with an application to condition estimation. SIAM J. Numer. Anal., 17.
[17] G. W. Stewart and J.-g. Sun. Matrix Perturbation Theory. Academic Press.
[18] J. H. Wilkinson. A priori error analysis of algebraic processes. In I. G. Petrovsky, editor, Proc. International Congress of Mathematicians, Moscow 1966. Mir Publishers.