THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION
SIAM J. NUMER. ANAL. (c) 1997 Society for Industrial and Applied Mathematics. Vol. 34, No. 3, pp. 1255-1268, June 1997.

THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION

NILOTPAL GHOSH, WILLIAM W. HAGER, AND PURANDAR SARMAH

Abstract. An algorithm presented in Hager [Comput. Math. Appl., 14 (1987)] for diagonalizing a matrix is generalized to a block matrix setting. It is shown that the resulting algorithm is locally quadratically convergent. A global convergence proof is given for matrices with separated eigenvalues and with relatively small off-diagonal elements. Numerical examples along with comparisons to the QR method are presented.

Key words. block diagonalization, eigenpair stability, eigenvalues, eigenvectors

AMS subject classifications. 65F05, 65F15, 65F35

1. Introduction. Let A denote an n × n block matrix; that is, the (i, j) element A_ij of A is itself a matrix of dimension n_i × n_j, where Σ_{i=1}^{r} n_i = n for some r ≤ n. In this paper, we develop and analyze an algorithm for computing a block diagonalization XΛX^{-1} of A, assuming one exists. Here Λ is a block diagonal matrix with diagonal blocks Λ_i, i = 1 to r, and X is an invertible matrix whose ith block of columns is denoted by X_i. Hence, the equation A = XΛX^{-1} is equivalent to the relation

    A X_i = X_i Λ_i,  i = 1 to r.

The algorithm developed in this paper is based on a stability result, Proposition 1, for a perturbation A(ε)X(ε) = X(ε)Λ(ε) of the original eigenequation. We show that if the spectra of Λ_i and Λ_j are disjoint for each i ≠ j, then there exist continuously differentiable solutions X(ε) and Λ(ε) to the perturbed equation. After differentiating the perturbed equation and applying Taylor's theorem, we obtain the following algorithm (throughout the paper, the subscript k denotes the iteration number while the subscripts i and j denote elements or submatrices of larger matrices).

BLOCK DIAGONALIZATION ALGORITHM.
If X_k Λ_k X_k^{-1} is the current approximate diagonalization of A, then

(1)  Λ_{k+1} = diag X_k^{-1} A X_k  and  X_{k+1} = X_k (I + D),

where

(2)  diag D = 0  and  D Λ_{k+1} − Λ_{k+1} D = off X_k^{-1} A X_k.

The notation diag and off above is defined in the following way: given a block matrix B, diag B denotes the block diagonal matrix whose diagonal blocks coincide with the diagonal blocks of B, and off B denotes the matrix that coincides with B except for the diagonal blocks, which are replaced by blocks of zeros.

Received by the editors January 4, 1995; accepted for publication (in revised form) September 22. This research was supported by U.S. Army Research Office contract DAAL03-89-G-0082 and by the National Science Foundation. Author affiliations: Department of Mathematics, Catonsville Community College, Baltimore, MD; Visiting Professor, Department of Mathematics, University of Florida, Gainesville, FL; Department of Mathematics, University of Florida, Gainesville, FL (hager@math.ufl.edu, puran@swamp.bellcore.com).
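The iteration (1)-(2) can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' implementation (the paper's code is Fortran); it assumes SciPy's `solve_sylvester` for the block Sylvester equations in (2), with `sizes` giving the diagonal block dimensions n_1, ..., n_r.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def block_diag_step(A, X, sizes):
    """One iteration of (1)-(2): returns (X_{k+1}, Lambda_{k+1})."""
    n = A.shape[0]
    r = len(sizes)
    off = np.cumsum([0] + list(sizes))
    B = np.linalg.solve(X, A @ X)              # X_k^{-1} A X_k
    Lam = np.zeros_like(B)                     # Lambda_{k+1} = diag B
    for i in range(r):
        s = slice(off[i], off[i + 1])
        Lam[s, s] = B[s, s]
    # D Lambda_{k+1} - Lambda_{k+1} D = off B, solved block by block:
    # the (i, j) block satisfies -Lam_ii D_ij + D_ij Lam_jj = B_ij.
    D = np.zeros_like(B)
    for i in range(r):
        for j in range(r):
            if i != j:
                si = slice(off[i], off[i + 1])
                sj = slice(off[j], off[j + 1])
                D[si, sj] = solve_sylvester(-Lam[si, si], Lam[sj, sj], B[si, sj])
    return X @ (np.eye(n) + D), Lam
```

Repeating the step drives the off-diagonal blocks of X_k^{-1} A X_k toward zero when the block spectra stay disjoint and the starting guess is good enough.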
In section 3 we show that if the spectra of Λ_i and Λ_j are disjoint for i ≠ j, then this algorithm is locally quadratically convergent. Moreover, in the case of 1 × 1 diagonal blocks, we establish a global convergence result in section 4 when the diagonal elements of A are separated and the off-diagonal elements are relatively small.

Some related Newton-type methods for computing or refining invariant spaces are presented in [2], [4], [5], [7], and [15]. The work of Anselone and Rall [2] justifies the application of Newton's method to the system

(3)  Ax − λx = 0,  y^T x = 1,

where the vector x and the scalar λ are the unknowns and y is a normalization vector. This scheme, which typically converges locally quadratically for a simple eigenvalue, involves the inversion of a matrix of the following form in each iteration:

    [ A − λI   x ]
    [  y^T     0 ].

In [4] Chatelin examines the problem of approximating an invariant space spanned by the columns of some matrix Z. She applies Newton's method to the equation

(4)  AZ − Z(Y^T A Z) = 0,

where Y is a normalization matrix. Observe that at a solution to (4), Y^T Z = I assuming Y^T A Z is invertible. The condition Y^T Z = I is the generalization of the condition y^T x = 1 in (3). Chatelin obtains a local quadratic convergence result for Newton's method applied to (4) assuming that zero is not an eigenvalue of A and that the number of eigenvalues of A, counting multiplicities, associated with the columns of Z is equal to the number of columns of Z. Each iteration of this scheme involves inverting a linear operator L, where L acting on a matrix M is defined by

    L(M) = (I − Z Y^T) A M − M (Y^T A Z).

In [7] Dongarra, Moler, and Wilkinson also consider Newton's method applied to (3); however, they focus on the special case where all the components of y are zero except for one component which is set to one. In [7] the authors consider both the modified Newton's method and the standard Newton scheme.
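For concreteness, Newton's method applied to (3) can be sketched as follows. This is an illustrative reconstruction (the function name and step count are ours, not taken from [2] or [7]); each step solves a bordered system whose matrix is the one displayed above, up to the sign convention on the corner column.

```python
import numpy as np

def newton_eigenpair(A, x, lam, y, steps=5):
    """Refine an approximate eigenpair (x, lam) of A by Newton's method on (3)."""
    n = A.shape[0]
    for _ in range(steps):
        # Residual of Ax - lam*x = 0 and y^T x = 1.
        r = np.concatenate([A @ x - lam * x, [y @ x - 1.0]])
        # Jacobian of (3): [[A - lam*I, -x], [y^T, 0]].
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)
        J[:n, n] = -x
        J[n, :n] = y
        step = np.linalg.solve(J, -r)
        x, lam = x + step[:n], lam + step[n]
    return x, lam
```

For a simple eigenvalue and a nearby starting pair, the residual of (3) shrinks quadratically from step to step.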
In the modified Newton's method (see [12]), the Jacobian is evaluated at some given point, and in each Newton iteration, this fixed Jacobian is used instead of the Jacobian at the new iterate. Hence, each iteration of the modified scheme involves inverting a matrix of the form

    [ A − λI   x ]
    [  y^T     0 ],

where λ and x are fixed approximations to an eigenpair. Generalizations of this idea for invariant spaces are also presented in [7]. In [5] Demmel examines the schemes [4] and [7], as well as a scheme that he attributes to Stewart [15], and he shows that in some sense, each of these schemes tries to solve an underlying Riccati equation. In comparing the schemes of Anselone and Rall, Chatelin, and Dongarra, Moler, and Wilkinson with the scheme proposed in this paper, we make the following observation: each iteration of any of these schemes
essentially involves the inversion of a matrix. With the block diagonalization scheme, one inversion yields a first-order correction to all the eigenvectors and eigenvalues, while one inversion with any of the other methods yields a first-order correction to an invariant space. On the other hand, with these other schemes, one only needs an approximation to an invariant space to get an improved approximation, while the scheme in this paper requires an approximation to a block diagonalization to get an improved approximation.

For another comparison between the block algorithm and Newton's method applied to (3), suppose that an approximate block diagonalization has been determined and we wish to obtain a first-order correction. For simplicity, suppose that all the eigenvalues are simple and separated so that Newton's method applied to (3) converges nicely. With the block algorithm, the flop count for one step is essentially (10/3)n^3: there are n^3 flops involved in multiplying A and X_k, (4/3)n^3 flops to compute X_k^{-1}(A X_k), and n^3 flops to compute X_k(I + D). Now consider Newton's method applied to (3); to be specific, let us consider the choice of Dongarra, Moler, and Wilkinson for y: each component is zero except for one component. To implement the Newton approach, let us initially reduce A to upper Hessenberg form by an orthogonal similarity transformation; without this reduction, the Newton approach would require on the order of n^4 flops. The flop count (see [10]) for this reduction is about (10/3)n^3 flops. As described in [7], the Newton system can be solved using a series of Givens rotations, followed by an LU factorization. The flop count for this process in the worst case is 3n^2, and since there are n eigenvectors to update, the total flop count is 3n^3. Finally, we need to expend n^3 flops to multiply each eigenvector by the orthogonal factor to obtain an eigenvector of the original matrix.
The total flop count for the Newton approach is (22/3)n^3 flops. Hence, the block algorithm appears to be preferable relative to a flop count; in addition, the block algorithm is much easier to implement, especially when the blocks are larger than 1 × 1.

2. The algorithm. The algorithm developed in this paper is based on a stability result for the block diagonalization of a perturbation of the original matrix A.

PROPOSITION 1. Let A(ε) denote an n × n matrix which is continuously differentiable at ε = 0 and which has the property A = A(0). If for each i ≠ j the spectra of Λ_i and Λ_j are disjoint, then there exist continuously differentiable functions X(ε) and Λ(ε), where Λ(ε) is block diagonal, X(0) = X, Λ(0) = Λ, and for each ε near 0 we have

(5)  A(ε)X(ε) = X(ε)Λ(ε).

Proof. Our proof of this result is based on the implicit function theorem. Instead of evaluating X(ε) directly, we introduce an auxiliary matrix C(ε) which is chosen such that X(ε) = XC(ε). Since X is invertible, there is a one-to-one correspondence between C(ε) and X(ε). It turns out that there are an infinite number of solutions to (5) with the desired properties. In order to achieve uniqueness, we impose the following constraints: C_jj(ε) = I for each j. With these side constraints, (5) takes the form

(6)  A(ε)XC(ε) = XC(ε)Λ(ε),  C_jj(ε) = I for each j.

Abstractly, (6) has the form F(ε, C(ε), Λ(ε)) = 0, where the solutions C(ε) and Λ(ε) depend on ε and where the number of equations is equal to the number of unknown elements of C(ε) and Λ(ε). At ε = 0, there is the trivial solution C(0) = I, and
Λ(0) = Λ. By the implicit function theorem (see [1, Thm. 13.7] or [8, Thm. 10.8]), there exists a continuously differentiable solution for ε near zero if the derivative operator of the system (6) with respect to C and Λ is invertible at ε = 0. Establishing invertibility of the derivative is equivalent to showing that the only solution (δC, δΛ), with δΛ block diagonal, to the following linear system is δC = 0 and δΛ = 0:

    (∂F/∂C)(0, C(0), Λ(0))δC + (∂F/∂Λ)(0, C(0), Λ(0))δΛ = (∂F/∂C)(0, I, Λ)δC + (∂F/∂Λ)(0, I, Λ)δΛ = 0.

In the context of (6), this reduces to

(7)  AXδC = XδCΛ + XδΛ,  δC_jj = 0,  j = 1 to r.

Premultiplying (7) by X^{-1} yields

(8)  ΛδC = δCΛ + δΛ,  δC_jj = 0,  j = 1 to r.

Since Λ and δΛ are block diagonal and δC_jj = 0, we deduce that δΛ = 0. For i ≠ j, (8) implies that

(9)  Λ_i δC_ij − δC_ij Λ_j = 0.

Referring to [13, sect. 12.5, Thm. 1], we see that the unique solution to (9) is δC_ij = 0 when the spectra of Λ_i and Λ_j are disjoint. Since δC = δΛ = 0, we conclude that the derivative of (6) with respect to C and Λ at ε = 0 is invertible. The implicit function theorem completes the proof.

Proposition 1 is surprising since Λ(ε) is differentiable even though its eigenvalues may be nondifferentiable. To evaluate X'(0) and Λ'(0), we differentiate (6) to obtain the relation

    A'(0)X + AXC'(0) = XC'(0)Λ + XΛ'(0),  C'_jj(0) = 0,  j = 1 to r.

Premultiplying by X^{-1} yields

(10)  X^{-1}A'(0)X + ΛC'(0) = C'(0)Λ + Λ'(0),  C'_jj(0) = 0,  j = 1 to r.

If Z^T denotes X^{-1}, (10) is equivalent to the following relations:

(11)  Λ'_i(0) = Z_i^T A'(0) X_i,  i = 1 to r,

and

(12)  C'_ij(0)Λ_j − Λ_i C'_ij(0) = Z_i^T A'(0) X_j for i ≠ j,  C'_jj(0) = 0,  j = 1 to r.

With the notation diag and off of section 1, (11) and (12) can be expressed

(13)  Λ'(0) = diag X^{-1}A'(0)X,  diag C'(0) = 0,  C'(0)Λ − ΛC'(0) = off X^{-1}A'(0)X.

After obtaining C'(0) satisfying (13), we have X'(0) = XC'(0). When the spectra of Λ_i and Λ_j are disjoint, (12) can be solved in the following way (see [3] or [10, p.
387] for details): replace Λ_i and Λ_j by their Schur decompositions and obtain an equivalent equation with upper triangular matrices in place of Λ_i and Λ_j. This upper triangular system can be solved directly by a substitution procedure.
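The substitution procedure just described is the Bartels-Stewart algorithm [3], and SciPy exposes an implementation, which gives a quick way to check a solution of an equation of the form (12). The matrices below are made-up data for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
# Block "spectra" near {1, 2} and {5, 6} are disjoint, so the Sylvester
# equation C Lam_j - Lam_i C = R has a unique solution.
Lam_i = np.diag([1.0, 2.0]) + 0.1 * rng.standard_normal((2, 2))
Lam_j = np.diag([5.0, 6.0]) + 0.1 * rng.standard_normal((2, 2))
R = rng.standard_normal((2, 2))
# Rewrite C Lam_j - Lam_i C = R in SciPy's form a*C + C*b = R with a = -Lam_i.
C = solve_sylvester(-Lam_i, Lam_j, R)
assert np.abs(C @ Lam_j - Lam_i @ C - R).max() < 1e-10
```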
As in [11], we can utilize these formulas for X'(0) and Λ'(0) in an iterative algorithm for block diagonalizing a matrix. In particular, given a matrix A and an approximate block diagonalization XΛX^{-1}, let us consider the matrix

    A(ε) = XΛX^{-1} + ε(A − XΛX^{-1}).

Observe that A(1) = A. Suppose that the spectra of Λ_i and Λ_j are disjoint for i ≠ j and that for ε between zero and one, the continuously differentiable functions X(ε) and Λ(ε) of Proposition 1 exist. Evaluating the first-order Taylor expansions of X(ε) and Λ(ε) around ε = 0 and putting ε = 1, we obtain the following first-order approximations:

    Λ(1) ≈ Λ(0) + Λ'(0) = Λ + diag(X^{-1}(A − XΛX^{-1})X) = diag X^{-1}AX

and

    X(1) ≈ X(0) + X'(0) = X + XC'(0) = X(I + C'(0)),

where C'(0) satisfies (13). Based on these expansions, we arrive at the algorithm (1) and (2), where D corresponds to the matrix C'(0) above. As another variation of (2), the matrix Λ_{k+1} can be replaced by the previous iterate Λ_k in the evaluation of D; that is, the matrix D satisfies

(14)  diag D = 0  and  D Λ_k − Λ_k D = off X_k^{-1} A X_k.

It turns out that with the modified formula (14) for D, the block diagonalization algorithm is 2-step quadratically convergent while the original formula (2) yields a 1-step quadratically convergent algorithm.

3. Local quadratic convergence. This section analyzes the local convergence of the block diagonalization algorithm while the next section analyzes one situation where convergence is guaranteed for the starting guess X_0 = I. Throughout this analysis, ‖·‖ denotes any matrix norm with the following properties:

P1. ‖AB‖ ≤ ‖A‖ ‖B‖ for each A and B.
P2. ‖A‖ ≤ ‖B‖ whenever |a_ij| ≤ |b_ij| for each i and j.
P3. ‖I‖ = 1.

Below, subscripts i and j are used to denote elements of matrices while subscripts k, l, and m are used to denote the iteration number. Finally, let B_ρ(X) denote the ball with center X and radius ρ:

    B_ρ(X) = {Y : ‖Y − X‖ ≤ ρ}.

Recall that there is an eigenspace associated with each eigenvalue of a matrix.
At best, the block diagonalization algorithm converges to elements of the eigenspace associated with a block diagonalization of A. Given a block diagonalization XΛX^{-1} of A, let S(X) denote the set of all matrices of the form XD for some invertible block diagonal matrix D.

THEOREM 1. Suppose that A = XΛX^{-1}, where Λ is block diagonal and the spectra of Λ_i and Λ_j are disjoint for all i ≠ j. Then for all X_0 in a neighborhood of X, the iterates X_k and Λ_k generated by the block diagonalization algorithm approach limits X_∞ and Λ_∞ such that A = X_∞ Λ_∞ X_∞^{-1}, where X_∞ lies in S(X) and where the
root convergence order is at least two. That is, there exists a constant C, independent of k, such that

    ‖X_k − X_∞‖ + ‖Λ_k − Λ_∞‖ ≤ C 2^{−2^k}.

Proof. Let F_Λ(Z) denote the linear operator defined by

    F_Λ(Z) = off(ZΛ − ΛZ),

where the domain and range are the set of n × n block matrices with diagonal blocks equal to zero. As noted previously, F_Λ is an invertible linear operator since the spectra of Λ_i and Λ_j are disjoint for all i ≠ j. Since F_Λ is a continuous function of Λ, there exist constants γ and σ such that ‖F_M^{-1}‖ ≤ γ for every block diagonal M ∈ B_σ(Λ). Choose ρ small enough that Y is invertible whenever Y ∈ B_ρ(X), and let β be defined by

    β = max{‖Y^{-1}‖ : Y ∈ B_ρ(X)}.

Decrease ρ further if necessary to ensure that

(15)  ρβ ≤ 1  and  ‖Λ − Y^{-1}AY‖ + βρ‖Y^{-1}AY‖ ≤ σ  for every Y ∈ B_ρ(X).

We will show that there exist a constant c and a Y_{k+1} ∈ S(X) such that

(16)  ‖Y_{k+1} − Y_k‖ ≤ c‖X_k − Y_k‖  and  ‖Y_{k+1} − X_{k+1}‖ ≤ c‖X_k − Y_k‖^2,

where c is independent of X_k and Y_k in B_ρ(X), and where X_{k+1} is generated by the block diagonalization algorithm. Assuming, for the moment, the existence of the constant c in (16), the proof is completed in the following way: define Y_0 = X and choose X_0 close enough to X such that

(17)  ‖X_0 − X‖ ≤ minimum {1/(4c), ρ/(4c), ρ/4}.

Since Y_0 = X, both X_0 and Y_0 lie in B_ρ(X). Proceeding by induction, suppose that X_1, Y_1, ..., X_k, Y_k all lie in B_ρ(X). We use (16) to show that X_{k+1} and Y_{k+1} also lie in B_ρ(X). The second inequality in (16) leads to the relation

    c‖Y_k − X_k‖ ≤ (c‖X − X_0‖)^{2^k}  or  ‖Y_k − X_k‖ / ‖X − X_0‖ ≤ (c‖X − X_0‖)^{2^k − 1}.

Combining this with the bound (17), we have

(18)  ‖Y_k − X_k‖ ≤ ‖X − X_0‖ 4^{1 − 2^k} = 4‖X − X_0‖ 4^{−2^k} ≤ ρ 4^{−2^k}.

Utilizing the first inequalities in (16) and (18), and the bound in (17), we get

    ‖Y_{k+1} − X‖ ≤ ‖Y_{k+1} − Y_k‖ + ‖Y_k − X‖ ≤ c‖X_k − Y_k‖ + ‖Y_k − X‖ ≤ 4c 4^{−2^k} ‖X − X_0‖ + ‖Y_k − X‖ ≤ ρ 4^{−2^k} + ‖Y_k − X‖.
Repeated application of this inequality yields

    ‖Y_{k+1} − X‖ ≤ ρ Σ_{l=0}^{k} 4^{−2^l} < ρ/2.

Thus, Y_{k+1} ∈ B_{ρ/2}(X). By (18), with k replaced by k + 1, we have ‖Y_{k+1} − X_{k+1}‖ ≤ ρ 4^{−2^{k+1}} ≤ ρ/2. Hence, X_{k+1} lies in B_ρ(X) and the induction step is complete. By the first inequality in (16) and by (18) we have

(19)  ‖Y_k − Y_l‖ ≤ Σ_{m=l}^{k−1} ‖Y_{m+1} − Y_m‖ ≤ c Σ_{m=l}^{k−1} ‖Y_m − X_m‖ ≤ cρ Σ_{m=l}^{∞} 4^{−2^m} ≤ 2cρ 4^{−2^l}

for any k ≥ l. Thus, for X_0 sufficiently close to X, the Y_k form a Cauchy sequence approaching some limit X_∞, and by (18) the X_k approach X_∞ as well. The triangle inequality ‖X_k − X_∞‖ ≤ ‖X_k − Y_k‖ + ‖Y_k − X_∞‖ combined with (18) and (19) completes the proof of quadratic convergence for the X_k. Since Λ_{k+1} = diag X_k^{-1}AX_k, we conclude that the Λ_k approach Λ_∞ = X_∞^{-1}AX_∞ at the same rate that the X_k approach X_∞.

To establish (16), we proceed in the following way. Let E be a matrix chosen so that X_k = Y_k(I + E); that is, E = Y_k^{-1}X_k − I = Y_k^{-1}(X_k − Y_k). By (15) and (18),

(20)  ‖E‖ = ‖Y_k^{-1}(X_k − Y_k)‖ ≤ ‖Y_k^{-1}‖ ‖X_k − Y_k‖ ≤ β‖X_k − Y_k‖ ≤ βρ 4^{−2^k}.

It follows that I + E is invertible and

(21)  ‖(I + E)^{-1}‖ ≤ 1/(1 − ‖E‖) ≤ 2.

Defining M_k = Y_k^{-1}AY_k, which is block diagonal since Y_k ∈ S(X), observe that

(22)  X_k^{-1}AX_k = (I + E)^{-1} Y_k^{-1}AY_k (I + E) = [I − E + E^2(I + E)^{-1}] M_k (I + E)
        = M_k − EM_k + M_kE − EM_kE + E^2(I + E)^{-1} M_k (I + E)
        = M_k − EM_k + M_kE + L,

where

    L = E^2(I + E)^{-1} M_k (I + E) − EM_kE.

By (15) with Y = Y_k we have

(23)  ‖Λ − M_k‖ = ‖Λ − Y_k^{-1}AY_k‖ ≤ σ,
and combining this with (21) we have

(24)  ‖L‖ ≤ ‖E‖^2 ((1 + ‖E‖)/(1 − ‖E‖) + 1) ‖M_k‖ = 2‖M_k‖ ‖E‖^2 / (1 − ‖E‖) ≤ 4(σ + ‖Λ‖) ‖E‖^2.

Recalling that Λ_{k+1} = diag X_k^{-1}AX_k and referring to (22), the matrix D involved in the evaluation of X_{k+1} satisfies

    off(DΛ_{k+1} − Λ_{k+1}D) = DΛ_{k+1} − Λ_{k+1}D = off X_k^{-1}AX_k = off(M_kE − EM_k + L),

which implies that

(25)  off((D + E)Λ_{k+1} − Λ_{k+1}(D + E)) = off((M_k − Λ_{k+1})E − E(M_k − Λ_{k+1}) + L).

By (21), (22), (23), and the next-to-last inequality in (24) we have

(26)  ‖Λ_{k+1} − M_k‖ = ‖diag X_k^{-1}AX_k − M_k‖ ≤ ‖X_k^{-1}AX_k − M_k‖ = ‖M_kE − EM_k + L‖ ≤ 2‖M_k‖ ‖E‖ + ‖L‖ ≤ 2‖M_k‖ ‖E‖ / (1 − ‖E‖) ≤ 4(σ + ‖Λ‖)‖E‖,

from which it follows that

(27)  ‖off((M_k − Λ_{k+1})E − E(M_k − Λ_{k+1}) + L)‖ ≤ ‖(M_k − Λ_{k+1})E − E(M_k − Λ_{k+1})‖ + ‖L‖ ≤ 2‖E‖ ‖Λ_{k+1} − M_k‖ + ‖L‖ ≤ 12(σ + ‖Λ‖)‖E‖^2.

By (26) we have

(28)  ‖Λ_{k+1} − Λ‖ ≤ ‖Λ_{k+1} − M_k‖ + ‖M_k − Λ‖ ≤ 2‖M_k‖ ‖E‖ / (1 − ‖E‖) + ‖M_k − Λ‖.

Referring to (20), we see that

(29)  ‖E‖ ≤ 1/2  and  ‖E‖ ≤ βρ/4.

Combining (28) and (29) gives

    ‖Λ_{k+1} − Λ‖ ≤ ‖M_k − Λ‖ + βρ‖M_k‖.

By (15), ‖Λ_{k+1} − Λ‖ ≤ σ, which ensures that ‖F_{Λ_{k+1}}^{-1}‖ ≤ γ. This bound in conjunction with (25) and (27) implies that

(30)  ‖off(D + E)‖ ≤ 12γ(σ + ‖Λ‖)‖E‖^2.

Let G be defined by G = off(D + E). Substituting D = off D = G − off E in (1) and utilizing the identity X_k = Y_k(I + E) yields

    X_{k+1} = X_k(I + D) = Y_k(I + E)(I + G − off E).

Hence, we have

(31)  X_{k+1} = Y_k(I + diag E) + Y_k(I + E)G − Y_k E off E.
Defining Y_{k+1} = Y_k(I + diag E), observe that

(32)  ‖Y_{k+1} − Y_k‖ ≤ ‖Y_k‖ ‖diag E‖ ≤ (ρ + ‖X‖)‖E‖

since Y_k lies in B_ρ(X). Referring to (20), ‖E‖ ≤ β‖X_k − Y_k‖, and combining this with (32) gives

(33)  ‖Y_{k+1} − Y_k‖ ≤ β(‖X‖ + ρ)‖X_k − Y_k‖,

which yields the first inequality in (16). Moreover, by (31), we have

    ‖X_{k+1} − Y_{k+1}‖ ≤ ‖Y_k‖(‖I + E‖ ‖G‖ + ‖E‖^2) ≤ (ρ + ‖X‖)(‖I + E‖ ‖G‖ + ‖E‖^2).

Combining this with the bound for G = off(D + E) in (30) and the relation ‖I + E‖ ≤ 1 + ‖E‖ ≤ 3/2 from (29) gives

(34)  ‖X_{k+1} − Y_{k+1}‖ ≤ (ρ + ‖X‖)(18γ(σ + ‖Λ‖) + 1)‖E‖^2 ≤ β^2(ρ + ‖X‖)(18γ(σ + ‖Λ‖) + 1)‖X_k − Y_k‖^2.

The constant c in (16) is the maximum of the constants in (33) and (34).

4. A priori convergence. This section analyzes one situation where the block diagonalization algorithm is guaranteed to converge. Define A_0 = A and for k > 0, let A_k denote X_k^{-1}AX_k. With this notation, the block diagonalization algorithm, starting from X_0 = I, can be expressed in the following way:

(35)  Λ_{k+1} = diag A_k,  A_{k+1} = (I + D_k)^{-1} A_k (I + D_k),  X_{k+1} = X_k(I + D_k),

where diag D_k = 0 and D_k Λ_{k+1} − Λ_{k+1} D_k = off A_k. Throughout this section, we assume that the diagonal blocks of A are all 1 × 1. In addition to the three properties imposed for the norm in section 3, we assume the following:

P4. ‖diag A‖ = maximum {|A_ii| : i = 1 to n} for each A.

Let σ(A) be the minimum separation of diagonal elements defined by

    σ(A) = minimum_{i ≠ j} |A_jj − A_ii|.

In the case of 1 × 1 blocks, the equation for D_k can be solved explicitly. In particular, we have

(36)  D_{k,ij} = A_{k,ij} / (A_{k,jj} − A_{k,ii})

for all i ≠ j. Taking absolute values yields |D_{k,ij}| ≤ |A_{k,ij}| / σ(A_k). It follows that

(37)  ‖D_k‖ ≤ ‖off A_k‖ / σ(A_k).

These observations are the basis for the following theorem.
THEOREM 2. If ‖off A‖ < σ(A)(√3 − 1)/2, then A is diagonalizable, and the iterates Λ_k and X_k generated by the block diagonalization algorithm, with the starting guess X_0 = I, approach limits Λ_∞ and X_∞, respectively, where A = X_∞ Λ_∞ X_∞^{-1}.

Proof. Let α denote the ratio ‖off A‖ / σ(A), which is less than or equal to (√3 − 1)/2 by assumption. Hence, the inequality

(38)  ‖off A_k‖ ≤ ασ(A_k)

holds trivially at k = 0. Proceeding by induction, assume that (38) holds at iteration k. By (37) and (38),

(39)  ‖D_k‖ ≤ ‖off A_k‖ / σ(A_k) ≤ α.

Letting B_k denote I + D_k, it follows that

(40)  ‖B_k^{-1}‖ = ‖(I + D_k)^{-1}‖ ≤ 1/(1 − ‖D_k‖) ≤ 1/(1 − α).

By (35) we have

    A_{k+1} − Λ_{k+1} = B_k^{-1} A_k B_k − Λ_{k+1}
      = B_k^{-1}(off A_k + Λ_{k+1})B_k − Λ_{k+1}
      = B_k^{-1} off A_k B_k + B_k^{-1} Λ_{k+1} B_k − Λ_{k+1}
      = B_k^{-1} off A_k B_k + B_k^{-1}(Λ_{k+1} B_k − B_k Λ_{k+1})
      = B_k^{-1} off A_k B_k − B_k^{-1} off A_k
      = B_k^{-1} off A_k (B_k − I)
      = B_k^{-1} off A_k D_k.

Combining this with (38), (39), and (40) yields

(41)  ‖off A_{k+1}‖ ≤ ‖A_{k+1} − Λ_{k+1}‖ ≤ ‖B_k^{-1}‖ ‖off A_k‖ ‖D_k‖ ≤ (α/(1 − α)) ‖off A_k‖ ≤ (α^2/(1 − α)) σ(A_k).

Also note that

(42)  ‖diag(A_{k+1} − A_k)‖ ≤ ‖A_{k+1} − diag A_k‖ = ‖A_{k+1} − Λ_{k+1}‖ ≤ (α/(1 − α)) ‖off A_k‖ ≤ (α^2/(1 − α)) σ(A_k).

Applying the diag operator to the identity A_{k+1} = A_k + (A_{k+1} − A_k) and referring to (42) we have

(43)  σ(A_{k+1}) ≥ σ(A_k) − 2‖diag(A_{k+1} − A_k)‖ ≥ σ(A_k) − (2α^2/(1 − α))σ(A_k) = ((1 − α − 2α^2)/(1 − α))σ(A_k).

Combining this with (41) gives

    ‖off A_{k+1}‖ / σ(A_{k+1}) ≤ α^2 / (1 − α − 2α^2) ≤ α

for α ≤ (√3 − 1)/2. This completes the induction step, and (38) holds for all k.
Observe that in (41) we establish the relation

    ‖off A_{k+1}‖ ≤ (α/(1 − α)) ‖off A_k‖.

Repeated application of this inequality gives

(44)  ‖off A_k‖ ≤ (α/(1 − α))^k ‖off A‖.

Since α < 1/2, the off-diagonal elements of A_k all tend to zero. Also, by (42) and (44),

(45)  ‖diag A_{k+1} − diag A_k‖ ≤ (α/(1 − α))^{k+1} ‖off A‖.

Thus, the diagonal of A_k forms a Cauchy sequence approaching some limit Λ_∞, while the off-diagonal elements tend to zero. We now show that no two diagonal elements of Λ_∞ are equal. By the first inequality in (43) and by (45) we have

    σ(A_{k+1}) ≥ σ(A_k) − 2(α/(1 − α))^{k+1} ‖off A‖ = σ(A_k) − 2ασ(A)(α/(1 − α))^{k+1}.

Repeated application of this inequality yields

    σ(A_k) ≥ σ(A)(1 − 2α Σ_{i=1}^{k} (α/(1 − α))^i) ≥ σ(A)(1 − 2α Σ_{i=1}^{∞} (α/(1 − α))^i)
      = σ(A)(1 − 2α (α/(1 − α))/(1 − α/(1 − α))) = σ(A)(1 − 2α^2/(1 − 2α)) = ((1 − 2α − 2α^2)/(1 − 2α))σ(A).

Since 1 − 2α − 2α^2 > 0 for α < (√3 − 1)/2, σ(A_k) is uniformly bounded away from zero. Combining the lower bound for σ(A_k) with (37) and (44) yields

(46)  ‖D_k‖ ≤ c r^k,  where  r = α/(1 − α) ≤ 1/√3  and  c = ‖off A‖(1 − 2α)/(σ(A)(1 − 2α − 2α^2)) = α(1 − 2α)/(1 − 2α − 2α^2).

By the equation for X_{k+1},

    ‖X_{k+1}‖ = ‖X_k(I + D_k)‖ ≤ (1 + ‖D_k‖)‖X_k‖ ≤ (1 + c r^k)‖X_k‖ ≤ ‖X_0‖ Π_{l=0}^{k} (1 + c r^l).

Since

    Σ_{l=0}^{∞} log(1 + c r^l) ≤ c/(1 − r),

the ‖X_k‖ are uniformly bounded by some constant β independent of k (see [1, p. 209, Thm. 8.55]). The estimate

    ‖X_{k+1} − X_k‖ = ‖X_k D_k‖ ≤ ‖X_k‖ ‖D_k‖ ≤ βc r^k
implies that the X_k form a Cauchy sequence that approaches some limit X_∞. Since ‖D_k‖ < 1 for each k by (39), the X_k are invertible and

(47)  ‖X_{k+1}^{-1}‖ ≤ ‖(I + D_k)^{-1}‖ ‖X_k^{-1}‖ ≤ ‖X_k^{-1}‖/(1 − ‖D_k‖) ≤ ‖X_k^{-1}‖/(1 − c r^k).

The relation 1/(1 − c r^k) ≤ 1 + 2c r^k for k sufficiently large along with (47) shows that the sequence ‖X_k^{-1}‖ is uniformly bounded. Thus, X_∞ is invertible and X_k^{-1} approaches X_∞^{-1}; moreover, Λ_{k+1} = diag X_k^{-1}AX_k approaches X_∞^{-1}AX_∞ = Λ_∞. This completes the proof.

During the proof of Theorem 2, it was shown that no two diagonal elements of Λ_∞ are equal. Hence, Theorem 1 implies that the convergence in Theorem 2 is locally quadratic.

5. Numerical experiments. To illustrate Theorems 1 and 2, we consider the matrix A defined by A_ij = 3^{−|i−j|} for i ≠ j, and A_ii = i. We apply the iteration (35), stopping when ‖off A_k‖_∞ = ‖off X_k^{-1}AX_k‖_∞ drops below a fixed tolerance. Here ‖·‖_∞ denotes the matrix ∞-norm (maximum absolute row sum). By Gerschgorin's theorem the diagonal elements of A_k approximate the eigenvalues of A with error at most ‖off A_k‖_∞. The number of iterations needed for convergence appears in Table 1 for various values of the dimension n of A. In Table 2 we show how the error depends on the iteration number for n = 10. Observe that the convergence appears to be at least quadratic as the theory predicts.

TABLE 1. Iterations needed for convergence versus the dimension n. [Table entries not recoverable in this transcription.]

TABLE 2. Error ‖off A_k‖_∞ versus the iteration number for n = 10. [Table entries not recoverable in this transcription.]

In Table 3 we compare the execution times (in seconds on a Sun Sparc 20 workstation, compiled with f77 -O) for the block diagonalization algorithm with the corresponding times of both Dongarra's SICEDR [6] (an implementation of the Newton algorithm contained in ACM algorithm 589) and the QR algorithm as implemented in EISPACK [9], [14]. Recall that the flop count for one iteration of the block algorithm is (10/3)n^3 while the Newton approach requires about (22/3)n^3 flops.
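The first experiment can be reproduced in a few lines. This is a sketch under stated assumptions: the test matrix is read as A_ij = 3^{−|i−j|} for i ≠ j with A_ii = i, and the stopping tolerance (not legible in this copy) is taken as 1e-12; the iteration is (35) with D_k given by the explicit formula (36).

```python
import numpy as np

def run_experiment(n, tol=1e-12, max_iter=50):
    """Iteration (35) with D from (36) on the assumed section-5 test matrix."""
    idx = np.arange(n)
    A = 3.0 ** (-np.abs(idx[:, None] - idx[None, :]))   # A_ij = 3^{-|i-j|}
    np.fill_diagonal(A, idx + 1.0)                      # A_ii = i, i = 1..n
    Ak, X = A.copy(), np.eye(n)
    for k in range(1, max_iter + 1):
        d = np.diag(Ak)
        denom = d[None, :] - d[:, None]                 # A_jj - A_ii
        np.fill_diagonal(denom, 1.0)                    # dummy value on the diagonal
        D = Ak / denom                                  # formula (36)
        np.fill_diagonal(D, 0.0)                        # diag D = 0
        B = np.eye(n) + D
        Ak, X = np.linalg.solve(B, Ak @ B), X @ B       # iteration (35)
        offA = Ak - np.diag(np.diag(Ak))
        if np.abs(offA).sum(axis=1).max() <= tol:       # infinity norm of off A_k
            return k, X, Ak
    return max_iter, X, Ak
```

At termination the diagonal of A_k approximates the spectrum of A with Gerschgorin error at most the tolerance.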
As a rough estimate, a diagonalization computed by the QR method requires 9n^3 flops: about 5n^3 flops to reduce the matrix to Hessenberg form and to form the orthogonal matrix used in the reduction, 2n^3 flops to update the orthogonal matrix during the reduction to Schur form, n^3 flops to reduce the Hessenberg matrix to triangular form, and n^3 flops
to diagonalize the triangular matrix and multiply by the orthogonal factor. Hence, three iterations of the block algorithm are comparable to the QR algorithm, which agrees with the observed computing times in Table 3. Observe though that the Newton approach was somewhat slower than was predicted by the flop counts. Moreover, some of the Newton iterates converged to the same eigenvalue; hence, one did not actually achieve a diagonalization of the starting matrix but rather achieved a refined approximation to most of the eigenpairs. As both the flop counts and the execution times in Table 3 indicate, one or two iterations of the block algorithm execute quicker than the QR algorithm. Hence, on a serial computer, the block algorithm is only faster than the QR algorithm when we have a good starting guess for the block diagonalization, for example, when a matrix is modified incrementally and rediagonalized after each change. On the other hand, the block algorithm is much better suited to parallel computing than the QR method since matrix multiplication and matrix inversion are readily implemented in a parallel computing environment. In comparing code size, the block algorithm was implemented in 120 lines of Fortran code, SICEDR contains 1000 statements, and the QR code contains 1370 statements (excluding comment and matrix generation statements).

TABLE 3. Execution times for the block algorithm, QR, and SICEDR versus n. [Table entries not recoverable in this transcription.]

TABLE 4. Iteration counts for the block algorithm and QR versus the perturbation size ε. [Table entries not recoverable in this transcription.]

In the next sequence of experiments we take a matrix A whose elements are randomly distributed between zero and one, and we use the QR method to obtain the real Schur decomposition A = QUQ^T, where Q is orthogonal and U is quasi-upper triangular. We then perturb A by a matrix E whose elements are randomly distributed on the interval [0, ε]. In Table 4 we compare the number of iterations for the block diagonalization algorithm with the number of iterations for the QR method applied to the matrix Q^T(A + E)Q = U + Q^T E Q.
The starting X_0 in the block algorithm corresponds to the invariant spaces of A, where a pair of real vectors is used to span each complex conjugate pair of eigenvectors. With this choice for X_0, the block algorithm could be implemented with real arithmetic. Since the number of QR iterations needed to obtain the original Schur decomposition was 132, it appears that the number of iterations for the QR method applied to a small perturbation of an upper Hessenberg matrix is essentially the same as the number of iterations that were required for the original random matrix. On the other hand, with the block diagonalization algorithm, the number of iterations decreases as the perturbation decreases.
REFERENCES

[1] T. M. APOSTOL, Mathematical Analysis, 2nd ed., Addison-Wesley, Reading, MA.
[2] P. M. ANSELONE AND L. B. RALL, The solution of characteristic value-vector problems by Newton's method, Numer. Math., 11 (1968).
[3] R. H. BARTELS AND G. W. STEWART, Solution of the equation AX + XB = C, Comm. ACM, 15 (1972).
[4] F. CHATELIN, Simultaneous Newton's iteration for the eigenproblem, Comput. Suppl., 5 (1984).
[5] J. W. DEMMEL, Three methods for refining estimates of invariant subspaces, Computing, 38 (1987).
[6] J. J. DONGARRA, Algorithm 589 SICEDR: A FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues, ACM Trans. Math. Software, 8 (1982).
[7] J. J. DONGARRA, C. B. MOLER, AND J. H. WILKINSON, Improving the accuracy of computed eigenvalues and eigenvectors, SIAM J. Numer. Anal., 20 (1983).
[8] L. FLATTO, Advanced Calculus, Waverly Press, Baltimore, MD.
[9] B. S. GARBOW, J. M. BOYLE, J. J. DONGARRA, AND C. B. MOLER, Matrix Eigensystem Routines - EISPACK Guide Extension, Springer-Verlag, Berlin.
[10] G. H. GOLUB AND C. F. VAN LOAN, Matrix Computations, 2nd ed., The Johns Hopkins University Press, Baltimore, MD.
[11] W. W. HAGER, Bidiagonalization and diagonalization, Comput. Math. Appl., 14 (1987).
[12] W. W. HAGER, Applied Numerical Linear Algebra, Prentice Hall, Englewood Cliffs, NJ.
[13] P. LANCASTER AND M. TISMENETSKY, The Theory of Matrices, Academic Press, Orlando.
[14] B. T. SMITH, J. M. BOYLE, J. J. DONGARRA, B. S. GARBOW, Y. IKEBE, V. C. KLEMA, AND C. B. MOLER, Matrix Eigensystem Routines - EISPACK Guide, Springer-Verlag, Berlin.
[15] G. W. STEWART, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Rev., 15 (1973).
More informationOUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact
Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu
More informationMatrices, Moments and Quadrature, cont d
Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More informationMath 411 Preliminaries
Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationThe Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation
The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse
More informationOff-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems
Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an
More informationthat determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that
Chapter 3 Newton methods 3. Linear and nonlinear eigenvalue problems When solving linear eigenvalue problems we want to find values λ C such that λi A is singular. Here A F n n is a given real or complex
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationCAAM 454/554: Stationary Iterative Methods
CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are
More informationThe German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English.
Chapter 4 EIGENVALUE PROBLEM The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. 4.1 Mathematics 4.2 Reduction to Upper Hessenberg
More informationInvariant Subspace Computation - A Geometric Approach
Invariant Subspace Computation - A Geometric Approach Pierre-Antoine Absil PhD Defense Université de Liège Thursday, 13 February 2003 1 Acknowledgements Advisor: Prof. R. Sepulchre (Université de Liège)
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationNumerical Methods I: Eigenvalues and eigenvectors
1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today
More informationECS130 Scientific Computing Handout E February 13, 2017
ECS130 Scientific Computing Handout E February 13, 2017 1. The Power Method (a) Pseudocode: Power Iteration Given an initial vector u 0, t i+1 = Au i u i+1 = t i+1 / t i+1 2 (approximate eigenvector) θ
More informationChapter 8 Gradient Methods
Chapter 8 Gradient Methods An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Introduction Recall that a level set of a function is the set of points satisfying for some constant. Thus, a point
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationEigenvalue and Eigenvector Problems
Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems
More informationComputation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.
Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationKey words. conjugate gradients, normwise backward error, incremental norm estimation.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationOn Solving Large Algebraic. Riccati Matrix Equations
International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationEfficient and Accurate Rectangular Window Subspace Tracking
Efficient and Accurate Rectangular Window Subspace Tracking Timothy M. Toolan and Donald W. Tufts Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI 88 USA toolan@ele.uri.edu, tufts@ele.uri.edu
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today
More informationIndex. for generalized eigenvalue problem, butterfly form, 211
Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationEIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..
EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and
More informationWe first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix
BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationIterative Algorithm for Computing the Eigenvalues
Iterative Algorithm for Computing the Eigenvalues LILJANA FERBAR Faculty of Economics University of Ljubljana Kardeljeva pl. 17, 1000 Ljubljana SLOVENIA Abstract: - We consider the eigenvalue problem Hx
More informationEigenvalue Problems and Singular Value Decomposition
Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationLec 2: Mathematical Economics
Lec 2: Mathematical Economics to Spectral Theory Sugata Bag Delhi School of Economics 24th August 2012 [SB] (Delhi School of Economics) Introductory Math Econ 24th August 2012 1 / 17 Definition: Eigen
More informationMATH 221, Spring Homework 10 Solutions
MATH 22, Spring 28 - Homework Solutions Due Tuesday, May Section 52 Page 279, Problem 2: 4 λ A λi = and the characteristic polynomial is det(a λi) = ( 4 λ)( λ) ( )(6) = λ 6 λ 2 +λ+2 The solutions to the
More informationON THE QR ITERATIONS OF REAL MATRICES
Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix
More informationLecture notes: Applied linear algebra Part 2. Version 1
Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For
More informationETNA Kent State University
C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.
More informationLast Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection
Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue
More informationOn the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination
On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices
More informationLecture 15, 16: Diagonalization
Lecture 15, 16: Diagonalization Motivation: Eigenvalues and Eigenvectors are easy to compute for diagonal matrices. Hence, we would like (if possible) to convert matrix A into a diagonal matrix. Suppose
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationTHE RELATION BETWEEN THE QR AND LR ALGORITHMS
SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian
More informationAn Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R. M.THAMBAN NAIR (I.I.T. Madras)
An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R M.THAMBAN NAIR (I.I.T. Madras) Abstract Let X 1 and X 2 be complex Banach spaces, and let A 1 BL(X 1 ), A 2 BL(X 2 ), A
More information4. Determinants.
4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.
More informationYimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract
Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory
More informationc 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March
SIAM REVIEW. c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp. 93 97, March 1995 008 A UNIFIED PROOF FOR THE CONVERGENCE OF JACOBI AND GAUSS-SEIDEL METHODS * ROBERTO BAGNARA Abstract.
More informationFactorized Solution of Sylvester Equations with Applications in Control
Factorized Solution of Sylvester Equations with Applications in Control Peter Benner Abstract Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationDefinition (T -invariant subspace) Example. Example
Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationON THE MATRIX EQUATION XA AX = X P
ON THE MATRIX EQUATION XA AX = X P DIETRICH BURDE Abstract We study the matrix equation XA AX = X p in M n (K) for 1 < p < n It is shown that every matrix solution X is nilpotent and that the generalized
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationERROR ANALYSIS FOR THE COMPUTATION OF ZEROS OF REGULAR COULOMB WAVE FUNCTION AND ITS FIRST DERIVATIVE
MATHEMATICS OF COMPUTATION Volume 70, Number 35, Pages 1195 104 S 005-5718(00)0141- Article electronically published on March 4, 000 ERROR ANALYSIS FOR THE COMPUTATION OF ZEROS OF REGULAR COULOMB WAVE
More informationSome Algorithms Providing Rigourous Bounds for the Eigenvalues of a Matrix
Journal of Universal Computer Science, vol. 1, no. 7 (1995), 548-559 submitted: 15/12/94, accepted: 26/6/95, appeared: 28/7/95 Springer Pub. Co. Some Algorithms Providing Rigourous Bounds for the Eigenvalues
More informationNP-hardness of the stable matrix in unit interval family problem in discrete time
Systems & Control Letters 42 21 261 265 www.elsevier.com/locate/sysconle NP-hardness of the stable matrix in unit interval family problem in discrete time Alejandra Mercado, K.J. Ray Liu Electrical and
More informationSpectral radius, symmetric and positive matrices
Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an
More informationJordan normal form notes (version date: 11/21/07)
Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let
More informationLAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.
LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics
More informationThe Lanczos and conjugate gradient algorithms
The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization
More informationThe restarted QR-algorithm for eigenvalue computation of structured matrices
Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationBOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION
K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex
More informationStudy Guide for Linear Algebra Exam 2
Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real
More informationInterval solutions for interval algebraic equations
Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya
More informationMATH 240 Spring, Chapter 1: Linear Equations and Matrices
MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear
More informationMatrix functions that preserve the strong Perron- Frobenius property
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu
More informationCourse Notes: Week 1
Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues
More informationNon-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error
on-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error Divya Anand Subba and Murugesan Venatapathi* Supercomputer Education and Research
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More information