Perturbation of Palindromic Eigenvalue Problems


Numerische Mathematik manuscript No. (will be inserted by the editor)

Eric King-wah Chu · Wen-Wei Lin · Chern-Shuh Wang

Perturbation of Palindromic Eigenvalue Problems

Received: date / Revised version: date

Abstract We investigate the perturbation of the palindromic eigenvalue problem for the matrix quadratic $P(\lambda) \equiv \lambda^2 A_1^T + \lambda A_0 + A_1$, with $A_0, A_1 \in \mathbb{C}^{n\times n}$ and $A_0^T = A_0$. The perturbation of palindromic eigenvalues and eigenvectors, in terms of general matrix polynomials, palindromic linearizations, (semi-Schur) anti-triangular canonical forms, differentiation and Sun's implicit function approach, is discussed.

Keywords Anti-triangular form · eigenvalue · eigenvector · implicit function theorem · palindromic linearization · matrix polynomial · palindromic eigenvalue problem · perturbation

Eric King-wah Chu: School of Mathematical Sciences, Building 28, Monash University, VIC 3800, Australia. E-mail: eric.chu@sci.monash.edu.au

Wen-Wei Lin: Department of Mathematics, National Tsinghua University, Hsinchu 300, Taiwan. E-mail: wwlin@am.nthu.edu.tw

Chern-Shuh Wang: Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan. E-mail: cswang@math.ncku.edu.tw

1 Introduction

Consider the matrix quadratic
$$P(\lambda) \equiv \lambda^2 A_1^T + \lambda A_0 + A_1, \qquad (1)$$
where $A_0, A_1 \in \mathbb{C}^{n\times n}$ and $A_0^T = A_0$, and the corresponding palindromic quadratic eigenvalue problem
$$P(\lambda)x = 0, \qquad x \neq 0. \qquad (2)$$

In general, an eigenvalue problem is described as palindromic if its spectrum contains both $\lambda$ and $\lambda^{-1}$. In other words, the eigenvalue problem of the original matrix polynomial $P(\lambda)$ (in one of its equivalent forms) has a palindromic linearization [3,8] of the form $\lambda Z \pm Z^T$. (We can transform $\lambda Z - Z^T$ back to the form $\nu(-Z) + (-Z)^T$ with $\nu = -\lambda$. Similarly, $\lambda^2 A_1^T + \lambda A_0 + A_1$ and $\nu^2 A_1^T - \nu A_0 + A_1$ define equivalent palindromic eigenvalue problems.) It is clear that (2) is palindromic, after checking its transpose.

A great foundation for the solution of palindromic eigenvalue problems has been laid by Hilliges, Mackey, Mehl and Mehrmann in [6,8,9]. There has been much recent interest in quadratic eigenvalue problems [14]. An important example of palindromic eigenvalue problems can be found in the vibration analysis of fast trains; see [6,7] for general introductions and [5] for details. For general perturbation results for eigenvalues of polynomial eigenvalue problems, see [1,13]. For results on general matrix polynomials, see the masterpiece [3].

This paper is organized as follows. In Sections 2-5, the perturbation of palindromic eigenvalues, in terms of general matrix polynomials, palindromic linearizations, the anti-triangular canonical form [8-10] and the semi-Schur anti-triangular canonical form, is discussed using the Bauer-Fike technique [1]. The perturbation result for a simple eigenvalue and its corresponding eigenvector is obtained by differentiation in Section 6. Sun's implicit function approach [12] is then applied to palindromic linearizations, general matrix quadratics and palindromic eigenvalue problems in Section 7, to obtain perturbation results for eigenvalues and the corresponding deflating subspaces. A numerical example is presented in Section 8 and the paper is concluded in Section 9.

Note that the Bauer-Fike technique allows perturbations of arbitrary size, under which the clustering of eigenvalues (and the corresponding deflating subspaces) may vary greatly. Consequently, it is meaningless to talk about the perturbation of eigenvectors or deflating subspaces in the Bauer-Fike type perturbation theorems of Sections 2-5.

2 Bauer-Fike Theorem for General Matrix Polynomials

From [1, Theorem 4.2], we have the following theorem on the perturbation of eigenvalues of a general matrix polynomial:

Theorem 2.1 Consider a regular matrix polynomial and its perturbation
$$L(\alpha,\beta) \equiv \sum_{j=0}^{l} B_j\,\alpha^j\beta^{l-j}, \qquad \widetilde L(\alpha,\beta) \equiv \sum_{j=0}^{l} \widetilde B_j\,\alpha^j\beta^{l-j}, \qquad \widetilde B_j \equiv B_j + \delta B_j \quad (j = 0, \ldots, l).$$
Let $(X, T, Z)$ be a resolvent triple for $L$ constructed using some finite and infinite Jordan pairs $J_F$ and $J_\infty$. For $(\alpha_i, \beta_i) \in \sigma(L)$ and $(\widetilde\alpha, \widetilde\beta) \in \sigma(\widetilde L)$, the spectral variation of $\widetilde L$ from $L$ is defined as
$$s_L(\widetilde L) \equiv \max_{(\widetilde\alpha,\widetilde\beta)}\{s_{(\widetilde\alpha,\widetilde\beta)}\}, \qquad s_{(\widetilde\alpha,\widetilde\beta)} \equiv \min_i\{|\widetilde\alpha\beta_i - \widetilde\beta\alpha_i|\}.$$
Let $p$ be the maximum dimension of the Jordan blocks in $J_F$ or $J_\infty$. Then for the $\tau$-norms ($\tau = 1, 2, \infty$), we have
$$s_{(\widetilde\alpha,\widetilde\beta)} \le \max\{\theta_1, \theta_1^{1/p}\}, \qquad \theta_1 \equiv pF\kappa\epsilon, \qquad (3)$$
where $c_1 = 1$ (for $\tau = 1, 2$) or $c_1 = l$ (for $\tau = \infty$), and
$$F \equiv c_1\,\frac{l+1}{2}, \qquad \kappa \equiv \|X\|\,\|Z\|, \qquad \epsilon \equiv \|[\delta B_0, \ldots, \delta B_l]\|.$$
Also, we have
$$s_L(\widetilde L) \le \max\{\theta_1, \theta_1^{1/p}\}. \qquad (4)$$

Note that we use the representation $(\alpha, \beta)$ for $\lambda = \alpha/\beta$ in the above theorem. In our palindromic case, we have $l = 2$, $B_0 = A_1$, $B_2 = A_1^T = B_0^T$ and $B_1 = A_0 = A_0^T$. Ultimately, the perturbation of the palindromic eigenvalues is controlled by $\theta_1$ in (3) and (4), which is in turn dominated by the error term involving $[\delta A_0, \delta A_1]$. Note also that the perturbation $\delta A_0$ may be nonsymmetric, pushing a pair of reciprocal palindromic eigenvalues to one which is only approximately reciprocal. For a symmetric $\delta A_0$, we only have to consider the perturbation of half of the eigenvalues, because of the palindromic structure.

The unstructured perturbation result in Theorem 2.1, for general matrix polynomials, may not be applicable or satisfactory for palindromic eigenvalue problems. However, it serves as a reference for the structured perturbation results in later sections.
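To make the quantities in Theorem 2.1 concrete, the following MATLAB sketch (illustrative only, not the authors' code) builds a random palindromic quadratic, applies an unstructured perturbation, and measures the spectral variation $s_L(\widetilde L)$ in the $(\alpha,\beta)$ representation; the matrix sizes and perturbation level are arbitrary assumptions.

```matlab
% Illustrative sketch (not the authors' code): the spectral variation of
% Theorem 2.1 for a randomly perturbed palindromic quadratic
% P(lambda) = lambda^2*A1.' + lambda*A0 + A1, measured in the (alpha,beta)
% representation with |alpha|^2 + |beta|^2 = 1 (the chordal metric).
n   = 4;
A1  = randn(n) + 1i*randn(n);
A0  = randn(n) + 1i*randn(n);  A0 = (A0 + A0.')/2;      % complex symmetric A0
dA1 = 1e-8*(randn(n) + 1i*randn(n));                    % unstructured perturbations
dA0 = 1e-8*(randn(n) + 1i*randn(n));

lam  = polyeig(A1, A0, A1.');                           % eigenvalues of P
lamt = polyeig(A1 + dA1, A0 + dA0, (A1 + dA1).');       % eigenvalues of the perturbed P

alpha  = lam ./sqrt(1 + abs(lam).^2);   beta  = 1./sqrt(1 + abs(lam).^2);
alphat = lamt./sqrt(1 + abs(lamt).^2);  betat = 1./sqrt(1 + abs(lamt).^2);

s = 0;                                  % s_L = max_j min_i |alphat_j*beta_i - betat_j*alpha_i|
for j = 1:2*n
    s = max(s, min(abs(alphat(j)*beta - betat(j)*alpha)));
end
fprintf('spectral variation s_L = %.3e\n', s);
```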

3 Bauer-Fike Theorems for Palindromic Linearizations

For the linearization $\lambda Z + Z^T$, we can work from the Kronecker canonical form
$$Q_1^T(\lambda Z + Z^T)Q_2 = \lambda\Lambda_+ + \Lambda_-, \qquad Q_1 = [P_1, P_2], \quad Q_2 = [P_2, P_1],$$
where
$$\Lambda_+ = \begin{bmatrix}\Lambda & \\ & I\end{bmatrix}, \qquad \Lambda_- = \begin{bmatrix}I & \\ & \Lambda\end{bmatrix}, \qquad \Lambda = \mathrm{diag}\{J_1, \ldots, J_N\},$$
with $J_i$ being the Jordan block for the eigenvalue $\lambda_i$ inside the unit circle. Applying the techniques in [1], we consider the singular matrix
$$Q_1^T\left[\beta(Z+\delta Z) + \alpha(Z+\delta Z)^T\right]Q_2 = (\beta\Lambda_+ + \alpha\Lambda_-)\left[I + (\beta\Lambda_+ + \alpha\Lambda_-)^{-1}Q_1^T(\beta\,\delta Z + \alpha\,\delta Z^T)Q_2\right],$$
which implies
$$1 \le \kappa_2\,\|(\beta\Lambda_+ + \alpha\Lambda_-)^{-1}\|\,\|\beta\,\delta Z + \alpha\,\delta Z^T\|, \qquad \kappa_2 \equiv \|Q_1\|\,\|Q_2\|.$$
Estimation of $\|(\beta\Lambda_+ + \alpha\Lambda_-)^{-1}\|$, with $\Lambda_\pm$ containing various Jordan blocks, produces the following theorem.

Theorem 3.1 Consider the palindromic linearization $M \equiv \beta Z + \alpha Z^T$ with the above Kronecker canonical form. Let $\widetilde Z = Z + \delta Z$, $\widetilde M \equiv \beta\widetilde Z + \alpha\widetilde Z^T$, $(\alpha_i, \beta_i) \in \sigma(M)$ and $(\widetilde\alpha, \widetilde\beta) \in \sigma(\widetilde M)$. The spectral variation of $\widetilde M$ from $M$ is defined as
$$s_M(\widetilde M) \equiv \max_{(\widetilde\alpha,\widetilde\beta)}\{s_{(\widetilde\alpha,\widetilde\beta)}\}, \qquad s_{(\widetilde\alpha,\widetilde\beta)} \equiv \min_i\{|\widetilde\alpha\beta_i - \widetilde\beta\alpha_i|\}.$$
Then for any Hölder norm, we have
$$s_{(\widetilde\alpha,\widetilde\beta)} \le \max\{\theta_2, \theta_2^{1/p}\}, \qquad \theta_2 \equiv c_2\kappa_2\|\delta Z\|, \qquad (5)$$
for some $p \le n$ and $c_2 = 2(|\alpha|^2 + |\beta|^2)\,p$. Also, we have
$$s_M(\widetilde M) \le \max\{\theta_2, \theta_2^{1/p}\}. \qquad (6)$$

We have the following three cases: (i) for small perturbations, $p$ is the size of the Jordan block associated with $(\alpha_i, \beta_i)$; (ii) for large perturbations, $p = 1$; and (iii) for general perturbations, $p$ is the size of the largest Jordan block in $\Lambda$. Note that for the 2- or F-norm, we have $\theta_2 \le 2(|\alpha|^2 + |\beta|^2)\,p\,\kappa_2\|\delta Z\|$. With the scaling $|\alpha|^2 + |\beta|^2 = 1$, $s_{(\widetilde\alpha,\widetilde\beta)}$ becomes the chordal metric and $\theta_2 \le 2p\,\kappa_2\|\delta Z\|$.

From [1], when we consider asymptotically small perturbations, there will be a one-to-one correspondence between the original and the perturbed eigenvalues. The above perturbation bounds can then be proved for a particular eigenvalue $(\alpha_i, \beta_i)$, with the condition number $\kappa_2$ replaceable by the product of the norms of the corresponding individual left- and right-eigenvectors. Details can be found in [1].
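Before moving on to structured canonical forms, the reciprocal pairing underlying these palindromic linearizations can be verified numerically. The sketch below is an illustration under generic assumptions (a random complex $Z$ with finite, nonzero eigenvalues of distinct modulus), not part of the paper.

```matlab
% Illustrative check (not from the paper): the spectrum of the palindromic
% pencil lambda*Z + Z.' is closed under lambda -> 1/lambda, because
%   det(lambda*Z + Z.') = det(lambda*Z.' + Z) = lambda^n * det((1/lambda)*Z + Z.').
% A generic complex Z is assumed, so all eigenvalues are finite, nonzero and
% of distinct modulus (which makes the sorted comparison below meaningful).
n   = 6;
Z   = randn(n) + 1i*randn(n);
lam = eig(-Z.', Z);                        % -Z.'*x = lambda*Z*x  <=>  (lambda*Z + Z.')*x = 0
err = max(abs(sort(lam) - sort(1./lam)));  % spectrum vs. its reciprocal set
disp(err)                                  % of the order of machine precision
```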

4 Perturbation of Palindromic Linearizations in Anti-Triangular Form

From [10], we have the following anti-triangular canonical form:

Theorem 4.1 Let $Z + \lambda Z^T$ be a regular $n \times n$ palindromic linearization. There exists a unitary $U \in \mathbb{C}^{n\times n}$ such that $U^TZU = (m_{ij})$ with $m_{ij} = 0$ for $i + j < n + 1$ (i.e., $U^TZU$ is anti-triangular, with zero elements in the upper-left corner). The palindromic eigenvalues are
$$-\frac{m_{1n}}{m_{n1}},\ -\frac{m_{2,n-1}}{m_{n-1,2}},\ \ldots,\ -\frac{m_{i,n-i+1}}{m_{n-i+1,i}},\ \ldots,\ -\frac{m_{n-i+1,i}}{m_{i,n-i+1}},\ \ldots,\ -\frac{m_{n-1,2}}{m_{2,n-1}},\ -\frac{m_{n1}}{m_{1n}}.$$

Let $N$ be the strictly lower-right triangular part of $U^TZU$. Reorganizing the anti-triangular form in Theorem 4.1 into lower triangular form
$$U^T(Z + \lambda Z^T)UP_n = (D_1 + N_1) + \lambda(D_2 + N_2), \qquad (7)$$
with the order-reversing permutation matrix $P_n = [e_n, e_{n-1}, \ldots, e_1]$, we have
$$D_1 = \mathrm{diag}\{m_{1n}, m_{2,n-1}, \ldots, m_{n-1,2}, m_{n1}\}, \qquad D_2 = P_nD_1P_n = \mathrm{diag}\{m_{n1}, m_{n-1,2}, \ldots, m_{2,n-1}, m_{1n}\},$$
with $N_1 = NP_n$ and $N_2 = N^TP_n$ being strictly lower triangular. Using the Schur-like form in (7), we can prove the following perturbation result for a palindromic linearization.

Theorem 4.2 Consider the palindromic linearization $M \equiv \beta Z + \alpha Z^T$, with $N$ being the strictly lower-right triangular part of its anti-triangular canonical form. Let $\widetilde Z = Z + \delta Z$, $\widetilde M \equiv \beta\widetilde Z + \alpha\widetilde Z^T$, $(\alpha_i, \beta_i) \in \sigma(M)$ and $(\widetilde\alpha, \widetilde\beta) \in \sigma(\widetilde M)$. Assume the scaling $|\widetilde\alpha|^2 + |\widetilde\beta|^2 = 1 = |\alpha_i|^2 + |\beta_i|^2$. Then for any Hölder norm, we have
$$s_{(\widetilde\alpha,\widetilde\beta)} \le c_0\max\{\theta_3, \theta_3^{1/p}\}, \qquad \theta_3 \equiv c_3\|\delta Z\|, \qquad (8)$$
for some $p \le n$, with $c_0 \equiv \min\{2, p\}$ and $c_3 = 2p\|N\|$. Also, we have
$$s_M(\widetilde M) \le c_0\max\{\theta_3, \theta_3^{1/p}\}. \qquad (9)$$

Proof. Consider the singular matrix
$$\beta(Z+\delta Z) + \alpha(Z+\delta Z)^T = \bar U\left[\beta(D_1 + N_1 + U^T\delta ZUP_n) + \alpha(D_2 + N_2 + U^T\delta Z^TUP_n)\right]P_nU^H$$
$$= \bar U\left[\beta(D_1+N_1) + \alpha(D_2+N_2)\right]\left\{I + \left[\beta(D_1+N_1) + \alpha(D_2+N_2)\right]^{-1}U^T(\beta\,\delta Z + \alpha\,\delta Z^T)UP_n\right\}P_nU^H.$$
It is easy to see that
$$\left\|\left[\beta(D_1+N_1) + \alpha(D_2+N_2)\right]^{-1}\right\|\,\|\beta\,\delta Z + \alpha\,\delta Z^T\| \ge \left\|\left[\beta(D_1+N_1) + \alpha(D_2+N_2)\right]^{-1}U^T(\beta\,\delta Z + \alpha\,\delta Z^T)UP_n\right\| \ge 1. \qquad (10)$$

Note that $\beta(D_1+N_1) + \alpha(D_2+N_2)$ is assumed to be nonsingular, otherwise the results in the theorem become trivial. With
$$z \equiv \min_{(\alpha_i,\beta_i)}\left|\left(\beta D_1 + \alpha D_2\right)_{ii}\right| = \min_{(\alpha_i,\beta_i)}|\beta\alpha_i - \alpha\beta_i|$$
and
$$\widehat M \equiv \left[\beta(D_1+N_1) + \alpha(D_2+N_2)\right]^{-1} = \left[I + (\beta D_1 + \alpha D_2)^{-1}(\beta N_1 + \alpha N_2)\right]^{-1}(\beta D_1 + \alpha D_2)^{-1},$$
using the Neumann series with $\widetilde D \equiv \beta D_1 + \alpha D_2$ and $\widetilde N \equiv \beta N_1 + \alpha N_2$ (note that $\widetilde D^{-1}\widetilde N$ is nilpotent of index at most $p$),
$$\left[I + \widetilde D^{-1}\widetilde N\right]^{-1} = I - \widetilde D^{-1}\widetilde N + \cdots + \left(-\widetilde D^{-1}\widetilde N\right)^{p-1},$$
we obtain
$$\|\widehat M\| \le z^{-1}\left(1 + \|\widetilde N\|z^{-1} + \cdots + \|\widetilde N\|^{p-1}z^{-p+1}\right).$$
With $x \equiv \|\widetilde N\|^{-1}z$ and $\eta \equiv \|\widetilde N\|^{-1}\|\beta\,\delta Z + \alpha\,\delta Z^T\|$, (10) then yields the polynomial inequality $x^p - \eta(1 + x + \cdots + x^{p-1}) \le 0$; the result in the Appendix, following the technique in [1], bounds
$$x \equiv \|\widetilde N\|^{-1}z \le c_0\max\{\eta, \eta^{1/p}\}.$$
Together with the properties of norms, this completes the proof. $\square$

With the scaling, $s_{(\widetilde\alpha,\widetilde\beta)}$ equals the chordal metric. Note also that $p$ is the integer for which $\widetilde N^k \neq 0$ ($0 \le k < p$) and $\widetilde N^p = 0$.

5 For Palindromic Linearizations in Semi-Schur Anti-Triangular Form

A refinement of Theorem 4.2, with smaller values of $p$, can be proved. We first refine the decomposition in Theorem 4.1.

Theorem 5.1 Let $Z + \lambda Z^T$ be a regular palindromic linearization. There exist nonsingular $U, V \in \mathbb{C}^{n\times n}$ such that
$$V^TZU = \mathrm{anti\text{-}diag}\{m_1, \ldots, m_r\}, \qquad m_j = D_j + N_j, \qquad (11)$$
where each $m_j$ is anti-triangular with anti-diagonal block $D_j = \mathrm{anti\text{-}diag}\{\lambda_j\}$.

Proof. The proof is similar to the standard transformation of a Schur decomposition into the corresponding Jordan canonical form. It is sufficient to show that it is possible to transform the anti-triangular form $(U^TZU, U^TZ^TU)$ in Theorem 4.1 into anti-block-diagonal form, assuming that the spectra of $T_1$ and $T_2$ do not intersect, so that
$$\begin{bmatrix} I & 0\\ P^T & I\end{bmatrix}U^TZU\begin{bmatrix} I & Q\\ 0 & I\end{bmatrix} = \begin{bmatrix} I & 0\\ P^T & I\end{bmatrix}\begin{bmatrix} 0 & T_1\\ T_2 & T_{21}\end{bmatrix}\begin{bmatrix} I & Q\\ 0 & I\end{bmatrix} = \begin{bmatrix} 0 & T_1\\ T_2 & T_2Q + P^TT_1 + T_{21}\end{bmatrix},$$
$$\begin{bmatrix} I & 0\\ P^T & I\end{bmatrix}U^TZ^TU\begin{bmatrix} I & Q\\ 0 & I\end{bmatrix} = \begin{bmatrix} I & 0\\ P^T & I\end{bmatrix}\begin{bmatrix} 0 & T_2^T\\ T_1^T & T_{21}^T\end{bmatrix}\begin{bmatrix} I & Q\\ 0 & I\end{bmatrix} = \begin{bmatrix} 0 & T_2^T\\ T_1^T & T_1^TQ + P^TT_2^T + T_{21}^T\end{bmatrix}.$$

Multiplying out the above equations (i.e., requiring the (2,2) blocks to vanish) produces the simultaneous linear matrix equation
$$\phi(P, Q) \equiv \left(T_2Q + P^TT_1,\ T_1^TQ + P^TT_2^T\right) = \left(-T_{21},\ -T_{21}^T\right), \qquad (12)$$
which is uniquely solvable. $\square$

Similarly to the proof of Theorem 4.2, but with $p$ now bounded by the maximum size of the blocks $m_j$ in Theorem 5.1, we can prove the following refined version of Theorem 4.2:

Theorem 5.2 Consider the palindromic linearization $M \equiv \beta Z + \alpha Z^T$ with its semi-Schur anti-triangular canonical form (11). Let $\widetilde Z = Z + \delta Z$, $\widetilde M \equiv \beta\widetilde Z + \alpha\widetilde Z^T$, $(\alpha_i, \beta_i) \in \sigma(M)$ and $(\widetilde\alpha, \widetilde\beta) \in \sigma(\widetilde M)$. Assume the scaling $|\widetilde\alpha|^2 + |\widetilde\beta|^2 = 1 = |\alpha_i|^2 + |\beta_i|^2$. Then for any Hölder norm, we have
$$s_{(\widetilde\alpha,\widetilde\beta)} \le c_0\max\{\theta_4, \theta_4^{1/p}\}, \qquad \theta_4 \equiv c_4\|\delta Z\|, \qquad (13)$$
with $\kappa_4 \equiv \|U\|\,\|U^{-1}\|$, $c_0 \equiv \min\{2, p\}$ and $c_4 = 2p\kappa_4\max_j\|N_j\|$. Also, we have
$$s_M(\widetilde M) \le c_0\max\{\theta_4, \theta_4^{1/p}\}. \qquad (14)$$

We have the following three cases: (i) for small perturbations, $p$ is the size of the Schur block $m_j$ associated with $(\alpha_i, \beta_i)$; (ii) for large perturbations, $p = 1$; and (iii) for general perturbations, $p$ is the size of the largest Schur block $m_j$.

Similarly to Theorem 3.1, when we consider asymptotically small perturbations, there will be a one-to-one correspondence between the original and the perturbed eigenvalues. The above perturbation bounds can then be proved for a particular eigenvalue $(\alpha_i, \beta_i)$, with the condition number $\kappa_4$ replaceable by $\|U_j\|\,\|V_j\|$. In addition, we can consider a group of neighbouring eigenvalues together, instead of one particular eigenvalue. This will increase $p$, or the size of the corresponding semi-Schur block $m_j$, but will improve the conditioning of the linear operator $\phi$ in (12) as well as $\kappa_4$.

6 Perturbation by Differentiation

Without establishing the existence of asymptotic expansions (which can be achieved using the implicit function approach in Section 7), perturbation results can be obtained by simple differentiation. Consider the palindromic eigenvalue problem
$$P(\lambda, \rho)x = 0, \qquad z^Tx - 1 = 0, \qquad P(\lambda, \rho) \equiv \lambda(\rho)^2A_1^T(\rho) + \lambda(\rho)A_0(\rho) + A_1(\rho),$$
with $\rho$ being the perturbation parameter and $A_0(0) = A_0$, $A_1(0) = A_1$.

For a simple eigenvalue $\lambda$, differentiation produces
$$\lambda_\rho = -\frac{y^TP_\rho x}{y^TP_\lambda x} = -\frac{y^TP_\rho x}{y^T(2\lambda A_1^T + A_0)x} \qquad (15)$$
and
$$Px_\rho = -(\lambda_\rho P_\lambda + P_\rho)x, \qquad z^Tx_\rho = 0.$$
Choosing $z = y(0)$, we have
$$x_\rho = -P^{\dagger}(\lambda_\rho P_\lambda + P_\rho)x,$$
where $P^{\dagger}$ denotes a suitable (pseudo-)inverse of $P$. The usual conclusion can be drawn: the right-eigenvector $x$ will be rotated through a large angle, even for a small perturbation, when $\|P^{\dagger}\|$ is large or when the separation between $\lambda$ and the other eigenvalues is small. This happens, of course, when the assumption of simplicity for the eigenvalue is nearly violated.

7 Sun's Implicit Function Approach

In this section, we abbreviate the development of Sun's approach [2,12], establishing the functions to which the implicit function theorem can be applied. Detailed and subtle arguments for this approach can be found in [2,12].

7.1 Palindromic Linearizations

Using the anti-triangular form in Theorem 4.1, Sun's approach [2] can be applied to obtain the power series of eigenvalues and deflating subspaces. Without loss of generality, we shall use an upper-triangular form in this section (with the help of the order-reversing permutation $P_n$), so that Sun's approach can be followed faithfully. Assume that $U$ in Theorem 4.1 is further organized to reflect the symmetry of the eigenvalue pairs $\{\lambda_j, \lambda_j^{-1}\}$, so that
$$U = [U_1, U_0, U_{-1}] = [U_1, U_2] = [U_{-2}, U_{-1}]$$
and
$$UP_n = [U_{-1}P_p, U_0P_{n-2p}, U_1P_p] = [U_{-1}P_p, U_{-2}P_{n-p}] = [U_2P_{n-p}, U_1P_p].$$
The anti-triangular form now becomes
$$U^T(Z - \lambda Z^T)UP_n = \begin{bmatrix}\Lambda_{\alpha1} & *\\ 0 & \Lambda_{\alpha2}\end{bmatrix} - \lambda\begin{bmatrix}\Lambda_{\beta1} & *\\ 0 & \Lambda_{\beta2}\end{bmatrix} = \begin{bmatrix}\Lambda_{\alpha1} & * & *\\ 0 & \Lambda_{\alpha0} & *\\ 0 & 0 & \Lambda_{\alpha,-1}\end{bmatrix} - \lambda\begin{bmatrix}\Lambda_{\beta1} & * & *\\ 0 & \Lambda_{\beta0} & *\\ 0 & 0 & \Lambda_{\beta,-1}\end{bmatrix}.$$

Here $\Lambda_{\alpha1}$ and $\Lambda_{\beta1}$ are $p \times p$ ($2p < n$), $\Lambda_{\beta,-1} = P_p\Lambda_{\alpha1}P_p$, $\Lambda_{\beta1} = P_p\Lambda_{\alpha,-1}P_p$, $\Lambda_{\beta0} = P_{n-2p}\Lambda_{\alpha0}P_{n-2p}$, and $\{U_1, U_{-1}\}$ span the deflating subspaces of $Z - \lambda Z^T$ corresponding to $(\Lambda_{\alpha1}, \Lambda_{\beta1})$. Assume that $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ have nonintersecting spectra.

Following the usual steps in Sun's approach [2], we assume that $Z(\rho)$ depends on a perturbation parameter $\rho$. We then have
$$M \equiv U^TZ(\rho)UP_n = \begin{bmatrix}M_{11} & M_{12}\\ M_{21} & M_{22}\end{bmatrix} \equiv \begin{bmatrix}U_1^TZU_1P_p & U_1^TZU_2P_{n-p}\\ U_2^TZU_1P_p & U_2^TZU_2P_{n-p}\end{bmatrix}$$
and
$$L \equiv U^TZ(\rho)^TUP_n = \begin{bmatrix}L_{11} & L_{12}\\ L_{21} & L_{22}\end{bmatrix} \equiv \begin{bmatrix}U_1^TZ^TU_1P_p & U_1^TZ^TU_2P_{n-p}\\ U_2^TZ^TU_1P_p & U_2^TZ^TU_2P_{n-p}\end{bmatrix}.$$
From the (2,1)-block of
$$\begin{bmatrix}I_p & 0\\ -\Psi & I_{n-p}\end{bmatrix}(M - \lambda L)\begin{bmatrix}I_p & 0\\ \Phi & I_{n-p}\end{bmatrix},$$
we construct
$$\begin{bmatrix}F(\Phi, \Psi, \rho)\\ G(\Phi, \Psi, \rho)\end{bmatrix} = \begin{bmatrix}-\Psi U_1^TZU_1P_p - \Psi U_1^TZU_2P_{n-p}\Phi + U_2^TZU_1P_p + U_2^TZU_2P_{n-p}\Phi\\ -\Psi U_1^TZ^TU_1P_p - \Psi U_1^TZ^TU_2P_{n-p}\Phi + U_2^TZ^TU_1P_p + U_2^TZ^TU_2P_{n-p}\Phi\end{bmatrix}.$$
At $\rho = 0$, we have $\Phi = 0 = \Psi$ and $F(0, 0, 0) = 0 = G(0, 0, 0)$. The implicit function theorem can then be applied to $(F, G)$. Differentiation of $F$ and $G$ with respect to $\Psi$ and $\Phi$ at $\rho = 0$ yields (after stacking columns and applying Kronecker products)
$$\frac{\partial(F, G)}{\partial(\Psi, \Phi)}\bigg|_{0} = \begin{bmatrix}-\Lambda_{\alpha1}^T \otimes I & I \otimes \Lambda_{\alpha2}\\ -\Lambda_{\beta1}^T \otimes I & I \otimes \Lambda_{\beta2}\end{bmatrix}.$$
It is easy to see that the above operator is invertible when the spectra of $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ do not intersect (and it will be ill-conditioned when the two subspectra are close). Consequently, we have proved that the power series for $\Phi(\rho)$ and $\Psi(\rho)$ exist within some small neighbourhood of $\rho = 0$:
$$\Phi(\rho) = \Phi_\rho\,\rho + \cdots, \qquad \Psi(\rho) = \Psi_\rho\,\rho + \cdots.$$
Furthermore, the deflating subspaces [2] are formed by
$$\left\{\mathrm{span}\!\left(U\begin{bmatrix}I\\ \Psi(\rho)\end{bmatrix}\right),\ \mathrm{span}\!\left(UP_n\begin{bmatrix}I\\ \Phi(\rho)\end{bmatrix}\right)\right\} = \left\{\mathrm{span}\!\left(U_1 + U_2\Psi(\rho)\right),\ \mathrm{span}\!\left(U_1P_p + U_2P_{n-p}\Phi(\rho)\right)\right\}.$$
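The invertibility condition on the linearized operator above is easy to examine numerically. The sketch below is illustrative only: the block sizes, the diagonal pencil blocks and the sign convention of the block-Kronecker Jacobian are assumptions made for the demonstration, not data from the paper.

```matlab
% Illustrative sketch: conditioning of the linearized operator in Sun's
% approach.  With diagonal blocks (La1,Lb1) and (La2,Lb2), the Jacobian
% acting on (vec(Psi), vec(Phi)) has the block-Kronecker form
%   J = [ -kron(La1.', I)   kron(I, La2) ;
%         -kron(Lb1.', I)   kron(I, Lb2) ],
% which is nonsingular exactly when the two spectra are disjoint; its
% smallest singular value shrinks with the gap between the spectra.
p = 2;  q = 3;                                       % assumed block sizes
La1 = diag([0.3, 0.5]);   Lb1 = eye(p);              % eigenvalues 0.3 and 0.5
Lb2 = eye(q);
for gap = [1, 1e-4, 1e-8]
    La2 = diag([0.5 + gap, 2, 4]);                   % one eigenvalue approaches 0.5
    J = [-kron(La1.', eye(q)),  kron(eye(p), La2);
         -kron(Lb1.', eye(q)),  kron(eye(p), Lb2)];
    fprintf('gap = %8.1e   smallest singular value of J = %8.1e\n', gap, min(svd(J)));
end
```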

Differentiation of $M_{11} \equiv U_1^TZU_1P_p$ and $L_{11} \equiv U_1^TZ^TU_1P_p$ also yields
$$\dot M_{11} = U_1^TZ_\rho U_1P_p, \qquad \dot L_{11} = U_1^TZ_\rho^TU_1P_p.$$
When $p = 1$ and $(\Lambda_{\alpha1}, \Lambda_{\beta1}) = (\lambda_{\alpha1}, \lambda_{\beta1})$ represents the finite eigenvalue $\lambda_1 = \lambda_{\alpha1}/\lambda_{\beta1}$, the above derivatives translate to
$$\dot\lambda_1 = \frac{\dot\lambda_{\alpha1}\lambda_{\beta1} - \lambda_{\alpha1}\dot\lambda_{\beta1}}{\lambda_{\beta1}^2} = \frac{\dot\lambda_{\alpha1} - \lambda_1\dot\lambda_{\beta1}}{\lambda_{\beta1}} = \frac{y_1^TZ_\rho x_1 - \lambda_1 y_1^TZ_\rho^Tx_1}{y_1^TZ^Tx_1},$$
producing a result analogous to (15). Lastly, differentiating $F$ and $G$ with respect to $\rho$ at $\rho = 0$ produces
$$F_\rho = \Lambda_{\alpha2}\Phi_\rho - \Psi_\rho\Lambda_{\alpha1} + U_2^TZ_\rho(0)U_1P_p = 0,$$
$$G_\rho = \Lambda_{\beta2}\Phi_\rho - \Psi_\rho\Lambda_{\beta1} + U_2^TZ_\rho(0)^TU_1P_p = 0.$$
The derivatives $\Phi_\rho(0)$ and $\Psi_\rho(0)$ can then be retrieved from the above equations, when $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ have nonintersecting spectra.

7.2 General Matrix Quadratics

We now apply Sun's approach [2] to the general matrix quadratic $Q_2(\lambda) = \lambda^2M + \lambda D + K$, similarly to the development in the previous subsection. Assume that the $n \times 2n$ matrices $X = [x_j]$ and $Y = [y_j]$ contain, respectively, the right- and left-eigenvectors, with $x_j$ and $y_j$ corresponding to $\lambda_j = \alpha_j/\beta_j$. For the companion linearization
$$L(\lambda) \equiv \begin{bmatrix}0 & I\\ -K & -D\end{bmatrix} - \lambda\begin{bmatrix}I & \\ & M\end{bmatrix}, \qquad (16)$$
it is easy to check that the right- and left-eigenvectors corresponding to $\lambda_j = \alpha_j/\beta_j$ are, respectively,
$$\begin{bmatrix}\beta_jx_j\\ \alpha_jx_j\end{bmatrix} \qquad \text{and} \qquad \left[\beta_jy_j^TD + \alpha_jy_j^TM,\ \beta_jy_j^T\right]^T.$$
Notice that by using $(\alpha_j, \beta_j)$ to represent $\lambda_j$, we have avoided the difficulties involving infinite eigenvalues. We can then scale the eigenvectors to satisfy the biorthogonality equations
$$\Lambda_\alpha Y^TMX\Lambda_\beta + \Lambda_\beta Y^TMX\Lambda_\alpha + \Lambda_\beta Y^TDX\Lambda_\beta = \Lambda_\beta, \qquad (17)$$
$$\Lambda_\alpha Y^TMX\Lambda_\alpha - \Lambda_\beta Y^TKX\Lambda_\beta = \Lambda_\alpha. \qquad (18)$$

Here $\Lambda_\alpha$ and $\Lambda_\beta$ are block-diagonal matrices, with blocks being the usual identity or Jordan matrices for the generalized eigenvalue sub-problems. Assume further that $X$, $Y$, $\Lambda_\alpha$ and $\Lambda_\beta$ are organized as follows:
$$X = [X_1, X_2], \quad Y = [Y_1, Y_2], \quad \Lambda_\alpha = \mathrm{diag}\{\Lambda_{\alpha1}, \Lambda_{\alpha2}\}, \quad \Lambda_\beta = \mathrm{diag}\{\Lambda_{\beta1}, \Lambda_{\beta2}\},$$
where the spectra of $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ do not intersect.

Following the usual steps in Sun's approach [2] for obtaining analytic power series for the eigenvalues and eigenvectors of generalized eigenvalue problems, we assume that the matrices $M$, $D$ and $K$ depend on a perturbation parameter $\rho$. We first transform the linearization in (16) by pre-(post-)multiplying it with its left-(right-)eigenvectors:
$$\mathcal M \equiv \begin{bmatrix}\mathcal M_{11} & \mathcal M_{12}\\ \mathcal M_{21} & \mathcal M_{22}\end{bmatrix} \equiv \left[\Lambda_\beta Y^TD + \Lambda_\alpha Y^TM,\ \Lambda_\beta Y^T\right]\begin{bmatrix}0 & I\\ -K & -D\end{bmatrix}\begin{bmatrix}X\Lambda_\beta\\ X\Lambda_\alpha\end{bmatrix}$$
and
$$\mathcal L \equiv \begin{bmatrix}\mathcal L_{11} & \mathcal L_{12}\\ \mathcal L_{21} & \mathcal L_{22}\end{bmatrix} \equiv \left[\Lambda_\beta Y^TD + \Lambda_\alpha Y^TM,\ \Lambda_\beta Y^T\right]\begin{bmatrix}I & \\ & M\end{bmatrix}\begin{bmatrix}X\Lambda_\beta\\ X\Lambda_\alpha\end{bmatrix}.$$
From the (2,1)-block of
$$\begin{bmatrix}I & 0\\ -\Psi & I\end{bmatrix}(\mathcal M - \lambda\mathcal L)\begin{bmatrix}I & 0\\ \Phi & I\end{bmatrix},$$
we construct
$$F(\Phi, \Psi, \rho) \equiv -\Psi\left(\Lambda_{\alpha1}Y_1^TMX_1\Lambda_{\alpha1} - \Lambda_{\beta1}Y_1^TKX_1\Lambda_{\beta1}\right) - \Psi\left(\Lambda_{\alpha1}Y_1^TMX_2\Lambda_{\alpha2} - \Lambda_{\beta1}Y_1^TKX_2\Lambda_{\beta2}\right)\Phi$$
$$\qquad\qquad + \left(\Lambda_{\alpha2}Y_2^TMX_1\Lambda_{\alpha1} - \Lambda_{\beta2}Y_2^TKX_1\Lambda_{\beta1}\right) + \left(\Lambda_{\alpha2}Y_2^TMX_2\Lambda_{\alpha2} - \Lambda_{\beta2}Y_2^TKX_2\Lambda_{\beta2}\right)\Phi,$$
$$G(\Phi, \Psi, \rho) \equiv -\Psi\left(\Lambda_{\beta1}Y_1^TDX_1\Lambda_{\beta1} + \Lambda_{\alpha1}Y_1^TMX_1\Lambda_{\beta1} + \Lambda_{\beta1}Y_1^TMX_1\Lambda_{\alpha1}\right) - \Psi\left(\Lambda_{\beta1}Y_1^TDX_2\Lambda_{\beta2} + \Lambda_{\alpha1}Y_1^TMX_2\Lambda_{\beta2} + \Lambda_{\beta1}Y_1^TMX_2\Lambda_{\alpha2}\right)\Phi$$
$$\qquad\qquad + \left(\Lambda_{\beta2}Y_2^TDX_1\Lambda_{\beta1} + \Lambda_{\alpha2}Y_2^TMX_1\Lambda_{\beta1} + \Lambda_{\beta2}Y_2^TMX_1\Lambda_{\alpha1}\right) + \left(\Lambda_{\beta2}Y_2^TDX_2\Lambda_{\beta2} + \Lambda_{\alpha2}Y_2^TMX_2\Lambda_{\beta2} + \Lambda_{\beta2}Y_2^TMX_2\Lambda_{\alpha2}\right)\Phi.$$
At $\rho = 0$, we have $\Phi = 0 = \Psi$ and $F(0, 0, 0) = 0 = G(0, 0, 0)$. The implicit function theorem can then be applied to $(F, G)$. Differentiation of $F$ and $G$ with respect to $\Psi$ and $\Phi$ at $\rho = 0$ yields (after stacking columns and applying Kronecker products)
$$\frac{\partial(F, G)}{\partial(\Psi, \Phi)}\bigg|_{0} = \begin{bmatrix}-\Lambda_{\alpha1}^T \otimes I & I \otimes \Lambda_{\alpha2}\\ -\Lambda_{\beta1}^T \otimes I & I \otimes \Lambda_{\beta2}\end{bmatrix}.$$
It is easy to see that the above operator is invertible when the spectra of $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ do not intersect (and it will be ill-conditioned when the two subspectra are close).

Consequently, we have proved that the power series for $\Phi(\rho)$ and $\Psi(\rho)$ exist within some small neighbourhood of $\rho = 0$:
$$\Phi(\rho) = \Phi_\rho\,\rho + \cdots, \qquad \Psi(\rho) = \Psi_\rho\,\rho + \cdots.$$
Furthermore, the deflating subspaces [2] of the companion linearization are formed by
$$\left\{\mathrm{span}\!\left(\widetilde Y^T\begin{bmatrix}I\\ \Psi(\rho)\end{bmatrix}\right),\ \mathrm{span}\!\left(\widetilde X\begin{bmatrix}I\\ \Phi(\rho)\end{bmatrix}\right)\right\}, \qquad \widetilde Y \equiv \left[\Lambda_\beta Y^TD + \Lambda_\alpha Y^TM,\ \Lambda_\beta Y^T\right]^T, \qquad \widetilde X \equiv \begin{bmatrix}X\Lambda_\beta\\ X\Lambda_\alpha\end{bmatrix}.$$
Differentiation of
$$\mathcal M_{11} \equiv \Lambda_{\alpha1}Y_1^TMX_1\Lambda_{\alpha1} - \Lambda_{\beta1}Y_1^TKX_1\Lambda_{\beta1}, \qquad \mathcal L_{11} \equiv \Lambda_{\beta1}Y_1^TDX_1\Lambda_{\beta1} + \Lambda_{\alpha1}Y_1^TMX_1\Lambda_{\beta1} + \Lambda_{\beta1}Y_1^TMX_1\Lambda_{\alpha1}$$
also yields
$$\dot{\mathcal M}_{11} = \Lambda_{\alpha1}Y_1^TM_\rho X_1\Lambda_{\alpha1} - \Lambda_{\beta1}Y_1^TK_\rho X_1\Lambda_{\beta1}, \qquad \dot{\mathcal L}_{11} = \Lambda_{\beta1}Y_1^TD_\rho X_1\Lambda_{\beta1} + \Lambda_{\alpha1}Y_1^TM_\rho X_1\Lambda_{\beta1} + \Lambda_{\beta1}Y_1^TM_\rho X_1\Lambda_{\alpha1}.$$
When $p = 1$ and $(\Lambda_{\alpha1}, \Lambda_{\beta1}) = (\lambda_{\alpha1}, \lambda_{\beta1})$ represents the finite eigenvalue $\lambda_1 = \lambda_{\alpha1}/\lambda_{\beta1}$, the above derivatives translate to
$$\dot\lambda_1 = \frac{\dot\lambda_{\alpha1}\lambda_{\beta1} - \lambda_{\alpha1}\dot\lambda_{\beta1}}{\lambda_{\beta1}^2} = \frac{\lambda_{\alpha1}^2y_1^TM_\rho x_1 - \lambda_{\beta1}^2y_1^TK_\rho x_1 - \lambda_1\left(\lambda_{\beta1}^2y_1^TD_\rho x_1 + 2\lambda_{\alpha1}\lambda_{\beta1}y_1^TM_\rho x_1\right)}{\lambda_{\beta1}^2y_1^TDx_1 + 2\lambda_{\alpha1}\lambda_{\beta1}y_1^TMx_1} = -\frac{y_1^T(\lambda_1^2M_\rho + \lambda_1D_\rho + K_\rho)x_1}{y_1^T(2\lambda_1M + D)x_1}, \qquad (19)$$
producing the same formula as in (15). Lastly, differentiating $F$ and $G$ with respect to $\rho$ at $\rho = 0$ produces
$$F_\rho = \Lambda_{\alpha2}\Phi_\rho - \Psi_\rho\Lambda_{\alpha1} + \Lambda_{\alpha2}Y_2^TM_\rho(0)X_1\Lambda_{\alpha1} - \Lambda_{\beta2}Y_2^TK_\rho(0)X_1\Lambda_{\beta1} = 0,$$
$$G_\rho = \Lambda_{\beta2}\Phi_\rho - \Psi_\rho\Lambda_{\beta1} + \Lambda_{\beta2}Y_2^TD_\rho(0)X_1\Lambda_{\beta1} + \Lambda_{\alpha2}Y_2^TM_\rho(0)X_1\Lambda_{\beta1} + \Lambda_{\beta2}Y_2^TM_\rho(0)X_1\Lambda_{\alpha1} = 0.$$
The derivatives $\Phi_\rho(0)$ and $\Psi_\rho(0)$ can then be retrieved from the above equations, when $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2})$ have nonintersecting spectra.
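Formula (19) can be checked numerically by finite differences. The sketch below is an illustration only; the matrices, the perturbation directions and the step size are arbitrary assumptions, and the eigenvalue is assumed to be simple.

```matlab
% Illustrative finite-difference check of the first-order formula (19):
%   dlambda/drho = -y.'*(lambda^2*Mr + lambda*Dr + Kr)*x / (y.'*(2*lambda*M + D)*x),
% where Mr, Dr, Kr are the derivatives of M, D, K with respect to rho and
% x, y are right and left eigenvectors of the simple eigenvalue lambda
% (transposes, not conjugate transposes, as in the text).
n  = 3;
M  = randn(n);  D  = randn(n);  K  = randn(n);      % unperturbed quadratic
Mr = randn(n);  Dr = randn(n);  Kr = randn(n);      % assumed perturbation directions

[X, lam] = polyeig(K, D, M);                        % right eigenvectors/eigenvalues
[Y, mu]  = polyeig(K.', D.', M.');                  % left eigenvectors: y.'*Q(lambda) = 0
lam1 = lam(1);  x = X(:, 1);
[~, j] = min(abs(mu - lam1));  y = Y(:, j);         % pair the left eigenvector with lam1

pred = -(y.'*(lam1^2*Mr + lam1*Dr + Kr)*x) / (y.'*(2*lam1*M + D)*x);

h = 1e-7;                                           % finite-difference step in rho
lamh = polyeig(K + h*Kr, D + h*Dr, M + h*Mr);
[~, i] = min(abs(lamh - lam1));                     % track the perturbed copy of lam1
fd = (lamh(i) - lam1)/h;
disp(abs(pred - fd))                                % agreement up to O(h)
```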

7.3 Palindromic Case

For palindromic eigenvalue problems, the perturbation results look almost identical to those for general matrix quadratics in the previous subsection. The only differences are that $M = A_1^T = K^T$, $D = A_0 = D^T$, and the palindromic properties of the eigenvalues and eigenvectors. For the spectrum, we have $(\Lambda_{\alpha1}, \Lambda_{\beta1})$ and $(\Lambda_{\alpha2}, \Lambda_{\beta2}) = (\Lambda_{\beta1}, \Lambda_{\alpha1})$ representing, respectively, the eigenvalues inside and outside the unit circle, and $[Y_1, Y_2] = [X_2, X_1]$ (cf. Section 7.1). We shall not rewrite all the general results for palindromic quadratics, leaving the trivial substitution of symbols as an exercise.

For the simple palindromic eigenvalues $\lambda_1$ and $\lambda_1^{-1}$, (19) implies
$$\dot\lambda_1 = -\frac{y_1^T(\lambda_1^2K_\rho^T + \lambda_1D_\rho + K_\rho)x_1}{y_1^T(2\lambda_1K^T + D)x_1}, \qquad \frac{d(\lambda_1^{-1})}{d\rho} = \frac{y_1^T(K_\rho^T + \lambda_1^{-1}D_\rho + \lambda_1^{-2}K_\rho)x_1}{y_1^T(2\lambda_1K^T + D)x_1}.$$
Interestingly, the relative errors of the pair of reciprocal eigenvalues equal, asymptotically,
$$\frac{\rho\,\dot\lambda_1^{\pm1}}{\lambda_1^{\pm1}} = \mp\frac{\rho\,y_1^T(\lambda_1K_\rho^T + D_\rho + \lambda_1^{-1}K_\rho)x_1}{y_1^T(2\lambda_1K^T + D)x_1}, \qquad (20)$$
and are identical except for the opposite signs. Equation (20) can easily be understood from
$$\frac{1}{\lambda + \delta\lambda} = \frac{1}{\lambda}\left(1 + \frac{\delta\lambda}{\lambda}\right)^{-1} \approx \frac{1}{\lambda}\left(1 - \frac{\delta\lambda}{\lambda}\right).$$
After applying inequalities of norms, a condition number for both $\lambda_1^{\pm1}$ can then be produced:
$$\kappa \equiv \frac{\left(|\lambda_1| + 1 + |\lambda_1|^{-1}\right)\|y_1\|_2\,\|x_1\|_2}{|y_1^T(2\lambda_1K^T + D)x_1|}. \qquad (21)$$
With appropriate scaling of the eigenvectors, $\kappa$ can be interpreted as the product of the norms of the left- and right-eigenvectors, or in terms of the angle between the eigenvectors, as in other algebraic eigenvalue problems.

8 Numerical Example

We consider a small example, with $2 \times 2$ coefficient matrices $A_0 = A_0^T$ (containing a parameter $\rho$) and $A_1$, to illustrate the results in Section 7.3. The parameter $\rho$ can be used to vary the conditioning of the eigenvalue problem, but is fixed at $\rho = 0.5$ in the following calculations.

The matrices are perturbed randomly, with $\delta \equiv \|[\delta A_1^T, \delta A_0, \delta A_1]\|$. The eigenvalues are $\lambda_i$ ($i = 1, \ldots, 4$), with $\lambda_1 = \lambda_4^{-1}$ and $\lambda_2 = \lambda_3^{-1}$. The numerical results are summarized in Table 1, with $\widetilde\lambda_i$ denoting the perturbed eigenvalues, $\delta\lambda_i \equiv \widetilde\lambda_i - \lambda_i$, $r_i \equiv \delta\lambda_i/\lambda_i$ the relative error in $\lambda_i$, $r_i^{(e)}$ the estimate of $r_i$ using (20), $\kappa_i$ the individual condition numbers as in (21), and $r_i^{(\kappa)} \equiv \kappa_i\delta$ the corresponding estimate of $r_i$. All calculations were carried out using MATLAB 7.1 (R14) on an Apple Macintosh G4 PowerBook, with eps $\approx 2.2 \times 10^{-16}$.

Table 1 Perturbation of a palindromic eigenvalue problem: the quantities $\lambda_i$, $\widetilde\lambda_i$, $\delta\lambda_i$, $r_i$, $r_i^{(e)}$, $\kappa_i$ and $r_i^{(\kappa)}$ for $i = 1$; $2, 3$; $4$.

It is obvious from Table 1 that (20) provides accurate approximations to the relative errors of palindromic eigenvalues under small perturbations, and that the condition number $\kappa$ in (21) produces tight upper bounds for the (relative) errors. Also, the fact that the relative errors for a reciprocal pair of eigenvalues are negatives of each other is confirmed by the example.

9 Conclusions

Bauer-Fike type perturbation results for general matrix polynomials, palindromic linearizations, the anti-triangular canonical form and the semi-Schur anti-triangular canonical form have been discussed, for perturbations of arbitrary size. For asymptotically small perturbations, perturbation results for eigenvalues and the corresponding deflating subspaces of palindromic linearizations and (palindromic) matrix quadratics have been obtained using Sun's implicit function approach. Consistent results for simple eigenvalues and the corresponding eigenvectors have been obtained using simple differentiation. These results indicate, not surprisingly, that the perturbations of an eigenvalue $\lambda$ and its corresponding deflating subspace $S_\lambda$ are, respectively, proportional to the size of the perturbation and to the reciprocal of the gap between $S_\lambda$ and the other deflating subspaces. A condition number is typically the product of the norms of the left- and right-eigenvectors.

Appendix: Bounding the root of $x^m - \eta(1 + x + \cdots + x^{m-1})$

Consider the polynomial
$$P_m(x) \equiv x^m - \eta\left(1 + x + \cdots + x^{m-1}\right).$$
Descartes' rule of signs (La Géométrie, 1637) implies that $P_m(x)$ has at most one positive real root. As $P_m(0) = -\eta < 0$ and $P_m(x) > 0$ as $x \to \infty$, any positive number $x$ for which $P_m(x) \ge 0$ is an upper bound of the unique positive real root $x^*$ of $P_m(x)$. Simple inspection leads to the upper bounds $x^* \le c_0\eta$ when $\eta \ge 1$ and $x^* \le c_0\eta^{1/m}$ when $\eta < 1$, with $c_0 = \min\{2, m\}$. Consequently, $c_0 = 1$ when $m = 1$ (with $x^* = \eta$) and $c_0 = 2$ when $m > 1$. The details are as follows.

When $c_0\eta \ge \eta \ge 1$ and $m > 1$, we have
$$P_m(c_0\eta) = (c_0\eta)^m - \eta\,\frac{(c_0\eta)^m - 1}{c_0\eta - 1} = \frac{c_0^{m+1}\eta^{m+1} - c_0^m\eta^m - c_0^m\eta^{m+1} + \eta}{c_0\eta - 1} = \frac{\left(c_0^m\eta^{m+1} - c_0^m\eta^m\right) + \left(c_0^{m+1}\eta^{m+1} - c_0^m\eta^{m+1} - c_0^m\eta^{m+1}\right) + \eta}{c_0\eta - 1} \ge 0,$$
as $c_0^{m+1}\eta^{m+1} - c_0^m\eta^{m+1} - c_0^m\eta^{m+1} = (c_0 - 2)c_0^m\eta^{m+1} = 0$. Thus $c_0\eta$ is an upper bound of the root $x^*$ of $P_m$ when $\eta \ge 1$.

When $\eta < 1$ and $m > 1$, we have
$$P_m(c_0\eta^{1/m}) \ge c_0^m\eta - \eta\left(1 + c_0 + \cdots + c_0^{m-1}\right) = c_0^m\eta - \eta\,\frac{c_0^m - 1}{c_0 - 1} \ge 0,$$
as $c_0^{m+1} - 2c_0^m = 0$. Thus $c_0\eta^{1/m}$ is an upper bound of the root $x^*$ of $P_m$ when $\eta < 1$.
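The Appendix bound is easy to confirm numerically; the following sketch is illustrative only, with an arbitrarily chosen grid of $m$ and $\eta$ values, and checks that the positive root of $P_m$ never exceeds $c_0\max\{\eta, \eta^{1/m}\}$.

```matlab
% Illustrative check of the Appendix bound: the unique positive root x* of
%   P_m(x) = x^m - eta*(1 + x + ... + x^(m-1))
% satisfies x* <= c0*max(eta, eta^(1/m)), with c0 = 2 for m > 1.
for m = 2:5
    for eta = [0.01, 0.5, 1, 10]
        coeffs = [1, -eta*ones(1, m)];                         % descending powers of P_m
        r = roots(coeffs);
        xstar = max(real(r(abs(imag(r)) < 1e-8 & real(r) > 0)));
        bound = 2*max(eta, eta^(1/m));
        fprintf('m = %d, eta = %5.2f:  x* = %7.4f <= bound = %7.4f\n', m, eta, xstar, bound);
    end
end
```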

References

1. E.K.-w. Chu, Perturbation of eigenvalues for matrix polynomials via the Bauer-Fike theorems, SIAM J. Matrix Anal. Applic., 25 (2003).
2. L. Elsner and J.-G. Sun, Perturbation theorems for the generalized eigenvalue problem, Lin. Alg. Applic., 48 (1982).
3. I. Gohberg, P. Lancaster and L. Rodman, Matrix Polynomials, Academic Press, New York.
4. G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore.
5. A. Hilliges, Numerische Lösung von quadratischen Eigenwertproblemen mit Anwendungen in der Schienendynamik, Master Thesis, Technical University Berlin, Germany, July 2004.
6. A. Hilliges, C. Mehl and V. Mehrmann, On the solution of palindromic eigenvalue problems, Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), Jyväskylä, Finland.
7. C.F. Ipsen, Accurate eigenvalues for fast trains, SIAM News, 37 (2004).
8. D.S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, Linearization spaces for matrix polynomials, in preparation.
9. D.S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, Palindromic polynomial eigenvalue problems: good vibrations from good linearizations, in preparation.
10. D.S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, Palindromic polynomial eigenvalue problems, Preprint, DFG Research Center "Mathematics for Key Technologies" (to appear).
11. The MathWorks, MATLAB User's Guide.
12. G.W. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, New York.
13. F. Tisseur, Backward error and condition of polynomial eigenvalue problems, Lin. Alg. Applic., 309 (2000).
14. F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001).


More information

An Even Order Symmetric B Tensor is Positive Definite

An Even Order Symmetric B Tensor is Positive Definite An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Module 6.6: nag nsym gen eig Nonsymmetric Generalized Eigenvalue Problems. Contents

Module 6.6: nag nsym gen eig Nonsymmetric Generalized Eigenvalue Problems. Contents Eigenvalue and Least-squares Problems Module Contents Module 6.6: nag nsym gen eig Nonsymmetric Generalized Eigenvalue Problems nag nsym gen eig provides procedures for solving nonsymmetric generalized

More information

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY D.D. Olesky 1 Department of Computer Science University of Victoria Victoria, B.C. V8W 3P6 Michael Tsatsomeros Department of Mathematics

More information

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 B. N. Datta, IEEE Fellow 2 D. R. Sarkissian 3 Abstract Two new computationally viable algorithms are proposed for

More information

Further Mathematical Methods (Linear Algebra)

Further Mathematical Methods (Linear Algebra) Further Mathematical Methods (Linear Algebra) Solutions For The Examination Question (a) To be an inner product on the real vector space V a function x y which maps vectors x y V to R must be such that:

More information

Homework 2. Solutions T =

Homework 2. Solutions T = Homework. s Let {e x, e y, e z } be an orthonormal basis in E. Consider the following ordered triples: a) {e x, e x + e y, 5e z }, b) {e y, e x, 5e z }, c) {e y, e x, e z }, d) {e y, e x, 5e z }, e) {

More information

Computing the Norm A,1 is NP-Hard

Computing the Norm A,1 is NP-Hard Computing the Norm A,1 is NP-Hard Dedicated to Professor Svatopluk Poljak, in memoriam Jiří Rohn Abstract It is proved that computing the subordinate matrix norm A,1 is NP-hard. Even more, existence of

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

Comparing iterative methods to compute the overlap Dirac operator at nonzero chemical potential

Comparing iterative methods to compute the overlap Dirac operator at nonzero chemical potential Comparing iterative methods to compute the overlap Dirac operator at nonzero chemical potential, Tobias Breu, and Tilo Wettig Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg,

More information

Commutants of Finite Blaschke Product. Multiplication Operators on Hilbert Spaces of Analytic Functions

Commutants of Finite Blaschke Product. Multiplication Operators on Hilbert Spaces of Analytic Functions Commutants of Finite Blaschke Product Multiplication Operators on Hilbert Spaces of Analytic Functions Carl C. Cowen IUPUI (Indiana University Purdue University Indianapolis) Universidad de Zaragoza, 5

More information