A Continuation Approach to a Quadratic Matrix Equation

Nils Wagner, nwagner@mecha.uni-stuttgart.de
Institut A für Mechanik, Universität Stuttgart

GAMM Workshop Applied and Numerical Linear Algebra, September 22-24, 2005, Technische Universität Dresden

Page 1 of 18
Contents

1. Quadratic Matrix Equation
2. Predictor-Corrector Scheme
3. Algorithms for Sylvester Equations
4. Generalized Sylvester Equation
5. Bartels-Stewart Type Algorithm
6. Examples
7. Applications
8. Summary and Outlook
9. Acknowledgments
10. References
1. Quadratic Matrix Equation

1.1. Definition

F(X, t) = A X^2 + t B X + C = O,   A, B, C ∈ C^{n×n}   (1)

(λ^2 A + t λ B + C) x = 0   (2)

The quadratic eigenvalue problem (2) is closely related to the matrix equation (1). If X solves (1), then

F(λ I_n, t) = λ^2 A + t λ B + C = (t B + A X + λ A)(λ I − X).   (3)

When (1) has a solution X, the 2n eigenvalues of F(λ I_n, t) can be found from the eigenvalues of the matrix X and of the matrix pencil (t B + A X, A). This solvent approach has been explored by Davis (1983), Dennis et al. (1976, 1978) and more recently by Guo (2004) and Higham & Kim (2000, 2001).
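The factorization (3) gives a direct way to recover the full spectrum once a solvent is known. A minimal sketch in Python (not part of the original slides; function name is hypothetical, NumPy/SciPy assumed): the first n eigenvalues come from X, the remaining n from the pencil, since (t B + A X + λ A) v = 0 is the generalized problem (t B + A X) v = λ (−A) v.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigs_via_solvent(A, B, C, X, t=1.0):
    """Given a solvent X of F(X, t) = A X^2 + t B X + C = O, return all
    2n eigenvalues of lambda^2 A + t lambda B + C: n eigenvalues of X
    plus n eigenvalues of the pencil (t B + A X, -A)."""
    lam_X = np.linalg.eigvals(X)
    # (t B + A X + lambda A) v = 0  <=>  (t B + A X) v = lambda (-A) v
    lam_pencil = eig(t * B + A @ X, -A, right=False)
    return np.concatenate([lam_X, lam_pencil])
```

A quick consistency check is to compare against a companion linearization of the same quadratic; both must produce the same 2n eigenvalues.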
1.2. Newton's method

The algorithm proposed is Newton's method applied to the matrix function F(X, t). Having chosen an initial guess X_0 at t = t_0 = 0, we produce a sequence of iterates X^{(k)} which will hopefully converge to S such that F(S, t) = O. Successive X^{(k)} are generated by the formula

X^{(k+1)} = X^{(k)} + H^{(k)},   k = 0, 1, 2, ...   (4)

where H^{(k)} solves the linearized equation

F'_{X^{(k)}}(H^{(k)}) = −F(X^{(k)}, t),   k = 0, 1, 2, ...   (5)

1.3. Fréchet Derivative

Examining the form of the Fréchet derivative F'_X we find

F(X + H, t) = F(X, t) + A X H + A H X + t B H + A H^2,   (6)

so that F'_X(H) = A X H + A H X + t B H is the part linear in H.
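The expansion (6) can be checked numerically: subtracting F(X, t) and the candidate linear term from F(X + H, t) must leave exactly the quadratic remainder A H^2. A small sketch (hypothetical helper names, NumPy only):

```python
import numpy as np

def F(X, t, A, B, C):
    # F(X, t) = A X^2 + t B X + C
    return A @ X @ X + t * B @ X + C

def frechet(X, H, t, A, B):
    # Frechet derivative applied to H: F'_X(H) = A X H + A H X + t B H
    return A @ X @ H + A @ H @ X + t * B @ H
```

For any X and H the identity F(X + H, t) − F(X, t) − F'_X(H) = A H^2 holds exactly, which makes a convenient unit test for an implementation.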
2. Predictor-Corrector scheme

Here, we assume that a solution X_0 does exist for t_0 = 0.

Predictor

A X_0^2 + C = O   (7)

Euler step

A X_0 H + A H X_0 + t_0 B H = −B X_0,   (8)

X^{(0)} = X_0 + h H   (9)

Corrector

Each H^{(k+1)} is the solution of the system

A [X^{(k)} H^{(k+1)} + H^{(k+1)} X^{(k)}] + t B H^{(k+1)} = −F(X^{(k)}, t),   (10)

iterated until convergence:

X^{(k+1)} = X^{(k)} + H^{(k+1)},   k = 0, 1, ...   (11)

Stopping criterion

‖F(X^{(k)}, t)‖_F < ε   (12)
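The scheme above can be sketched in a few lines. This version (hypothetical function and parameter names, not from the slides) assumes A is invertible, so both the Euler step and each Newton correction become standard Sylvester equations of the form (t A^{-1} B + X) H + H X = right-hand side, which SciPy's `solve_sylvester` handles directly:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def continuation_solve(A, B, C, X0, steps=10, tol=1e-12, maxit=20):
    """Euler-Newton continuation for A X^2 + t B X + C = O, starting
    from a solvent X0 of A X^2 + C = O at t = 0 and marching to t = 1.
    Assumes A is invertible."""
    Ainv = np.linalg.inv(A)
    h = 1.0 / steps
    X, t = X0, 0.0
    for _ in range(steps):
        # Euler predictor: A(X H + H X) + t B H = -B X
        H = solve_sylvester(t * Ainv @ B + X, X, -Ainv @ B @ X)
        X, t = X + h * H, t + h
        # Newton corrector at the new parameter value t
        for _ in range(maxit):
            Fx = A @ X @ X + t * B @ X + C
            if np.linalg.norm(Fx) < tol:
                break
            H = solve_sylvester(t * Ainv @ B + X, X, -Ainv @ Fx)
            X = X + H
    return X
```

With a mild damping term B, starting from the exact solvent of A X^2 + C = O, a handful of continuation steps typically suffices.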
3. Algorithms for Sylvester Equations

An overview of different algorithms is given by Antoulas (2005):

- The Kronecker product method
- Bartels-Stewart algorithm
- Complex integration method
- Characteristic polynomial methods
- Invariant subspace method
- Sign function method
- Infinite sum method
- Square root method
- ...
4. Generalized Sylvester Equation

The brute-force way to approach the solution of (8), (10) is to rewrite them as linear systems in standard matrix-vector form. The system becomes

[I ⊗ (t_0 B + A X_0) + X_0^T ⊗ A] vec H = −(I ⊗ B) vec X_0,   (13)

X^{(0)} = X_0 + h H,   (14)

[I ⊗ (t B + A X^{(k)}) + (X^{(k)})^T ⊗ A] vec H^{(k+1)} = −vec F(X^{(k)}, t),   (15)

X^{(k+1)} = X^{(k)} + H^{(k+1)}.   (16)

Unfortunately, the coefficient matrix has dimension n^2 × n^2, making this approach impractical except for small systems.

A second alternative is to cast (8), (10) in a form for which effective algorithms already exist. If we premultiply both sides of (8), (10) by A^{-1} we obtain

[t_0 A^{-1} B + X_0] H + H X_0 = −A^{-1} B X_0   (17)

and

[t A^{-1} B + X^{(k)}] H^{(k+1)} + H^{(k+1)} X^{(k)} = −A^{-1} F(X^{(k)}, t),   (18)

respectively. This method is not generally satisfactory if A is singular or ill-conditioned with respect to inversion. A two-equation form could also be used (Kågström and Westin, 1989).
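The Kronecker system (15) can be formed literally with NumPy; a sketch (hypothetical names, and only sensible for small n, since the coefficient matrix is n^2 × n^2). Note that the column-stacking vec corresponds to `reshape(..., order="F")`:

```python
import numpy as np

def newton_step_kron(A, B, X, t, F):
    """Solve A(X H + H X) + t B H = -F for H via the Kronecker system
    [I kron (t B + A X) + X^T kron A] vec(H) = -vec(F)."""
    n = X.shape[0]
    K = np.kron(np.eye(n), t * B + A @ X) + np.kron(X.T, A)
    # column-stacking vec: vec(H) = H.reshape(-1, order="F")
    h = np.linalg.solve(K, -F.reshape(-1, order="F"))
    return h.reshape(n, n, order="F")
```

This uses the standard identities vec(M H) = (I ⊗ M) vec H and vec(A H X) = (X^T ⊗ A) vec H.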
5. Bartels-Stewart type algorithm

We study the linear matrix equation

A X − X B = C,   (19)

where A ∈ C^{n×n}, B ∈ C^{n×n}, and C ∈ C^{n×n}. This equation is now often called a Sylvester equation. It has a unique solution if and only if A and B have no eigenvalues in common. First B is reduced to Schur form:

B V = V D,   V^H V = I   (20)

A Z − Z D = R,   R = C V,   Z = X V,   (21)

(A − d_kk I) z_k = r_k + Σ_{i=1}^{k−1} d_ik z_i   (22)

X = Z V^H   (23)
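A one-sided Bartels-Stewart-type solver for (19) following (20)-(23) might look as follows (a sketch with assumed names; SciPy's complex Schur decomposition provides the unitary V and upper triangular D, after which the shifted systems (22) are solved column by column):

```python
import numpy as np
from scipy.linalg import schur

def sylvester_bartels_stewart(A, B, C):
    """Solve A X - X B = C (unique solution iff A and B share no
    eigenvalue) by reducing only B to Schur form B = V D V^H."""
    D, V = schur(B.astype(complex), output="complex")
    R = C @ V
    n, m = A.shape[0], B.shape[0]
    Z = np.zeros((n, m), dtype=complex)
    In = np.eye(n)
    for k in range(m):
        # (A - d_kk I) z_k = r_k + sum_{i<k} d_ik z_i
        rhs = R[:, k] + Z[:, :k] @ D[:k, k]
        Z[:, k] = np.linalg.solve(A - D[k, k] * In, rhs)
    return Z @ V.conj().T  # X = Z V^H
```

The full Bartels-Stewart algorithm also reduces A to Schur form, turning each shifted solve into a triangular one; the sketch keeps A untouched for brevity.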
6. Examples

6.1. Example 1

F(X) has no solvents at all (Pereira, 2003):

F(X) = X^2 + A_1 X + A_2   (24)

A_1 = [ 98/125  108/25  112/25
        4/5     24/5    4/5
        22/25   38/25   182/25 ],
A_2 = [ 89/25  294/25  316/25
        7/5    42/5    8/5
        46/25  59/25   251/25 ]   (25)

6.2. Example 2

Modelling oscillations of an airplane wing (Higham and Kim, 2001):

F(X) = A X^2 + B X + C   (26)

A = [ 17.6  1.28   2.89
      1.28  0.824  0.413
      2.89  0.413  0.725 ],
B = [ 7.66  2.45   2.1
      0.23  1.04   0.223
      0.6   0.756  0.658 ],   (27)
C = [ 121   18.9   15.9
      0     2.7    0.145
      11.9  3.64   15.5 ]   (28)
6.3. Example 3

Overdamped eigenvalue problem (Higham and Kim, 2000):

F(X) = A X^2 + B X + C   (29)

with A = I_n, B tridiagonal with diagonal (20, 30, ..., 30, 20) and off-diagonal entries 10, and C tridiagonal with diagonal 15 and off-diagonal entries 5:

B = [ 20 10
      10 30 10
         10 30 10
            .  .  .
              10 30 10
                 10 20 ],   (30)

C = [ 15 5
      5 15 5
        .  .  .
          5 15 5
            5 15 ]   (31)
7. Applications

F(X, t) = M X^2 + t D X + K = O,   M, D, K ∈ R^{n×n},   M = M^T (positive definite)   (32)

Cholesky factorization

M = L L^T   (33)

Transformed matrices

X̃ = L^T X L^{−T},   (34)
D̃ = L^{−1} D L^{−T},   (35)
K̃ = L^{−1} K L^{−T}   (36)

so that the equation becomes monic: X̃^2 + t D̃ X̃ + K̃ = O. Predictor and corrector then read

[t_0 D̃ + X̃_0] H̃ + H̃ X̃_0 = −D̃ X̃_0,   (37)

X̃^{(0)} = X̃_0 + h H̃,   (38)

[t D̃ + X̃^{(k)}] H̃^{(k+1)} + H̃^{(k+1)} X̃^{(k)} = −F̃(X̃^{(k)}, t),   (39)

X̃^{(k+1)} = X̃^{(k)} + H̃^{(k+1)}   (40)
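The transformation (33)-(36) can be sketched as follows (hypothetical function name; M must be symmetric positive definite; triangular solves avoid forming L^{-1} explicitly):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def monic_form(M, D, K):
    """Transform M X^2 + t D X + K = O (M symmetric positive definite)
    into monic form Xt^2 + t Dt Xt + Kt = O, where M = L L^T,
    Dt = L^{-1} D L^{-T}, Kt = L^{-1} K L^{-T}, Xt = L^T X L^{-T}."""
    L = cholesky(M, lower=True)
    # L^{-1} D L^{-T} via two triangular solves
    Dt = solve_triangular(L, solve_triangular(L, D, lower=True).T, lower=True).T
    Kt = solve_triangular(L, solve_triangular(L, K, lower=True).T, lower=True).T
    return Dt, Kt, L
```

The identity L^{-1} F(X, t) L^{-T} = X̃^2 + t D̃ X̃ + K̃ shows that solvents of the monic equation map back to solvents of (32) via X = L^{-T} X̃ L^T.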
7.1. Fluid-Conveying Pipes

7.2. Hamilton's principle

For the open system (fluid leaves the pipe at x = L), Hamilton's principle takes the extended form

δ ∫_{t_0}^{t_1} L dt − ∫_{t_0}^{t_1} m_f v (ẇ_L + v w′_L) δw_L dt = 0,   (41)

where the subscript L denotes evaluation at the downstream end x = L. The kinetic energy of pipe and fluid is

L = T − U,   T = T_p + T_f = (1/2) ∫_0^L m_p ẇ^2 dx + (1/2) ∫_0^L m_f { v^2 + [ẇ + v w′]^2 } dx,   (42)
and the potential energy is

U = U_p + U_f = (1/2) ∫_0^L EI w″^2 dx + (1/2) ∫_0^L (m_p + m_f) g (L − x) w′^2 dx.   (43)

The resulting equation of motion is

EI w′′′′ + { m_f v^2 − (m_f + m_p) g (L − x) } w″ + 2 m_f v ẇ′ + (m_p + m_f) g w′ + (m_p + m_f) ẅ = 0.   (44)

7.2.1. Nondimensional Quantities

ξ = x/L,   η = w/L,   τ = √(EI/(m_f + m_p)) t/L^2,   u = √(m_f/EI) v L,   (45)

γ = ((m_p + m_f) g L^3)/EI,   β = m_f/(m_p + m_f),   (46)

η′′′′ + u^2 η″ + γ (ξ − 1) η″ + 2 √β u η̇′ + γ η′ + η̈ = 0,   (47)

7.3. Fourier-Galerkin Scheme

η(ξ, τ) ≈ η_N(ξ, τ) = Σ_{i=1}^{N} Φ_i(ξ) q_i(τ),   (48)
7.3.1. Eigenfunctions of the cantilever beam

Φ_i(ξ) = cosh λ_i ξ − cos λ_i ξ − σ_i (sinh λ_i ξ − sin λ_i ξ),   σ_i := (sinh λ_i − sin λ_i)/(cosh λ_i + cos λ_i).   (49)

7.4. Second-Order Differential Equation

I_N q̈ + C(u) q̇ + K(u) q = 0   (50)

with

C(u) = 2 √β u B,   (51)
K(u) = A + (u^2 − γ) G + γ (D + B).   (52)

The associated parameterized quadratic matrix equation reads

F(X, u) = X^2 + 2 √β u B X + γ (B + D − G) + A + u^2 G   (53)
8. Summary and Outlook

Quadratic eigenvalue problems are typically solved with the companion-matrix technique, in which the nonlinear eigenvalue problem is linearized. The obvious disadvantage of this approach is that the size of the problem doubles. This contribution introduces a method that tackles the quadratic eigenvalue problem directly, following an approach suggested by, among others, Lancaster and Higham. It is well known that the quadratic eigenvalue problem is closely related to a (parameterized) quadratic matrix equation. When solutions, so-called solvents, of this nonlinear matrix equation exist, the eigenvalues of the associated quadratic eigenvalue problem can be obtained from the eigenvalues of the solvent and of a matrix pencil. The parameterized quadratic matrix equation is solved by a continuation approach with a standard Euler-Newton predictor-corrector scheme, in which (generalized) Sylvester equations arise. The Sylvester equations can be solved by the Bartels-Stewart algorithm or the Hessenberg-Schur method. These methods require relatively expensive initial factorizations (Schur or Hessenberg) of the coefficient matrices. In this context iterative methods should be considered (see e.g. Benner (2004), Ding (2005)).
9. Acknowledgments

Thanks to Vera Thümmler for helpful discussions on Bartels-Stewart-like algorithms and continuation.
10. References

Bartels and Stewart, Solution of the matrix equation AX + XB = C, Communications of the ACM, 15, 820-826 (1972)

Kågström and Westin, Generalized Schur methods with condition estimators for solving the generalized Sylvester equation, IEEE Transactions on Automatic Control, 34, 745-751 (1989)

Gardiner, Laub, Amato and Moler, Solution of the Sylvester matrix equation AXB^T + CXD^T = E, ACM Transactions on Mathematical Software, 18, 223-231 (1992)

Dennis, Traub and Weber, The algebraic theory of matrix polynomials, SIAM Journal on Numerical Analysis, 13, 831-845 (1976)

Dennis, Traub and Weber, Algorithms for solvents of matrix polynomials, SIAM Journal on Numerical Analysis, 15, 523-533 (1978)

Davis, An algorithm to compute solvents of the matrix equation AX^2 + BX + C = 0, ACM Transactions on Mathematical Software, 9, 246-254 (1983)

Kratz and Stickel, Numerical solution of matrix polynomial equations by Newton's method, IMA Journal of Numerical Analysis, 7, 355-369 (1987)

Bras and de Lima, A spectral approach to polynomial matrices solvents, Applied Mathematics Letters, 9, 27-33 (1996)

Higham and Kim, Numerical analysis of a quadratic matrix equation, IMA Journal of Numerical Analysis, 20, 499-519 (2000)
Higham and Kim, Solving a quadratic matrix equation by Newton's method with exact line searches, SIAM Journal on Matrix Analysis and Applications, 23, 303-316 (2001)

Pereira, On solvents of matrix polynomials, Applied Numerical Mathematics, 47, 197-208 (2003)

Guo and Lancaster, Algorithms for hyperbolic quadratic eigenvalue problems, Mathematics of Computation, 74, 1777-1791 (2005)

Benner, Quintana-Ortí and Quintana-Ortí, Solving linear matrix equations via rational iterative schemes, Technische Universität Chemnitz, SFB 393 Preprint 8 (2004)

Ding and Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Transactions on Automatic Control, 50, 1216-1221 (2005)

Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM (2005)