NUMERICAL COMPUTATION OF DEFLATING SUBSPACES OF SKEW-HAMILTONIAN/HAMILTONIAN PENCILS


PETER BENNER, RALPH BYERS, VOLKER MEHRMANN, AND HONGGUO XU

Abstract. We discuss the numerical solution of structured generalized eigenvalue problems that arise from linear-quadratic optimal control problems, H∞ optimization, multibody systems, and many other areas of applied mathematics, physics, and chemistry. The classical approach for these problems requires computing invariant and deflating subspaces of matrices and matrix pencils with Hamiltonian and/or skew-Hamiltonian structure. We extend the recently developed methods for Hamiltonian matrices to the general case of skew-Hamiltonian/Hamiltonian pencils. The algorithms circumvent problems with skew-Hamiltonian/Hamiltonian matrix pencils that lack structured Schur forms by embedding them into matrix pencils that always admit a structured Schur form. The rounding error analysis of the resulting algorithms is favorable. For the embedded matrix pencils the algorithms use structure-preserving unitary matrix computations and are strongly backwards stable, i.e., they compute the exact structured Schur form of a nearby matrix pencil with the same structure.

Keywords. eigenvalue problem, deflating subspace, Hamiltonian matrix, skew-Hamiltonian matrix, skew-Hamiltonian/Hamiltonian matrix pencil.

AMS subject classification. 49N1, 65F15, 93B4, 93B

1. Introduction and Preliminaries. In this paper we study eigenvalue and invariant subspace computations involving matrices and matrix pencils with the following algebraic structures.

Definition 1.1. Let J := [ 0  I_n ; -I_n  0 ], where I_n is the n x n identity matrix.
a) A matrix H in C^{2n x 2n} is Hamiltonian if (HJ)^H = HJ. The Lie algebra of Hamiltonian matrices in C^{2n x 2n} is denoted by H_{2n}.
b) A matrix S in C^{2n x 2n} is skew-Hamiltonian if (SJ)^H = -SJ. The Jordan algebra of skew-Hamiltonian matrices in C^{2n x 2n} is denoted by SH_{2n}.
c) If S in SH_{2n} and H in H_{2n}, then αS - βH is a skew-Hamiltonian/Hamiltonian matrix pencil.
d) A matrix Y in C^{2n x 2n} is symplectic if Y J Y^H = J. The Lie group of symplectic matrices in C^{2n x 2n} is denoted by S_{2n}.
e) A matrix U in C^{2n x 2n} is unitary symplectic if U J U^H = J and U U^H = I_{2n}. The compact Lie group of unitary symplectic matrices in C^{2n x 2n} is denoted by US_{2n}.
f) A subspace L of C^{2n} is called Lagrangian if it has dimension n and x^H J y = 0 for all x, y in L.

A matrix S in C^{2n x 2n} is skew-Hamiltonian if and only if iS is Hamiltonian. Consequently, there is little difference between the structure of complex skew-Hamiltonian matrices and complex Hamiltonian matrices. However, real skew-Hamiltonian matrices are not real scalar multiples of Hamiltonian matrices, so there is a greater difference between the structure of real skew-Hamiltonian matrices and real Hamiltonian matrices. The structures in Definition 1.1 arise typically in linear-quadratic optimal control and H∞ optimization. Moreover, instances of skew-Hamiltonian/Hamiltonian pencils appear in several other areas of applied mathematics, computational physics, and chemistry, e.g., gyroscopic systems [2], numerical simulation of elastic deformation, and linear response theory [3].

Working title was "Numerical Computation of Deflating Subspaces of Embedded Hamiltonian and Symplectic Pencils". All authors were partially supported by Deutsche Forschungsgemeinschaft, Research Grant Me 79/7-2, and Sonderforschungsbereich 393, "Numerische Simulation auf massiv parallelen Rechnern". Zentrum für Technomathematik, Fachbereich 3/Mathematik und Informatik, Universität Bremen, D Bremen, Germany, benner@math.uni-bremen.de.
Department of Mathematics, University of Kansas, Lawrence, KS 6645, USA, byers@math.ukans.edu. This author was partially supported by National Science Foundation awards CCR, MRI, and by the NSF EPSCoR/K*STAR program through the Center for Advanced Scientific Computing. Fachbereich 3 Mathematik, Technische Universität Berlin, Sekr. MA 4-5, Str. des 17. Juni 136, D-1623 Berlin, Germany, mehrmann@math.tu-berlin.de. Department of Mathematics, University of Kansas, Lawrence, KS 6645, USA, xu@math.ukans.edu. This work was completed while this author was with the TU Chemnitz, Germany.
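As a quick numerical illustration of Definition 1.1 (this sketch is not part of the paper; the helper names, tolerances, and random test matrices are my own), the following NumPy code checks the defining identities for Hamiltonian, skew-Hamiltonian, and unitary symplectic matrices, and also the remark above that iH is skew-Hamiltonian whenever H is Hamiltonian.

import numpy as np

def J_matrix(n):
    # J = [ 0  I_n ; -I_n  0 ] from Definition 1.1
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

def is_hamiltonian(H, tol=1e-12):
    # (HJ)^H = HJ, i.e. HJ is Hermitian
    J = J_matrix(H.shape[0] // 2)
    HJ = H @ J
    return np.linalg.norm(HJ - HJ.conj().T) <= tol * max(1.0, np.linalg.norm(HJ))

def is_skew_hamiltonian(S, tol=1e-12):
    # (SJ)^H = -SJ, i.e. SJ is skew-Hermitian
    J = J_matrix(S.shape[0] // 2)
    SJ = S @ J
    return np.linalg.norm(SJ + SJ.conj().T) <= tol * max(1.0, np.linalg.norm(SJ))

def is_unitary_symplectic(U, tol=1e-12):
    # U J U^H = J and U U^H = I
    n2 = U.shape[0]
    J = J_matrix(n2 // 2)
    return (np.linalg.norm(U @ J @ U.conj().T - J) <= tol
            and np.linalg.norm(U @ U.conj().T - np.eye(n2)) <= tol)

# Random Hamiltonian test matrix H = [ F  G ; K  -F^H ] with G, K Hermitian.
rng = np.random.default_rng(0)
n = 3
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); G = G + G.conj().T
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); K = K + K.conj().T
H = np.block([[F, G], [K, -F.conj().T]])
print(is_hamiltonian(H))                    # True
print(is_skew_hamiltonian(1j * H))          # True: iH is skew-Hamiltonian
print(is_unitary_symplectic(J_matrix(n)))   # True: J itself is unitary symplectic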

Linear-quadratic optimal control and H∞ optimization problems are related to skew-Hamiltonian/Hamiltonian pencils in [4, 5]. It is important to exploit and preserve algebraic structures (like symmetries in the matrix blocks or symmetries in the spectrum) as much as possible. Such algebraic structures typically arise from the physical properties of the problem. If rounding errors or other perturbations destroy the algebraic structures, then the results may be physically meaningless. Not coincidentally, numerical methods that preserve algebraic structures are typically more efficient as well as more accurate. Despite the advantages associated with exploiting matrices with special structure, condensing data into a compact structured matrix using finite precision arithmetic may be ill-advised. A discussion of avoiding normal-equations-like numerical instability when embedding linear-quadratic optimal control and H∞ optimization problems into skew-Hamiltonian/Hamiltonian pencils appears in [4, 5].

Although the numerical computation of n-dimensional Lagrangian invariant subspaces of Hamiltonian matrices and the related problem of solving algebraic Riccati equations have been extensively studied (see the references therein), completely satisfactory methods for general Hamiltonian matrices and matrix pencils are still an open problem. Completely satisfactory methods would be numerically backward stable, have complexity O(n^3), and preserve structure. There are several reasons for this difficulty, all of which are well demonstrated in the context of algorithms for Hamiltonian matrices. First of all, an algorithm based upon structure-preserving similarity transformations (including QR-like algorithms) would require a triangular-like Hamiltonian Schur form that displays the desired deflating subspaces. A Hamiltonian Schur form under unitary symplectic similarity transformations is presented in [31]; see (1.1). Unfortunately, not every Hamiltonian matrix has this kind of Hamiltonian Schur form. For example, the Hamiltonian matrix J in Definition 1.1 is invariant under arbitrary unitary symplectic similarity transformations but is not in the Hamiltonian Schur form described in [31]. (Similar difficulties arise in the skew-Hamiltonian/Hamiltonian pencil case for the Schur-like forms of skew-Hamiltonian/Hamiltonian matrix pencils and for the other structures given in Definition 1.1 in [24].) A second problem comes from the fact that, even when a Hamiltonian Schur form exists, there is no completely satisfactory structure-preserving numerical method to compute it. It has been argued in [2] that, except in special cases, QR-like algorithms are impractically expensive because of the lack of a Hamiltonian Hessenberg-like form. For this reason, other methods, like the multishift method of [1] and the structured implicit product methods, do not follow the QR-algorithm paradigm. (The implicit product methods [6, 7] do come quite close to optimality. We extend the method of [6] to skew-Hamiltonian/Hamiltonian matrix pencils in Section 4.) A third difficulty arises when the Hamiltonian matrix or the skew-Hamiltonian/Hamiltonian matrix pencil has eigenvalues on the imaginary axis. In that case the desired Lagrangian subspace is in general not unique [29].
Furthermore, if finite precision arithmetic or other errors perturb the matrix off the Lie algebra of Hamiltonian matrices, then it is typically the case that the perturbed matrix has no Lagrangian subspace or does not have the expected eigenvalue pairings.

We close the introduction by introducing some notation. To simplify notation, the term eigenvalue is used both for eigenvalues of matrices and, in the context of a matrix pencil αE - βA, for pairs (α, β) in C^2 \ {(0, 0)} for which det(αE - βA) = 0. These pairs are not unique. If β ≠ 0, then we identify (α, β) with (α/β, 1) and λ = α/β. Pairs (α, 0) with α ≠ 0 are called infinite eigenvalues. By Λ(E, A) we denote the set of eigenvalues of αE - βA, including finite and infinite eigenvalues, both counted according to multiplicity. We denote by Λ_-(E, A), Λ_0(E, A), and Λ_+(E, A) the sets of finite eigenvalues of αE - βA with negative, zero, and positive real parts, respectively. The set of infinite eigenvalues is denoted by Λ_∞(E, A). Multiple eigenvalues are repeated in Λ_-(E, A), Λ_0(E, A), Λ_+(E, A), and Λ_∞(E, A) according to algebraic multiplicity. The set of all eigenvalues counted according to multiplicity is Λ(E, A) := Λ_-(E, A) ∪ Λ_0(E, A) ∪ Λ_+(E, A) ∪ Λ_∞(E, A). Similarly, we denote by Def_-(E, A), Def_0(E, A), Def_+(E, A), and Def_∞(E, A) the right deflating subspaces corresponding to Λ_-(E, A), Λ_0(E, A), Λ_+(E, A), and Λ_∞(E, A), respectively.
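The following small SciPy-based sketch (my own illustration, not from the paper; the tolerance and the decision rules for "infinite" and "zero real part" are ad hoc) classifies the eigenvalue pairs (α, β) of a pencil αE - βA into the sets Λ_-, Λ_0, Λ_+, and Λ_∞ defined above, using an unstructured generalized eigenvalue solver.

import numpy as np
from scipy.linalg import eig

def classify_spectrum(E, A, tol=1e-10):
    # det(lambda*E - A) = 0 is the generalized eigenproblem A v = lambda E v;
    # eig returns homogeneous pairs (alpha, beta) with lambda = alpha/beta.
    w = eig(A, E, right=False, homogeneous_eigvals=True)
    alphas, betas = w[0], w[1]
    lam_minus, lam_zero, lam_plus, n_inf = [], [], [], 0
    for a, b in zip(alphas, betas):
        if abs(b) <= tol * max(abs(a), 1.0):   # pair (alpha, 0): infinite eigenvalue
            n_inf += 1
        else:
            lam = a / b
            if lam.real < -tol:
                lam_minus.append(lam)
            elif lam.real > tol:
                lam_plus.append(lam)
            else:
                lam_zero.append(lam)
    return lam_minus, lam_zero, lam_plus, n_inf

# Example: a 2x2 pencil with one finite eigenvalue (-2) and one infinite eigenvalue.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[-2.0, 0.0], [0.0, 1.0]])
print(classify_spectrum(E, A))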

Throughout this paper the imaginary number sqrt(-1) is denoted by i. The inertia of a Hermitian matrix A consists of the triple In(A) = (π, ω, ν), where π = π(A), ω = ω(A), and ν = ν(A) represent the number of eigenvalues with positive, zero, and negative real parts, respectively. By abuse of notation we identify a subspace and a matrix whose columns span this subspace by the same symbol.

We call a matrix Hamiltonian block triangular if it is Hamiltonian and has the form

[ F  G ; 0  -F^H ].

If, furthermore, F is triangular, then we call the matrix Hamiltonian triangular. The terms skew-Hamiltonian block triangular and skew-Hamiltonian triangular are defined analogously. The Hamiltonian (skew-Hamiltonian) Schur form of a Hamiltonian (skew-Hamiltonian) matrix H is the factorization

(1.1)    H = U T U^H,

where U in US_{2n} and T is Hamiltonian (skew-Hamiltonian) triangular. As mentioned above, not all Hamiltonian matrices have a Hamiltonian Schur form. Real skew-Hamiltonian matrices always have one [38], but not all complex skew-Hamiltonian matrices do. For Hamiltonian matrices that have no purely imaginary eigenvalues, the existence of a Hamiltonian Schur form was proved in [31]. Necessary and sufficient conditions for the existence of the Hamiltonian Schur form in the case of arbitrary spectra were suggested in [23], and a proof based on a structured Hamiltonian Jordan form was recently given.

2. Schur-like Forms of Skew-Hamiltonian/Hamiltonian Matrix Pencils. In this section we derive the theoretical background for algorithms to compute eigenvalues and deflating subspaces of skew-Hamiltonian/Hamiltonian matrix pencils. A primary theoretical and computational tool is the J-congruence. A J-congruence transformation of a 2n x 2n pencil αS - βH by a nonsingular matrix Y in C^{2n x 2n} is the congruence transformation J Y^H J^T (αS - βH) Y, where J is as in Definition 1.1. The structure of skew-Hamiltonian/Hamiltonian matrix pencils is preserved by J-congruence transformations, i.e., if αS - βH is a skew-Hamiltonian/Hamiltonian pencil and Y is nonsingular, then J Y^H J^T (αS - βH) Y is also skew-Hamiltonian/Hamiltonian.

The skew-Hamiltonian/Hamiltonian Schur form of a skew-Hamiltonian/Hamiltonian pencil αS - βH is the factorization

(2.1)    αS - βH = J Q J^T ( α [ S_11  S_12 ; 0  S_11^H ] - β [ H_11  H_12 ; 0  -H_11^H ] ) Q^H,

where Q in C^{2n x 2n} is unitary, S_11 in C^{n x n} and H_11 in C^{n x n} are upper triangular, S_12 in C^{n x n} is skew-Hermitian, and H_12 in C^{n x n} is Hermitian. Note that the skew-Hamiltonian/Hamiltonian Schur form is a special case of the Schur form of a general matrix pencil and that it displays the eigenvalues and a nested system of deflating subspaces. This definition of a skew-Hamiltonian/Hamiltonian Schur form is essentially consistent with the definition of the Hamiltonian Schur form of a Hamiltonian matrix (1.1). If (2.1) holds with S = I, then it is not difficult to show that Q is a unitary diagonal matrix multiple of a unitary symplectic matrix and that there is a unitary symplectic choice Q_1 of Q (i.e., Q_1 = J Q_1 J^T) for which (2.1) holds with S_11 = I and S_12 = 0.

Skew-Hamiltonian/Hamiltonian matrix pencils often have the characteristic that the skew-Hamiltonian matrix S is block diagonal [4, 5], i.e., S = [ E  0 ; 0  E^H ] for some matrix E in C^{n x n}. In this case (among others) the matrix S factors in the form

(2.2)    S = J Z^H J^T Z,

where Z = diag(I, E^H). Such a factorization may also be intrinsic to the problem formulation for non-block-diagonal skew-Hamiltonian matrices S.
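To make the factorization (2.2) and the J-congruence invariance concrete, here is a short NumPy sketch (my own; the random test data and tolerbetween checks are assumptions, not taken from the paper). It builds a J-semidefinite skew-Hamiltonian matrix from a factor Z and checks that a J-congruence preserves both structures in the pencil.

import numpy as np

def J_matrix(n):
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

rng = np.random.default_rng(1)
n = 3
J = J_matrix(n)

# A J-semidefinite skew-Hamiltonian matrix built as in (2.2): S = J Z^H J^T Z.
Z = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
S = J @ Z.conj().T @ J.T @ Z

# A Hamiltonian matrix H = [ F  G ; K  -F^H ] with G, K Hermitian.
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); G = G + G.conj().T
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); K = K + K.conj().T
H = np.block([[F, G], [K, -F.conj().T]])

# J-congruence with a (generically nonsingular) random Y preserves both structures.
Y = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
S2 = J @ Y.conj().T @ J.T @ S @ Y
H2 = J @ Y.conj().T @ J.T @ H @ Y
print(np.linalg.norm((S @ J) + (S @ J).conj().T))    # ~0: S  is skew-Hamiltonian
print(np.linalg.norm((S2 @ J) + (S2 @ J).conj().T))  # ~0: S2 is skew-Hamiltonian
print(np.linalg.norm((H2 @ J) - (H2 @ J).conj().T))  # ~0: H2 is Hamiltonian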

Let <x, y> be the indefinite inner product on C^{2n} x C^{2n} defined by <x, y> = y^H J x. If Z in C^{2n x 2n}, then for all x, y in C^{2n}, <Zx, y> = <x, (J^T Z^H J) y>, i.e., the adjoint of Z with respect to <., .> is J^T Z^H J. Because J^{-1} = J^T = -J, the adjoint may also be expressed as J Z^H J^T. From this point of view, (2.2) is a symmetric-like factorization of S into the product of adjoints J Z^H J^T and Z. By analogy with the factorization of symmetric matrices, we will use the term J-semidefinite to refer to skew-Hamiltonian matrices which have a factorization of the form (2.2). A J-definite skew-Hamiltonian matrix is a skew-Hamiltonian matrix that is both J-semidefinite and nonsingular. The property of J-semidefiniteness arises frequently in applications. We show below that all real skew-Hamiltonian matrices are J-semidefinite. We also show that if a skew-Hamiltonian/Hamiltonian matrix pencil has a skew-Hamiltonian/Hamiltonian Schur form, then the skew-Hamiltonian part is J-semidefinite. Although J-semidefiniteness is a common property of skew-Hamiltonian matrices, it is not universal. The following lemma shows that neither iJ nor any nonsingular skew-Hamiltonian matrix of the form iJ L L^H is J-semidefinite.

Lemma 2.1. A nonsingular skew-Hamiltonian matrix S is J-definite if and only if iJ^H S is Hermitian with n positive and n negative eigenvalues.

Proof. If S is J-definite, then Z in (2.2) is nonsingular and the Hermitian matrix iJ^H S = Z^H (iJ^H) Z is congruent to iJ^H. It follows from Sylvester's law of inertia [16, p. 188] that iJ^H S is a Hermitian matrix with n positive eigenvalues and n negative eigenvalues. Conversely, suppose that iJ^H S is Hermitian with n positive and n negative eigenvalues. The matrix iJ^H also has n positive and n negative eigenvalues, so by an immediate consequence of Sylvester's law of inertia there is a nonsingular matrix Z in C^{2n x 2n} for which iJ^H S = Z^H (iJ^H) Z. It follows that (2.2) holds with this matrix Z.

Lemma 2.1 suggests that J-semidefiniteness might be a characteristic of the inertia of iJ^H S. The next lemma shows that this is indeed the case.

Lemma 2.2. A matrix S in SH_{2n} is J-semidefinite if and only if iJ^H S satisfies both π(iJ^H S) ≤ n and ν(iJ^H S) ≤ n.

Proof. Suppose that S in SH_{2n} is J-semidefinite. For some Z satisfying (2.2), define S(ε) by S(ε) = J (Z + εI)^H J^T (Z + εI). For ε small enough, Z + εI is nonsingular, and by Lemma 2.1, π(iJ^H S(ε)) = n and ν(iJ^H S(ε)) = n. Because eigenvalues are continuous functions of matrix elements and S = lim_{ε→0} S(ε), it follows that π(iJ^H S) ≤ n and ν(iJ^H S) ≤ n. For the converse, if π(iJ^H S) = p ≤ n and ν(iJ^H S) = q ≤ n, then there exists a nonsingular matrix W for which iJ^H S = W^H L W with signature matrix

L = diag( I_p, 0_{n-p}, -I_q, 0_{n-q} ).

Because p ≤ n and q ≤ n, L factors as L = L~^H diag(I_n, -I_n) L~ with L~ = diag( I_p, 0_{n-p}, I_q, 0_{n-q} ), where I_n is the n x n identity matrix. The matrix diag(I_n, -I_n) is the diagonal matrix of eigenvalues of iJ^H, so L = L~^H ( U^H (iJ^H) U ) L~, where

U = (1/sqrt(2)) [ I_n  I_n ; iI_n  -iI_n ]

is the unitary matrix of eigenvectors of iJ^H. Hence (2.2) holds with Z = U L~ W.

The following immediate corollary also follows from [15].

Corollary 2.3. Every real skew-Hamiltonian matrix S is J-semidefinite.

Proof. If S is real, then J^H S is real and skew-symmetric. The eigenvalues of J^H S appear in complex conjugate pairs with zero real part. Hence the eigenvalues of iJ^H S lie on the real axis in ± pairs. In particular, π(iJ^H S) = ν(iJ^H S). It follows from the trivial identity π(iJ^H S) + ω(iJ^H S) + ν(iJ^H S) = 2n that π(iJ^H S) ≤ n and ν(iJ^H S) ≤ n.
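Lemma 2.2 gives a computable test for J-semidefiniteness: form the Hermitian matrix iJ^H S and count its positive and negative eigenvalues. The sketch below (my own helper names and tolerances; an illustration under these assumptions, not a robust implementation) applies this test to a matrix built as in (2.2) and to iJ.

import numpy as np

def J_matrix(n):
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

def is_J_semidefinite(S, tol=1e-10):
    # Lemma 2.2: S in SH_2n is J-semidefinite iff the Hermitian matrix i J^H S
    # has at most n positive and at most n negative eigenvalues.
    n = S.shape[0] // 2
    J = J_matrix(n)
    M = 1j * J.conj().T @ S
    M = (M + M.conj().T) / 2                    # symmetrize against rounding errors
    w = np.linalg.eigvalsh(M)
    scale = max(abs(w).max(), 1.0)
    pos = int(np.sum(w > tol * scale))
    neg = int(np.sum(w < -tol * scale))
    return pos <= n and neg <= n

n = 3
J = J_matrix(n)
rng = np.random.default_rng(2)
Z = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
print(is_J_semidefinite(J @ Z.conj().T @ J.T @ Z))  # True: built as in (2.2)
print(is_J_semidefinite(1j * J))                    # False: iJ is not J-semidefinite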
The next lemma and its corollary show that J-semidefiniteness of both S and iH are necessary conditions for a skew-Hamiltonian/Hamiltonian matrix pencil αS - βH to have a skew-Hamiltonian/Hamiltonian Schur form.

Lemma 2.4. If S in SH_{2n} and there exists a nonsingular matrix Y such that

J Y^H J^T S Y = [ S_11  S_12 ; 0  S_11^H ]

with S_11, S_12 in C^{n x n}, then S is J-semidefinite.

Proof. Let T be the Hermitian matrix

T = Y^H (iJ^H S) Y = [ 0  -iS_11^H ; iS_11  iS_12 ]

and set T(ε) = T + ε [ 0  I_n ; I_n  I_n ]. For ε sufficiently small, both εI_n + iS_12 and εI_n + iS_11 are nonsingular, and T(ε) is congruent to

[ -(εI_n - iS_11^H)(εI_n + iS_12)^{-1}(εI_n + iS_11)   0 ; 0   εI_n + iS_12 ].

By Sylvester's law, the inertia of the negative of the (1,1) block is equal to the inertia of the (2,2) block. This implies π(T(ε)) = ν(T(ε)) = n. Continuity of eigenvalues as ε → 0 implies π(T) ≤ n and ν(T) ≤ n. The assertion now follows from Lemma 2.2.

Corollary 2.5. If H in H_{2n} and there exists a nonsingular matrix Y such that

J Y^H J^T H Y = [ H_11  H_12 ; 0  -H_11^H ]

with H_11, H_12 in C^{n x n}, then iH is J-semidefinite.

Proof. Apply Lemma 2.4 to the skew-Hamiltonian matrix iH.

It follows from Lemma 2.4 and Corollary 2.5 that if αS - βH is a skew-Hamiltonian/Hamiltonian matrix pencil that has a skew-Hamiltonian/Hamiltonian Schur form, then S and iH are J-semidefinite. As noted above, the factor Z in (2.2) is often given explicitly as part of the problem statement. It can also be obtained as in the proof of Lemma 2.2 or by a modification of Gaussian elimination [3]. The next theorem shows that if S is nonsingular, then the skew-Hamiltonian/Hamiltonian Schur form (if it exists) can be expressed in terms of block triangular factorizations of Z and H without explicitly using S. This opens the possibility of designing numerical methods that work directly on Z and H and avoid the normal-equations-like numerical instability of forming S explicitly. For regular skew-Hamiltonian/Hamiltonian matrix pencils, the following theorem gives necessary and sufficient conditions for the existence of a skew-Hamiltonian/Hamiltonian Schur form.

Theorem 2.6. Let αS - βH be a regular skew-Hamiltonian/Hamiltonian matrix pencil with ν pairwise distinct finite, nonzero, purely imaginary eigenvalues iα_1, iα_2, ..., iα_ν of algebraic multiplicity p_1, p_2, ..., p_ν and associated right deflating subspaces Q_1, Q_2, ..., Q_ν. Let p_∞ be the algebraic multiplicity of the eigenvalue infinity and let Q_∞ be its associated deflating subspace. The following are equivalent.
(i) There exists a nonsingular matrix Y such that

(2.3)    J Y^H J^T (αS - βH) Y = α [ S_11  S_12 ; 0  S_11^H ] - β [ H_11  H_12 ; 0  -H_11^H ],

where S_11 and H_11 are upper triangular, while S_12 is skew-Hermitian and H_12 is Hermitian.
(ii) There exists a unitary matrix Q such that J Q^H J^T (αS - βH) Q is of the form on the right-hand side of (2.3).
(iii) For k = 1, ..., ν, Q_k^H J S Q_k is congruent to a p_k x p_k copy of J. (If ν = 0, i.e., if αS - βH has no finite, nonzero, purely imaginary eigenvalue, then this statement holds vacuously.) Furthermore, if p_∞ > 0, then Q_∞^H J H Q_∞ is congruent to a p_∞ x p_∞ copy of iJ.

Similar results cover real Schur-like forms of real Hamiltonian matrices and skew-Hamiltonian/Hamiltonian matrix pencils. Theorem 2.6 gives necessary and sufficient conditions for the existence of a structured triangular-like form for skew-Hamiltonian/Hamiltonian pencils. It also demonstrates that whenever a structured triangular-like form exists, then it also exists under unitary transformations. It is partly because of this fact that there exist structure-preserving, numerically stable algorithms like those described here and in [4].

Theorem 2.7. Let αS - βH be a skew-Hamiltonian/Hamiltonian matrix pencil with nonsingular J-semidefinite skew-Hamiltonian part S = J Z^H J^T Z. If any of the equivalent conditions of Theorem 2.6 holds, then there exist a unitary matrix Q and a unitary symplectic matrix U such that

(2.4)    U^H Z Q = [ Z_11  Z_12 ; 0  Z_22 ],
(2.5)    J Q^H J^T H Q = [ H_11  H_12 ; 0  -H_11^H ],

where Z_11, Z_22^H, and H_11 are n x n and upper triangular.

Proof. With Q as in Theorem 2.6, part (ii), we obtain (2.5) and

J Q^H J^T S Q = [ S_11  S_12 ; 0  S_11^H ].

Partition Z~ := Z Q as Z~ = [ Z~_1  Z~_2 ], where Z~_1, Z~_2 in C^{2n x n}. Using S = J Z^H J^T Z we obtain

(2.6)    Z~^H J Z~ = [ 0  S_11^H ; -S_11  -S_12 ].

In particular, Z~_1^H J Z~_1 = 0, i.e., the columns of Z~_1 form a basis of a Lagrangian subspace, and therefore the columns of Z~_1 form the first n columns of a symplectic matrix. (It is easy to verify from Definition 1.1 that, using the non-negative definite square root, [ Z~_1 (Z~_1^H Z~_1)^{-1/2},  J^T Z~_1 (Z~_1^H Z~_1)^{-1/2} ] is symplectic.) It is shown in [11] that Z~_1 has a unitary symplectic QR factorization

U^H Z~_1 = [ Z_11 ; 0 ],

where U in US_{2n} is unitary symplectic and Z_11 in C^{n x n} is upper triangular. Setting

U^H Z Q = U^H Z~ = [ Z_11  Z_12 ; 0  Z_22 ],

we obtain from (2.6) that Z_22^H Z_11 = S_11. Since S_11 and Z_11 are both upper triangular and Z_11 is nonsingular, we conclude that Z_22^H is also upper triangular.

Note that the invertibility of Z is only a sufficient condition for the existence of U as in (2.4) and (2.5). However, there is no particular pathology associated with Z being singular. The algorithms described below and in [4] do not require Z to be nonsingular. If both S and H are nonsingular, then the following stronger form of Theorem 2.7 holds.

Corollary 2.8. Let αS - βH be a skew-Hamiltonian/Hamiltonian matrix pencil with nonsingular J-semidefinite skew-Hamiltonian part S = J Z^H J^T Z and nonsingular J-semidefinite Hamiltonian part iH = J W^H J^T W. If any of the equivalent conditions of Theorem 2.6 holds, then there exist a unitary matrix Q and unitary symplectic matrices U and V such that

U^H Z Q = [ Z_11  Z_12 ; 0  Z_22 ],    V^H W Q = [ W_11  W_12 ; 0  W_22 ],

where Z_11, Z_22^H, W_11, and W_22^H are n x n and upper triangular.

Proof. Similar to the proof of Theorem 2.7.

In the following we derive the theoretical background for algorithms to compute eigenvalues and deflating subspaces of skew-Hamiltonian/Hamiltonian matrix pencils.
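The parenthetical claim in the proof of Theorem 2.7, that an orthonormal basis of a Lagrangian subspace can be completed to a symplectic matrix, is easy to check numerically. The sketch below (my own construction; the completion [X, -JX] is one convenient choice and an assumption here, not the unitary symplectic QR factorization from [11]) builds a random Lagrangian basis, normalizes it with the non-negative definite square root, and verifies Definition 1.1 d).

import numpy as np
from scipy.linalg import sqrtm

def J_matrix(n):
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

rng = np.random.default_rng(4)
n = 4
J = J_matrix(n)

# A Lagrangian basis: the columns of Z1 = [ I ; W ] R with W Hermitian span a
# Lagrangian subspace for any nonsingular R, since Z1^H J Z1 = R^H (W - W^H) R = 0.
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); W = W + W.conj().T
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z1 = np.vstack([np.eye(n), W]) @ R
print(np.linalg.norm(Z1.conj().T @ J @ Z1))        # ~0: Lagrangian

X = Z1 @ np.linalg.inv(sqrtm(Z1.conj().T @ Z1))    # orthonormal basis of the same subspace
Y = np.hstack([X, -J @ X])                         # candidate symplectic completion
print(np.linalg.norm(X.conj().T @ X - np.eye(n)))  # ~0: orthonormal columns
print(np.linalg.norm(Y @ J @ Y.conj().T - J))      # ~0: Y J Y^H = J (Definition 1.1 d)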

7 We will obtain the structured Schur form of a complex skew-amiltonian/amiltonian matrix pencil from the structured Schur form of a real skew-amiltonian/skew-amiltonian matrix pencil of double dimension. The following theorem establishes that in contrast to the complex skew-amiltonian/amiltonian case every real regular skew-amiltonian/skew-amiltonian pencil admits a structured real Schur form. Theorem 2.9. If αs βn is a real regular skew-amiltonian/skew-amiltonian matrix pencil with S = J Z T J T Z then there exist a real orthogonal matrix Q R 2n2n and a real orthogonal symplectic matrix U R 2n2n such that (2.7) (2.8) U T Z11 Z ZQ = Z 22 J Q T J T N11 N N Q = N11 T where Z 11 and Z T 22 are upper triangular N 11 is quasi upper triangular and N is skew-symmetric. (2.9) Moreover J Q T J T Z T (αs βn )Q = α 22 Z 11 Z22Z T ZZ T 22 Z11Z T 22 is a J -congruent skew-amiltonian/skew-amiltonian matrix pencil. N11 N β N11 T Proof. A constructive proof for the existence of Q and U satisfying (2.7) and (2.8) is Algorithm 3 in 4. To show (2.9) recall that U is orthogonal symplectic and therefore commutes with J. ence J Q T J T SQ = J Q T J T (J Z T J T Z)Q = J Q T J T (J Z T J T U)(U T ZQ) = J (U T ZQ) T J T (U T ZQ). Equation (2.9) now follows from the block triangular form of (2.7). Note that this theorem does not easily extend to complex skew-amiltonian/skew-amiltonian matrix pencils. A method for computing the structured Schur form (2.9) for real matrices was proposed in 32 but if S is given in factored form then Algorithm 3 in 4 is more robust in finite precision arithmetic because it avoids forming S explicitly. Neither the method in 32 nor Algorithm 3 in 4 applies to complex skew-amiltonian/amiltonian matrix pencils because those algorithms depend on the fact that real diagonal skewsymmetric matrices are identically zero. This property is also crucial for the structured Schur form algorithms in Algorithm 1 given below computes the eigenvalues of a complex skew-amiltonian/amiltonian matrix pencil αs β using an unusual embedding of C into R 2 that was recently proposed in 8. Let αs β be a complex skew-amiltonian/amiltonian matrix pencil with J -semidefinite skew-amiltonian part S = J Z J T Z. Split the skew-amiltonian matrix N = i S 2n as i = N = N 1 + in 2 where N 1 is real skew-amiltonian and N 2 is real amiltonian i.e. F1 G N 1 = 1 1 F1 T G 1 = G T 1 1 = 1 T F2 G N 2 = 2 2 F2 T G 2 = G T 2 2 = 2 T 7

8 and F j G j j R n n for j = 1 2. Setting 2 I2n ii Y c = 2n 2 I 2n ii 2n I n (2.1) P = I n I n I n (2.11) X c = Y c P and using the embedding B N = diag(n N ) we obtain that (2.) B c N := X c B N X c = F 1 F 2 G 1 G 2 F 2 F 1 G 2 G F1 T F2 T F2 T F1 T is a real skew-amiltonian matrix in S 4n. Similarly set Z (2.13) B Z := Z J Z J T (2.14) B T := J Z J T S (2.15) B S := = B S T B Z. ence One can easily verify that (2.16) (2.17) are all real. Therefore αb S βb N = B c Z := X c B Z X c αs βn α S β N B c T := X c B T X c = J (B c Z) T J T B c S := X c B S X c = J (B c Z) T J T B c Z. (2.18) αb c S βb c N = X c = X c (αb S βb N )X c αs βn α S β N X c is a real 4n 4n skew-amiltonian/skew-amiltonian matrix pencil. For this matrix pencil we can employ Algorithm 3 in 4 to compute the structured factorization (2.8) i.e. we can determine an orthogonal symplectic matrix U and an orthogonal matrix Q such that (2.19) (2.2) B Z c := U T BZQ c Z11 Z = Z 22 B N c := J Q T J T BN c N11 N Q = N11 T. Thus if B c S := J ( B c Z )T J T Bc Z then α B c S β B c N = α(j Q T J T B c SQ) β(j Q T J T B c N Q) 8

9 is a J -congruent skew-amiltonian/skew-amiltonian matrix pencil in Schur form. By (2.18) and the fact that the finite eigenvalues of αs βn are symmetric with respect to the real axis we observe that the spectrum of the extended matrix pencil αbs c βbc N consists of two copies of the spectrum of αs βn. Consequently Λ(S ) = Λ(S in ) = Λ(Z T 22Z 11 in 11 ). In this way Algorithm 1 below computes the eigenvalues of the complex skew-amiltonian/amiltonian matrix pencil αs β = αs + iβn. From this we can also derive the skew-amiltonian/amiltonian Schur form of αb S βb where (2.21) B = ib N = and B S is as in (2.17). The spectrum of the extended matrix pencil αb S βb consists of two copies of the spectrum of αs β 6. If (2.22) B c = ib c N = X c B X c then it follows from (2.19) and (2.2) that B Z c := U T BZQ c Z11 Z (2.23) = Z 22 B c := J Q T J T BQ c in11 in (2.24) = (in 11 ) and the matrix pencil α B S c β B c := αj ( B Z c ) J T Bc Z β B c is in skew-amiltonian/amiltonian Schur form. We have thus obtained the structured Schur form of the extended complex skew-amiltonian/amiltonian matrix pencil αbs c βbc. Moreover (2.25) α B c S β B c = J Q J T (αb c S βb c )Q = (X c J QJ T ) (αb S βb ) X c Q is in skew-amiltonian/amiltonian Schur form. We have seen so far that we can compute structured Schur forms and thus are able to compute the eigenvalues of the structured matrix pencils under consideration using the embedding technique into a structured matrix pencil of double size. 3. Deflating Subspaces of Skew-amiltonian/amiltonian Matrix Pencils. For the solution of problems involving skew-amiltonian/amiltonian matrix pencils as described in the introduction it is usually necessary to compute n-dimensional deflating subspaces associated with eigenvalues in the closed left half plane. To get the desired subspaces we generalize the techniques developed in 6. For this we need a structure preserving method to reorder the eigenvalues along the diagonal of the structured Schur form so that all eigenvalues with negative real part appear in the (1 1) block and eigenvalues with positive real part appear in the (2 2) block. Such a reordering method is described in Appendix B of 4. The following theorem uses this eigenvalue ordering to determine the desired deflating subspaces of the matrix pencil αs β from the structured Schur form (2.25). Theorem 3.1. Let αs β C 2n2n be a skew-amiltonian/amiltonian matrix pencil with J -semidefinite skew-amiltonian matrix S = J Z J T Z. Consider the extended matrices B Z = diag(z Z) B T = diag(j Z J T J Z J T ) B S = B T B Z = diag(s S) B = diag( ). 9

10 Let U V W be unitary matrices such that U Z11 Z B Z V = =: R Z Z 22 W T11 T (3.1) B T U = =: R T T 22 W 11 B V = =: R 22 where Λ (B S B ) Λ(T 11 Z ) and Λ(T 11 Z ) Λ + (B S B ) =. ere Z 11 T C mm. Suppose Λ (S ) contains p eigenvalues. If V1 V 2 C 4nm are the first m columns of V 2p m 2n 2p then there are subspaces L 1 and L 2 such that (3.2) range V 1 = Def (S ) + L 1 L 1 Def (S ) + Def (S ) range V 2 = Def + (S ) + L 2 L 2 Def (S ) + Def (S ). U1 W1 W 2 If Λ(T 11 Z ) = Λ (B S B ) and U 2 are the first m columns of U W respectively then there exist unitary matrices Q U Q V Q W such that U 1 = P U Q U U 2 = P + U Q U V 1 = P V Q V V 2 = P + V Q V W 1 = P W Q W W 2 = P + W Q W and the columns of P V and P + V form orthogonal bases of Def (S ) and Def + (S ) respectively. Moreover the matrices P U P + U P W and P + W have orthonormal columns and the following relations are satisfied (3.3) ZP V = P U Z 11 J Z J T P U = P W T 11 P V = P W 11 ZP + V = P + U Z 22 J Z J T P + U = P + W T 22 P + V = P + W 22. ere Z kk T kk and kk k = 1 2 satisfy Λ( T 11 Z11 11 ) = Λ( T 22 Z22 22 ) = Λ (S ). Proof. The factorizations in (3.1) imply that B S V = WR T R Z and B V = WR. Comparing the first m columns and making use of the block forms we have (3.4) SV 1 = W 1 (T 11 Z 11 ) V 1 = W 1 11 SV 2 = W 2 (T 11 Z 11 ) V 2 = W Clearly range V 1 and range V 2 are both deflating subspaces of αs β. Since Λ (S ) Λ (B S B ) Λ(T 11 Z ) and Λ(T 11 Z ) contains no eigenvalue with positive real part we get range V 1 Def (S ) + L 1 range V 2 Def + (S ) + L 2 L 1 Def (S ) + Def (S ) L 2 Def (S ) + Def (S ). We still need to show that (3.5) Def (S ) range V 1 Def + (S ) range V 2. Let Ṽ1 and Ṽ2 be full rank matrices whose columns form bases of Def (S ) and Def + (S ) respectively. It is easy to show that the columns of Ṽ1 span Def Ṽ (B S B ). This implies that 2 range Ṽ1 Ṽ 2 1 V1 range. V 2

11 Therefore Ṽ1 range range Ṽ 2 V1 range V 2 and from this we obtain (3.5) and hence (3.2). If Λ(T 11 Z ) = Λ (B S B ) where p is the number of eigenvalues in Λ (S ) then from (3.2) we have m = 2p and range V 1 = Def (S ) range V 2 = Def + (S ). ence rank V 1 = rank V 2 = p and furthermore T 11 Z 11 and 11 must be nonsingular. Using (3.4) we get V 1 = SV 1 ((T 11 Z 11 ) 1 11 ) V 2 = SV 2 ((T 11 Z 11 ) 1 11 ). Let V 1 = P V Q V be an RQ (triangular-orthogonal) decomposition 17 with P V of full column rank. Since rank V 1 = p we have rank P V = p. Partition V 2Q V = P V P + V conforming to V 1Q V. V1 V 2 Since the columns of are orthonormal we obtain (P + V ) P + V = I p and hence rank P + V = p. Furthermore since rank V 2 = p we have range P V range P + V = range V 2 form orthog- and using orthonormality we obtain P V =. Therefore the columns of P V and P + V onal bases of Def (S ) and Def + (S ) respectively. From (3.1) we have (3.6) ZV 1 = U 1 Z 11 J Z J T U 1 = W 1 T 11 V 1 = W 1 11 and (3.7) ZV 2 = U 2 Z 11 J Z J T U 2 = W 2 T 11 V 2 = W Let U 1 = P U Q U and W 1 = P W Q W be RQ (triangular-orthogonal) decompositions with P U P W of full column rank. Using V 1 = P V Q V and the fact that ZP V SP V and P V are of full rank (otherwise there would be a zero or infinite eigenvalue associated with the deflating subspace range P V ) from the first and third identity in (3.6) we obtain Moreover setting rank P U = rank P W = rank P V = p. Z = Q U Z 11 Q V T = QW T 11 Q U = QW 11 Q V we obtain Z = Z11 T11 11 T = = Z 21 Z22 T 21 T where all diagonal blocks are p p. Set U 2 Q U =: P U P + U W 2Q W =: P W P + W and take V 2Q V =: P + V. The block forms of Z T and together with the first identity of (3.7) imply that PU Z11 = P + Z U 21. Since the columns of U1 U 2 are orthonormal we have (P + U ) P + U = I p and (P + U ) P U =. ence Z21 = 11

12 and consequently P U =. Similarly from the third identity of (3.7) we get P W = 21 = and from the second identity we obtain T 21 =. Combining all these observations we obtain Z Z J Z J T J Z J T P V P + V P U P + U P V P + V = = = P U P + U P W P + W P W P + W Z11 Z22 T11 T which gives (3.3). We remark that (3.1) can be constructed from (2.25) by reordering the eigenvalues properly. Theorem 3.1 gives a way to obtain the stable deflating subspace of a skew-amiltonian/amiltonian matrix pencil from the deflating subspaces of an embedded skew-amiltonian/amiltonian matrix pencil of double size. This will be used by the algorithms formulated in the next section. 4. Algorithms. The results of Theorem 3.1 together with the embedding technique lead to the following algorithm to compute the eigenvalues and the deflating subspaces Def (S ) and Def + (S ) of a complex skew-amiltonian/amiltonian matrix pencil αs β. Since the algorithms are rather technical we do not discuss details like eigenvalue reordering or explicit elimination orders in the construction of the structured Schur forms. Instead we refer the reader to the technical report 4 for these details. In summary Algorithm 1 proposed below transforms a 2n 2n complex skew-amiltonian/amiltonian matrix pencil with J -semidefinite skew-amiltonian part into a 4n 4n complex skew- amiltonian/amiltonian matrix pencil in Schur form. The process passes through intermediate matrix pencils of the following types. 2n 2n complex skew-amiltonian/amiltonian matrix pencil αs β with S = J Z J T Z. Equation (2.18) 4n 4n real skew-amiltonian/skew-amiltonian matrix pencil αbs c βbc N with Bc S = J (Bc Z )T J T BZ c Algorithm 3 in 4 4n 4n real skew-amiltonian/skew-amiltonian matrix pencil in Schur form α B S c β B N c with B S c = J ( B Z c )T J T Bc Z and B c Z = U T B c Z Q = Z11 Z Z 22 Bc N = J Q T J T BN c Q = N11 N N T 11 as in (2.19) and (2.2) Algorithm 4 in 4 4n 4n complex skew-amiltonian/amiltonian matrix pencil in Schur form with ordered eigenvalues. The required deflating subspaces of the original skew-amiltonian/amiltonian matrix pencil are then obtained from the deflating subspaces of the final 4n 4n complex skew-amiltonian/amiltonian matrix pencil. (Unfortunately if there are non-real eigenvalues then Algorithm 4 in 4 (the eigenvalue sorting algorithm) reintroduces complex entries into the 4n 4n extended real matrix pencil.) Algorithm 1. Given a complex skew-amiltonian/amiltonian matrix pencil αs β with J -semidefinite skew-amiltonian part S = J Z J T Z this algorithm computes the structured

13 Schur form of the extended skew-amiltonian/amiltonian matrix pencil αbs c βbc the eigenvalues of αs β and orthonormal bases of the deflating subspace Def (S ) and the companion subspace range P U. Input: amiltonian matrix and the factor Z of S. Output: P V P U as defined in Theorem 3.1. Step 1: Set N = i and form matrices BZ c Bc N as in (2.16) and (2.) respectively. Find the structured Schur form of the skew-amiltonian/skew-amiltonian matrix pencil αbs c βbn c using Algorithm 3 in 4 to compute the factorization B Z c = U T BZQ c Z11 Z = Z 22 B N c = J Q T J T BN c N11 N Q = N11 T where Q is real orthogonal U is real orthogonal symplectic Z 11 Z22 T are upper triangular and N 11 is quasi upper triangular. Step 2: Reorder the eigenvalues using Algorithm 4 in 4 to determine a unitary matrix Q and a unitary symplectic matrix Ũ such that Z11 Ũ Z Bc Z Q = =: ˇB Z c Step 3: Z22 J Q J T (i B N c ) Q 11 = 11 =: ˇB c with Z 11 Z upper triangular such that Λ (J ( ˇB Z c ) J T ˇBc Z ˇB c ) is contained in the spectrum of the 2p 2p leading principal sub-pencil of α Z 22 Z 11 β 11. Set V = I 2n X c Q Q I2p U = I 2n X c UŨ I2p (where X c is as in (2.11)) and compute P V P U orthogonal bases of range V and range U respectively using any numerically stable orthogonalization scheme. End Based on flop counts we estimate the cost of this algorithm to be roughly 5% of the cost of the periodic QZ algorithm 1 19 applied to the 2n 2n complex pencil αj Z J T Z β (treating J Z J T as one matrix). If S is not factored then the algorithm can be simplified by using the method of 32 to compute the real skew-amiltonian/amiltonian Schur form of αbs c βbc directly. Algorithm 2. Given a complex skew-amiltonian/amiltonian matrix pencil αs β. This algorithm computes the structured Schur form of the extended skew-amiltonian/amiltonian matrix pencil αbs c βbc the eigenvalues of αs β and an orthogonal basis of the deflating subspace Def (S ). Input: A complex skew-amiltonian/amiltonian matrix pencil αs β. Output: P V as defined in Theorem 3.1. Step 1: Set N = i and form the matrices BS c Bc N as in (2.17) and (2.) respectively. Find the structured Schur form of the skew-amiltonian/skew-amiltonian matrix pencil αbs c βbc N using Algorithm 5 in 4 to compute the factorization ˇB S c = J Q T J T BSQ c S11 S = S11 T ˇB N c = J Q T J T BN c N11 N Q = N11 T where Q is real orthogonal S 11 is upper triangular and N 11 is quasi upper triangular. 13

14 Step 2: Reorder the eigenvalues using Algorithm 6 in 4 to determine a unitary matrix Q such that J Q S11 J T S ˇBc S Q = Step 3: S 11 J Q J T (i ˇB N c ) Q 11 = 11 with S upper triangular and such that Λ ( ˇB S c i ˇB N c ) is contained in the spectrum of the 2p 2p leading principal sub-pencil of α S 11 β 11. Set V = I 2n X c Q Q I2p (where X c is as in (2.11)) and compute P V the orthogonal basis of range V using any numerically stable orthogonalization scheme. End Algorithm 2 needs roughly 8% of the 16n 3 real flops required by the QZ algorithm applied to the 2n 2n complex pencil αs β as suggested in 37. If only the eigenvalues are computed then Algorithm 2 without accumulation of V needs roughly 6% of the 96n 3 real flops required by the QZ algorithm. In this section we have presented numerical algorithms for the computation of (complex) structured triangular forms. Various details appear in 4. In the next section we give an error analysis. The analysis is a generalization of the analysis for amiltonian matrices in Error and Perturbation Analysis. In this section we will give the perturbation analysis for eigenvalues and deflating subspaces of skew-amiltonian/amiltonian matrix pencils. Variables marked with a circumflex denote perturbed quantities. We begin with the perturbation analysis for the eigenvalues of αs β and αj Z J T Z β. In principle we could multiply out J Z J T Z and apply the classical perturbation analysis of matrix pencils using the chordal metric 36 but this may give pessimistic bounds and would display neither the effects of perturbing each factor separately nor the effects of structured perturbations. Therefore we make use of the perturbation analysis for formal products of matrices developed in 9. If Algorithm 2 is applied to the skew-amiltonian/amiltonian matrix pencil αs β then we compute the structured Schur form of the extended skew-amiltonian/amiltonian matrix pencil αbs c βbc. The well-known backward error analysis of orthogonal matrix computations implies that rounding errors in Algorithm 2 are equivalent to perturbing αbs c βbc to a nearby matrix pencil α ˆB S c β ˆB c where (5.1) (5.2) ˆB S c = BS c + E S ˆB c = B c + E with E S S 4n E 4n and (5.3) (5.4) E S 2 < c S ε B c S 2 E 2 < c ε B c 2. ere ε is the unit round of the floating point arithmetic and c S and c are modest constants depending on the details of the implementation and arithmetic. Let x and y be unit norm vectors such that (5.5) x = α 1 y Sx = β 1 y and let λ = α 1 /β 1 be a simple eigenvalue of αs β. If λ is finite and Re λ then λ is also a simple eigenvalue of αs β. Let u v be unit norm vectors such that (5.6) u = α 2 v Su = β 2 v 14

15 and α 2 /β 2 = λ. Then we have (5.7) ū = ᾱ 2 v Sū = β2 v. Using the equivalence of the matrix pencils αbs c βbc and αb S βb and setting U 1 = Xc y U v 2 = Xc x (5.8) ū we obtain from (5.5) and (5.7) that BU c α1 2 = U 1 B c β1 ᾱ 2 SU 2 = U 1 β2 which implies that λ is a double eigenvalue of αbs c βbc with a complete set of linearly independent eigenvectors. Similarly λ is a double eigenvalue of αbs c βbc with a complete set of linearly independent eigenvectors and BV c α2 2 = V 1 B c β2 ᾱ 1 SV 2 = V 1 β1 where (5.9) V 1 = X c v ȳ V 2 = X c u x Note that the finite eigenvalues with non-zero real part appear in pairs as in (5.5) and (5.6) but infinite and purely imaginary eigenvalues may not appear in pairs. Consequently in the following perturbation theorem the bounds for purely imaginary and infinite eigenvalues are different from the bounds for finite eigenvalues with non-zero real part. Theorem 5.1. Consider the skew-amiltonian/amiltonian matrix pencil αs β along with the corresponding extended matrix pencils αbs c βbc = X c (αb S βb )X c where B S is given by (2.15) B by (2.21) B c by (2.22) X c by (2.11) and BS c by (2.17). Let α ˆB S c β ˆB c be a perturbed extended matrix pencil satisfying (5.1) (5.4) with constants c c S and let ε be equal to the unit round of the floating point arithmetic. If λ is a simple eigenvalue of αs β with vectors x and y as in (5.5) and vectors u and v as in (5.6) then the corresponding double eigenvalue of αbs c βbc may split into two eigenvalues ˆλ 1 and ˆλ 2 of the perturbed matrix pencil α ˆB S c β ˆB c each of which satisfies the following bounds. (i) If λ is finite and Re λ then ˆλ k λ λ ε u J y ( c α c S β 1 S 2. ) + O(ε 2 ) k = 1 2. (ii) If λ is finite and Re λ = then ε ˆλ k λ β 1 x J y (c 2 + c S λ S 2 ) + O(ε 2 ) k = 1 2. (iii) If λ = then 1 ˆλ k ε c S S 2 α 1 x J y + O(ε2 ) k = 1 2. Proof. We first consider the case that λ is finite and Re λ. Let U 1 and U 2 be defined by (5.8) and V 1 and V 2 be defined by (5.9). Using the perturbation theory for formal products of matrices (see 9) we obtain ˆλ k λ ( λ min (V2 J U 1 C S ) 1 V2 J ( 1 2 λ E E S )U 2 (V 2 J U 1 ) 1 V2 J ( 1 ) λ E E S )U 2 C 1 S + O(ε 2 ). 2 15

16 ere C S = β1 β 2 and V2 J U 1 = u in (5.6) implies u J S = β 2 v J. y v = u J y x T J v. The second equation x Xc J Xc Combining this with the second equation of (5.5) we get β 2 v J x = β 1 u J y. ence ˆλ k λ λ (V 2 J U 1 C S ) 1 V2 J ( 1 2 λ E E S )U 2 + O(ε 2 ) (V 2 J U 1 C S ) λ E E S + O(ε 2 ) ( 2 1 E 2 u J y β 1 λ + E ) S 2 + O(ε 2 ) β 1 ( ε c u J y α c ) S β 1 S 2 + O(ε 2 ). If λ is purely imaginary or infinite then the bounds are obtained by adapting the classical perturbation theory in 36 to a formal product of matrices (for details see 9) and by replacing (5.7) with x = ᾱ 1 ȳ and S x = β 1 ȳ as well as replacing u v α 2 and β 2 by x y α 1 and β 1 respectively. The bound in part 1 appears to involve only u y α 1 and β 1 but not v x α 2 and β 2. owever note in the proof that β 2 v J x = β 1 u J y so the bound implicitly involves all the parameters. Note further that if S is nonsingular then v J x and u J y are just the reciprocals of the condition number of λ as eigenvalue of S 1 and S 1 respectively see 6. If S is given in factored form Algorithm 1 computes a unitary symplectic matrix U and a unitary matrix Q which reduce the perturbed matrices (5.1) ˆB c Z := B c Z + E Z ˆBc := B c + E to block upper triangular form as in (2.23) and (2.24) where (5.11) E Z 2 c Z ε B c Z 2 E 2 c ε B c 2 and c Z and c are constants. The eigenvalue perturbation bounds then are essentially the same as in Theorem 5.1. Theorem 5.2. Consider the skew-amiltonian/amiltonian matrix pencil αs β with J -semidefinite skew-amiltonian part S = J Z J T Z. Let αbs c βbc = X c (αb S βb )X c be the corresponding extended matrix pencils where BS c = J (Bc Z ) J T BZ c B Z and BZ c are given by (2.13) and (2.16) B and B c by (2.21) and (2.22) and X c by (2.11). Let ( ˆB Z c ˆBc ) be the perturbed extended matrix pair in (5.1) (5.11) with constants c c Z. Let λ be a simple eigenvalue of αs β = αj Z J T Z β with Re λ and let x y z u v w be unit norm vectors such that (5.) J Z J T x = α 1 y z = β 1 y Zz = γ 1 x with λ = β1 α 1γ 1 and (5.13) J Z J T u = α 2 v w = β 2 v Zw = γ 2 u with λ = β2 α 2γ 2. The corresponding double eigenvalue of αbs c βbc may split into two eigenvalues ˆλ 1 and ˆλ 2 of the perturbed matrix pencil α ˆB S c β ˆB c each of which satisfies the following bounds. (i) If λ is finite and Re λ then ˆλ k λ ( ) λ ε c β 1 w J y c Z min{ γ 1 u J x α 1 w J y } Z 2 + O(ε 2 ). 16

17 (ii) If λ is purely imaginary then ( ˆλ k λ ε c α 1 γ 1 y J z λ c Z γ 1 u J x Z 2 ) + O(ε 2 ). (iii) If λ = then ˆλ k 1 = O(ε 2 ). Proof. The perturbation analysis follows 9. If λ is finite and Re λ then ˆλ k λ λ (V 2 J U 3 ) 1 ( C 1 C3 ) 1 (V3 EZ J U 1 C 3 + C 3 U1 J E Z U 3 1 λ V 3 J E U 3 ) + O(ε 2 ) 2 where U 1 X c w = X c x z C 4n2 and C 1 = From V 2 J U 3 = ū C 4n2 U 3 = X c v J z α2 y T J w z ᾱ 1 C 22 C3 = it follows that w C 4n2 V 2 = X c γ2 γ 1 C 22 C 3 = v γ1 ȳ C 4n2 V 3 = γ 2 C 22. ˆλ k λ λ max{ γ 1 γ 2 } E Z λ E 2 min{ ᾱ 2 γ 2 v J z α 1 γ 1 w J y } + E Z 2 min{ ᾱ 2 v J z α 1 w J y } + O(ε2 ). From (5.) and (5.13) we also have (5.14) ᾱ 2 v J z = γ 1 u J x γ 2 u J x = α 1 w J y β2 v J z = β 1 w J y. It follows that ᾱ 2 γ 2 v J z = γ 2 γ 1 u J x = γ 1 α 1 w J y. ence max{ γ 1 γ 2 } min{ ᾱ 2 γ 2 v J z α 1 γ 1 w J y } = 1 min{ ᾱ 2 v J z α 1 w J y } λ min{ ᾱ 2 γ 2 v J z α 1 γ 1 w J y } = β 1 w J y and ˆλ k λ ( λ ε c β 1 w J y 2 + ) 2c Z min{ ᾱ 2 v J z α 1 w J y } Z 2 + O(ε 2 ). Equation (5.14) implies that ᾱ 2 v J z = γ 1 u J x. The first part of the theorem follows. If λ is purely imaginary the proof is analogous. If λ = then α 1 = or γ 1 = and β 1. Using the first equation of (5.14) we have ᾱ 1 y J z = γ 1 x J x where we have replaced u v and α 2 by x y and α 1 respectively ((5.) and (5.13) are the same now). Since λ is simple i.e. y J z and x J x we have α 1 = γ 1 = and hence α1 γ1 β1 C 1 = = C ᾱ 3 = = C 1 γ 2 =. 1 β1 Therefore E := C 1 C 2 U 3 E Z J U 1 U 1 J E Z U 3 C 1 2 C 1 C 1 C 2 U 3 J E U 3 C 1 2 C 1 =. From 9 Theorem 23 part b) we get 1 ˆλ (5.15) (U 1 J U 1 ) 1 E 2 + O(ε 2 ) = O(ε 2 ). k 17

18 If the matrix pencil αs β with J -semidefinite skew-amiltonian part S = J Z J T Z has semi-simple multiple infinite eigenvalues then the perturbation bound (5.15) weakens to O(ε) 9. To study the perturbations in the computed deflating subspaces we need to study the perturbations for the extended matrix pencil in more detail. As mentioned before by applying Algorithm 2 to αb c S βbc we actually compute a unitary matrix ˆQ such that (5.16) J ˆQ J T (α ˆB S c β ˆB ) c ˆQ = α ˆR S β ˆR Ŝ11 Ŝ =: α Ĥ11 Ĥ Ŝ11 β Ĥ 11 where ˆB S c and ˆB c are defined in (5.1) and (5.2) and Λ(Ŝ11 Ĥ11) = Λ ( ˆB S c ˆB c ). If we assume that the matrix pencil αs β has no purely imaginary eigenvalues then by Theorem 2.6 there exist unitary matrices Q 1 Q 2 such that J Q 1 J T (αs β)q 1 = α S 11 S (S 11 ) β 11 ( 11 ) with Λ(S ) = Λ (S ) and J Q 2 J T (αs β)q 2 = α S + 11 S + (S + 11 ) β ( + 11 ) with Λ(S ) = Λ +(S ) respectively. Set Q = X c diag(q 1 Q 2 )P with P and X c as in (2.1) and (2.11). Then Q is unitary and J Q J T (αbs c βb)q c S11 S S 11 + S + = α (S11 ) β (S 11 + ) S11 S =: α 11 (5.17) S11 β 11 =: αr S βr ( 11 ) ( + 11 ) This is the structured Schur form of the extended skew-amiltonian/amiltonian matrix pencil αb c S βbc. Moreover Λ(S ) = Λ (B c S Bc ). In the following we will use the linear space C nn C nn endowed with the norm (X Y ) = max{ X 2 Y 2 }. Theorem 5.3. Let αs β be a regular skew-amiltonian/amiltonian matrix pencil with neither infinite nor purely imaginary eigenvalues. Let P V be the orthogonal basis of the deflating subspace of αs β corresponding to Λ (S ) and let ˆP V be the perturbation of P V obtained by Algorithm 2 in finite precision arithmetic. Denote by Θ C nn the diagonal matrix of canonical angles between P V and ˆP V. Using the structured Schur form of the extended skew-amiltonian/amiltonian matrix pencil αbs c βbc (as in (2.17) and (2.22)) given by (5.17) define δ by ( δ = min 11Y + Y 11 S11Y Y S 11 ) (5.18). Y C 2n2n \{} Y 2 If (5.19) 8 (E S E ) (δ + (S ) ) < δ 2 18

19 then (5.2) Θ 2 < c b (E S E ) δ < c b ε (c S S c ) δ where c S and c are the modest constants in (5.3) (5.4) and c b = 8( 1 + 4)/( 1 + 2) Proof. Let α ˆR S β ˆR ˆQ be the output of Step 2 in Algorithm 2 in finite precision arithmetic where ˆB c S ˆBc satisfy (5.1) and (5.2). Let Q be the unitary matrix computed by Algorithm 2 in exact arithmetic such that J Q J T (αb c S βb c ) Q = α R S β R = α S11 S S β 11 with Λ( S ) = Λ (BS c Bc ). Since (5.17) is another structured Schur form with the same eigenvalue ordering there exists a unitary diagonal matrix G = diag(g 1 G 2 ) such that Q = QG. Therefore we have ( S ) = (S ) and for δ given in (5.18) we also have ( 11Y + Y 11 S 11Y Y S11 ) δ = min. Y C 2n2n \{} Y 2 Let Ẽ S := J Q J T E11 E E S Q =: E 21 E11 Ẽ := J Q J T F11 F E Q =: F 21 F11 and set γ = (E 21 F 21 ) η = ( S + E + F ) and δ = δ 2 (E 11 F 11 ). Since we have (ẼS Ẽ) = (E S E ) condition (5.19) implies that and clearly δ δ 2 (E S E ) > 3 4 δ 4 (E S E ) (S ) < δ 2 4δ (E S E ). ence γη (E S E ) { δ ( S ) + (E S E ) } 2 (δ 2 (E S E ) ) 2 < (E S E ) 2 + (δ 2 4δ (E S E ) )/4 (δ 2 (E S E ) ) 2 = 1 4. Following the perturbation analysis for a formal product of matrices in 9 it can be shown that there exists a unitary matrix (I + W W = W ) 1 2 W (I + W W ) 1 2 W (I + W W ) 1 2 (I + W W ) 1 2 with (5.21) W 2 < 2 γ δ < 8 γ 3 δ <

20 such that J ( QW) J T (α ˆB c S β ˆB c )( QW) is another structured Schur form of the perturbed matrix pencil. Since there are neither infinite nor purely imaginary eigenvalues (5.16) implies that ˆQ QW is unitary block diagonal. Without loss of generality we may take ˆQ = QW. If X c is as in (2.11) and X c Q = Q11 Q Q 21 Q 22 then it follows from Theorem 3.1 that P V = range Q 11. Clearly ˆP V = range{(q 11 + Q W )(I + W W ) 1 2 }. The upper bound (5.2) can then be derived from (5.21) by using the same argument as in the proof of Theorem 4.4 in 6. If S is given in factored form then we obtain a similar result. In this case by using Algorithm 1 we compute a unitary matrix ˆQ and a unitary symplectic matrix Û such that Û Ẑ11 Ẑ ˆBc Z ˆQ = ˆRZ =: (5.22) Ẑ 22 J ˆQ J T Ĥ11 Ĥ ˆBc ˆQ = ˆR =: Ĥ 11 where ˆB Z c and ˆB c are defined in (5.1) and (5.11) and Λ(Ẑ 22Ẑ11 Ĥ11) = Λ ( ˆB S c ˆB c ) where ˆB S c = J ( ˆB Z c ) J T ˆBc Z. Analogous to Theorem 2.7 if αs β has no purely imaginary eigenvalues then there exist unitary matrices Q 1 Q 2 and unitary symplectic matrices U 1 U 2 such that U1 Z ZQ 1 = 11 Z Z22 J Q 1 J T Q 1 = 11 ( 11 ) with Λ((Z22 ) Z11 11 ) = Λ (S ) and U2 Z + ZQ 2 = 11 Z + Z 22 + J Q 2 J T Q 2 = ( + 11 ) with Λ((Z + 22 ) Z ) = Λ +(S ) respectively. Set Q = X c diag(q 1 Q 2 )P U = X c diag(u 1 Ū2)P where P and X c are as in (2.1) and (2.11). Then Q is unitary and U US 4n and a simple calculation yields Z11 Z U BZQ c Z 11 + Z + = Z22 =: Z11 Z (5.23) =: R Z Z 22 Z 22 + J Q J T BQ c (5.24) = ( 11 ) ( + 11 ) =: =: R. This leads to the structured Schur form of the extended skew-amiltonian/amiltonian matrix pencil αj (B c Z ) J T B c Z βbc with Λ(Z 22Z ) = Λ (B c S Bc ). Theorem 5.4. Consider the regular skew-amiltonian/amiltonian matrix pencil αs β with nonsingular J -definite skew-amiltonian part S = J Z J T Z. Suppose that αs β has no eigenvalue with zero real part. Let the extended skew-amiltonian and amiltonian matrix B c Z 2


More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 Name: Exam Rules: This is a closed book exam. Once the exam

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Generic rank-k perturbations of structured matrices

Generic rank-k perturbations of structured matrices Generic rank-k perturbations of structured matrices Leonhard Batzke, Christian Mehl, André C. M. Ran and Leiba Rodman Abstract. This paper deals with the effect of generic but structured low rank perturbations

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

A robust numerical method for the γ-iteration in H control

A robust numerical method for the γ-iteration in H control A robust numerical method for the γ-iteration in H control Peter Benner Ralph Byers Volker Mehrmann Hongguo Xu December 6, 26 Abstract We present a numerical method for the solution of the optimal H control

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

p q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q

p q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q A NUMERICAL METHOD FOR COMPUTING AN SVD-LIKE DECOMPOSITION HONGGUO XU Abstract We present a numerical method to compute the SVD-like decomposition B = QDS 1 where Q is orthogonal S is symplectic and D

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Singular-value-like decomposition for complex matrix triples

Singular-value-like decomposition for complex matrix triples Singular-value-like decomposition for complex matrix triples Christian Mehl Volker Mehrmann Hongguo Xu December 17 27 Dedicated to William B Gragg on the occasion of his 7th birthday Abstract The classical

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 44 Definitions Definition A matrix is a set of N real or complex

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations

Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations Christian Mehl Volker Mehrmann André C. M. Ran Leiba Rodman April

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Lecture 4 Orthonormal vectors and QR factorization

Lecture 4 Orthonormal vectors and QR factorization Orthonormal vectors and QR factorization 4 1 Lecture 4 Orthonormal vectors and QR factorization EE263 Autumn 2004 orthonormal vectors Gram-Schmidt procedure, QR factorization orthogonal decomposition induced

More information

Trimmed linearizations for structured matrix polynomials

Trimmed linearizations for structured matrix polynomials Trimmed linearizations for structured matrix polynomials Ralph Byers Volker Mehrmann Hongguo Xu January 5 28 Dedicated to Richard S Varga on the occasion of his 8th birthday Abstract We discuss the eigenvalue

More information

Chapter One. Introduction

Chapter One. Introduction Chapter One Introduction Besides the introduction, front matter, back matter, and Appendix (Chapter 15), the book consists of two parts. The first part comprises Chapters 2 7. Here, fundamental properties

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Since the determinant of a diagonal matrix is the product of its diagonal elements it is trivial to see that det(a) = α 2. = max. A 1 x.

Since the determinant of a diagonal matrix is the product of its diagonal elements it is trivial to see that det(a) = α 2. = max. A 1 x. APPM 4720/5720 Problem Set 2 Solutions This assignment is due at the start of class on Wednesday, February 9th. Minimal credit will be given for incomplete solutions or solutions that do not provide details

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Orthogonal Transformations

Orthogonal Transformations Orthogonal Transformations Tom Lyche University of Oslo Norway Orthogonal Transformations p. 1/3 Applications of Qx with Q T Q = I 1. solving least squares problems (today) 2. solving linear equations

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Theorem Let X be a symmetric solution of DR(X) = 0. Let X b be a symmetric approximation to Xand set V := X? X. b If R b = R + B T XB b and I + V B R

Theorem Let X be a symmetric solution of DR(X) = 0. Let X b be a symmetric approximation to Xand set V := X? X. b If R b = R + B T XB b and I + V B R Initializing Newton's method for discrete-time algebraic Riccati equations using the buttery SZ algorithm Heike Fabender Zentrum fur Technomathematik Fachbereich - Mathematik und Informatik Universitat

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras

Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras Gregory Ammar Department of Mathematical Sciences Northern Illinois University DeKalb, IL 65 USA Christian Mehl Fakultät für Mathematik

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Solving large scale eigenvalue problems

Solving large scale eigenvalue problems arge scale eigenvalue problems, Lecture 4, March 14, 2018 1/41 Lecture 4, March 14, 2018: The QR algorithm http://people.inf.ethz.ch/arbenz/ewp/ Peter Arbenz Computer Science Department, ETH Zürich E-mail:

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Cheat Sheet for MATH461

Cheat Sheet for MATH461 Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A

More information

MULTISHIFT VARIANTS OF THE QZ ALGORITHM WITH AGGRESSIVE EARLY DEFLATION LAPACK WORKING NOTE 173

MULTISHIFT VARIANTS OF THE QZ ALGORITHM WITH AGGRESSIVE EARLY DEFLATION LAPACK WORKING NOTE 173 MULTISHIFT VARIANTS OF THE QZ ALGORITHM WITH AGGRESSIVE EARLY DEFLATION LAPACK WORKING NOTE 173 BO KÅGSTRÖM AND DANIEL KRESSNER Abstract. New variants of the QZ algorithm for solving the generalized eigenvalue

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

The Eigenvalue Problem: Perturbation Theory

The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 28, No 4, pp 971 1004 c 2006 Society for Industrial and Applied Mathematics VECTOR SPACES OF LINEARIZATIONS FOR MATRIX POLYNOMIALS D STEVEN MACKEY, NILOUFER MACKEY, CHRISTIAN

More information

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Christian Mehl Technische Universität Berlin, Institut für Mathematik, MA 4-5, 10623 Berlin, Germany, Email: mehl@math.tu-berlin.de

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally

More information