Perturbation theory for homogeneous polynomial eigenvalue problems


Linear Algebra and its Applications 358 (2003) 71–94

Perturbation theory for homogeneous polynomial eigenvalue problems

Jean-Pierre Dedieu (a), Françoise Tisseur (b,*,1)

(a) MIP, Département de Mathématique, Université Paul Sabatier, 31062 Toulouse Cedex 4, France
(b) Department of Mathematics, University of Manchester, Manchester M13 9PL, UK

Received 2 March 2001; accepted 25 June 2001
Submitted by B.N. Parlett

Abstract

We consider polynomial eigenvalue problems P(A, α, β)x = 0 in which the matrix polynomial is homogeneous in the eigenvalue (α, β) ∈ C². In this framework infinite eigenvalues are on the same footing as finite eigenvalues. We view the problem in projective spaces to avoid normalization of the eigenpairs. We show that a polynomial eigenvalue problem is well-posed when its eigenvalues are simple. We define the condition numbers of a simple eigenvalue (α, β) and a corresponding eigenvector x and show that the distance to the nearest ill-posed problem is equal to the reciprocal of the condition number of the eigenvector x. We describe a bihomogeneous Newton method for the solution of the homogeneous polynomial eigenvalue problem (homogeneous PEP). © 2002 Elsevier Science Inc. All rights reserved.

AMS classification: 65F15; 15A18

Keywords: Polynomial eigenvalue problem; Matrix polynomial; Quadratic eigenvalue problem; Condition number

* Corresponding author.
E-mail addresses: dedieu@mip.ups-tlse.fr (J.-P. Dedieu), ftisseur@ma.man.ac.uk (F. Tisseur).
1 This work was supported by Engineering and Physical Sciences Research Council grant GR/R45079 and by Nuffield Foundation grant NAL/00216/G.

1. Introduction

Let A = (A₀, A₁, ..., A_m) be an (m + 1)-tuple of n × n complex matrices. We consider homogeneous matrix polynomials P(A, α, β) defined by

P(A, α, β) = Σ_{k=0}^{m} α^k β^{m−k} A_k,   (1)

that is, P(A, α, β) is homogeneous of degree m in (α, β) ∈ C². The homogeneous polynomial eigenvalue problem (homogeneous PEP) is to find pairs of scalars (α, β) ≠ (0, 0) and nonzero vectors x, y ∈ Cⁿ satisfying

P(A, α, β)x = 0,   y*P(A, α, β) = 0.

The vectors x and y are called right and left eigenvectors corresponding to the eigenvalue (α, β). For instance, the homogeneous generalized eigenvalue problem (βA − αB)x = 0 and the homogeneous quadratic eigenvalue problem (α²A + αβB + β²C)x = 0 are major cases of homogeneous PEPs, whose importance lies in the diverse roles they play in the solution of problems in science and engineering. In particular, the QEP is currently receiving much attention because of its extensive applications in areas such as the dynamic analysis of mechanical systems in acoustics and the linear stability of flows in fluid mechanics. A comprehensive survey of QEPs can be found in [26].

Most of the theory concerning matrix polynomials [10,11,15] is developed for nonhomogeneous polynomials of the form

Q(A, λ) = P(A, λ, 1) = λ^m A_m + ··· + λA₁ + A₀.

In this context, the nonhomogeneous PEP is to find a complex scalar λ and nonzero vectors x, y ∈ Cⁿ such that Q(A, λ)x = 0, y*Q(A, λ) = 0. When (λ, x, y) is a solution of the nonhomogeneous PEP, ((λ, 1), x, y) is a solution of the homogeneous PEP. Conversely, if ((α, β), x, y) is a solution of the homogeneous PEP and if β ≠ 0, then (α/β, x, y) is a solution of the corresponding nonhomogeneous PEP.

As for the linear eigenvalue problem, the eigenvalues are solutions of the scalar equation det P(A, α, β) = 0 in the homogeneous case and det Q(A, λ) = 0 in the nonhomogeneous case. These two polynomials may or may not be simultaneously identically zero. We say that an (m + 1)-tuple A is regular when det P(A, α, β) ≢ 0.

When A is regular, the polynomial det P(A, α, β) has degree mn. Its zeros consist of mn complex lines in C² through the origin, counting multiplicities. The nonhomogeneous problem has a different behaviour: the polynomial

det Q(A, λ) has degree less than or equal to mn, and deg det Q(A, λ) < mn if and only if A_m is singular. In such a case, Q(A, λ) has r = deg det Q(A, λ) eigenvalues λᵢ, i = 1: r, corresponding to the r homogeneous eigenvalues (λᵢ, 1), i = 1: r. The other homogeneous eigenvalue is (1, 0), with multiplicity mn − r. This eigenvalue is called an infinite eigenvalue. For example, there is a quadratic 3 × 3 matrix polynomial (m = 2, n = 3) with singular leading coefficient A₂ for which the homogeneous polynomial det P(A, α, β) = β⁴(β + α)(2β − α) has six zeros, corresponding to the values (α, β) = (−1, 1) and (2, 1), each with multiplicity 1, and (1, 0) with multiplicity 4. On the other hand, the nonhomogeneous polynomial det Q(A, λ) = det P(A, λ, 1) = (1 + λ)(2 − λ) has only two roots. In the nonhomogeneous case, infinite eigenvalues require a special treatment [11], but not in the homogeneous case. For this reason we prefer to deal with homogeneous polynomials. In [13] it is shown that pseudospectra for matrix polynomials with infinite eigenvalues, and their applications in control theory, can be elegantly treated in the homogeneous framework.

When (α, β) is an eigenvalue of the homogeneous PEP, the (right) eigenvector x ∈ Cⁿ is a solution of P(A, α, β)x = 0. This equation is bihomogeneous: it is homogeneous of degree 1 in x and homogeneous of degree m in (α, β). For these reasons it is natural to consider the eigenvalues and eigenvectors in the projective spaces P(C²) and P(Cⁿ), respectively, where P(C^k) is the set of vector lines in C^k. When working in complex vector spaces instead of projective spaces, the common strategy is to use a normalization of x and (α, β). Taking unit vectors for x and (α, β) works well in the real case, because the unit sphere cuts any real vector line in only two antipodal points. In the complex case this strategy causes some difficulties, as a complex line in Cⁿ cuts the unit sphere in a circle. Andrew et al. [2, Section 4] discuss various normalization strategies for the eigenvector. The advantage of working in the way we do is that we do not need to introduce a normalization, that is, a local chart.

In this paper we study the condition number of the homogeneous PEP. The condition number of a problem measures the sensitivity of the output to small changes in the input. We study the sensitivity of the eigenvalue (α, β) and eigenvector x to perturbations of the matrix (m + 1)-tuple A. Previous condition number analyses of PEPs [24] were done in complex vector spaces for nonhomogeneous polynomials and under the assumption that λ is a simple and finite eigenvalue. Our analysis covers the case of infinite eigenvalues. We use the homogeneous character of the equation P(A, α, β)x = 0 in both (α, β) and x to view the problem in a product of two projective spaces. This approach has been used by Stewart and Sun [23] and Dedieu [5] for the generalized eigenvalue problem βAx = αBx. Stewart and Sun define the condition number of a simple eigenvalue in terms of the chordal distance between the exact eigenvalue and the perturbed eigenvalue. This paper adopts Shub and Smale's [22] and Dedieu's [5] approaches.
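The homogeneous viewpoint is also convenient computationally. As a hedged illustration (the 2 × 2 matrices below are hypothetical, not an example from the paper), the following sketch linearizes a quadratic α²A₂ + αβA₁ + β²A₀ with singular leading coefficient A₂ as a matrix pencil and asks SciPy's QZ-based solver for homogeneous eigenvalue representatives (α, β), so the infinite eigenvalue appears as a pair with β = 0 rather than as a special case:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical quadratic P = alpha^2*A2 + alpha*beta*A1 + beta^2*A0
# with A2 singular, so one of the mn = 4 homogeneous eigenvalues
# is infinite, i.e. of the form (alpha, 0).
A0 = np.array([[2.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, 3.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
n = 2

# First companion linearization: Q(lam)x = 0 iff C z = lam * B z
# with z = [x; lam*x], so det(lam*B - C) = det Q(lam).
C = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])

# homogeneous_eigvals=True returns representatives (alpha, beta)
# instead of the ratios alpha/beta; a zero beta flags an infinite
# eigenvalue with no special treatment.
alpha, beta = eig(C, B, right=False, homogeneous_eigvals=True)
for a, b in zip(alpha, beta):
    print(a, b)
```

For this data det Q(λ) = (λ + 2)²(λ + 1)(λ·0 + 1)-style degree 3 < mn, so three finite eigenvalues and one infinite one are reported.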

In Section 2, we define the map from the matrix (m + 1)-tuple A to an eigenvalue and eigenvector and define the condition operator as the differential of this map. The condition number is the norm of this differential. All these computations are made in projective spaces; that is, we measure the angular variations between the exact eigenvalue and the perturbed eigenvalue, and between the exact eigenvector and the perturbed eigenvector. In Section 3 we characterize well-posed problems, that is, problems for which the condition operator is defined. We compute the condition numbers in Section 4. We also relate the condition number of the eigenvector to the distance to ill-posed problems in Section 7. Our results are illustrated by numerical examples in Section 8. In Section 9 we describe a bihomogeneous Newton method for the solution of the homogeneous PEP. The projective version of Newton's method has been studied extensively by Shub and Smale [20–22] for finding the zeros of homogeneous systems of n equations in n + 1 unknowns. Here, as our problem is bihomogeneous, we use the multihomogeneous Newton method introduced by Dedieu and Shub [6] for finding the zeros of multihomogeneous functions.

2. Background and definitions

Let A = (A₀, A₁, ..., A_m) ∈ M_n(C)^{m+1}, where M_n(C)^{m+1} denotes the set of (m + 1)-tuples of n × n complex matrices. Throughout this paper we assume that A is regular, that is, det P(A, α, β) ≢ 0. We denote by P_{n−1}(C) = P(Cⁿ) the set of vector lines in Cⁿ. More precisely, P(Cⁿ) is the quotient of Cⁿ \ {0} by the equivalence relation x ∼ y if and only if x = ρy for some ρ ∈ C \ {0}. The set P(Cⁿ) can equally be viewed as the quotient of the unit sphere in Cⁿ,

S^{2n−1} = { x ∈ Cⁿ : ‖x‖₂ = 1 },

by the equivalence relation x ∼ y if and only if x = e^{iθ}y for some θ ∈ R; that is, P(Cⁿ) is the set of great circles in S^{2n−1}.
This is a smooth complex algebraic variety of dimension n − 1 and also a complex Riemannian manifold. Let T_x P_{n−1} be the tangent space at x to P_{n−1}. This tangent space is usually identified with

x⊥ = { ẋ ∈ Cⁿ : ⟨ẋ, x⟩ = 0 },

where ⟨·,·⟩ denotes the usual Hermitian inner product on Cⁿ. The Hermitian structure on T_x P_{n−1} is defined to be

⟨v, w⟩_x = ⟨v, w⟩ / ⟨x, x⟩   (2)

for v, w ∈ x⊥. If we take another representative of the same complex line, say y = λx with λ ∈ C \ {0}, the tangent vectors corresponding to v and w become λv and λw,

so that the Hermitian structure ⟨·,·⟩_x is invariant under the identifications defining P_{n−1} (see [9, Sections 2.29 and 2.30]). The corresponding Riemannian distance in P_{n−1}(C) is given by

d_R(x, y) = arccos( |⟨x, y⟩| / (‖x‖ ‖y‖) )

for any x, y ∈ P_{n−1}(C). Here we identify x ∈ P_{n−1}(C) with any nonzero representative x ∈ Cⁿ. Notice that the Riemannian distance d_R(x, y) is just the angle between the vector lines defined by x and y. Another popular distance is the chordal distance between two vector lines. This distance is the sine of the angle between these two lines:

d_chordal(x, y) = sin d_R(x, y).

We want to study the first-order variations of the eigenvalue (α, β) and eigenvector x in terms of the variations of the (m + 1)-tuple A. The relation between these three quantities is given implicitly by the equation P(A, α, β)x = 0. For this reason we define the set of polynomial eigenvalue problems as the following algebraic variety:

V_P = { (A, x, α, β) ∈ M_n(C)^{m+1} × P_{n−1} × P_1 : P(A, α, β)x = 0 }.

V_P is a smooth algebraic variety: it is a smooth manifold because the derivative of the equation defining V_P is onto (see Eq. (3) below), and V_P is homogeneous in x and in (α, β). The tangent space at (A, x, α, β) to V_P is obtained by differentiating the equation defining V_P:

T_{(A,x,α,β)} V_P = { (Ȧ, ẋ, α̇, β̇) ∈ M_n(C)^{m+1} × Cⁿ × C² :
    P(Ȧ, α, β)x + P(A, α, β)ẋ + α̇ D_α P(A, α, β)x + β̇ D_β P(A, α, β)x = 0,
    ⟨ẋ, x⟩ = 0,  α̇ ᾱ + β̇ β̄ = 0 }.   (3)

To avoid heavy notation we will often write P for P(A, α, β), and D_α P and D_β P for the partial derivatives D_α P(A, α, β) and D_β P(A, α, β), respectively. We will use the first projection Π₁ : V_P → M_n(C)^{m+1} given by Π₁(A, x, α, β) = A and the second projection Π₂ : V_P → P_{n−1} × P_1 given by Π₂(A, x, α, β) = (x, α, β).

Definition 2.1. We say that (A, x, α, β) is well-posed when the derivative of the first projection Π₁ at (A, x, α, β) is an isomorphism. Otherwise, (A, x, α, β) is said to be ill-posed.
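Both distances are computed from arbitrary nonzero representatives of the lines, so no normalization is needed. A small sketch (the vectors are illustrative, not from the paper):

```python
import numpy as np

def riemannian_dist(x, y):
    """Angle between the complex lines spanned by x and y."""
    c = abs(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(min(c, 1.0))   # clamp guards against rounding > 1

def chordal_dist(x, y):
    """Sine of the angle between the two lines."""
    return np.sin(riemannian_dist(x, y))

x = np.array([1.0, 1j, 0.0])
y = np.array([1.0, 0.0, 1.0])
# Distances between lines are invariant under nonzero complex
# rescaling of the representatives.
print(riemannian_dist(x, y), riemannian_dist((2 - 3j) * x, 1j * y))
```

Note that np.vdot conjugates its first argument, which matches the Hermitian inner product ⟨x, y⟩ used above.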
Note that this derivative is itself a projection:

DΠ₁(A, x, α, β)(Ȧ, ẋ, α̇, β̇) = Ȧ.

When (A, x, α, β) is well-posed, by the inverse function theorem there exist a neighbourhood U(x, α, β) ⊂ P_{n−1} × P_1 of (x, α, β) and a neighbourhood U(A) ⊂ M_n(C)^{m+1} of A such that

Π₁ : V_P ∩ (U(A) × U(x, α, β)) → U(A)

is invertible (see [1, Section 3.5.1]). Its inverse gives rise to a smooth map

G = (G₁, G₂) = Π₂ ∘ Π₁⁻¹ : U(A) → U(x, α, β)

such that Graph(G) = V_P ∩ (U(A) × U(x, α, β)). To any (m + 1)-tuple A′ close to an (m + 1)-tuple A with eigenvalue (α, β) and eigenvector x, the map G associates an eigenvalue (α′, β′) and an eigenvector x′ close to (α, β) and x, respectively. The map G is inaccessible, but its derivative DG(A) can be computed: the graph of DG(A) is equal to the tangent space T_{(A,x,α,β)} V_P. We denote by

K(A, x, α, β) = DG(A) : M_n(C)^{m+1} → T_x P_{n−1} × T_{(α,β)} P_1   (4)

this derivative and by

K₁(A, x, α, β) = DG₁(A) : M_n(C)^{m+1} → T_x P_{n−1},
K₂(A, x, α, β) = DG₂(A) : M_n(C)^{m+1} → T_{(α,β)} P_1

its two components. K₁ is the condition operator of the eigenvector and K₂ is the condition operator of the eigenvalue. These condition operators describe the first-order variations of x and (α, β) in terms of the first-order variations of A.

Definition 2.2. When (A, x, α, β) is well-posed, we define the condition numbers of (A, x, α, β) to be the norms of these linear operators:

C₁(A, x) = max_{‖Ȧ‖ ≤ 1} ‖K₁(A, x, α, β)(Ȧ)‖_x,
C₂(A, α, β) = max_{‖Ȧ‖ ≤ 1} ‖K₂(A, x, α, β)(Ȧ)‖_{(α,β)}.

Here and throughout, ‖·‖ is the norm on M_n(C)^{m+1} associated with the Frobenius scalar product

⟨(Ȧ₀, ..., Ȧ_m), (Ḃ₀, ..., Ḃ_m)⟩ = Σ_{k=0}^{m} trace(Ḃ_k* Ȧ_k).

3. Well-posed problems

In this section we characterize well-posed problems. Let (A, x₀, α₀, β₀) ∈ V_P and define

P₀ = P(A, α₀, β₀),  D_α P₀ = D_α P(A, α, β)|_{(α₀,β₀)},  D_β P₀ = D_β P(A, α, β)|_{(α₀,β₀)}.

We also write

Ker(P₀) = { x ∈ Cⁿ : P₀ x = 0 },  Im(P₀) = { u ∈ Cⁿ : u = P₀ x for some x ∈ Cⁿ }.
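As an aside, the condition operators of Definition 2.2, being differentials, can be probed numerically by finite differences: perturb the tuple in a fixed direction and measure the angular (Riemannian) variation of the eigenvalue. A sketch under illustrative data (the 2 × 2 matrix and the perturbation direction are hypothetical, and the standard eigenproblem stands in for the pencil with identity second coefficient):

```python
import numpy as np

def angle(u, v):
    """Riemannian distance between the lines spanned by u and v in C^2."""
    c = abs(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(min(c, 1.0))

# Simple eigenvalue lambda = 3 of A0, viewed as the homogeneous
# eigenvalue with representative (3, 1).
A0 = np.array([[1.0, 2.0], [0.0, 3.0]])
E = np.array([[0.0, 1.0], [1.0, 0.0]])   # perturbation direction

def homog_eig(t):
    lams = np.linalg.eigvals(A0 + t * E)
    lam = lams[np.argmax(lams.real)]      # follow the eigenvalue near 3
    return np.array([lam, 1.0])

ref = homog_eig(0.0)
for t in (1e-4, 1e-5):
    # To first order the angle grows linearly in t; the ratio estimates
    # the norm of the directional derivative of the eigenvalue map.
    print(t, angle(ref, homog_eig(t)) / t)
```

The ratio stabilizes as t decreases; its limit is the norm of K₂ applied to this particular perturbation direction, which is bounded by C₂ times the norm of the direction.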

The following two lemmas establish some properties related to simple eigenvalues. These results are needed to prove Theorem 3.3, which characterizes well-posed problems.

Lemma 3.1. If (α₀, β₀) is a simple eigenvalue of P(A, α, β) and x₀ is a corresponding right eigenvector, then

v₀ = (β̄₀ D_α P₀ − ᾱ₀ D_β P₀) x₀ ∉ Im P₀.

Proof. We consider the local Smith form of P(A, α, β) (cf. [11, Theorem S1.1] and [15]). For any (α, β) in a neighbourhood of (α₀, β₀) we have

P(A, α, β) = E(α, β) D(α, β) F(α, β),   (5)

where

D(α, β) = diag( (αβ₀ − βα₀)^{μ₁}, ..., (αβ₀ − βα₀)^{μₙ} ),

μ₁ ≥ ··· ≥ μₙ ≥ 0 are the partial multiplicities of (α₀, β₀), and the matrices E(α, β), F(α, β) are nonsingular. If (α₀, β₀) is a simple eigenvalue, then μ₁ = 1, μ₂ = ··· = μₙ = 0 and dim Im P₀ = n − 1. Let E₀ = E(α₀, β₀), F₀ = F(α₀, β₀) and

D₁ + D₂ = [e₁, 0, ..., 0] + [0, e₂, ..., eₙ] = Iₙ,

where e_k is the kth column of the identity matrix. Using the local Smith form of P(A, α, β) in (5), we have P₀ = E₀ D₂ F₀. As P₀ x₀ = 0, it follows that D₂ F₀ x₀ = 0. We also have

D_α P₀ = D_α E₀ D₂ F₀ + β₀ E₀ D₁ F₀ + E₀ D₂ D_α F₀,
D_β P₀ = D_β E₀ D₂ F₀ − α₀ E₀ D₁ F₀ + E₀ D₂ D_β F₀,

and therefore

v₀ = β̄₀ D_α P₀ x₀ − ᾱ₀ D_β P₀ x₀ = (|α₀|² + |β₀|²) E₀ D₁ F₀ x₀ + E₀ D₂ (β̄₀ D_α F₀ − ᾱ₀ D_β F₀) x₀.   (6)

If v₀ ∈ Im P₀, then v₀ = P₀ u for some u ∈ Cⁿ. Multiplying (6) on the left by E₀⁻¹ yields

(|α₀|² + |β₀|²) D₁ F₀ x₀ + D₂ (β̄₀ D_α F₀ − ᾱ₀ D_β F₀) x₀ = D₂ F₀ u.

This implies that D₁ F₀ x₀ ∈ Im D₂, so that D₁ F₀ x₀ = 0. As D₂ F₀ x₀ = 0 and D₁ + D₂ = Iₙ, we obtain F₀ x₀ = 0. Since F₀ is nonsingular, we conclude that x₀ = 0, a contradiction. Thus v₀ ∉ Im P₀. □

In the following, Π_{v₀⊥} denotes the orthogonal projection in Cⁿ onto v₀⊥, and P₀|_{x₀⊥} denotes the restriction of P₀ to x₀⊥.

Lemma 3.2. If (α₀, β₀) is a simple eigenvalue of P(A, α, β) with corresponding right eigenvector x₀ and v₀ = (β̄₀ D_α P₀ − ᾱ₀ D_β P₀) x₀, then Π_{v₀⊥} P₀|_{x₀⊥} is nonsingular.

Proof. Suppose that Π_{v₀⊥} P₀|_{x₀⊥} is singular. Then there exists a vector u ∈ x₀⊥, u ≠ 0, such that Π_{v₀⊥} P₀ u = 0, that is, P₀ u is collinear with v₀. If P₀ u ≠ 0, then v₀ ∈ Im P₀, which contradicts Lemma 3.1. If P₀ u = 0, then u is a nonzero vector of Ker P₀ orthogonal to x₀, which contradicts dim Ker P₀ = 1. Therefore Π_{v₀⊥} P₀|_{x₀⊥} is nonsingular. □

From (3), (Ȧ, ẋ, α̇, β̇) ∈ T_{(A,x₀,α₀,β₀)} V_P if and only if (Ȧ, ẋ, α̇, β̇) ∈ M_n(C)^{m+1} × Cⁿ × C² and

J_P(x₀, α₀, β₀) [ẋ; α̇; β̇] := [ P₀  D_αP₀x₀  D_βP₀x₀ ;  x₀*  0  0 ;  0  ᾱ₀  β̄₀ ] [ẋ; α̇; β̇] = [ −P(Ȧ, α₀, β₀)x₀ ; 0 ; 0 ].   (7)

Theorem 3.3. Let (α₀, β₀) be an eigenvalue of P(A, α, β) with corresponding left and right eigenvectors y₀ and x₀, respectively, and let v₀ = (β̄₀ D_α P₀ − ᾱ₀ D_β P₀) x₀. The following conditions are equivalent:

(i) DΠ₁(A) : T_{(A,x₀,α₀,β₀)} V_P → M_n(C)^{m+1} is an isomorphism, that is, (A, x₀, α₀, β₀) is well-posed.
(ii) The matrix J_P(x₀, α₀, β₀) is nonsingular.
(iii) rank P₀ = n − 1 and y₀* v₀ ≠ 0.
(iv) (α₀, β₀) is a simple eigenvalue.

Proof. (i) ⇒ (ii): If

J_P(x₀, α₀, β₀) [ẋ; α̇; β̇] = 0,

then (0, ẋ, α̇, β̇) ∈ T_{(A,x₀,α₀,β₀)} V_P and, since DΠ₁ is an isomorphism, ẋ = 0 and (α̇, β̇) = (0, 0). Hence J_P(x₀, α₀, β₀) is nonsingular.

(ii) ⇒ (i): If DΠ₁(Ȧ, ẋ, α̇, β̇) = 0, then Ȧ = 0 and P(Ȧ, α₀, β₀)x₀ = 0. Since J_P(x₀, α₀, β₀) is nonsingular, ẋ = 0 and (α̇, β̇) = (0, 0); thus DΠ₁ is an isomorphism.

(iii) ⇒ (ii): If

J_P(x₀, α₀, β₀) [ẋ; α̇; β̇] = 0,

then

P₀ ẋ + α̇ D_αP₀ x₀ + β̇ D_βP₀ x₀ = −P(Ȧ, α₀, β₀)x₀ = 0,   (8)
⟨ẋ, x₀⟩ = 0,   (9)
α̇ ᾱ₀ + β̇ β̄₀ = 0.   (10)

Eq. (8) multiplied on the left by y₀*, together with Eq. (10), gives

[ y₀* D_αP₀ x₀   y₀* D_βP₀ x₀ ;  ᾱ₀   β̄₀ ] [α̇; β̇] = 0.   (11)

The determinant of this 2 × 2 system is β̄₀ y₀* D_αP₀ x₀ − ᾱ₀ y₀* D_βP₀ x₀ = y₀* v₀, so if y₀* v₀ ≠ 0, then (11) implies α̇ = β̇ = 0. In this case (8) and (9) give P₀ ẋ = 0 and ⟨ẋ, x₀⟩ = 0, that is, x₀ and ẋ ∈ Ker P₀. As x₀ ≠ 0 and ⟨ẋ, x₀⟩ = 0, if dim Ker P₀ = 1, then ẋ = 0.

(iv) ⇒ (iii): If (α₀, β₀) is a simple eigenvalue, then from Lemma 3.1 v₀ ∉ Im P₀. Let y₀ be a left eigenvector corresponding to the eigenvalue (α₀, β₀). As y₀* P₀ = 0, we have y₀ ⊥ Im P₀, but y₀ is not orthogonal to v₀. Hence y₀* v₀ ≠ 0.

(iii) ⇒ (iv): This is an easy consequence of the Smith form.

(ii) ⇒ (iii): Suppose that J_P(x₀, α₀, β₀) is nonsingular. Without loss of generality we can assume that P₀ is diagonal (if not, just take the SVD of P₀ = UΣV*). If (α₀, β₀) is an eigenvalue, P₀ is singular and has at least one zero on its diagonal, so rank P₀ ≤ n − 1, and J_P(x₀, α₀, β₀) is a bordered diagonal matrix. If rank P₀ < n − 1, then P₀ has at least two zero diagonal entries; the corresponding two columns of J_P(x₀, α₀, β₀) have nonzero entries only in the row containing x₀*, so they are linearly dependent and J_P(x₀, α₀, β₀) is singular. Hence rank P₀ = n − 1. We now prove that J_P(x₀, α₀, β₀) nonsingular implies y₀* v₀ ≠ 0. We have

det(J_P(x₀, α₀, β₀)) = ᾱ₀ det[ P₀  D_βP₀x₀ ; x₀*  0 ] − β̄₀ det[ P₀  D_αP₀x₀ ; x₀*  0 ]
 = β̄₀ x₀* P₀^A D_αP₀ x₀ − ᾱ₀ x₀* P₀^A D_βP₀ x₀ = x₀* P₀^A v₀ ≠ 0,   (12)

where P₀^A is the adjugate (classical adjoint) of P₀. The adjugate has the property that P₀^A P₀ = det(P₀) I. Let y₀* = x₀* P₀^A. Then y₀* P₀ = det(P₀) x₀* = 0, so that y₀ is a left eigenvector of P₀ corresponding to (α₀, β₀). Then from (12), y₀* v₀ ≠ 0. □

Let us highlight some differences between the linear case (m = 1) and the polynomial case (m > 1). In the first case, that is, for the generalized eigenvalue problem (αA₁ + βA₀)x = 0, we have the following characterization of (A, x₀, α₀, β₀) being well-posed [5]:

(i) The matrix J_P(x₀, α₀, β₀) is nonsingular.
(ii) rank P₀ = n − 1.
(iii) (α₀, β₀) is a simple eigenvalue.

The main difference between the linear and the polynomial case is the possibility for a matrix polynomial to satisfy rank P₀ = n − 1 even if (α₀, β₀) is not a simple eigenvalue. This is clearly shown by the following 2 × 2 example: P(A, α, β) = diag((α − β)², α²) at (α₀, β₀) = (1, 1). In this case rank P₀ = 1 and (α₀, β₀) is an eigenvalue with multiplicity 2.

When the matrix pair (A₀, A₁) is regular, there exist two unitary matrices U and V such that U A₀ V and U A₁ V are upper triangular. This is called the generalized Schur decomposition. Another major difference between the linear and the polynomial case is that the generalized Schur decomposition for pairs (A₀, A₁) does not extend to an (m + 1)-tuple of matrices with m > 1. To avoid this difficulty we use here the local Smith form of P(A, α, β).

4. Computation of the condition numbers

By definition, the condition operator of the eigenvector x and the condition operator of the eigenvalue (α, β) are

K₁(A, x, α, β)(Ȧ) = ẋ,  K₂(A, x, α, β)(Ȧ) = (α̇, β̇).

The corresponding condition numbers are given by

C₁(A, x) = max_{‖Ȧ‖ ≤ 1} ‖ẋ‖_x = max_{‖Ȧ‖ ≤ 1} ‖ẋ‖ / ‖x‖,

and

C₂(A, α, β) = max_{‖Ȧ‖ ≤ 1} ‖(α̇, β̇)‖_{(α,β)} = max_{‖Ȧ‖ ≤ 1} ( |α̇|² + |β̇|² )^{1/2} / ( |α|² + |β|² )^{1/2}.

Theorem 4.1. Assume that (α, β) is a simple eigenvalue of P(A, α, β) with corresponding right eigenvector x, and let v = β̄ D_αP(A, α, β)x − ᾱ D_βP(A, α, β)x. Then

K₁(A, x, α, β)(Ȧ) = −(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹ Π_{v⊥} P(Ȧ, α, β)x,

and

C₁(A, x) = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹‖,

where P(A, α, β)|_{x⊥} is the restriction of P(A, α, β) to x⊥ and Π_{v⊥} is the orthogonal projection onto v⊥.

Proof. Eq. (10) is equivalent to α̇ = λβ̄ and β̇ = −λᾱ for some λ ∈ C. As ⟨ẋ, x⟩ = 0, we have ẋ ∈ x⊥, and from (8) we obtain

P(A, α, β)|_{x⊥} ẋ + λv = −P(Ȧ, α, β)x.   (13)

Projecting Eq. (13) onto v⊥ yields

Π_{v⊥} P(A, α, β)|_{x⊥} ẋ = −Π_{v⊥} P(Ȧ, α, β)x.

By Lemma 3.2 the operator Π_{v⊥} P(A, α, β)|_{x⊥} is nonsingular, and the expression for K₁ in the theorem follows. For the condition number,

ẋ = −(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹ Π_{v⊥} P(Ȧ, α, β)x.

Suppose that Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} = 1. In this case, when Ȧ describes the unit ball in M_n(C)^{m+1}, the vector P(Ȧ, α, β)x describes the ball of radius ‖x‖ in Cⁿ, whose projection onto v⊥ is the ball of radius ‖x‖ in v⊥. Thus

C₁(A, x) = max_{‖Ȧ‖ ≤ 1} ‖ẋ‖ / ‖x‖ = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹‖. □

Theorem 4.2. Assume that (α, β) is a simple eigenvalue of P(A, α, β) with corresponding right and left eigenvectors x and y, respectively, and let v = β̄ D_αP(A, α, β)x − ᾱ D_βP(A, α, β)x. Then

K₂(A, x, α, β)(Ȧ) = −( y* P(Ȧ, α, β)x / (y* v) ) (β̄, −ᾱ),

and

C₂(A, α, β) = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖x‖ ‖y‖ / |y* v|.

Proof. Eq. (10) is equivalent to α̇ = λβ̄ and β̇ = −λᾱ for some λ ∈ C. Multiplying (8) on the left by y* gives

λ y* v = −y* P(Ȧ, α, β)x.

From Theorem 3.3, if (α, β) is a simple eigenvalue, then y* v ≠ 0, so that

(α̇, β̇) = λ(β̄, −ᾱ) = −( y* P(Ȧ, α, β)x / (y* v) ) (β̄, −ᾱ).

For the condition number,

( |α̇|² + |β̇|² ) / ( |α|² + |β|² ) = |λ|² = |y* P(Ȧ, α, β)x|² / |y* v|².

As

‖P(Ȧ, α, β)‖ ≤ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖Ȧ‖,

we obtain

C₂(A, α, β) ≤ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖x‖ ‖y‖ / |y* v|.

Suppose that x = e₁, where e₁ is the first unit vector. We take

S = y x* / ‖y‖,  Ȧ_k = ( ᾱ^k β̄^{m−k} / ( Σ_{j=0}^{m} |α|^{2j} |β|^{2(m−j)} )^{1/2} ) S,  0 ≤ k ≤ m.

It is easy to check that ‖Ȧ‖ = 1 and

C₂(A, α, β) ≥ |y* P(Ȧ, α, β)x| / |y* v| = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖x‖ ‖y‖ / |y* v|. □

5. Invariance properties

Both condition operators and condition numbers are invariant under scaling of the eigenvector and eigenvalue, because they are constructed projectively:

K_i(A, ρx, μ(α, β)) = K_i(A, x, (α, β)),  i = 1, 2,
C_i(A, ρx, μ(α, β)) = C_i(A, x, (α, β)),  i = 1, 2,

for any ρ, μ ∈ C \ {0}. However, under scaling of A we have

C_i(ρA, x, (α, β)) = |ρ|⁻¹ C_i(A, x, (α, β))

for any ρ ∈ C \ {0} and i = 1, 2. In the following we prove another invariance property, with respect to unitary transformations in Cⁿ.

Lemma 5.1. For any unitary matrices U and V, if P(A, α, β)x = 0, then we also have P(V*AU, α, β)U*x = 0 and

K₁(A, x, α, β)(Ȧ) = U K₁(V*AU, U*x, α, β)(V*ȦU),
K₂(A, x, α, β)(Ȧ) = K₂(V*AU, U*x, α, β)(V*ȦU),
C₁(A, x) = C₁(V*AU, U*x),
C₂(A, α, β) = C₂(V*AU, α, β).

Proof. First, if (A, x, α, β) ∈ V_P, then (V*AU, U*x, α, β) ∈ V_P, and if (Ȧ, ẋ, α̇, β̇) ∈ T_{(A,x,α,β)} V_P, then (V*ȦU, U*ẋ, α̇, β̇) ∈ T_{(V*AU,U*x,α,β)} V_P. Moreover,

DΠ₁(V*AU, U*x, α, β)(V*ȦU, U*ẋ, α̇, β̇) = V*ȦU = V* DΠ₁(A, x, α, β)(Ȧ, ẋ, α̇, β̇) U

for any (Ȧ, ẋ, α̇, β̇) ∈ T_{(A,x,α,β)} V_P. Hence if (A, x, α, β) is well-posed, then (V*AU, U*x, α, β) is well-posed. From the definition of the condition operator, if (Ȧ, ẋ, α̇, β̇) ∈ T_{(A,x,α,β)} V_P, then K(A, x, α, β)Ȧ = (ẋ, α̇, β̇). But if (Ȧ, ẋ, α̇, β̇) ∈ T_{(A,x,α,β)} V_P, then (V*ȦU, U*ẋ, α̇, β̇) ∈ T_{(V*AU,U*x,α,β)} V_P. Thus

K(V*AU, U*x, α, β)(V*ȦU) = (U*ẋ, α̇, β̇)

and

K₁(V*AU, U*x, α, β)(V*ȦU) = U* K₁(A, x, α, β)(Ȧ),
K₂(V*AU, U*x, α, β)(V*ȦU) = K₂(A, x, α, β)(Ȧ).

Since both the Hermitian norm in Cⁿ and the Frobenius norm in M_n(C) are unitarily invariant, the last two equalities in the lemma follow. □

6. Special cases

In the case of the generalized eigenvalue problem βAx = αBx, Π_{v⊥} = Π_{(Ax)⊥} and for the condition number of the eigenvector we have

C₁(x) = ( |α|² + |β|² )^{1/2} ‖(Π_{(Ax)⊥} (βA − αB)|_{x⊥})⁻¹‖,

which is the expression obtained in [5]. For the condition number of the eigenvalue we obtain

C₂(α, β) = ( |α|² + |β|² )^{1/2} ‖x‖ ‖y‖ / |ᾱ y*Ax + β̄ y*Bx|.
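This last expression is straightforward to evaluate from computed eigenvectors. A sketch on a hypothetical 2 × 2 pencil (the data are illustrative, not from the paper); since the expression is invariant under rescaling of x, y and of the representatives (α, β), any output of a generalized eigensolver can be used directly:

```python
import numpy as np
from scipy.linalg import eig

# Illustrative pencil: beta*A*x = alpha*B*x, i.e. A x = (alpha/beta) B x.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.eye(2)

# Homogeneous representatives (alpha, beta) plus left/right eigenvectors.
w, vl, vr = eig(A, B, left=True, right=True, homogeneous_eigvals=True)
alpha, beta = w
i = np.argmin(abs(alpha / beta - 2.0))   # study the simple eigenvalue 2
a, b, x, y = alpha[i], beta[i], vr[:, i], vl[:, i]

# C2 = (|a|^2+|b|^2)^{1/2} ||x|| ||y|| / |conj(a) y*Ax + conj(b) y*Bx|
num = np.hypot(abs(a), abs(b)) * np.linalg.norm(x) * np.linalg.norm(y)
den = abs(np.conj(a) * (y.conj() @ A @ x) + np.conj(b) * (y.conj() @ B @ x))
print(num / den)
```

For this pencil the value agrees with the Stewart–Sun expression of Lemma 6.1 below computed by hand, about 0.632.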

We now use a result proved in [23, Chapter 6, Corollary 1.1]:

Lemma 6.1. Let (α, β) be a simple eigenvalue of the regular pair (A, B) with right and left eigenvectors x and y. Then (α, β) = ρ(y*Ax, y*Bx) for some nonzero constant ρ ∈ C.

We take the representatives α = y*Ax, β = y*Bx. Then

C₂(α, β) = ‖x‖ ‖y‖ / ( |α|² + |β|² )^{1/2},

which is the condition number derived by Stewart and Sun [23, p. 294].

Condition numbers of eigenvalues are used in the state-feedback pole assignment problem in control system design [14,16]. For instance, consider the second-order system

M q̈(t) + C q̇(t) + K q(t) = B u(t),   (14)

where M, C and K are n × n matrices and q(t) ∈ Cⁿ, B ∈ C^{n×m} and u(t) ∈ C^m with m ≤ n. The control problem is to design a state-feedback controller u(t) of the form

u(t) = F_C^T q̇(t) + F_K^T q(t) + r(t),

where F_C, F_K ∈ C^{n×m} and r(t) ∈ C^m, such that the closed-loop system

M q̈(t) + (C + B F_C^T) q̇(t) + (K + B F_K^T) q(t) = B r(t)   (15)

has the desired properties. The solution is, in general, underdetermined, with many degrees of freedom. The behaviour of the closed-loop system is governed by the eigenstructure of the corresponding quadratic eigenvalue problem

λ² A₂ x + λ A₁ x + A₀ x = 0,   (16)

where A₂ = M, A₁ = C + B F_C^T and A₀ = K + B F_K^T. A desirable property is that the eigenvalues should be insensitive to perturbations in the coefficient matrices. This criterion is used to restrict the degrees of freedom in the assignment problem and to produce a well-conditioned, or robust, solution to the problem. To measure the robustness of the system we can take as a global measure

ν₂ = ( Σ_{k=1}^{2n} ω_k² κ(λ_k)² )^{1/2},

where the ω_k are positive weights and κ(λ_k) is the condition number of the simple and finite eigenvalue λ_k.
The control design problem is then to select the feedback matrices F_C and F_K so as to assign a given set of 2n nondefective eigenvalues to the second-order closed-loop system and to minimize its robustness measure ν₂. Several expressions for κ(λ) have already been obtained. In [16], the condition number of λ is defined to be

κ(λ) = lim_{ε→0} sup { |Δλ| / ε : Q(A + Ȧ, λ + Δλ)(x + ẋ) = 0, ‖A‖₂⁻¹ ‖Ȧ‖₂ ≤ ε },

where A and Ȧ are 3-tuples and Q(A, λ) is the nonhomogeneous polynomial of degree 2 in λ. This is an absolute condition number, and an explicit expression is given by

κ(λ) = ( |λ|⁴ + |λ|² + 1 )^{1/2} ‖x‖₂ ‖y‖₂ ‖A‖₂ / |y* (2λA₂ + A₁) x|.   (17)

In [24] the condition number of λ is defined to be

κ(λ) = lim_{ε→0} sup { |Δλ| / (ε |λ|) : Q(A + Ȧ, λ + Δλ)(x + ẋ) = 0, ‖Ȧᵢ‖ ≤ ε αᵢ, i = 0: 2 },

where the α_k are nonnegative parameters that allow freedom in how perturbations are measured, for example in an absolute sense (αᵢ ≡ 1) or a relative sense (αᵢ = ‖Aᵢ‖), and ‖·‖ denotes any subordinate matrix norm. This is a relative condition number, and an explicit expression is given by

κ(λ) = ( |λ|² α₂ + |λ| α₁ + α₀ ) ‖y‖ ‖x‖ / ( |λ| |y* (2λA₂ + A₁) x| ).   (18)

Both condition numbers (17) and (18) assume that A₂ is nonsingular, which eliminates the case of infinite eigenvalues. This might be a problem in control design, as singular matrices A₂ are frequent and a singular A₂ yields infinite eigenvalues. The advantage of working with the homogeneous form α²A₂x + αβA₁x + β²A₀x = 0 is that infinite eigenvalues are not a special case. For this problem our condition number is

C₂(α, β) = ( |α|⁴ + |α|²|β|² + |β|⁴ )^{1/2} ‖x‖ ‖y‖ / |y* (2β̄αA₂ + (|β|² − |α|²)A₁ − 2βᾱA₀) x|.

If (α, β) is an infinite eigenvalue, i.e., β = 0, then C₂(α, 0) = ‖x‖ ‖y‖ / |y* A₁ x|.

An expression for the condition number of the eigenvector, κ(x), has been obtained in [26] in the nonhomogeneous case. As an eigenvector corresponding to a simple λ is unique only up to a scalar multiple, a linear normalization based on a constant vector g is used. The normwise condition number for the eigenvector x corresponding to the simple eigenvalue λ can be defined by

κ_λ(x) = lim_{ε→0} sup { ‖ẋ‖ / (ε ‖x‖) : Q(A + Ȧ, λ + Δλ)(x + ẋ) = 0,
    g*(2λA₂ + A₁)x = g*(2λA₂ + A₁)(x + ẋ) = 1,  ‖Ȧᵢ‖ ≤ ε αᵢ, i = 0: 2 },

and an explicit expression is given by (see [26, Theorem 2.7])

κ_λ(x) = ‖V (W*(λ²A₂ + λA₁ + A₀)V)⁻¹ W*‖ ( |λ|² α₂ + |λ| α₁ + α₀ ),   (19)

where the full-rank matrices V, W ∈ C^{n×(n−1)} are chosen so that g*(2λA₂ + A₁)V = 0 and W*(2λA₂ + A₁)x = 0, the α_k are nonnegative parameters that allow freedom in how perturbations are measured, and ‖·‖ denotes a subordinate matrix norm. This expression extends in an obvious way to nonhomogeneous polynomials of degree higher than 2.

The matrix Π_{v⊥} P(A, α, β)|_{x⊥} in the expression for C₁(A, x) in Theorem 4.1 can be explicitly constructed as follows. Let Q_x* x = R_x = ±‖x‖e₁ and Q_v* v = R_v = ±‖v‖e₁ be QR factorizations, and let Ũ = Q_x(:, 2: n) and Ṽ = Q_v(:, 2: n). Then for Π_{v⊥} P(A, α, β)|_{x⊥} we can take Ṽ* P(A, α, β) Ũ. Hence, for A = (A₀, A₁, A₂) we have

C₁(A, x) = ( |α|⁴ + |α|²|β|² + |β|⁴ )^{1/2} ‖(Ṽ*(α²A₂ + αβA₁ + β²A₀)Ũ)⁻¹‖.

7. Distance to the nearest ill-posed problem

A problem is ill-posed when it is not well-posed. For ill-posed problems we take the condition number to be infinite. A problem is ill-conditioned when its condition number is large. The condition number of a problem is necessarily related to the distance to the ill-posed instances, because the inverse of the condition number is zero on ill-posed instances. The first example of such a result is due to Eckart and Young [8]: the Frobenius-norm distance from a nonsingular matrix to the set of singular matrices is equal to the reciprocal of the norm of its inverse. The case of homogeneous polynomial systems is studied by Shub and Smale in [20], and the eigenvalue problem by the same authors in [21]. For the generalized eigenvalue problem see [5]. See also [3] for polynomial equations in a single variable. All these results are put in a more general setting by Dedieu in [4] and by Demmel in [7] using differential inequalities.

Let Π₂ : V_P → P_{n−1} × P_1 be the second projection and Σ be the set of ill-posed problems.
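The Eckart–Young result quoted above is easy to check numerically: the nearest singular matrix in the Frobenius norm is obtained by zeroing the smallest singular value, and the distance is σ_min = 1/‖M⁻¹‖₂. A sketch on a random matrix (illustrative only):

```python
import numpy as np

# Eckart-Young: d_F(M, {singular matrices}) = sigma_min(M) = 1/||M^{-1}||_2.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(M)
s_near = s.copy()
s_near[-1] = 0.0                      # drop the smallest singular value
S = U @ np.diag(s_near) @ Vh          # nearest singular matrix to M

dist = np.linalg.norm(M - S, "fro")
print(dist, s[-1], 1.0 / np.linalg.norm(np.linalg.inv(M), 2))
```

All three printed quantities coincide, which is the mechanism behind Theorem 7.1 below once the problem is reduced, via unitary transformations, to a singular-matrix distance.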
We denote by Σ_{(x,α,β)} = Σ ∩ Π₂⁻¹(x, α, β) the set of ill-posed problems at (x, α, β), and by Σ′_{(x,α,β)} ⊂ Σ_{(x,α,β)} the set of ill-posed problems such that if (B, x, α, β) ∈ Σ′_{(x,α,β)}, then v_B = β̄ D_αP(B, α, β)x − ᾱ D_βP(B, α, β)x ≠ 0. We equip Σ_{(x,α,β)} with the distance d_F deduced from the Frobenius norm on M_n(C)^{m+1}.

Theorem 7.1. Let (A, x, α, β) be a well-posed problem. Then

d_F((A, x, α, β), Σ′_{(x,α,β)}) = 1 / C₁(A, x).

Proof. Let U, V be two unitary matrices such that U = [x/‖x‖, Ũ] and V = [v/‖v‖, Ṽ]. Then

V* P(A, α, β) U = [ 0  p ; 0  P(Ã, α, β) ],   (20)

where p = ‖v‖⁻¹ v* P(A, α, β) Ũ and Ã = (Ṽ*A₀Ũ, ..., Ṽ*A_mŨ), so that Ṽ* P(A, α, β) Ũ = P(Ã, α, β). The matrix P(Ã, α, β) is the matrix representation of the operator Π_{v⊥} P(A, α, β)|_{x⊥} in orthonormal bases of x⊥ and v⊥, respectively. As ‖(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹‖ is invariant under unitary transformations, we take x = e₁ in what follows.

Let B = (B₀, B₁, ..., B_m) ∈ Σ′_{(x,α,β)}, so that P(B, α, β)x = 0, v_B ≠ 0 and (α, β) is a multiple eigenvalue. If (α, β) is a semi-simple eigenvalue, then rank P(B, α, β) < n − 1 and therefore Π_{v_B⊥} P(B, α, β)|_{x⊥} is singular. If (α, β) is a defective eigenvalue, then there is a generalized eigenvector x₁ such that v_B + P(B, α, β)x₁ = 0. As v_B ≠ 0, x₁ is not collinear with x. There exists u ≠ 0 such that x₁ = ρx + u and ⟨u, x⟩ = 0. Using v_B = −P(B, α, β)x₁ = −P(B, α, β)u, we have Π_{v_B⊥} P(B, α, β)|_{x⊥} u = 0, so that Π_{v_B⊥} P(B, α, β)|_{x⊥} is singular.

Let V_B = [v_B/‖v_B‖, Ṽ_B] be unitary. Then V_B* P(B, α, β) U is of the form (20), and P(B̃, α, β) = Ṽ_B* P(B, α, β) Ũ is a matrix representation of the operator Π_{v_B⊥} P(B, α, β)|_{x⊥}. We denote by S_{n−1} the set of (n−1) × (n−1) singular matrices. We have

‖P(Ã, α, β) − P(B̃, α, β)‖_F ≤ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖A − B‖,

so that

d_F(P(Ã, α, β), S_{n−1}) ≤ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} d_F(A, Σ_{x,α,β}).   (21)

Let S ∈ S_{n−1} be such that d_F(P(Ã, α, β), S) = d_F(P(Ã, α, β), S_{n−1}). We define B̃ by

B̃_k = Ã_k − ᾱ^k β̄^{m−k} L,  L = ( P(Ã, α, β) − S ) / ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} ).

It is easy to check that P(B̃, α, β) = S. Now we consider B ∈ M_n(C)^{m+1} defined by

(B_k)_{ij} = (B̃_k)_{i−1,j−1},  2 ≤ i, j ≤ n,
(B_k)_{1i} = (A_k)_{1i},  (B_k)_{i1} = (A_k)_{i1},  1 ≤ i ≤ n.

As P(B, α, β)e₁ = 0 and P(B̃, α, β) is singular, B ∈ Σ_{x,α,β}. We also have, since ‖Ã_k − B̃_k‖_F = |α|^k |β|^{m−k} ‖L‖_F and A − B vanishes outside the trailing (n−1) × (n−1) blocks,

d_F(P(Ã, α, β), S)² = d_F(P(Ã, α, β), P(B̃, α, β))²
 = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )² ‖L‖_F²
 = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} ) Σ_{k=0}^{m} ‖Ã_k − B̃_k‖_F²
 = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} ) d_F(A, B)²
 ≥ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} ) d_F(A, Σ_{x,α,β})²,

so that

d_F(P(Ã, α, β), S_{n−1}) ≥ ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} d_F(A, Σ_{x,α,β}),

and, together with (21), equality holds. We now use a result due to Eckart and Young [8]: for a nonsingular matrix M,

d_F(M, S_n) = ‖M⁻¹‖⁻¹.

We refer to Golub and Van Loan [12] or Stewart and Sun [23] for a proof. Therefore

d_F(P(Ã, α, β), S_{n−1}) = ‖P(Ã, α, β)⁻¹‖⁻¹ = ‖(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹‖⁻¹.

Hence

C₁(A, x) = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} ‖(Π_{v⊥} P(A, α, β)|_{x⊥})⁻¹‖
 = ( Σ_{k=0}^{m} |α|^{2k} |β|^{2(m−k)} )^{1/2} / d_F(P(Ã, α, β), S_{n−1}) = d_F(A, Σ_{x,α,β})⁻¹. □

8. Numerical examples

Our tests have been performed with MATLAB, for which the unit roundoff is $u=2^{-53}\approx1.1\times10^{-16}$.

As a first example we consider the problem with the $3\times3$ homogeneous quadratic matrix polynomial

$$P(A(\epsilon),\alpha,\beta)=\alpha^2A_2+\alpha\beta A_1(\epsilon)+\beta^2A_0,\qquad A(\epsilon)=(A_0,A_1(\epsilon),A_2)\in M_3(\mathbb C)^3,\tag{22}$$

where $\epsilon$ is a positive constant. This problem is regular because

$$\det P=\alpha^5\beta-(7+\epsilon)\alpha^4\beta^2+(17+6\epsilon)\alpha^3\beta^3-(17+11\epsilon)\alpha^2\beta^4+6(1+\epsilon)\alpha\beta^5\not\equiv0.$$

There are six zero-lines $(\alpha_k,\beta_k)$, $1\le k\le6$, whose representatives in $\mathbb C^2$, chosen such that $|\alpha|^2+|\beta|^2=1$, are given in Table 1, where $\nu=((1+\epsilon)^2+1)^{-1/2}$. The corresponding right and left eigenvectors $x_k$ and $y_k$ are normalized so that $\|x_k\|=1$.

Table 1. Eigenvalues for $(A(\epsilon),x,\alpha,\beta)$ defined in (22)

    k                    1        2              3              4             5              6
    (alpha_k, beta_k)  (0,1)   (1,1)/sqrt(2)  nu(1+eps,1)   (2,1)/sqrt(5)  (3,1)/sqrt(10)  (1,0)
    alpha_k/beta_k       0        1            1+eps            2             3            inf

We see that while $(\alpha_2,\beta_2)\neq(\alpha_4,\beta_4)$ and $(\alpha_5,\beta_5)\neq(\alpha_6,\beta_6)$, we have $x_2=x_4$ and $y_5=y_6$: the corresponding eigenvectors are the same.

For $\epsilon=u$, the condition numbers are shown in Table 2. The eigenvalues $\alpha/\beta=0,2,3$ and $\infty$ are simple and well conditioned. Note that the condition number (18)

Table 2. Condition numbers $C_1(A(\epsilon),x)$ and $C_2(A(\epsilon),\alpha,\beta)$ for (22) with $\epsilon=u$

Table 3. Condition numbers $C_1(A(\epsilon),x)$ and $C_2(A(\epsilon),\alpha,\beta)$ for (22) with $\epsilon=-1+u$

in [24] for nonhomogeneous PEPs is not defined for zero eigenvalues and infinite eigenvalues. As $\epsilon$ is small, $\alpha_2/\beta_2$ and $\alpha_3/\beta_3$ are close to an eigenvalue of multiplicity 2 which is semisimple ($x_2$ and $x_3$ are not collinear). As predicted by the theory, the condition numbers of the corresponding eigenvectors are large; their inverses measure the distance to the nearest ill-posed problem. On the other hand, the condition number for the eigenvalue is small because

$$\bigl|y_i^*\bigl(\bar\beta_i D_\alpha P(A(\epsilon),\alpha_i,\beta_i)-\bar\alpha_i D_\beta P(A(\epsilon),\alpha_i,\beta_i)\bigr)x_i\bigr|\approx0.2,\qquad i=2,3.$$

For $\epsilon=-1+u$, the two eigenvalues $(\alpha_1,\beta_1)$ and $(\alpha_3,\beta_3)$ are close to a defective eigenvalue. Their condition numbers are shown in Table 3. In this case, $v_i=(\bar\beta_i D_\alpha P(A(\epsilon),\alpha_i,\beta_i)-\bar\alpha_i D_\beta P(A(\epsilon),\alpha_i,\beta_i))x_i$ is nearly orthogonal to $y_i$ for $i=1,3$, and therefore $C_2(A(\epsilon),\alpha,\beta)$ is large. Also, as $v_i\neq0$, the problem is expected to be close to an ill-posed problem. This is confirmed by the large condition number $C_1(A(\epsilon),x)$ for the eigenvector.

We now compare our condition number for the eigenvector with the condition number $\kappa_\lambda(x)$ in (19) that was obtained in the nonhomogeneous case. We take $\epsilon$ close to 1, so that the two eigenvalues $2$ and $1+\epsilon$ are clustered, and consider two different normalizations in the nonhomogeneous case. We measure the perturbations in an absolute sense, so that the weights in (19) are all equal to 1. We find that, for the eigenvector corresponding to $(\alpha,\beta)=(2,1)$, that is $\lambda=\alpha/\beta=2$, $C_1(A(\epsilon),x)=2\times10^6$, while $\kappa_\lambda(x)=1\times10^7$ for the normalization $g=x$ and a value of a different order of magnitude for $g=y$. This example shows that in the nonhomogeneous case, the sensitivity of an eigenvector can depend strongly on how it is normalized.
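In practice, eigenvalue pairs $(\alpha,\beta)$ — including infinite ones — can be computed by linearizing the quadratic to a pencil and asking the QZ algorithm for homogeneous eigenvalues. The following sketch uses a hypothetical diagonal $2\times2$ quadratic with known spectrum $\{0,1,2,\infty\}$ (it is not the $A(\epsilon)$ of (22)); `scipy.linalg.eig` exposes this through its `homogeneous_eigvals` option:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical 2x2 quadratic Q(lam) = lam^2*A2 + lam*A1 + A0 with a
# singular leading coefficient, so one eigenvalue is infinite:
# diagonal entries lam^2 - 3*lam + 2 -> {1, 2} and lam -> {0, inf}.
A2 = np.diag([1.0, 0.0])
A1 = np.diag([-3.0, 1.0])
A0 = np.diag([2.0, 0.0])
n = 2

# First companion linearization: C z = lam * B z with z = [x; lam*x].
B = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), A2]])
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0, -A1]])

# Homogeneous eigenvalues (alpha, beta); beta = 0 flags lam = infinity.
w = eig(C, B, right=False, homogeneous_eigvals=True)
alpha, beta = w[0], w[1]

finite = np.abs(beta) > 1e-8
print(np.sort(np.real(alpha[finite] / beta[finite])))  # finite eigenvalues
print(int(np.sum(~finite)))                            # number of infinite ones
```

Working with the pairs $(\alpha,\beta)$ rather than the ratios $\lambda=\alpha/\beta$ puts the infinite eigenvalue on the same footing as the finite ones, exactly as in the homogeneous framework above.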

As a second illustration, we consider a $2\times2$ homogeneous quadratic polynomial

$$P(A(\epsilon),\alpha,\beta)=\alpha^2A_2+\alpha\beta A_1+\beta^2A_0,\tag{23}$$

depending on a parameter $\epsilon>0$ and with eigenvalues $\{(0,1),(1,1),(1+\epsilon,1),(2,1)\}$. The sets of right and left eigenvectors $X$ and $Y$ involve the quantity $a=(\epsilon^2+3\epsilon+3)/(\epsilon^2-1)$. For $\epsilon=u$, $A(\epsilon)$ is close to a tuple that has a double eigenvalue which is defective. Surprisingly, the condition number for the eigenvector corresponding to the eigenvalues $\alpha_2/\beta_2\approx\alpha_3/\beta_3\approx1$ is small; we found that $C_1(A(\epsilon),x)\approx2$. This does not contradict the theory: in the proof of Theorem 7.1, we showed that $\Pi_v P(A(\epsilon),\alpha,\beta)|_{x^\perp}$ is singular when $(\alpha,\beta)$ is a defective eigenvalue and $v\neq0$. In this case, for $\epsilon=0$ and $(\alpha,\beta)=(1,1)$, we have $v=0$.

9. Bihomogeneous Newton method

The idea of using Newton's method to solve the eigenvalue problem $Ax=\lambda x$ goes back to Unger [27] and has been investigated further by Peters and Wilkinson [17] and, most recently, by Tisseur [25]. Newton's method for the nonhomogeneous polynomial eigenvalue problem has been studied by Ruhe [18]. In this section, we apply the multihomogeneous Newton method of Dedieu and Shub [6] to the homogeneous PEP. The multihomogeneous Newton method generalizes to multihomogeneous systems the projective version of Newton's method introduced by Shub in [19] and studied by Shub and Smale [20–22].

Given a regular $(m+1)$-tuple $A=(A_0,A_1,\ldots,A_m)$, our aim is to compute an eigenvector $x$ and an eigenvalue $(\alpha,\beta)$ associated with this $(m+1)$-tuple. Hence, we want to find the zeros of the bihomogeneous function

$$F:\mathbb C^n\times\mathbb C^2\to\mathbb C^n,\qquad F(x,\alpha,\beta)=P(A,\alpha,\beta)x.$$

From [6] we have that the bihomogeneous Newton map $N_F:\mathbb P^{n-1}\times\mathbb P^1\to\mathbb P^{n-1}\times\mathbb P^1$ associated with $F$ is given by

$$N_F(x,\alpha,\beta)=(x,\alpha,\beta)-\bigl(DF(x,\alpha,\beta)|_{x^\perp\times(\alpha,\beta)^\perp}\bigr)^{-1}F(x,\alpha,\beta).$$
The bihomogeneous Newton method has the usual properties of a Newton method: the fixed points of $N_F$ correspond to the zeros of $F$, and the convergence of the Newton sequence to these zeros is quadratic. Let $(x',\alpha',\beta')=N_F(x,\alpha,\beta)$. Then

$$DF(x,\alpha,\beta)(x-x',\alpha-\alpha',\beta-\beta')=F(x,\alpha,\beta),$$

with $x-x'\perp x$ and $(\alpha-\alpha',\beta-\beta')\perp(\alpha,\beta)$. Thus, $x'$ and $(\alpha',\beta')$ are given by

$$P(x-x')+(\alpha-\alpha')D_\alpha P\,x+(\beta-\beta')D_\beta P\,x=Px,\qquad
\langle x',x\rangle=\langle x,x\rangle,\qquad
\langle(\alpha',\beta'),(\alpha,\beta)\rangle=\langle(\alpha,\beta),(\alpha,\beta)\rangle,$$

or, equivalently,

$$\begin{bmatrix}P&D_\alpha Px&D_\beta Px\\x^*&0&0\\0&\bar\alpha&\bar\beta\end{bmatrix}
\begin{bmatrix}x-x'\\\alpha-\alpha'\\\beta-\beta'\end{bmatrix}
=\begin{bmatrix}Px\\0\\0\end{bmatrix}.$$

Therefore, the bihomogeneous Newton iteration for the homogeneous PEP is given by

$$J_P(x_k,\alpha_k,\beta_k)\begin{bmatrix}x_{k+1}-x_k\\\alpha_{k+1}-\alpha_k\\\beta_{k+1}-\beta_k\end{bmatrix}
=-\begin{bmatrix}P(A,\alpha_k,\beta_k)x_k\\0\\0\end{bmatrix},\qquad k\ge0,\tag{24}$$

where $(x_0,\alpha_0,\beta_0)$ is a given starting guess and $J_P$ is as in (7).

The condition number $\kappa(J_P)$ of the matrix involved in the bihomogeneous Newton iteration is related to the condition number of the eigenvalue the method is trying to approximate. Let

$$P(A,\alpha,\beta)=U\Sigma V^*=[\tilde U,y]\begin{bmatrix}\tilde\Sigma&0\\0&0\end{bmatrix}[\tilde V,x]^*$$

be the singular value decomposition of $P(A,\alpha,\beta)$, and let

$$J:=\begin{bmatrix}U&0\\0&I_2\end{bmatrix}^*J_P(x,\alpha,\beta)\begin{bmatrix}V&0\\0&I_2\end{bmatrix}
=\begin{bmatrix}\Sigma&U^*D_\alpha Px&U^*D_\beta Px\\e_n^*&0&0\\0&\bar\alpha&\bar\beta\end{bmatrix}.$$

Let

$$Q=\begin{bmatrix}I_n&0&0\\0&1&\bar\beta\\0&0&-\bar\alpha\end{bmatrix}$$

so that

$$JQ=\begin{bmatrix}\Sigma&U^*D_\alpha Px&U^*v\\e_n^*&0&0\\0&\bar\alpha&0\end{bmatrix},\qquad
v=(\bar\beta D_\alpha P-\bar\alpha D_\beta P)x.$$

We denote by $\bar J$ the matrix $JQ$ with the entry $y^*v$ in the last column replaced by $0$. $\bar J$ is a singular matrix, so its smallest singular value $\bar\sigma_{n+2}=0$. Let $\sigma_{n+2}$ be the smallest singular value of $JQ$. Then, from [12, Theorem 2.5.3],

$$\sigma_{n+2}=|\sigma_{n+2}-\bar\sigma_{n+2}|\le\|JQ-\bar J\|_2=|y^*v|.$$

Hence,

$$\kappa(J_P(x,\alpha,\beta))\ge\frac{\kappa(JQ)}{\kappa(Q)}
\ge\frac{\|JQ\|_2}{\kappa(Q)\,|y^*v|}
\ge\frac{\bigl(\sum_{k=0}^m|\alpha|^{2k}|\beta|^{2(m-k)}\bigr)^{1/2}}{\kappa(Q)\,\|x\|\,\|y\|}\,C_2(A,\alpha,\beta).$$

Therefore, as long as $Q$ is not too ill conditioned, the condition number of the matrix involved in the bihomogeneous Newton iteration will be large when $(\alpha,\beta)$ is an ill-conditioned eigenvalue. A similar conclusion holds for the Jacobian matrix in Newton's method for nonhomogeneous PEPs with an analogous normalization condition on $x$. Thus, from the numerical point of view, the bihomogeneous Newton method will not perform better than Newton's method for nonhomogeneous PEPs when approximating eigenvalues with large condition numbers. However, an advantage of the bihomogeneous Newton method is that infinite eigenvalues can be computed in the same way as finite eigenvalues.

We implemented the bihomogeneous Newton method in MATLAB. To illustrate the method, we consider the matrix polynomial $P(A(\epsilon),\alpha,\beta)$ defined in (22), for which the eigenvalues and eigenvectors are known exactly (see Table 1). We chose $\epsilon=0.1$ so that the eigenvalues are well separated. As a starting guess, we use a random perturbation of order $10^{-2}$ of the true eigenpairs. We measure the error in the approximate eigenpair by

$$E_k(x,\alpha,\beta)=\Bigl(\frac{\|x-x_k\|^2}{\|x\|^2}+\frac{|\alpha-\alpha_k|^2+|\beta-\beta_k|^2}{|\alpha|^2+|\beta|^2}\Bigr)^{1/2},$$

where $(x,\alpha,\beta)$ is the true eigenpair we are seeking and $(x_k,\alpha_k,\beta_k)$ is the $k$th iterate. The errors $E_k(x,\alpha,\beta)$ are given in Table 4 and illustrate the quadratic convergence of the iterates; after four iterations the errors are of order $10^{-13}$ to $10^{-16}$. We see that the infinite eigenvalue is found with no more difficulty than the finite ones.

Table 4. Errors $E_k(x,\alpha,\beta)$ for the eigenpairs of (22) with $\epsilon=0.1$

    alpha_j/beta_j       0         1        1.1       2         3        inf
    E_0(x,alpha,beta)  1.7e-2    1.7e-2   2.0e-2    1.5e-2    1.5e-2    1.4e-2
    E_1(x,alpha,beta)  1.0e-4    2.7e-3   4.0e-3    6.1e-4    4.2e-4    3.1e-4
    E_2(x,alpha,beta)  9.5e-9    2.1e-5   7.1e-6    8.3e-7    5.6e-8    2.6e-7
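To make iteration (24) and the error measure $E_k$ concrete, here is a small self-contained NumPy sketch. The paper's own experiments are in MATLAB; the $2\times2$ diagonal quadratic below is a hypothetical test problem with simple eigenvalues $1,2,3,4$, not the $A(\epsilon)$ of (22):

```python
import numpy as np

def newton_step(As, x, alpha, beta):
    """One bihomogeneous Newton step (24) for P(A,a,b) = sum_k a^k b^(m-k) A_k."""
    m = len(As) - 1
    n = As[0].shape[0]
    P  = sum(alpha**k * beta**(m - k) * A for k, A in enumerate(As))
    Pa = sum(k * alpha**(k - 1) * beta**(m - k) * A
             for k, A in enumerate(As) if k >= 1)        # D_alpha P
    Pb = sum((m - k) * alpha**k * beta**(m - k - 1) * A
             for k, A in enumerate(As) if k <= m - 1)    # D_beta P
    # Bordered Jacobian J_P: the correction is orthogonal to x and (alpha, beta).
    J = np.zeros((n + 2, n + 2), dtype=complex)
    J[:n, :n] = P
    J[:n, n] = Pa @ x
    J[:n, n + 1] = Pb @ x
    J[n, :n] = x.conj()
    J[n + 1, n] = np.conj(alpha)
    J[n + 1, n + 1] = np.conj(beta)
    rhs = np.zeros(n + 2, dtype=complex)
    rhs[:n] = -(P @ x)
    d = np.linalg.solve(J, rhs)
    x, alpha, beta = x + d[:n], alpha + d[n], beta + d[n + 1]
    # Rescale the representatives in P^(n-1) x P^1.
    s = np.sqrt(abs(alpha)**2 + abs(beta)**2)
    return x / np.linalg.norm(x), alpha / s, beta / s

def E(x, ab, xk, abk):
    """Error measure E_k for the chosen representatives of the eigenpair."""
    ex2 = np.linalg.norm(np.asarray(x) - np.asarray(xk))**2 / np.linalg.norm(x)**2
    eab2 = (abs(ab[0] - abk[0])**2 + abs(ab[1] - abk[1])**2) \
           / (abs(ab[0])**2 + abs(ab[1])**2)
    return np.sqrt(ex2 + eab2)

# Hypothetical diagonal quadratic: entries lam^2-3lam+2 and lam^2-7lam+12,
# so the eigenvalues are lam = 1, 2, 3, 4 (all simple).
A0 = np.diag([2.0 + 0j, 12.0])
A1 = np.diag([-3.0 + 0j, -7.0])
A2 = np.eye(2, dtype=complex)
As = (A0, A1, A2)

x_true = np.array([1.0, 0.0], dtype=complex)
ab_true = (1 / np.sqrt(2), 1 / np.sqrt(2))      # (alpha, beta) for lam = 1

x = np.array([1.0, 0.05], dtype=complex)
x, a, b = x / np.linalg.norm(x), 1.05 + 0j, 1.0 + 0j
for k in range(6):
    x, a, b = newton_step(As, x, a, b)
    # E(x_true, ab_true, x, (a, b)) decreases roughly quadratically
print(abs(a / b))   # eigenvalue ratio converges to lam = 1
```

Note that $E_k$, like the tables above, is computed for fixed representatives of the projective classes; the normalization inside `newton_step` keeps the iterates on comparable representatives.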

References

[1] R. Abraham, J.E. Marsden, T. Ratiu, Manifolds, Tensor Analysis, and Applications, 2nd ed., Springer, New York, 1988.
[2] A.L. Andrew, K.E. Chu, P. Lancaster, Derivatives of eigenvalues and eigenvectors of matrix functions, SIAM J. Matrix Anal. Appl. 14 (4) (1993).
[3] F. Chaitin-Chatelin, V. Frayssé, Lectures on Finite Precision Computations, SIAM, Philadelphia, PA, 1996.
[4] J.-P. Dedieu, Approximate solutions of numerical problems, condition number analysis and condition number theorems, in: S. Smale, J. Renegar, M. Shub (Eds.), The Mathematics of Numerical Analysis, Lectures in Applied Mathematics, vol. 32, AMS, 1996.
[5] J.-P. Dedieu, Condition operators, condition numbers, and condition number theorem for the generalized eigenvalue problem, Linear Algebra Appl. 263 (1997).
[6] J.-P. Dedieu, M. Shub, Multihomogeneous Newton methods, Math. Comp. 69 (231) (2000).
[7] J.W. Demmel, On condition numbers and the distance to the nearest ill-posed problem, Numer. Math. 51 (1987).
[8] C. Eckart, G. Young, A principal axis transformation for non-Hermitian matrices, Bull. Amer. Math. Soc. 45 (1939).
[9] S. Gallot, D. Hulin, J. Lafontaine, Riemannian Geometry, 2nd ed., Springer, Berlin, 1990.
[10] F.R. Gantmacher, The Theory of Matrices, vol. 1, Chelsea, New York, 1959.
[11] I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.
[12] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[13] N.J. Higham, F. Tisseur, More on pseudospectra for polynomial eigenvalue problems and applications in control theory, Numerical Analysis Report 372, Manchester Centre for Computational Mathematics, Manchester, UK, 2001.
[14] J. Kautský, N.K. Nichols, P. Van Dooren, Robust pole assignment in linear state feedback, Internat. J. Control 41 (5) (1985).
[15] P. Lancaster, Lambda-Matrices and Vibrating Systems, Pergamon Press, Oxford, 1966.
[16] N.K. Nichols, J. Kautsky, Robust eigenstructure assignment in quadratic matrix polynomials: nonsingular case, SIAM J. Matrix Anal. Appl. 23 (1) (2001).
[17] G. Peters, J.H. Wilkinson, Inverse iteration, ill-conditioned equations and Newton's method, SIAM Rev. 21 (1979).
[18] A. Ruhe, Algorithms for the nonlinear eigenvalue problem, SIAM J. Numer. Anal. 10 (1973).
[19] M. Shub, Some remarks on Bézout's theorem and complexity theory, in: M.W. Hirsch, J.E. Marsden, M. Shub (Eds.), From Topology to Computation: Proceedings of the Smalefest, Springer, 1993.
[20] M. Shub, S. Smale, Complexity of Bézout's theorem I: Geometric aspects, J. Amer. Math. Soc. 6 (1993).
[21] M. Shub, S. Smale, Complexity of Bézout's theorem III: Condition number and packing, J. Complexity 9 (1993).
[22] M. Shub, S. Smale, Complexity of Bézout's theorem IV: Probability of success; extensions, SIAM J. Numer. Anal. 33 (1) (1996).
[23] G.W. Stewart, J.-G. Sun, Matrix Perturbation Theory, Academic Press, London, 1990.
[24] F. Tisseur, Backward error and condition of polynomial eigenvalue problems, Linear Algebra Appl. 309 (2000).
[25] F. Tisseur, Newton's method in floating point arithmetic and iterative refinement of generalized eigenvalue problems, SIAM J. Matrix Anal. Appl. 22 (4) (2001).
[26] F. Tisseur, K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev. 43 (2) (2001).
[27] H. Unger, Nichtlineare Behandlung von Eigenwertaufgaben, Z. Angew. Math. Mech. 30 (1950).


More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS

ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS ACCURATE SOLUTIONS OF POLYNOMIAL EIGENVALUE PROBLEMS YILING YOU, JOSE ISRAEL RODRIGUEZ, AND LEK-HENG LIM Abstract. Quadratic eigenvalue problems (QEP) and more generally polynomial eigenvalue problems

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.

More information

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Math 489AB Exercises for Chapter 2 Fall Section 2.3

Math 489AB Exercises for Chapter 2 Fall Section 2.3 Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR

QR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information