ETNA Kent State University

Electronic Transactions on Numerical Analysis. Volume 38, pp. 75-102, 2011. Copyright 2011, Kent State University. ISSN 1068-9613.

PERTURBATION ANALYSIS FOR COMPLEX SYMMETRIC, SKEW-SYMMETRIC, EVEN AND ODD MATRIX POLYNOMIALS

SK. SAFIQUE AHMAD AND VOLKER MEHRMANN

Abstract. In this work we propose a general framework for the structured perturbation analysis of several classes of structured matrix polynomials in homogeneous form, including complex symmetric, skew-symmetric, even and odd matrix polynomials. We introduce structured backward errors for approximate eigenvalues and eigenvectors, and we construct minimal structured perturbations such that an approximate eigenpair is an exact eigenpair of an appropriately perturbed matrix polynomial. This work extends previous work of Adhikari and Alam for the non-homogeneous case (we include infinite eigenvalues), and we show that the structured backward errors improve the known unstructured backward errors.

Key words. Polynomial eigenvalue problem, even matrix polynomial, odd matrix polynomial, complex symmetric matrix polynomial, complex skew-symmetric matrix polynomial, perturbation theory, backward error.

AMS subject classifications. 65F15, 15A18, 65F35, 15A21.

1. Introduction. In this paper we study the perturbation analysis for eigenvalues and eigenvectors of matrix polynomials of degree m,

(1.1)  L(c,s) := Σ_{j=0}^m c^{m−j} s^j A_j,

with coefficient matrices A_j ∈ C^{n×n}. In contrast to previous work on this topic [2, 3, 4], we consider the homogeneous form of matrix polynomials, where the eigenvalues are represented as pairs (c,s) ∈ C² \ {0}, which for c ≠ 0 correspond to finite eigenvalues λ = s/c, while (0,1) corresponds to the eigenvalue ∞. The eigenvalue problem for matrix polynomials arises naturally in a large number of applications; see, e.g., [17, 18, 23, 24, 27, 29, 36, 37] and the references therein. In many applications, the coefficient matrices have further structure which reflects the properties of the underlying physical model; see [9, 11, 12, 19, 28, 30, 32, 37]. Since the polynomial
eigenvalue problems typically arise from physical modelling, including numerical discretization methods such as finite element modelling [10, 31], and since the eigenvalue problem is usually solved with numerical methods that are subject to round-off as well as approximation errors, it is very important to study the perturbation analysis of these problems. This analysis is necessary to study the sensitivity of the eigenvalues/eigenvectors under the modelling, discretization, approximation, and round-off errors, but also to judge whether the numerical methods that are used yield reliable results. While the perturbation analysis for classical and generalized eigenvalue problems is well studied (see [20, 33, 38]), for polynomial eigenvalue problems the situation is much less satisfactory and most research is very recent; see [2, 3, 4, 35, 36]. Here we are particularly interested in the behavior of the eigenvalues and eigenvectors under perturbations which preserve the structure of the matrix polynomial. This has recently been an important research topic [1, 2, 3, 6, 11, 12].

Received September 2, 2010. Accepted for publication June 7, 2011. Published online October 6, 2011. Recommended by V. Olshevsky. Research supported by Deutsche Forschungsgemeinschaft, via the DFG Research Center MATHEON in Berlin.
School of Basic Sciences, Discipline of Mathematics, Indian Institute of Technology Indore, Indore-452017, India (safique@iiti.ac.in, safique@gmail.com).
Institut für Mathematik, MA 4-5, TU Berlin, Str. des 17. Juni 136, D-10623 Berlin, Germany (mehrmann@math.tu-berlin.de).

76 S. S. AHMAD AND V. MEHRMANN

In this paper we will focus on complex matrix polynomials, where the coefficient matrices are complex symmetric or skew-symmetric, i.e., L(c,s) = ±L^T(c,s), or where the matrix polynomials are T-even or T-odd, i.e., L(c,s) = ±L^T(c,−s). Complex (skew-)symmetric problems arise in the finite element modelling of the acoustic field in car interiors and in the design of axisymmetric VCSEL devices; see, e.g., [8, 34]. Complex T-even or T-odd problems arise in the vibration analysis for high-speed trains; see, e.g., [5, 6]. Many applications only need finite eigenvalues and associated eigenvectors, but the eigenvectors associated with the eigenvalue ∞ play an important role as well, since quite often the infinite spectrum has to be deflated before classical methods can be employed; see [13, 14].

While the perturbation analysis and the construction of backward errors for finite eigenvalues have been studied in detail, there are only a few results associated with the eigenvalue ∞. We will present a systematic general perturbation framework that covers finite and infinite eigenvalues and extends the structured theory of [1, 2, 3, 6, 11, 12] as well as the unstructured theory for the homogeneous case studied in [5, 6, 7, 16, 24, 33]. In particular, to present the backward error analysis for a given approximation to an eigenvalue/eigenvector pair of a matrix polynomial L, we will construct an appropriately structured minimal (in the Frobenius and the spectral norm) perturbation polynomial ΔL such that the given eigenvalue/eigenvector pair is exact for L + ΔL. It will turn out that the so constructed minimal perturbation is unique in the case of the Frobenius norm and that there are infinitely many such minimal perturbations in the case of the spectral norm. We will compare the so constructed perturbations with those constructed for matrix pencils and matrix polynomials in [2, 3, 4] and show that our results generalize these results and provide the following
further information on the eigenvalues 0 and ∞ of L + ΔL.
For the case of complex symmetric or skew-symmetric matrix polynomials, we show that the nearest perturbed matrix polynomial can have all kinds of eigenvalues, including 0 and ∞.
When the degree is m = 1, we present the perturbation analysis for the case of T-even and T-odd matrix pencils and we show that the nearest perturbed pair can have 0 and ∞ as eigenvalues, depending on the choice of (λ,µ) for which we want to compute the backward error. Furthermore, when λ = 0 or µ = 0, then we show that the perturbed pair is the same for the spectral and the Frobenius norm.
When the degree m > 1 is even, then for the case of T-even matrix polynomials we show that the nearest perturbed polynomial can have both 0 and ∞ as eigenvalues, depending on the choice of (λ,µ) for which we want to compute the backward error. Again, when λ = 0, µ ≠ 0 or λ ≠ 0, µ = 0, then the perturbed polynomial is the same for the spectral and the Frobenius norm.
When m > 1 is odd, then for the case of T-even matrix polynomials we show that the nearest perturbed matrix polynomial can have all possible finite eigenvalues including 0, but not the eigenvalue ∞.
When m > 1 is even, then for the case of T-odd matrix polynomials we show that the nearest perturbed polynomial can have non-zero finite eigenvalues, but not the eigenvalue ∞.
When m > 1 is odd, then for the case of T-odd matrix polynomials we show that the perturbed polynomial can have only ∞ and non-zero finite eigenvalues.
The paper is organized as follows: In Section 2, we review some known techniques that were developed in [5, 6, 7] for matrix pencils and identify the types of structured homogeneous matrix polynomials that we will analyze, as well as the eigenvalue symmetry that arises for these structured matrix polynomials. In Section 3 and in Section 4 we present the structured backward error analysis of an approximate eigenpair for complex symmetric, complex

PERTURBATION ANALYSIS ON MATRIX POLYNOMIALS 77

skew-symmetric, T-even, and T-odd matrix polynomials and compare these results with the corresponding unstructured backward errors. We also present a systematic general procedure for the construction of an appropriate structured minimal complex symmetric, complex skew-symmetric, T-even, and T-odd polynomial ΔL such that the given approximate eigenvalue and eigenvector are exact for L + ΔL. These results cover finite and infinite eigenvalues and generalize results of [1, 2, 3, 4, 11] in a systematic way.

2. Notation and preliminaries. We denote by R^{n×n}, C^{n×n} the sets of real and complex n×n matrices, respectively. For an integer p, 1 ≤ p ≤ ∞, and an elementwise nonnegative vector w = [w_1,…,w_n]^T ∈ R^n, we define a weighted p-(semi)norm of a real or complex vector x = [x_1,…,x_n]^T via

‖x‖_{w,p} := ‖[w_1 x_1, w_2 x_2, …, w_n x_n]^T‖_p.

If w is elementwise strictly positive, then this is a norm, and if w has zero components then it is a seminorm. We define the componentwise inverse of w via w⁻¹ := [w_1⁻¹,…,w_m⁻¹]^T, where we use the convention that w_i⁻¹ = 0 if w_i = 0.

We will consider structured and unstructured backward errors both in the spectral norm and the Frobenius norm on C^{n×n}, which are given by

‖A‖₂ := max_{‖x‖₂=1} ‖Ax‖₂,  ‖A‖_F := (trace A^H A)^{1/2},

respectively. By σ_max(A) and σ_min(A) we denote the largest and smallest singular value of a matrix A, respectively. The identity matrix is denoted by I, and Ā, A^T, and A^H stand for the conjugate, transpose, and conjugate transpose of a matrix A, respectively.

The set of all matrix polynomials of degree m ≥ 0 with coefficients in C^{n×n} is denoted by L_m(C^{n×n}). This is a vector space which we can equip with weighted (semi)norms (given a nonnegative weight vector w := [w_0, w_1,…,w_m]^T ∈ R^{m+1} \ {0}) defined as

‖L‖_{w,F} := ‖(A_0,…,A_m)‖_{w,F} = (w_0²‖A_0‖_F² + ⋯ + w_m²‖A_m‖_F²)^{1/2}

for the Frobenius norm and

‖L‖_{w,2} := ‖(A_0,…,A_m)‖_{w,2} = (w_0²‖A_0‖₂² + ⋯ + w_m²‖A_m‖₂²)^{1/2}

for the spectral norm. A matrix polynomial is called regular if det(L(λ,µ)) ≠ 0
for some (λ,µ) ∈ C² \ {(0,0)}; otherwise it is called singular. The spectrum of a homogeneous matrix polynomial L ∈ L_m(C^{n×n}) is defined as

Λ(L) := {(c,s) ∈ C² \ {(0,0)} : rank(L(c,s)) < n}.

In the following we normalize the set of points (c,s) ∈ C² \ {(0,0)} such that c is real and c² + |s|² = 1. With this normalization, it follows that the spectrum Λ(L) of a matrix polynomial L ∈ L_m(C^{n×n}) can be identified with a subset of the Riemann sphere; see, e.g., [6].

In the following we will compute backward errors for structured matrix polynomials. These were introduced, e.g., in [1, 35], but here we follow [5, 6, 7] and define the backward error of an approximate eigenpair as follows. Let (λ,µ) ∈ C² \ {(0,0)} be an approximate eigenvalue of L ∈ L_m(C^{n×n}) with corresponding normalized approximate right eigenvector x ≠ 0 with x^Hx = 1, i.e., L(λ,µ)x ≈ 0. Then we consider the Frobenius and spectral norm backward errors associated with a given nonnegative weight vector w = [w_0, w_1,…,w_m]^T,

η_{w,F}(λ,µ,x,L) := inf{‖ΔL‖_{w,F} : ΔL ∈ L_m(C^{n×n}), (L(λ,µ) + ΔL(λ,µ))x = 0},
η_{w,2}(λ,µ,x,L) := inf{‖ΔL‖_{w,2} : ΔL ∈ L_m(C^{n×n}), (L(λ,µ) + ΔL(λ,µ))x = 0},

respectively. When w := [1,1,…,1]^T, then we just leave off the index w for convenience. The backward errors for structured matrix polynomials from a set S ⊆ L_m(C^{n×n}) are defined analogously as

η^S_{w,F}(λ,µ,x,L) := inf{‖ΔL‖_{w,F} : ΔL ∈ S, (L(λ,µ) + ΔL(λ,µ))x = 0},
η^S_{w,2}(λ,µ,x,L) := inf{‖ΔL‖_{w,2} : ΔL ∈ S, (L(λ,µ) + ΔL(λ,µ))x = 0},

respectively. In order to compute the backward errors, we will need the partial derivative ∂_i‖z‖_{w,2} of the map

(2.1)  C^{m+1} → R,  z ↦ ‖z‖_{w,2} = ‖(z_0, z_1,…,z_m)‖_{w,2},

which is the derivative of (2.1) with respect to the variable z_i obtained by fixing the variables z_0, z_1,…,z_{i−1}, z_{i+1},…,z_m as constants. The gradient of the map (2.1) is then defined as

∇(‖z‖_{w,2}) = [∂_0‖z‖_{w,2}, ∂_1‖z‖_{w,2},…, ∂_m‖z‖_{w,2}]^T ∈ C^{m+1}.

For a given (λ,µ) ∈ C² \ {(0,0)} and x ∈ C^n with x^Hx = 1, we set k := L(λ,µ)x and, with a given nonnegative weight vector w = [w_0, w_1,…,w_m]^T, we introduce

H_{w,2} := H_{w,2}(λ,µ) := ‖(λ^m µ^0, λ^{m−1} µ, …, λ^0 µ^m)‖_{w,2},

and we use the notation ∂_jH_{w,2} for the partial derivative (with respect to z_j) of the map (2.1) at (λ^m µ^0, λ^{m−1} µ, …, λ^0 µ^m). Then we have

(2.2)  η_{w,2}(λ,µ,x,L) = ‖L(λ,µ)x‖₂ / H_{w⁻¹,2}(λ,µ).

Defining for each of the coefficients

(2.3)  z_{A_j} := ∂_jH_{w⁻¹,2} / H_{w⁻¹,2}

and introducing the perturbations ΔA_j := −z_{A_j} k x^H for the coefficients, we form the matrix polynomial ΔL(c,s) = Σ_{j=0}^m c^{m−j} s^j ΔA_j, with ‖ΔL‖_{w,2} = ‖k‖₂ / H_{w⁻¹,2}. For z ∈ C we set sign(z) := z/|z| when z ≠ 0 and sign(z) := 0 when z = 0.

With these definitions we have the following preliminary results, which generalize the corresponding results of [5, 6] to matrix polynomials.

PROPOSITION 2.1. Consider the map ‖z‖_{w,2} given by (2.1). Then ‖z‖_{w,2} is differentiable on C^{m+1} and

∂_i‖z‖_{w,2} = w_i² z̄_i / ‖z‖_{w,2},  i = 0, 1, …, m.
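The backward error formula for η_{w,2} and the minimal unstructured perturbation ΔA_j described above can be verified numerically. The following NumPy sketch uses hypothetical random data and unit weights; the sign and conjugation conventions are a reconstruction of the ones used above, so this is an illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
     for _ in range(m + 1)]                      # coefficients A_0, ..., A_m
w = np.array([1.0, 1.0, 1.0])                    # weight vector
lam, mu = 0.6, 0.8 + 0.1j                        # approximate eigenvalue (λ, µ)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                           # normalize so that x^H x = 1

z = np.array([lam**(m - j) * mu**j for j in range(m + 1)])
k = sum(z[j] * A[j] @ x for j in range(m + 1))   # k = L(λ,µ) x
H = np.linalg.norm(z / w)                        # H_{w^{-1},2}(λ,µ)
eta = np.linalg.norm(k) / H                      # backward error η_{w,2}, formula (2.2)

# minimal unstructured perturbation ΔA_j = -z_{A_j} k x^H
zA = np.conj(z) / (w**2 * H**2)                  # z_{A_j} = ∂_j H_{w^{-1},2} / H_{w^{-1},2}
dA = [-zA[j] * np.outer(k, x.conj()) for j in range(m + 1)]
residual = k + sum(z[j] * dA[j] @ x for j in range(m + 1))   # (L+ΔL)(λ,µ)x
norm_dL = np.sqrt(sum(w[j]**2 * np.linalg.norm(dA[j], 2)**2
                      for j in range(m + 1)))    # should equal ||k||_2 / H = η
```

The check `residual ≈ 0` confirms that (λ,µ,x) is an exact eigenpair of the perturbed polynomial, and `norm_dL ≈ eta` confirms the minimality claim for this rank-one construction.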

Proof. The assertion follows from the fact that (|z_i|²)′ = 2z̄_i. The proof of the following two propositions is analogous.

PROPOSITION 2.2. Let m be an integer and let m̃ = m/2 + 1, m̂ = m if m is even, and m̃ = (m+1)/2, m̂ = m−1 if m is odd. Consider the mapping

K_{w,2} : C^{m̃} → R,  z ↦ ‖[z_0, z_2, z_4, …, z_m̂]^T‖_{w,2}.

Then K_{w,2} is differentiable and

∂_i K_{w,2}(z) = w_i² z̄_i / K_{w,2}(z),  i = 0, 2, 4, …, m̂.

PROPOSITION 2.3. Let m be an integer and let m̃ = m/2, m̂ = m−1 if m is even, and m̃ = (m+1)/2, m̂ = m if m is odd. Consider the mapping

N_{w,2} : C^{m̃} → R,  z ↦ ‖[z_1, z_3, z_5, …, z_m̂]^T‖_{w,2}.

Then N_{w,2} is differentiable and

∂_i N_{w,2}(z) = w_i² z̄_i / N_{w,2}(z),  i = 1, 3, 5, …, m̂.

PROPOSITION 2.4. Consider the functions

H_{w,2}(c^m s^0, c^{m−1}s, …, c^0 s^m) = ‖[c^m s^0, c^{m−1}s, …, c^0 s^m]^T‖_{w,2},
K_{w,2}(c^m s^0, c^{m−2}s², …, c^0 s^m) = ‖[c^m s^0, c^{m−2}s², …, c^0 s^m]^T‖_{w,2}  if m is even,
K_{w,2}(c^m s^0, c^{m−2}s², …, c s^{m−1}) = ‖[c^m s^0, c^{m−2}s², …, c s^{m−1}]^T‖_{w,2}  if m is odd,
N_{w,2}(c^{m−1}s, c^{m−3}s³, …, c s^{m−1}) = ‖[c^{m−1}s, c^{m−3}s³, …, c s^{m−1}]^T‖_{w,2}  if m is even,
N_{w,2}(c^{m−1}s, c^{m−3}s³, …, c^0 s^m) = ‖[c^{m−1}s, c^{m−3}s³, …, c^0 s^m]^T‖_{w,2}  if m is odd.

For even m, the following formulas hold:

Σ_{j=0, j even}^m c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = K²_{w,2}/H²_{w,2},   Σ_{j=0, j even}^m c^{m−j}s^j ∂_jK_{w,2}/K_{w,2} = 1,   Σ_{j=0, j even}^m w_j^{−2}|∂_jK_{w,2}|² = 1,

Σ_{j=1, j odd}^{m−1} c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = N²_{w,2}/H²_{w,2},   Σ_{j=1, j odd}^{m−1} c^{m−j}s^j ∂_jN_{w,2}/N_{w,2} = 1,   Σ_{j=1, j odd}^{m−1} w_j^{−2}|∂_jN_{w,2}|² = 1,

Σ_{j=0, j even}^m c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} + Σ_{j=1, j odd}^{m−1} c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = 1.

For odd m, the following formulas hold:

Σ_{j=0, j even}^{m−1} c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = K²_{w,2}/H²_{w,2},   Σ_{j=1, j odd}^m c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = N²_{w,2}/H²_{w,2},

Σ_{j=0, j even}^{m−1} c^{m−j}s^j ∂_jK_{w,2}/K_{w,2} = 1,   Σ_{j=1, j odd}^m c^{m−j}s^j ∂_jN_{w,2}/N_{w,2} = 1,

Σ_{j=0, j even}^{m−1} c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} + Σ_{j=1, j odd}^m c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = 1.

For all m, the following formulas hold:

Σ_{j=0}^m c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = 1,   Σ_{j=0}^m w_j^{−2}|∂_jH_{w,2}|² = 1.

Proof. By Proposition 2.1, we have

∂_jH_{w,2}(c^m, c^{m−1}s, …, s^m) = w_j² \overline{c^{m−j}s^j} / H_{w,2}(c^m, c^{m−1}s, …, s^m).

Then we obtain

Σ_{j even} c^{m−j}s^j ∂_jH_{w,2}/H_{w,2} = Σ_{j even} w_j² c^{m−j}s^j \overline{c^{m−j}s^j} / H²_{w,2} = K²_{w,2}/H²_{w,2}.

The other parts follow analogously, using Propositions 2.1-2.3.

After establishing these formulas for general matrix polynomials, we now turn to the structured classes. These classes were discussed in detail in [28], but not in homogeneous form. So let us first introduce the homogeneous versions.

DEFINITION 2.5. Let (c,s) ∈ C² \ {(0,0)}. A matrix polynomial L ∈ L_m(C^{n×n}) is called
1. symmetric/skew-symmetric if L(c,s) = ±L^T(c,s),
2. T-even/T-odd if L(c,s) = ±L^T(c,−s).

The spectra of these classes of structured matrix polynomials have a symmetry structure that is summarized in the following proposition, which follows directly from the results for the non-homogeneous case in [28].

PROPOSITION 2.6.
1. Let L ∈ L_m(C^{n×n}) be a complex symmetric or complex skew-symmetric matrix polynomial of the form (1.1). If x ∈ C^n is a right eigenvector of L corresponding to an eigenvalue (λ,µ) ∈ C² \ {(0,0)}, then x̄ is a left eigenvector corresponding to the eigenvalue (λ,µ).
2. Let L ∈ L_m(C^{n×n}) be a complex T-even or T-odd matrix polynomial of the form (1.1). If x ∈ C^n and y ∈ C^n are right and left eigenvectors associated to an eigenvalue (λ,µ) ∈ C² \ {(0,0)} of L, then ȳ and x̄ are right and left eigenvectors associated to the eigenvalue (λ,−µ) in the T-even case and to (−λ,µ) in the T-odd case.

Since T-odd and T-even matrix polynomials have coefficients that alternate between symmetric and skew-symmetric matrices, it is clear that in the product x^T(L(λ,µ))x all terms associated with skew-symmetric
coefficients vanish; these are the coefficients with

TABLE 2.1
Eigenvalues and eigenvectors of structured matrix polynomials.

S           | eigenvalues     | eigenpairs                  | x^T A_j x
symmetric   | (λ,µ)           | ((λ,µ),x,x̄)                 |
skew-symm.  | (λ,µ)           | ((λ,µ),x,x̄)                 | 0
T-even      | ((λ,µ),(λ,−µ))  | ((λ,µ),x,y), ((λ,−µ),ȳ,x̄)   | 0 for all odd j
T-odd       | ((λ,µ),(−λ,µ))  | ((λ,µ),x,y), ((−λ,µ),ȳ,x̄)   | 0 for all even j

odd index for T-even matrix polynomials, and the ones with even index for T-odd matrix polynomials. We summarize the properties of these structured matrix polynomials in Table 2.1.

To derive the backward error formulas, we will frequently need the following completion results, in which for a symmetric matrix X, X^{1/2} denotes the positive square root.

THEOREM 2.7 ([15]). Consider a block matrix T := [A C; B X]. Then for any positive number χ ≥ max{‖[A C]‖₂, ‖[A; B]‖₂}, the block X can be chosen such that

‖[A C; B X]‖₂ ≤ χ,

where X is of the form X = −KA^H L + χ(I − KK^H)^{1/2} Z (I − L^H L)^{1/2}, and where K := ((χ²I − A^HA)^{−1/2} B^H)^H, L := (χ²I − A^HA)^{−1/2} C, with Z an arbitrary matrix such that ‖Z‖₂ ≤ 1.

As a corollary of Theorem 2.7 one has the following result for complex matrices.

COROLLARY 2.8. Let A = ±A^T, C = ±B^T ∈ C^{n×n} and χ := σ_max([A; B]). Then there exists a symmetric/skew-symmetric matrix X ∈ C^{n×n} such that

σ_max([A ±B^T; B X]) = χ,

and X has the form X := −KAK^T + χ(I − KK^H)^{1/2} Z (I − K̄K^T)^{1/2}, where K := B(χ²I − ĀA)^{−1/2} and Z = ±Z^T ∈ C^{n×n} is an arbitrary matrix such that ‖Z‖₂ ≤ 1. In the results presented below, we always use Z = 0.

In the following section we derive backward errors for the different classes of structured matrix polynomials.

3. Backward errors for complex symmetric and skew-symmetric matrix polynomials. In this section we derive backward error formulas for homogeneous complex symmetric and skew-symmetric matrix polynomials. Throughout this section, we will make use of the partial derivatives ∂_jH_{w⁻¹,2} of H_{w⁻¹,2} and of z_{A_j} = ∂_jH_{w⁻¹,2}/H_{w⁻¹,2} as defined in (2.3).
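The last column of the table above rests on the fact that x^T A x = 0 whenever A = −A^T, even for complex x (note the plain transpose, not the conjugate transpose). A quick NumPy illustration with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = B - B.T                      # complex skew-symmetric: S^T = -S
Asym = B + B.T                   # complex symmetric, for contrast
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

skew_val = x @ S @ x             # x^T S x vanishes identically
sym_val = x @ Asym @ x           # generically nonzero
```

This is exactly why the odd-index (respectively even-index) terms drop out of x^T L(λ,µ) x for T-even (respectively T-odd) polynomials.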

THEOREM 3.1. Let L ∈ L_m(C^{n×n}) be a regular, symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x. Introduce the perturbation matrices

ΔA_j = −x̄x^TA_jxx^H − z_{A_j}[x̄k^T + kx^H − 2(x^Tk)x̄x^H],  j = 0, 1, …, m,

and define ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔA_j ∈ L_m(C^{n×n}). Then ΔL is a symmetric matrix polynomial and (L(λ,µ) + ΔL(λ,µ))x = 0.

Proof. Since for all j we have ΔA_j = ΔA_j^T, it follows that ΔL is symmetric, and we have

(L(λ,µ) + ΔL(λ,µ))x = Σ_{j=0}^m λ^{m−j}µ^j (A_j + ΔA_j)x = Σ_{j=0}^m λ^{m−j}µ^j [A_jx − x̄x^TA_jx − z_{A_j}(x̄k^Tx + k − 2(x^Tk)x̄)].

By Proposition 2.4 we have Σ_{j=0}^m λ^{m−j}µ^j z_{A_j} = 1. Then

(L(λ,µ) + ΔL(λ,µ))x = (I − x̄x^T)k − x̄k^Tx − k + 2(x^Tk)x̄ = 0,

since k^Tx = x^Tk.

Theorem 3.1 with c = 1 and w = [1,1,…,1]^T implies Theorem 4.1 of [2] for the case of non-homogeneous matrix polynomials that have only finite eigenvalues, i.e., for which det(A_m) ≠ 0. Theorem 3.1 also implies the corresponding theorem of [3] for matrix pencils. Using Theorem 3.1 we then obtain the following backward errors for complex symmetric matrix polynomials.

THEOREM 3.2. Let L ∈ L_m(C^{n×n}) be a complex symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_{w,F}(λ,µ,x,L) = sqrt(2‖k‖₂² − |x^Tk|²) / H_{w⁻¹,2}.

There exists a unique complex symmetric polynomial ΔL(c,s) := Σ_{j=0}^m c^{m−j}s^j ΔA_j with coefficients

ΔA_j = −z_{A_j}[x̄k^T + kx^H − (x^Tk)x̄x^H],  j = 0, 1, …, m,

such that the structured backward error satisfies η^S_{w,F}(λ,µ,x,L) = ‖ΔL‖_{w,F}, and x̄, x are left and right eigenvectors corresponding to the eigenvalue (λ,µ) of L + ΔL, respectively.
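The minimal Frobenius-norm symmetric perturbation above can be checked numerically: each ΔA_j is complex symmetric, (L + ΔL)(λ,µ)x = 0, and ‖ΔL‖_{w,F} matches sqrt(2‖k‖₂² − |x^Tk|²)/H_{w⁻¹,2}. A NumPy sketch with hypothetical random data and unit weights, following the conjugation conventions as reconstructed above:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 4
A = []
for _ in range(m + 1):                       # random complex *symmetric* coefficients
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A.append(B + B.T)
w = np.ones(m + 1)
lam, mu = 0.3 + 0.4j, 1.0
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)

z = np.array([lam**(m - j) * mu**j for j in range(m + 1)])
k = sum(z[j] * A[j] @ x for j in range(m + 1))        # k = L(λ,µ)x
H = np.linalg.norm(z / w)                             # H_{w^{-1},2}
zA = np.conj(z) / (w**2 * H**2)
a = x @ k                                             # a = x^T k

M = (np.outer(x.conj(), k) + np.outer(k, x.conj())    # x̄ k^T + k x^H
     - a * np.outer(x.conj(), x.conj()))              # - (x^T k) x̄ x^H
dA = [-zA[j] * M for j in range(m + 1)]               # ΔA_j

residual = k + sum(z[j] * dA[j] @ x for j in range(m + 1))
frob = np.sqrt(sum(w[j]**2 * np.linalg.norm(dA[j], 'fro')**2 for j in range(m + 1)))
eta_SF = np.sqrt(2 * np.linalg.norm(k)**2 - abs(a)**2) / H
```

Here `dA[j]` is symmetric by construction, `residual ≈ 0`, and `frob ≈ eta_SF` reproduces the structured Frobenius backward error formula.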

ii) The structured backward error with respect to the spectral norm is given by

η^S_{w,2}(λ,µ,x,L) = ‖k‖₂ / H_{w⁻¹,2},

and there exists a complex symmetric polynomial ΔL(c,s) := Σ_{j=0}^m c^{m−j}s^j ΔA_j with coefficients

ΔA_j := −z_{A_j}[ x̄k^T + kx^H − (k^Tx)x̄x^H − ((x^Tk)/(‖k‖₂² − |x^Tk|²)) (I − x̄x^T)kk^T(I − xx^H) ]

such that ‖ΔL‖_{w,2} = η^S_{w,2}(λ,µ,x,L) and (L(λ,µ) + ΔL(λ,µ))x = 0.

Proof. By Theorem 3.1 we have (L(λ,µ) + ΔL(λ,µ))x = 0 and hence ΔL(λ,µ)x = −k. Now we construct a unitary matrix U which has x as its first column, U = [x, U_1] ∈ C^{n×n}, and let

ΔÃ_j := U^T ΔA_j U = [d_{j,j}, d_j^T; d_j, D_{j,j}],  where D_{j,j} = D_{j,j}^T ∈ C^{(n−1)×(n−1)}.

Then Ū ΔL̃(λ,µ) U^H = ŪU^T (ΔL(λ,µ)) U^HU = ΔL(λ,µ), and hence

Ū ΔL̃(λ,µ) U^H x = ΔL(λ,µ)x = −k,

which implies that

ΔL̃(λ,µ) U^H x = −U^T k = −[x^Tk; U_1^Tk],  i.e.,  [Σ_{j=0}^m λ^{m−j}µ^j d_{j,j}; Σ_{j=0}^m λ^{m−j}µ^j d_j] = −[x^Tk; U_1^Tk].

To minimize the norm of the perturbation, we solve this system for the parameters d_{j,j}, d_j in a least squares sense. Applying Proposition 2.1, we then get the following relations:

d_{j,j} = −z_{A_j} x^Tk,  d_j = −z_{A_j} U_1^Tk,  j = 0, 1, …, m.

From this we obtain

ΔA_j = Ū ΔÃ_j U^H = x̄ d_{j,j} x^H + Ū_1 d_j x^H + x̄ d_j^T U_1^H + Ū_1 D_{j,j} U_1^H
= −z_{A_j}[(x^Tk)x̄x^H + (I − x̄x^T)kx^H + x̄k^T(I − xx^H)] + Ū_1 D_{j,j} U_1^H

(3.1)  = −z_{A_j}[kx^H + x̄k^T − (k^Tx)x̄x^H] + Ū_1 D_{j,j} U_1^H.

In the Frobenius norm, the minimal perturbation is obtained by taking D_{j,j} = 0, and hence we get

‖ΔA_j‖_F² = |d_{j,j}|² + 2‖d_j‖₂² = |z_{A_j}|²(|x^Tk|² + 2‖U_1^Tk‖₂²) = |∂_jH_{w⁻¹,2}|² (2‖k‖₂² − |x^Tk|²) / H²_{w⁻¹,2},

since ‖U^Tk‖₂² = |x^Tk|² + ‖U_1^Tk‖₂² = ‖k‖₂². Using Σ_{j=0}^m w_j²|∂_jH_{w⁻¹,2}|² = 1 from Proposition 2.4, we obtain that in the case of the Frobenius norm

‖ΔL‖²_{w,F} = Σ_{j=0}^m w_j²|∂_jH_{w⁻¹,2}|² (2‖k‖₂² − |x^Tk|²) / H²_{w⁻¹,2} = (2‖k‖₂² − |x^Tk|²) / H²_{w⁻¹,2},

and hence

‖ΔL‖_{w,F} = sqrt(2‖k‖₂² − |x^Tk|²) / H_{w⁻¹,2}.

As k^Tx is a scalar constant, it follows that all ΔA_j, and thus also ΔL, are symmetric, and

(L(λ,µ) + ΔL(λ,µ))x = Σ_{j=0}^m λ^{m−j}µ^j(A_j + ΔA_j)x = k − Σ_{j=0}^m λ^{m−j}µ^j z_{A_j}[kx^H + x̄k^T − (x^Tk)x̄x^H]x = k − k − x̄k^Tx + (x^Tk)x̄ = 0.

Here we have used that by Proposition 2.4 we have Σ_{j=0}^m λ^{m−j}µ^j z_{A_j} = 1. Similarly, it follows that x^T(L(λ,µ) + ΔL(λ,µ)) = 0.

For the spectral norm we can apply Corollary 2.8 to (3.1) and get

D_{j,j} = (z_{A_j}/P)[x^Tk (U_1^Tk)(U_1^Tk)^T] + χ_j [I − (U_1^Tk)(U_1^Tk)^H/P]^{1/2} Z [I − \overline{U_1^Tk}(U_1^Tk)^T/P]^{1/2},

where Z = Z^T and ‖Z‖₂ ≤ 1, P = ‖k‖₂² − |x^Tk|², and χ_j² = |d_{j,j}|² + ‖d_j‖₂². With the special choice Z = 0 we get D_{j,j} = (z_{A_j}/P)[x^Tk (U_1^Tk)(U_1^Tk)^T], and hence

Ū_1 D_{j,j} U_1^H = (z_{A_j}(x^Tk)/P) Ū_1U_1^Tk k^TU_1U_1^H = (z_{A_j}(x^Tk)/P)(I − x̄x^T)kk^T(I − xx^H),

ΔA_j = −z_{A_j}[kx^H + x̄k^T − (k^Tx)x̄x^H] + (z_{A_j}(x^Tk)/P)(I − x̄x^T)kk^T(I − xx^H),

ΔL(c,s) is symmetric, and (L(λ,µ) + ΔL(λ,µ))x = 0. With

χ_j := σ_max([d_{j,j}, d_j^T]) = |z_{A_j}| sqrt(|x^Tk|² + ‖U_1^Tk‖₂²) = |∂_jH_{w⁻¹,2}| ‖k‖₂ / H_{w⁻¹,2}

and Corollary 2.8 we have χ_j = ‖ΔA_j‖₂, and by Proposition 2.4, Σ_{j=0}^m w_j²|∂_jH_{w⁻¹,2}|² = 1, it follows that

η^S_{w,2}(λ,µ,x,L) = ‖ΔL‖_{w,2} = ‖k‖₂ / H_{w⁻¹,2}.

Note that in the construction of a minimal spectral norm backward error we have infinitely many choices of an appropriate completion Z for which ‖Z‖₂ ≤ 1, but here and in the following we always take Z = 0 to simplify the formulas.

REMARK 3.3. If w_j = 0 for some j ∈ {0,…,m}, then z_{A_j} = ∂_jH_{w⁻¹,2}(λ,µ)/H_{w⁻¹,2}(λ,µ) = 0, and hence by Theorem 3.2 we have ΔA_j = 0. This shows that w_j = 0 implies that A_j remains unperturbed.

We then have the following relations between structured and unstructured backward errors.

COROLLARY 3.4. Let L ∈ L_m(C^{n×n}) be a regular, symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, and let x ∈ C^n be such that x^Hx = 1. Then

η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L),   η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L).

Proof. By Theorem 3.2 with k := L(λ,µ)x, we have η^S_{w,2}(λ,µ,x,L) = ‖k‖₂/H_{w⁻¹,2} and η^S_{w,F}(λ,µ,x,L) = sqrt(2‖k‖₂² − |x^Tk|²)/H_{w⁻¹,2}, and from (2.2) we have η_{w,2}(λ,µ,x,L) = ‖k‖₂/H_{w⁻¹,2}. Thus the assertion follows.

As a corollary we obtain the results of [2, 3, 4] for the case of non-homogeneous matrix polynomials that have no infinite eigenvalues, as well as the result for homogeneous matrix pencils L(c,s) = cA + sB ∈ L_1(C^{n×n}); in the special case, i.e., for c = 1, we obtain the results given in Theorems 3.1, 3.2, and 3.3 of [3].

In an analogous way we can derive the results for complex skew-symmetric matrix polynomials.

THEOREM 3.5. Let L ∈ L_m(C^{n×n}) be a complex skew-symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x. Introduce the perturbation matrices

ΔA_j := z_{A_j}[x̄k^T − kx^H],  j = 0, 1, …, m.

Then the matrix polynomial ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔA_j is complex skew-symmetric and (L(λ,µ) + ΔL(λ,µ))x = 0.
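The skew-symmetric construction above admits the same kind of numerical check: for skew-symmetric coefficients x^Tk = 0 automatically, each ΔA_j = z_{A_j}[x̄k^T − kx^H] is skew-symmetric, the perturbed polynomial annihilates x, and the structured Frobenius backward error comes out as √2‖k‖₂/H_{w⁻¹,2}. A NumPy sketch with hypothetical random data and unit weights:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 4
A = []
for _ in range(m + 1):                        # random complex *skew-symmetric* coefficients
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A.append(B - B.T)
w = np.ones(m + 1)
lam, mu = 0.9, 0.5 - 0.2j
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)

z = np.array([lam**(m - j) * mu**j for j in range(m + 1)])
k = sum(z[j] * A[j] @ x for j in range(m + 1))        # k = L(λ,µ)x
H = np.linalg.norm(z / w)
zA = np.conj(z) / (w**2 * H**2)

M = np.outer(x.conj(), k) - np.outer(k, x.conj())     # x̄ k^T - k x^H, skew-symmetric
dA = [zA[j] * M for j in range(m + 1)]                # ΔA_j

xTk = x @ k                                           # vanishes for skew-symmetric L
residual = k + sum(z[j] * dA[j] @ x for j in range(m + 1))
frob = np.sqrt(sum(w[j]**2 * np.linalg.norm(dA[j], 'fro')**2 for j in range(m + 1)))
eta_SF = np.sqrt(2) * np.linalg.norm(k) / H
```

The check `frob ≈ eta_SF` illustrates why, in the skew-symmetric case, the structured Frobenius backward error is exactly √2 times the unstructured spectral-norm backward error.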

Proof. By construction ΔL is complex skew-symmetric, and by Proposition 2.4 we have Σ_{j=0}^m λ^{m−j}µ^j z_{A_j} = 1. Thus, we have

(L(λ,µ) + ΔL(λ,µ))x = k + ΔL(λ,µ)x = k + Σ_{j=0}^m λ^{m−j}µ^j z_{A_j}[x̄k^T − kx^H]x = k + x̄k^Tx − k = 0,

as x̄k^Tx = 0, since the polynomial has skew-symmetric coefficients.

THEOREM 3.6. Let L ∈ L_m(C^{n×n}) be a complex skew-symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and let k := L(λ,µ)x. The structured backward errors with respect to the Frobenius norm and spectral norm are given by

η^S_{w,F}(λ,µ,x,L) = √2‖k‖₂ / H_{w⁻¹,2},   η^S_{w,2}(λ,µ,x,L) = ‖k‖₂ / H_{w⁻¹,2},

respectively. Introducing the perturbation matrices ΔA_j = z_{A_j}[x̄k^T − kx^H], j = 0, 1, …, m, then ΔL(c,s) := Σ_{j=0}^m c^{m−j}s^j ΔA_j is skew-symmetric, (L(λ,µ) + ΔL(λ,µ))x = 0, η^S_{w,F}(λ,µ,x,L) = ‖ΔL‖_{w,F}, and η^S_{w,2}(λ,µ,x,L) = ‖ΔL‖_{w,2}.

Proof. By Theorem 3.5 we have (L(λ,µ) + ΔL(λ,µ))x = 0 and hence ΔL(λ,µ)x = −k. We choose a unitary matrix U of the form U = [x, U_1], U_1 ∈ C^{n×(n−1)}, and define

ΔÃ_j := U^T ΔA_j U = [0, −d_j^T; d_j, D_{j,j}],  where D_{j,j} = −D_{j,j}^T ∈ C^{(n−1)×(n−1)}.

Then Ū ΔL̃(λ,µ) U^H = ŪU^T (ΔL(λ,µ)) U^HU = ΔL(λ,µ), and hence Ū ΔL̃(λ,µ) U^H x = ΔL(λ,µ)x = −k, which implies that

ΔL̃(λ,µ) U^H x = −U^T k = −[x^Tk; U_1^Tk].

Since U^Hx = e_1, it follows that x^Tk = 0 and

−U_1^Tk = Σ_{j=0}^m λ^{m−j}µ^j d_j.

To minimize the perturbation we solve the system for the parameters d_{j,j}, d_j in a least squares sense, and obtain x^Tk = 0 and

d_j = −z_{A_j} U_1^Tk,  j = 0, 1, …, m,  where H_{w⁻¹,2} = ‖[λ^mµ^0, λ^{m−1}µ, …, λ^0µ^m]^T‖_{w⁻¹,2}.

This yields d_{j,j} = 0, d_j = −z_{A_j}U_1^Tk, and then

ΔÃ_j = [0, (z_{A_j}U_1^Tk)^T; −z_{A_j}U_1^Tk, D_{j,j}].

The Frobenius norm can be minimized by taking D_{j,j} = 0, and then we have

‖ΔA_j‖_F² = 2‖d_j‖₂² = 2|z_{A_j}|²‖U_1^Tk‖₂² = 2|∂_jH_{w⁻¹,2}|²‖k‖₂² / H²_{w⁻¹,2},

since ‖k‖₂² = ‖U^Tk‖₂² = |x^Tk|² + ‖U_1^Tk‖₂² = ‖U_1^Tk‖₂². Also, by Proposition 2.4 we have Σ_{j=0}^m w_j²|∂_jH_{w⁻¹,2}|² = 1. Thus we obtain

‖ΔL‖_{w,F} = √2‖k‖₂ / H_{w⁻¹,2}

and

ΔA_j = Ū ΔÃ_j U^H = Ū_1 d_j x^H − x̄ d_j^T U_1^H + Ū_1 D_{j,j} U_1^H

(3.2)  = z_{A_j}[x̄k^T(I − xx^H) − (I − x̄x^T)kx^H] + Ū_1 D_{j,j} U_1^H.

Therefore, since x^Tk = 0, ΔA_j = z_{A_j}[x̄k^T − kx^H] is complex skew-symmetric, and we have (L(λ,µ) + ΔL(λ,µ))x = 0.

To minimize the spectral norm we make use of Corollary 2.8 and obtain

D_{j,j} = (z_{A_j}/P)[x^Tk (U_1^Tk)(U_1^Tk)^T] + χ_j [I − (U_1^Tk)(U_1^Tk)^H/P]^{1/2} Z [I − \overline{U_1^Tk}(U_1^Tk)^T/P]^{1/2},

where Z = −Z^T with ‖Z‖₂ ≤ 1, and P = ‖k‖₂² − |x^Tk|². Choosing Z = 0, we get D_{j,j} = (z_{A_j}/P)[x^Tk (U_1^Tk)(U_1^Tk)^T], and using (3.2), we get

Ū_1 D_{j,j} U_1^H = (z_{A_j}(x^Tk)/P)(I − x̄x^T)kk^T(I − xx^H),

and hence

ΔA_j = z_{A_j}[x̄k^T − kx^H] + (z_{A_j}(x^Tk)/P)(I − x̄x^T)kk^T(I − xx^H).

The skew-symmetry of ΔA_j implies that x^Tk = 0, and thus ΔA_j = z_{A_j}[x̄k^T − kx^H] is complex skew-symmetric. Then ΔL(c,s) is complex skew-symmetric as well, and with χ_{A_j} = |z_{A_j}|‖U_1^Tk‖₂ we have (L(λ,µ) + ΔL(λ,µ))x = 0. By Corollary 2.8 we obtain

‖ΔA_j‖₂ = |z_{A_j}|‖U_1^Tk‖₂ = |z_{A_j}| sqrt(‖k‖₂² − |x^Tk|²) = |z_{A_j}|‖k‖₂,

and hence η^S_{w,2}(λ,µ,x,L) = ‖ΔL‖_{w,2}.

As a direct corollary of Theorem 3.6 we have the following relation between structured and unstructured backward errors of an approximate eigenpair.

COROLLARY 3.7. Let L ∈ L_m(C^{n×n}) be a skew-symmetric matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n satisfy x^Hx = 1, and set k := L(λ,µ)x. Then the structured and unstructured backward errors are related via

η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L),   η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L).

As a further corollary we obtain Theorem 4.3.4 of [2] (see also [4]) for non-homogeneous matrix polynomials with no infinite eigenvalues. For matrix pencils L(c,s) = cA_0 + sA_1 ∈ L_1(C^{n×n}), Theorem 3.6 in the special case c = 1 also implies the results given in Theorem 3.3 and Theorem 4.2 of [3].

To illustrate our results, in the following we present some examples.

EXAMPLE 3.8. Consider a complex symmetric pencil L ∈ L_1(C^{2×2}) with coefficients A_0 and A_1, and take a normalized vector x and (λ,µ) = (0,1). For the Frobenius norm we obtain the coefficients ΔA_0 and ΔA_1 of the perturbation pencil ΔL. Then (0,1) is an eigenvalue of L + ΔL and ‖ΔL‖_F = η^S_F(λ,µ,x,L). For the spectral norm we obtain corresponding coefficients ΔA_0 and ΔA_1. Again (0,1) is an eigenvalue of L + ΔL and ‖ΔL‖₂ = η^S_2(λ,µ,x,L) = 0.7071; see also Table 3.1.

EXAMPLE 3.9. Consider a complex skew-symmetric pencil L ∈ L_1(C^{2×2}) with coefficients A_0 and A_1, and take a normalized vector x and (λ,µ) = (0,1). For the Frobenius norm and the spectral norm, the coefficients ΔA_0 and ΔA_1 of the perturbation pencil are the same, and (0,1) is an eigenvalue of L + ΔL. The norm of the perturbation is ‖ΔL‖_F = η^S_F
(λ,µ,x,l) = 884, while for the spectral norm we obtain L = η S (λ,µ,x,l) =, see also Table 31 4 Backward errors for complex T -odd and T -even matrix polynomials In this section we derive backward error formulas for homogeneous T -odd and T -even matrix polynomials Throughout this section we assume that the coefficient matrix A 0 is in the even position, ie, it is symmetric for a T -even and skew-symmetric for a T -odd matrix polynomial The other case can be treated analogously via a multiplication with the imaginary unit ı

TABLE 3.1
Structured and unstructured backward errors for Examples 3.8 and 3.9.

Example | S              | η^S_2(λ,µ,x,L) | η^S_F(λ,µ,x,L) | η_2(λ,µ,x,L)
3.8     | symmetric      | 0.7071         |                | 0.7071
3.9     | skew-symmetric | 2              | 2.8284         | 2

For a given nonnegative vector w, an eigenvalue (λ,µ), and the partial derivatives as introduced in Propositions 2.1-2.4, we use the following abbreviations:

z_{A_j} := ∂_jH_{w⁻¹,2}(λ,µ)/H_{w⁻¹,2}(λ,µ),  n_{A_j} := ∂_jN_{w⁻¹,2}(λ,µ)/N_{w⁻¹,2}(λ,µ),  k_{A_j} := ∂_jK_{w⁻¹,2}(λ,µ)/K_{w⁻¹,2}(λ,µ).

We then have the following backward errors.

THEOREM 4.1. Let L ∈ L_m(C^{n×n}) be a complex T-even or T-odd matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x. For j = 0, 1, …, m and different cases, we introduce the following perturbation matrices.
In the case that m is even and λ ≠ 0, or when m > 1 is odd, let for T-even matrix polynomials

ΔA_j := −k_{A_j}(x^Tk)x̄x^H − z_{A_j}[(I − x̄x^T)kx^H + x̄k^T(I − xx^H)]  for even j,
ΔA_j := −z_{A_j}[(I − x̄x^T)kx^H − x̄k^T(I − xx^H)]  for odd j,

so that the perturbation preserves the structure. In the case that m > 1 is even and both λ ≠ 0, µ ≠ 0, or in the case that m is odd and µ ≠ 0, let for T-odd matrix polynomials

ΔA_j := −n_{A_j}(x^Tk)x̄x^H − z_{A_j}[(I − x̄x^T)kx^H + x̄k^T(I − xx^H)]  for odd j,
ΔA_j := −z_{A_j}[(I − x̄x^T)kx^H − x̄k^T(I − xx^H)]  for even j,

so that the perturbation again preserves the structure. In the case that λ ≠ 0, µ ≠ 0, consider perturbation matrices for symmetric or skew-symmetric coefficients

ΔA_j := −x̄x^TA_jxx^H − z_{A_j}[(I − x̄x^T)kx^H + x̄k^T(I − xx^H)]  for symmetric A_j,
ΔA_j := −z_{A_j}[(I − x̄x^T)kx^H − x̄k^T(I − xx^H)]  for skew-symmetric A_j.

Then there exists a matrix polynomial ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔA_j ∈ L_m(C^{n×n}) that is structure-preserving, i.e., T-odd or T-even, and satisfies (L(λ,µ) + ΔL(λ,µ))x = 0.

Proof. Let ΔL ∈ L_m(C^{n×n}) be of the form ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔA_j. Then by the construction it is easy to see that ΔL is either T-even or T-odd, and it remains to show that (L(λ,µ) + ΔL(λ,µ))x = 0. We begin with a T-odd polynomial L. In both cases that

m is even or odd, we have

(L(λ,µ) + ΔL(λ,µ))x = Σ_{j=0}^m λ^{m−j}µ^j (A_j + ΔA_j)x
= k − (x^Tk)x̄ Σ_{j odd} λ^{m−j}µ^j n_{A_j} − (I − x̄x^T)k [Σ_{j even} λ^{m−j}µ^j z_{A_j} + Σ_{j odd} λ^{m−j}µ^j z_{A_j}]
= k − (x^Tk)x̄ − (I − x̄x^T)k = 0,

since by Proposition 2.4 we have Σ_{j odd} λ^{m−j}µ^j n_{A_j} = 1 and Σ_{j=0}^m λ^{m−j}µ^j z_{A_j} = 1. The proof for T-even polynomials is analogous.

In the special case of linear matrix polynomials, i.e., for m = 1, we have the following expressions. For even pencils we have

ΔA_0 := −sign(µ) x̄x^TA_0xx^H − z_{A_0}[(I − x̄x^T)kx^H + x̄k^T(I − xx^H)],
ΔA_1 := −z_{A_1}[(I − x̄x^T)kx^H − x̄k^T(I − xx^H)],

and for odd pencils we have

ΔA_1 := −sign(λ) x̄x^TA_1xx^H − z_{A_1}[(I − x̄x^T)kx^H + x̄k^T(I − xx^H)],
ΔA_0 := −z_{A_0}[(I − x̄x^T)kx^H − x̄k^T(I − xx^H)],

where sign(z) = 1 if z ≠ 0 and sign(z) = 0 for z = 0.

As a corollary we obtain the results for the case of non-homogeneous matrix polynomials with no infinite eigenvalues of Theorem 4.1 in [2]; see also [3, 4]. This case follows by setting c = 1, L(s) = L(1,s), Λ = [1, µ, …, µ^m]^T, and w = [1,1,…,1]^T.

The minimal backward errors for complex T-even polynomials and m > 1 are as follows.

THEOREM 4.2. Let L ∈ L_m(C^{n×n}) be a T-even matrix polynomial of the form (1.1), let (λ,µ) ∈ C² \ {(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_{w,F}(λ,µ,x,L) =
sqrt( |x^Tk|²/K²_{w⁻¹,2} + 2(‖k‖₂² − |x^Tk|²)/H²_{w⁻¹,2} )  if m is even, or if µ ≠ 0 and m is odd,
sqrt(2‖k‖₂² − |x^Tk|²)/H_{w⁻¹,2}  if λ = 0 and m is even,
sqrt(2‖k‖₂² − |x^Tk|²)/H_{w⁻¹,2}  if µ = 0 and m is odd.

ii) The structured backward error with respect to the spectral norm is given by

η^S_{w,2}(λ,µ,x,L) = √( |xᵀk|²/‖K_{w^{−1}}(λ,µ)‖₂² + (‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂² ) if m is even, or if µ ≠ 0 and m is odd,
η^S_{w,2}(λ,µ,x,L) = ‖k‖/‖H_{w^{−1}}(λ,µ)‖₂ if λ = 0 and m is even,
η^S_{w,2}(λ,µ,x,L) = ‖k‖/‖H_{w^{−1}}(λ,µ)‖₂ if µ = 0 and m is odd.

When m is even, or when m is odd and λ ≠ 0, introduce the perturbation matrices

ΔA_j := −[k_{A_j}(xᵀk)(x̄x^H) + z_{A_j}((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H))] for even j,
ΔA_j := −z_{A_j}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] for odd j.

Then ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔA_j is the unique T-even matrix polynomial satisfying (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖_{w,F} = η^S_{w,F}(λ,µ,x,L). Similarly, for the spectral norm, when m is even or when m is odd and λ ≠ 0, introduce the perturbation matrices

ΔE_j := ΔA_j + k_{A_j} conj(xᵀk)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |xᵀk|²) for even j,
ΔE_j := ΔA_j for odd j.

Then the matrix polynomial ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^j ΔE_j is T-even, has spectral norm ‖ΔL‖_{w,2} = η^S_{w,2}(λ,µ,x,L), and satisfies (L(λ,µ) + ΔL(λ,µ))x = 0.

Proof. Theorem 4.1 implies that (L(λ,µ) + ΔL(λ,µ))x = 0 and hence ΔL(λ,µ)x = −k. Now choose a unitary matrix U = [x, U_1], U_1 ∈ C^{n×(n−1)}, and write

UᵀΔA_jU = [ d_{j,j}, d_jᵀ ; d_j, D_{j,j} ], D_{j,j} = D_{j,j}ᵀ ∈ C^{(n−1)×(n−1)}, when j is even,
UᵀΔA_jU = [ 0, −b_jᵀ ; b_j, B_{j,j} ], B_{j,j} = −B_{j,j}ᵀ, when j is odd.

Since Ū(UᵀΔL(λ,µ)U)U^H = ΔL(λ,µ), it follows that UᵀΔL(λ,µ)x = −Uᵀk = −[xᵀk ; U_1ᵀk]. Using the constraints

Σ_{j even} λ^{m−j}µ^j d_{j,j} = −xᵀk and Σ_{j=0}^m λ^{m−j}µ^j d_j = −U_1ᵀk

and solving this system for the parameters d_{j,j}, d_j in a least squares sense to minimize the perturbation, we obtain d_{j,j} = −k_{A_j}xᵀk and d_j = −z_{A_j}U_1ᵀk for even j, and b_j = −z_{A_j}U_1ᵀk for odd j. Using Ū_1U_1ᵀ = I − x̄xᵀ and U_1U_1^H = I − xx^H, this implies

(4.1) ΔA_j = −z_{A_j}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] + Ū_1B_{j,j}U_1^H when j is odd,

and for even j

(4.2) ΔA_j = −[k_{A_j}(xᵀk)(x̄x^H) + z_{A_j}((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H))] + Ū_1D_{j,j}U_1^H.

The Frobenius norm is minimized by taking D_{j,j} = 0 and B_{j,j} = 0, which yields the matrices ΔA_j stated above. Since the Frobenius norm is unitarily invariant, it follows that for even j we have

‖ΔA_j‖_F² = |d_{j,j}|² + 2‖d_j‖² = |k_{A_j}xᵀk|² + 2|z_{A_j}|²‖U_1ᵀk‖², where ‖U_1ᵀk‖² = ‖k‖² − |xᵀk|²,

and similarly ‖ΔA_j‖_F² = 2|z_{A_j}|²‖U_1ᵀk‖² for odd j. Furthermore, by Proposition 2.4 we have ‖K_{w^{−1}}(λ,µ)‖₂² Σ_{j even} w_j²|k_{A_j}|² = 1 and ‖H_{w^{−1}}(λ,µ)‖₂² Σ_{j=0}^m w_j²|z_{A_j}|² = 1 when m is even. Then it follows that

‖ΔL‖²_{w,F} = Σ_{j=0}^m w_j²‖ΔA_j‖_F² = |xᵀk|²/‖K_{w^{−1}}(λ,µ)‖₂² + 2(‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂².

For the spectral norm, we have from (4.1) and (4.2) that

ΔA_j = −[k_{A_j}(xᵀk)(x̄x^H) + z_{A_j}((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H))] + S_j for even j,
ΔA_j = −z_{A_j}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] for odd j,

where S_j := Ū_1D_{j,j}U_1^H = k_{A_j} conj(xᵀk)(I − x̄xᵀ)kkᵀ(I − xx^H)/P and P := ‖k‖² − |xᵀk|². Now let

χ_{A_j} := √(|k_{A_j}xᵀk|² + |z_{A_j}|²(‖k‖² − |xᵀk|²)) for even j,
χ_{A_j} := |z_{A_j}|√(‖k‖² − |xᵀk|²) for odd j.

Hence, by Corollary 2.8 it follows that ‖ΔA_j‖₂ = χ_{A_j}. Then

‖ΔL‖²_{w,2} = Σ_{j=0}^m w_j²χ_{A_j}² = |xᵀk|²/‖K_{w^{−1}}(λ,µ)‖₂² + (‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂²,

and η^S_{w,2}(λ,µ,x,L) = √( |xᵀk|²/‖K_{w^{−1}}(λ,µ)‖₂² + (‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂² ).

We obtain the following relations between the structured and unstructured backward errors.

COROLLARY 4.3. Let L ∈ L_m(C^{n×n}) be a T-even matrix polynomial of the form (1.1), let (λ,µ) ∈ C²\{(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
1. If w := [1, 1, …, 1]ᵀ, |λ| = |µ| = 1 and m is odd, then ‖H_{w^{−1}}(λ,µ)‖₂ = √2‖K_{w^{−1}}(λ,µ)‖₂, and for the Frobenius norm we get η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L). Similarly, for the spectral norm we have η^S_{w,2}(λ,µ,x,L) = √(‖k‖² + |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂.
2. If λ = 0 and m is even, or if µ = 0 and m is odd, then for the Frobenius and the spectral norm we have η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L) and η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L), respectively.

Proof. Consider the case that |λ| = |µ| = 1, w = [1, 1, …, 1]ᵀ, and m is odd. Then ‖H_{w^{−1}}(λ,µ)‖₂ = √2‖K_{w^{−1}}(λ,µ)‖₂. Substituting this in Theorem 4.2 and then applying (2.2), we get η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L) for the Frobenius norm and η^S_{w,2}(λ,µ,x,L) = √(‖k‖² + |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂ for the spectral norm.

If m is even and λ = 0, then ‖K_{w^{−1}}(λ,µ)‖₂ = w_m^{−1}|µ|^m = ‖H_{w^{−1}}(λ,µ)‖₂ and hence η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L) and η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L). Similarly, for µ = 0 and m odd we have ‖K_{w^{−1}}(λ,µ)‖₂ = w_0^{−1}|λ|^m = ‖H_{w^{−1}}(λ,µ)‖₂, and the assertion follows analogously.

As a corollary we obtain the results for non-homogeneous matrix polynomials with no infinite eigenvalues of [2, 4], using the notation Λ_e := [1, µ², …, µ^m]ᵀ if m is even and Λ_e := [1, µ², …, µ^{m−1}]ᵀ if m is odd.

COROLLARY 4.4. Let L ∈ L_m(C^{n×n}) be a T-even matrix polynomial of the form L(s) = Σ_{j=0}^m s^jA_j ∈ C^{n×n} that has only finite eigenvalues. Let µ ∈ C, let x ∈ C^n be such that x^Hx = 1, and set k := L(µ)x.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_F(µ,x,L) = √( |xᵀk|²/‖Λ_e‖² + 2(‖k‖² − |xᵀk|²)/‖Λ‖² ) if µ ∈ C\{0},
η^S_F(µ,x,L) = √(2‖k‖² − |xᵀk|²) if µ = 0.

ii) The structured backward error with respect to the spectral norm is given by

η^S_2(µ,x,L) = √( |xᵀk|²/‖Λ_e‖² + (‖k‖² − |xᵀk|²)/‖Λ‖² ) if µ ∈ C\{0},
η^S_2(µ,x,L) = η_2(µ,x,L) if µ = 0.

In particular, if |µ| = 1 and m is odd, then ‖Λ‖ = √2‖Λ_e‖. Moreover, for the Frobenius norm we have η^S_F(µ,x,L) = √2 η_2(µ,x,L), and for the spectral norm we obtain η^S_2(µ,x,L) = √(‖k‖² + |xᵀk|²)/‖Λ‖.

If we introduce the perturbation matrices

ΔA_j := −[ (µ̄^j/‖Λ_e‖²)(xᵀk)(x̄x^H) + (µ̄^j/‖Λ‖²)((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H)) ] for even j,
ΔA_j := −(µ̄^j/‖Λ‖²)[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] for odd j,

then ΔL(s) = Σ_{j=0}^m s^jΔA_j is the uniquely defined T-even matrix polynomial that satisfies (L(µ) + ΔL(µ))x = 0 and ‖ΔL‖_F = η^S_F(µ,x,L) for the Frobenius norm. For the spectral norm, we introduce the perturbation matrices

ΔE_j := ΔA_j + (µ̄^j/‖Λ_e‖²) conj(xᵀk)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |xᵀk|²) for even j,
ΔE_j := ΔA_j for odd j.

Then ΔL(s) = Σ_{j=0}^m s^jΔE_j is a T-even matrix polynomial such that (L(µ) + ΔL(µ))x = 0 and ‖ΔL‖₂ = η^S_2(µ,x,L).
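The T-even construction can be exercised numerically on the smallest case, a pencil L(c,s) = cA_0 + sA_1 with A_0 complex symmetric and A_1 skew-symmetric. The sketch below is only an illustration under simplifying assumptions, not the paper's exact weighted scaling: all weights w_j are set to 1, the scalar coefficients are chosen as z_j = conj(p_j)/‖p‖² for p = (λ, µ) so that p_0 z_0 + p_1 z_1 = 1, and all variable names are ours. It builds the structure-preserving rank-two updates and checks that the approximate eigenpair becomes exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
lam, mu = 1.5 - 0.5j, 0.75j                # hypothetical eigenvalue pair with lam != 0

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A0 = B + B.T                               # complex symmetric coefficient (even j)
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = C - C.T                               # complex skew-symmetric coefficient (odd j)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                     # x^H x = 1
k = (lam * A0 + mu * A1) @ x               # residual of the approximate eigenpair

xb, I = x.conj(), np.eye(n)
p = np.array([lam, mu])
z = p.conj() / np.linalg.norm(p) ** 2      # ensures p[0]*z[0] + p[1]*z[1] == 1

# symmetric and skew-symmetric rank-two templates of the construction
sym = (I - np.outer(xb, x)) @ np.outer(k, xb) + np.outer(xb, k) @ (I - np.outer(x, xb))
skw = (I - np.outer(xb, x)) @ np.outer(k, xb) - np.outer(xb, k) @ (I - np.outer(x, xb))

dA0 = -((x @ k) / lam * np.outer(xb, xb) + z[0] * sym)   # stays complex symmetric
dA1 = -z[1] * skw                                        # stays skew-symmetric

assert np.allclose(dA0, dA0.T)                           # structure preserved
assert np.allclose(dA1, -dA1.T)
assert np.allclose((lam * (A0 + dA0) + mu * (A1 + dA1)) @ x, 0)  # exact eigenpair
```

The same two templates (`sym`, `skw`) recur in all the structured cases above; only the coefficient that receives the extra projection term x̄(xᵀk)x^H and the scalar weights in front change.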

Proof. The proof follows from Theorem 4.2 using w = [1, 1, …, 1]ᵀ, c = 1, and the identities ‖H_{w^{−1}}(1,µ)‖₂ = ‖Λ‖, ‖K_{w^{−1}}(1,µ)‖₂ = ‖Λ_e‖.

REMARK 4.5. Corollary 4.3 implies that for |µ| = 1 and for the spectral norm we have η^S_2(µ,x,L) = √(‖k‖² + |xᵀk|²)/‖Λ‖, while in [2, Theorem 4.3.6] and in [4, Theorem 3.7] it is shown that η^S_2(µ,x,L) = η_2(µ,x,L) when w = [1, 1, …, 1]ᵀ and m is odd.

For complex T-even pencils we obtain the following result.

COROLLARY 4.6. Let L(c,s) = cA_0 + sA_1 ∈ L_1(C^{n×n}) be a T-even matrix pencil. Let (λ,µ) ∈ C²\{(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x, w := [1,1]ᵀ.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_{w,F}(λ,µ,x,L) = √( |xᵀA_0x|² + 2(‖k‖² − |λ|²|xᵀA_0x|²)/‖[λ,µ]ᵀ‖² ) if λ ≠ 0,
η^S_{w,F}(λ,µ,x,L) = √(2‖k‖² − |xᵀk|²)/‖[λ,µ]ᵀ‖ if µ = 0,
η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L) if λ = 0,
η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L) if |λ| = |µ| = 1.

ii) The structured backward error with respect to the spectral norm is given by

η^S_{w,2}(λ,µ,x,L) = √( |xᵀA_0x|² + (‖k‖² − |λ|²|xᵀA_0x|²)/‖[λ,µ]ᵀ‖² ) if λ ≠ 0,
η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L) if µ = 0,
η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L) if λ = 0,
η^S_{w,2}(λ,µ,x,L) = √(‖k‖² + |xᵀk|²)/‖[λ,µ]ᵀ‖ if |λ| = |µ| = 1.

Defining the perturbation matrices

ΔA_0 := −sign(λ)x̄(xᵀA_0x)x^H − z_{A_0}[(I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H)],
ΔA_1 := −z_{A_1}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)],

we have for the Frobenius norm that ΔL(c,s) = cΔA_0 + sΔA_1 is the unique T-even matrix pencil that satisfies (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖_{w,F} = η^S_{w,F}(λ,µ,x,L). For the spectral norm we introduce the perturbation matrices

ΔE_0 := ΔA_0 + sign(λ)(λ̄²/|λ|²) conj(xᵀA_0x)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |λ|²|xᵀA_0x|²),
ΔE_1 := ΔA_1;

then ΔL(c,s) = cΔE_0 + sΔE_1 is T-even and satisfies (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖_{w,2} = η^S_{w,2}(λ,µ,x,L).

Proof. The proof follows as in Theorem 4.1, using m = 1 and w := [1,1]ᵀ.

It follows that for λ = 0 in the T-even case we have ΔA_0 = 0 and ΔA_1 := −z_{A_1}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)]. These perturbations are the same for the spectral and the Frobenius norm. Furthermore, Corollary 4.6 shows that

η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L) if |µ| ≤ |λ|,
η^S_{w,F}(λ,µ,x,L) ≤ (‖[λ,µ]ᵀ‖/|λ|) η_{w,2}(λ,µ,x,L) if |µ| > |λ|.

For a non-homogeneous pencil L(s) = A_0 + sA_1 ∈ L_1(C^{n×n}) we then have

η^S_F(µ,x,L) ≤ √2 η_2(µ,x,L) if |µ| ≤ 1,
η^S_F(µ,x,L) ≤ ‖[1,µ]ᵀ‖ η_2(µ,x,L) if |µ| > 1,

which has been shown in [3, Theorem 3.4] for the case that µ ≠ 0.

EXAMPLE 4.7. Consider a 2×2 T-even matrix pencil with a complex symmetric coefficient A_0 and a skew-symmetric coefficient A_1, a unit vector x, and (λ,µ) = (1,0). Both for the Frobenius norm and for the spectral norm, the construction of Corollary 4.6 yields perturbation matrices ΔA_0 (symmetric) and ΔA_1 (skew-symmetric) with ‖ΔL‖_F = η^S_F(λ,µ,x,L) and ‖ΔL‖₂ = η^S_2(λ,µ,x,L), respectively; see also Table 4.1. (The numerical entries of this example are not recoverable from this transcription.)

In a similar way we can derive the results for T-odd matrix polynomials.

THEOREM 4.8. Let L ∈ L_m(C^{n×n}) be a T-odd matrix polynomial of the form (1.1), let (λ,µ) ∈ C²\{(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_{w,F}(λ,µ,x,L) = √( |xᵀk|²/‖N_{w^{−1}}(λ,µ)‖₂² + 2(‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂² ) if µ ≠ 0 and m is odd, or if λ ≠ 0, µ ≠ 0 and m is even,
η^S_{w,F}(λ,µ,x,L) = √(2‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂ if λ = 0 and m is odd.

TABLE 4.1. Computed structured and unstructured backward errors for Example 4.7, for (λ,µ) ∈ {(1,0), (0,1), (2,1), (4,3), (i,i), (2+3i,1+i), (1,2), (1,1)}, S = T-even; columns η^S_2(λ,µ,x,L), η^S_F(λ,µ,x,L), η_2(λ,µ,x,L). (The numerical entries are not recoverable from this transcription.)

ii) The structured backward error with respect to the spectral norm is given by

η^S_{w,2}(λ,µ,x,L) = √( |xᵀk|²/‖N_{w^{−1}}(λ,µ)‖₂² + (‖k‖² − |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂² ) if µ ≠ 0 and m is odd, or if λµ ≠ 0 and m is even,
η^S_{w,2}(λ,µ,x,L) = ‖k‖/‖H_{w^{−1}}(λ,µ)‖₂ if λ = 0 and m is odd.

For µ ≠ 0 and odd m, or for λ ≠ 0, µ ≠ 0 and even m, introduce the perturbation matrices

ΔA_j := −[n_{A_j}(xᵀk)(x̄x^H) + z_{A_j}((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H))] for odd j,
ΔA_j := −z_{A_j}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] for even j.

Then, for the Frobenius norm, ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^jΔA_j is the unique T-odd matrix polynomial such that (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖_{w,F} = η^S_{w,F}(λ,µ,x,L). For µ ≠ 0 and odd m, or for λ ≠ 0, µ ≠ 0 and even m, and the spectral norm, consider the perturbation matrices

ΔE_j := ΔA_j + n_{A_j} conj(xᵀk)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |xᵀk|²) for odd j,
ΔE_j := ΔA_j for even j.

Then ΔL(c,s) = Σ_{j=0}^m c^{m−j}s^jΔE_j is T-odd, satisfies (L(λ,µ) + ΔL(λ,µ))x = 0, and ‖ΔL‖_{w,2} = η^S_{w,2}(λ,µ,x,L).

Proof. The proof is analogous to that for T-even matrix polynomials.

We then obtain the following relations between structured and unstructured backward errors of an approximate eigenpair.

COROLLARY 4.9. Let L ∈ L_m(C^{n×n}) be a T-odd matrix polynomial of the form (1.1), let (λ,µ) ∈ C²\{(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
1. If λ = 0 and m is odd, then for the Frobenius norm we have η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L).
2. If λ = 0 and m is odd, then for the spectral norm we have η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L).
3. Let w := [1, 1, …, 1]ᵀ and |λ| = |µ| = 1 for odd m. Then for the Frobenius norm we have η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L), and for the spectral norm η^S_{w,2}(λ,µ,x,L) = √(‖k‖² + |xᵀk|²)/‖H_{w^{−1}}(λ,µ)‖₂.

Proof. The proof follows from the fact that if w := [1, 1, …, 1]ᵀ, |λ| = |µ| = 1, and m is odd, then ‖H_{w^{−1}}(λ,µ)‖₂ = √2‖N_{w^{−1}}(λ,µ)‖₂; applying (2.2), the results follow.

As a corollary we also obtain the results for the case of non-homogeneous matrix polynomials with no infinite eigenvalues of [2, 4]. Introducing the notation Λ_o := [µ, µ³, …, µ^{m−1}]ᵀ when m is even and Λ_o := [µ, µ³, …, µ^m]ᵀ when m is odd, and choosing the weight vector w := [1, 1, …, 1]ᵀ, we have the following result, similar to Theorem 4.3.8 of [2].

COROLLARY 4.10. Let L ∈ L_m(C^{n×n}) be a T-odd matrix polynomial of the form L(s) = Σ_{j=0}^m s^jA_j ∈ C^{n×n} with det(A_m) ≠ 0, let µ ∈ C\{0}, let x ∈ C^n be such that x^Hx = 1, and set k := L(µ)x.
i) The structured backward error with respect to the Frobenius norm is given by η^S_F(µ,x,L) = √( |xᵀk|²/‖Λ_o‖² + 2(‖k‖² − |xᵀk|²)/‖Λ‖² ).
ii) The structured backward error with respect to the spectral norm is given by η^S_2(µ,x,L) = √( |xᵀk|²/‖Λ_o‖² + (‖k‖² − |xᵀk|²)/‖Λ‖² ).
In particular, for odd m and |µ| = 1 we have ‖Λ‖ = √2‖Λ_o‖, and for the Frobenius norm η^S_F(µ,x,L) = √2 η_2(µ,x,L), while for the spectral norm η^S_2(µ,x,L) = √(‖k‖² + |xᵀk|²)/‖Λ‖.

Defining the perturbation matrices

ΔA_j := −[ (µ̄^j/‖Λ_o‖²)(xᵀk)(x̄x^H) + (µ̄^j/‖Λ‖²)((I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H)) ] for odd j,
ΔA_j := −(µ̄^j/‖Λ‖²)[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)] for even j,

then ΔL(s) = Σ_{j=0}^m s^jΔA_j is the uniquely defined T-odd matrix polynomial such that (L(µ) + ΔL(µ))x = 0 and ‖ΔL‖_F = η^S_F(µ,x,L) in the Frobenius norm. For the spectral norm, we introduce the perturbation matrices

ΔE_j := ΔA_j + (µ̄^j/‖Λ_o‖²) conj(xᵀk)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |xᵀk|²) for odd j,
ΔE_j := ΔA_j for even j.

Then ΔL(s) = Σ_{j=0}^m s^jΔE_j is a T-odd matrix polynomial such that (L(µ) + ΔL(µ))x = 0 and ‖ΔL‖₂ = η^S_2(µ,x,L).
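Analogously, the T-odd pencil construction (skew-symmetric A_0, symmetric A_1, with the projection term now attached to the odd coefficient) can be sanity-checked numerically. As before, this is a hedged sketch under simplifying assumptions, not the paper's exact weighted scaling: weights w_j collapsed to 1, scalar coefficients z_j = conj(p_j)/‖p‖² for p = (λ, µ), and all variable names ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
lam, mu = 0.5j, 1.25 - 0.75j               # hypothetical eigenvalue pair with mu != 0

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = B + B.T                               # complex symmetric coefficient (odd j)
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A0 = C - C.T                               # complex skew-symmetric coefficient (even j)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                     # x^H x = 1
k = (lam * A0 + mu * A1) @ x               # residual of the approximate eigenpair

xb, I = x.conj(), np.eye(n)
p = np.array([lam, mu])
z = p.conj() / np.linalg.norm(p) ** 2      # ensures p[0]*z[0] + p[1]*z[1] == 1

sym = (I - np.outer(xb, x)) @ np.outer(k, xb) + np.outer(xb, k) @ (I - np.outer(x, xb))
skw = (I - np.outer(xb, x)) @ np.outer(k, xb) - np.outer(xb, k) @ (I - np.outer(x, xb))

dA1 = -((x @ k) / mu * np.outer(xb, xb) + z[1] * sym)    # stays complex symmetric
dA0 = -z[0] * skw                                        # stays skew-symmetric

assert np.allclose(dA0, -dA0.T)                          # T-odd structure preserved
assert np.allclose(dA1, dA1.T)
assert np.allclose((lam * (A0 + dA0) + mu * (A1 + dA1)) @ x, 0)  # exact eigenpair
```

Compared with the T-even sketch, only the roles of the coefficients are exchanged: the symmetric template and the projection term x̄(xᵀk)x^H move to A_1, the skew template to A_0.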

Proof. The proof follows from Theorem 4.8 using the fact that ‖H_{w^{−1}}(1,µ)‖₂ = ‖Λ‖ and ‖N_{w^{−1}}(1,µ)‖₂ = ‖Λ_o‖ when w = [1, 1, …, 1]ᵀ and c = 1.

REMARK 4.11. The case µ = 0 is not covered by the formulas in Corollary 4.10 for m > 1. But it has been shown in Theorem 4.3.8 of [2] that for µ = 0 one has η^S_F(µ,x,L) = √2 η_2(µ,x,L) and η^S_2(µ,x,L) = η_2(µ,x,L), respectively, for the Frobenius norm and the spectral norm. For |µ| = 1 and the spectral norm we have η^S_2(µ,x,L) = √(‖k‖² + |xᵀk|²)/‖Λ‖, while again it has been shown in Theorem 4.3.8 of [2] that η^S_2(µ,x,L) = η_2(µ,x,L).

For the pencil case we have the following corollary.

COROLLARY 4.12. Let L(c,s) = cA_0 + sA_1 ∈ L_1(C^{n×n}) be a T-odd matrix pencil, let (λ,µ) ∈ C²\{(0,0)}, let x ∈ C^n be such that x^Hx = 1, and set k := L(λ,µ)x.
i) The structured backward error with respect to the Frobenius norm is given by

η^S_{w,F}(λ,µ,x,L) = √( |xᵀA_1x|² + 2(‖k‖² − |µ|²|xᵀA_1x|²)/‖[λ,µ]ᵀ‖² ) if µ ≠ 0,
η^S_{w,F}(λ,µ,x,L) = √(2‖k‖² − |xᵀk|²)/‖[λ,µ]ᵀ‖ if λ = 0,
η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L) if µ = 0,
η^S_{w,F}(λ,µ,x,L) = √2 η_{w,2}(λ,µ,x,L) if |λ| = |µ| = 1.

ii) The structured backward error with respect to the spectral norm is given by

η^S_{w,2}(λ,µ,x,L) = √( |xᵀA_1x|² + (‖k‖² − |µ|²|xᵀA_1x|²)/‖[λ,µ]ᵀ‖² ) if µ ≠ 0,
η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L) if λ = 0, µ ≠ 0,
η^S_{w,2}(λ,µ,x,L) = η_{w,2}(λ,µ,x,L) if λ ≠ 0, µ = 0,
η^S_{w,2}(λ,µ,x,L) = √(‖k‖² + |xᵀk|²)/‖[λ,µ]ᵀ‖ if |λ| = |µ| = 1.

iii) Introduce the perturbation matrices

ΔA_0 := −z_{A_0}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)],
ΔA_1 := −sign(µ)x̄(xᵀA_1x)x^H − z_{A_1}[(I − x̄xᵀ)kx^H + x̄kᵀ(I − xx^H)].

Then for the Frobenius norm we obtain the unique T-odd pencil ΔL(c,s) = cΔA_0 + sΔA_1 such that (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖_F = η^S_F(λ,µ,x,L). For the spectral norm, defining

ΔE_1 := ΔA_1 + sign(µ)(µ̄²/|µ|²) conj(xᵀA_1x)(I − x̄xᵀ)kkᵀ(I − xx^H)/(‖k‖² − |µ|²|xᵀA_1x|²) and ΔE_0 := ΔA_0,

we obtain a T-odd pencil ΔL(c,s) = cΔE_0 + sΔE_1 with (L(λ,µ) + ΔL(λ,µ))x = 0 and ‖ΔL‖₂ = η^S_2(λ,µ,x,L).
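All of these constructions rest on the scalar normalization Σ_{j=0}^m λ^{m−j}µ^j z_{A_j} = 1, which is what makes the residual cancel. The following minimal unstructured sketch shows the mechanism for a general degree (hypothetical random data; weights w_j = 1, so z_j = conj(λ^{m−j}µ^j)/‖Λ‖²):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
lam, mu = 2.0 + 1.0j, 1.0 - 0.5j            # hypothetical eigenvalue pair (λ, µ)
A = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
     for _ in range(m + 1)]                  # unstructured coefficients A_0, ..., A_m
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                       # x^H x = 1

p = np.array([lam ** (m - j) * mu ** j for j in range(m + 1)])  # λ^{m-j} µ^j
L = sum(p[j] * A[j] for j in range(m + 1))                      # L(λ, µ)
k = L @ x                                                       # residual

z = p.conj() / np.linalg.norm(p) ** 2        # z_j = conj(λ^{m-j} µ^j) / ||Λ||^2
assert np.isclose(np.sum(p * z), 1.0)        # the normalization Σ_j p_j z_j = 1

dA = [-z[j] * np.outer(k, x.conj()) for j in range(m + 1)]      # ΔA_j = -z_j k x^H
Lp = sum(p[j] * (A[j] + dA[j]) for j in range(m + 1))
assert np.allclose(Lp @ x, 0)                # (L + ΔL)(λ, µ) x = 0
```

In the structured theorems the same identity is imposed separately on the even-indexed and odd-indexed parts (through the coefficients k_{A_j} and n_{A_j}), which is why the even/odd case split appears throughout this section.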

TABLE 4.2. Computed structured and unstructured backward errors for Example 4.13, for (λ,µ) ∈ {(0,1), (1,0), (2,1), (4,3), (i,i), (2+3i,1+i), (1,2), (1,1)}, S = T-odd; columns η^S_2(λ,µ,x,L), η^S_F(λ,µ,x,L), η_2(λ,µ,x,L). (The numerical entries are not recoverable from this transcription.)

Proof. The proof is analogous to that of Theorem 4.2, using m = 1 and w := [1,1]ᵀ.

By the above results it is clear that if µ = 0, then for the T-odd case we have ΔA_1 = 0 and ΔA_0 = −z_{A_0}[(I − x̄xᵀ)kx^H − x̄kᵀ(I − xx^H)]. These perturbations are the same for the spectral and Frobenius norm. Furthermore, Corollary 4.12 shows that

η^S_{w,F}(λ,µ,x,L) ≤ √2 η_{w,2}(λ,µ,x,L) when |µ| ≥ |λ|,
η^S_{w,F}(λ,µ,x,L) ≤ (‖[λ,µ]ᵀ‖/|µ|) η_{w,2}(λ,µ,x,L) when |µ| < |λ|.

Now consider a pencil L(z) = A_0 + zA_1 ∈ L_1(C^{n×n}). Then for given µ ∈ C and x ∈ C^n such that x^Hx = 1, we have

η^S_F(µ,x,L) ≤ √2 η_2(µ,x,L) when |µ| ≥ 1,
η^S_F(µ,x,L) ≤ ‖[1, µ^{−1}]ᵀ‖ η_2(µ,x,L) when |µ| < 1,

which has been shown in [3]. As another corollary we obtain the results for T-odd matrix pencils L(z) := A_0 + zA_1 presented in [2].

Let us illustrate these perturbation results with a few examples.

EXAMPLE 4.13. Consider a 2×2 T-odd matrix pencil with a skew-symmetric coefficient A_0 and a symmetric coefficient A_1, a unit vector x, and (λ,µ) = (0,1). (The numerical entries of this example are not recoverable from this transcription.)
i) For the Frobenius norm we obtain minimal perturbation coefficients ΔA_0 (skew-symmetric) and ΔA_1 (symmetric) with ‖ΔL‖_F = η^S_F(λ,µ,x,L).

ii) For the spectral norm we obtain perturbation coefficients ΔA_0 and ΔA_1 with ‖ΔL‖₂ = η^S_2(λ,µ,x,L); see also Table 4.2.

5. Conclusion. Structured backward errors for an approximate eigenpair and the construction of minimal structured perturbations ΔL, such that the approximate eigenpair of L becomes exact for L + ΔL in the Frobenius and the spectral norm, were introduced in [1, 2, 3, 4]. However, that theory was based on the condition that the polynomial eigenvalue problem has no eigenvalue at ∞; also, for the T-odd matrix pencil case there was no information on the backward error for the eigenvalue 0. In this paper we have extended these results to the homogeneous setup of matrix polynomials, which is a more convenient framework for the general perturbation analysis of matrix polynomials in that it treats all eigenvalues of a regular matrix polynomial alike. We have presented a systematic general procedure for the construction of appropriately structured minimal norm polynomials ΔL ∈ L_m(C^{n×n}) such that an approximate eigenvalue and eigenvector become exact ones of the polynomial L + ΔL. The resulting minimal perturbation is unique in the case of the Frobenius norm, while there are infinitely many solutions in the case of the spectral norm. Furthermore, we have derived the known results for matrix pencils and polynomials of [2, 3, 4] as corollaries, and we have illustrated the results with several examples.

REFERENCES

[1] B. Adhikari, Backward errors and linearizations for palindromic matrix polynomials, Preprint.
[2] B. Adhikari, Backward perturbation and sensitivity analysis of structured polynomial eigenvalue problem, PhD Thesis, Dept. of Mathematics, IIT Guwahati, India, 2008.
[3] B. Adhikari and R. Alam, Structured backward errors and pseudospectra of structured matrix pencils, SIAM J. Matrix Anal. Appl., 31 (2009).
[4] B. Adhikari and R. Alam, On backward errors of structured polynomial eigenproblems solved by structure preserving linearizations, Linear Algebra Appl., 434 (2011).
[5] S. S. Ahmad, Pseudospectra of matrix pencils and their applications in the perturbation analysis of eigenvalues and eigendecompositions, PhD Thesis, Dept. of Mathematics, IIT Guwahati, India, 2007.
[6] S. S. Ahmad and R. Alam, Pseudospectra, critical points and multiple eigenvalues of matrix polynomials, Linear Algebra Appl., 430 (2009).
[7] S. S. Ahmad, R. Alam, and R. Byers, On pseudospectra, critical points and multiple eigenvalues of matrix pencils, SIAM J. Matrix Anal. Appl., 31 (2010).
[8] P. Arbenz and O. Chinellato, On solving complex-symmetric eigenvalue problems arising in the design of axisymmetric VCSEL devices, Appl. Numer. Math., 58 (2008).
[9] T. Betcke, N. J. Higham, V. Mehrmann, C. Schröder, and F. Tisseur, NLEVP: A collection of nonlinear eigenvalue problems, MIMS EPrint, The University of Manchester.
[10] F. Blömeling and H. Voss, Model reduction methods for solving symmetric rational eigenvalue problems, Proc. Appl. Math. Mech., 4 (2004).
[11] S. Bora, Structured eigenvalue condition number and backward error of a class of polynomial eigenvalue problems, SIAM J. Matrix Anal. Appl., 31 (2009).
[12] S. Bora and V. Mehrmann, Perturbation theory for structured matrix pencils arising in control theory, SIAM J. Matrix Anal. Appl., 28 (2006).
[13] T. Brüll and V. Mehrmann, STCSSP: A FORTRAN 77 routine to compute a structured staircase form for a (skew-)symmetric/(skew-)symmetric pencil, Preprint, Institut für Mathematik, TU Berlin.
[14] R. Byers, V. Mehrmann, and H. Xu, A structured staircase algorithm for skew-symmetric/symmetric pencils, Electron. Trans. Numer. Anal., 26 (2007).


More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

Tropical roots as approximations to eigenvalues of matrix polynomials. Noferini, Vanni and Sharify, Meisam and Tisseur, Francoise

Tropical roots as approximations to eigenvalues of matrix polynomials. Noferini, Vanni and Sharify, Meisam and Tisseur, Francoise Tropical roots as approximations to eigenvalues of matrix polynomials Noferini, Vanni and Sharify, Meisam and Tisseur, Francoise 2014 MIMS EPrint: 2014.16 Manchester Institute for Mathematical Sciences

More information

Generic rank-k perturbations of structured matrices

Generic rank-k perturbations of structured matrices Generic rank-k perturbations of structured matrices Leonhard Batzke, Christian Mehl, André C. M. Ran and Leiba Rodman Abstract. This paper deals with the effect of generic but structured low rank perturbations

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 436 (2012) 3954 3973 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Hermitian matrix polynomials

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

Is there a Small Skew Cayley Transform with Zero Diagonal?

Is there a Small Skew Cayley Transform with Zero Diagonal? Is there a Small Skew Cayley Transform with Zero Diagonal? Abstract The eigenvectors of an Hermitian matrix H are the columns of some complex unitary matrix Q. For any diagonal unitary matrix Ω the columns

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 28, No. 4, pp. 029 05 c 2006 Society for Industrial and Applied Mathematics STRUCTURED POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D. STEVEN

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

First, we review some important facts on the location of eigenvalues of matrices.

First, we review some important facts on the location of eigenvalues of matrices. BLOCK NORMAL MATRICES AND GERSHGORIN-TYPE DISCS JAKUB KIERZKOWSKI AND ALICJA SMOKTUNOWICZ Abstract The block analogues of the theorems on inclusion regions for the eigenvalues of normal matrices are given

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

KU Leuven Department of Computer Science

KU Leuven Department of Computer Science Backward error of polynomial eigenvalue problems solved by linearization of Lagrange interpolants Piers W. Lawrence Robert M. Corless Report TW 655, September 214 KU Leuven Department of Computer Science

More information

EIGENVALUE BACKWARD ERRORS OF POLYNOMIAL EIGENVALUE PROBLEMS UNDER STRUCTURE PRESERVING PERTURBATIONS. Punit Sharma

EIGENVALUE BACKWARD ERRORS OF POLYNOMIAL EIGENVALUE PROBLEMS UNDER STRUCTURE PRESERVING PERTURBATIONS. Punit Sharma EIGENVALUE BACKWARD ERRORS OF POLYNOMIAL EIGENVALUE PROBLEMS UNDER STRUCTURE PRESERVING PERTURBATIONS Ph.D. Thesis Punit Sharma DEPARTMENT OF MATHEMATICS INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI August,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

A Framework for Structured Linearizations of Matrix Polynomials in Various Bases

A Framework for Structured Linearizations of Matrix Polynomials in Various Bases A Framework for Structured Linearizations of Matrix Polynomials in Various Bases Leonardo Robol Joint work with Raf Vandebril and Paul Van Dooren, KU Leuven and Université

More information

Multiparameter eigenvalue problem as a structured eigenproblem

Multiparameter eigenvalue problem as a structured eigenproblem Multiparameter eigenvalue problem as a structured eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana This is joint work with M Hochstenbach Będlewo, 2932007 1/28 Overview Introduction

More information

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

POLYNOMIAL INTERPOLATION IN NONDIVISION ALGEBRAS

POLYNOMIAL INTERPOLATION IN NONDIVISION ALGEBRAS Electronic Transactions on Numerical Analysis. Volume 44, pp. 660 670, 2015. Copyright c 2015,. ISSN 1068 9613. ETNA POLYNOMIAL INTERPOLATION IN NONDIVISION ALGEBRAS GERHARD OPFER Abstract. Algorithms

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that

More information

1 Sylvester equations

1 Sylvester equations 1 Sylvester equations Notes for 2016-11-02 The Sylvester equation (or the special case of the Lyapunov equation) is a matrix equation of the form AX + XB = C where A R m m, B R n n, B R m n, are known,

More information

Contour integral solutions of Sylvester-type matrix equations

Contour integral solutions of Sylvester-type matrix equations Contour integral solutions of Sylvester-type matrix equations Harald K. Wimmer Mathematisches Institut, Universität Würzburg, 97074 Würzburg, Germany Abstract The linear matrix equations AXB CXD = E, AX

More information

On an uniqueness theorem for characteristic functions

On an uniqueness theorem for characteristic functions ISSN 392-53 Nonlinear Analysis: Modelling and Control, 207, Vol. 22, No. 3, 42 420 https://doi.org/0.5388/na.207.3.9 On an uniqueness theorem for characteristic functions Saulius Norvidas Institute of

More information

Structure preserving stratification of skew-symmetric matrix polynomials. Andrii Dmytryshyn

Structure preserving stratification of skew-symmetric matrix polynomials. Andrii Dmytryshyn Structure preserving stratification of skew-symmetric matrix polynomials by Andrii Dmytryshyn UMINF 15.16 UMEÅ UNIVERSITY DEPARTMENT OF COMPUTING SCIENCE SE- 901 87 UMEÅ SWEDEN Structure preserving stratification

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

Polynomial Eigenvalue Problems: Theory, Computation, and Structure. Mackey, D. S. and Mackey, N. and Tisseur, F. MIMS EPrint: 2015.

Polynomial Eigenvalue Problems: Theory, Computation, and Structure. Mackey, D. S. and Mackey, N. and Tisseur, F. MIMS EPrint: 2015. Polynomial Eigenvalue Problems: Theory, Computation, and Structure Mackey, D. S. and Mackey, N. and Tisseur, F. 2015 MIMS EPrint: 2015.29 Manchester Institute for Mathematical Sciences School of Mathematics

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

A Note on Simple Nonzero Finite Generalized Singular Values

A Note on Simple Nonzero Finite Generalized Singular Values A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero

More information

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis

Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Finite dimensional indefinite inner product spaces and applications in Numerical Analysis Christian Mehl Technische Universität Berlin, Institut für Mathematik, MA 4-5, 10623 Berlin, Germany, Email: mehl@math.tu-berlin.de

More information

Skew-Symmetric Matrix Polynomials and their Smith Forms

Skew-Symmetric Matrix Polynomials and their Smith Forms Skew-Symmetric Matrix Polynomials and their Smith Forms D. Steven Mackey Niloufer Mackey Christian Mehl Volker Mehrmann March 23, 2013 Abstract Two canonical forms for skew-symmetric matrix polynomials

More information

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION.

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. M.I. BUENO AND S. FURTADO Abstract. Many applications give rise to structured, in particular

More information

Optimization Based Output Feedback Control Design in Descriptor Systems

Optimization Based Output Feedback Control Design in Descriptor Systems Trabalho apresentado no XXXVII CNMAC, S.J. dos Campos - SP, 017. Proceeding Series of the Brazilian Society of Computational and Applied Mathematics Optimization Based Output Feedback Control Design in

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION SOLVNG RATONAL EGENVALUE PROBLEMS VA LNEARZATON YANGFENG SU AND ZHAOJUN BA Abstract Rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical

More information

Structured Condition Numbers of Symmetric Algebraic Riccati Equations

Structured Condition Numbers of Symmetric Algebraic Riccati Equations Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems

Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan April 28, 2011 T.M. Huang (Taiwan Normal Univ.)

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

More information

Orthogonal Symmetric Toeplitz Matrices

Orthogonal Symmetric Toeplitz Matrices Orthogonal Symmetric Toeplitz Matrices Albrecht Böttcher In Memory of Georgii Litvinchuk (1931-2006 Abstract We show that the number of orthogonal and symmetric Toeplitz matrices of a given order is finite

More information

S.F. Xu (Department of Mathematics, Peking University, Beijing)

S.F. Xu (Department of Mathematics, Peking University, Beijing) Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 28, No 4, pp 971 1004 c 2006 Society for Industrial and Applied Mathematics VECTOR SPACES OF LINEARIZATIONS FOR MATRIX POLYNOMIALS D STEVEN MACKEY, NILOUFER MACKEY, CHRISTIAN

More information

PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS

PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D STEVEN MACKEY, NILOUFER MACKEY, CHRISTIAN MEHL, AND VOLKER MEHRMANN Abstract Palindromic polynomial eigenvalue problems

More information

Iyad T. Abu-Jeib and Thomas S. Shores

Iyad T. Abu-Jeib and Thomas S. Shores NEW ZEALAND JOURNAL OF MATHEMATICS Volume 3 (003, 9 ON PROPERTIES OF MATRIX I ( OF SINC METHODS Iyad T. Abu-Jeib and Thomas S. Shores (Received February 00 Abstract. In this paper, we study the determinant

More information

Jordan Structures of Alternating Matrix Polynomials

Jordan Structures of Alternating Matrix Polynomials Jordan Structures of Alternating Matrix Polynomials D. Steven Mackey Niloufer Mackey Christian Mehl Volker Mehrmann August 17, 2009 Abstract Alternating matrix polynomials, that is, polynomials whose coefficients

More information

Solving Polynomial Eigenproblems by Linearization

Solving Polynomial Eigenproblems by Linearization Solving Polynomial Eigenproblems by Linearization Nick Higham School of Mathematics University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey and Françoise

More information

A Continuation Approach to a Quadratic Matrix Equation

A Continuation Approach to a Quadratic Matrix Equation A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations

Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations Mackey, D. Steven and Mackey, Niloufer and Mehl, Christian and Mehrmann, Volker 006 MIMS EPrint: 006.8 Manchester Institute

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices Applied Mathematics Letters 25 (202) 2339 2343 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Comparison theorems for a subclass

More information

Banach Journal of Mathematical Analysis ISSN: (electronic)

Banach Journal of Mathematical Analysis ISSN: (electronic) Banach J. Math. Anal. 6 (2012), no. 1, 139 146 Banach Journal of Mathematical Analysis ISSN: 1735-8787 (electronic) www.emis.de/journals/bjma/ AN EXTENSION OF KY FAN S DOMINANCE THEOREM RAHIM ALIZADEH

More information