Structured Backward Error for Palindromic Polynomial Eigenvalue Problems

Ren-Cang Li, Wen-Wei Lin, Chern-Shuh Wang

Technical Report

September 5, 2008

Abstract

A detailed structured backward error analysis is developed for four kinds of Palindromic Polynomial Eigenvalue Problems (PPEP)

    $\Big(\sum_{l=0}^{d} A_l \lambda^l\Big)x = 0, \qquad A_{d-l} = \varepsilon A_l^\star \ \text{for } l = 0,1,\dots,\lfloor d/2\rfloor,$

where $\star$ is one of the two actions, transpose and conjugate transpose, and $\varepsilon \in \{\pm 1\}$. Each of them has its own application background, with the case of transpose and $\varepsilon = 1$ attracting a great deal of attention lately because of its application in fast train modeling. Computable formulas and bounds for the optimal structured backward errors are obtained. The analysis reveals distinctive features of PPEP compared with the usual Polynomial Eigenvalue Problems (PEP) as investigated by Tisseur (Linear Algebra Appl., 309, 2000) and by Liu and Wang (Appl. Math. Comput., 165, 2005).

1 Introduction

In the vibration analysis of fast trains arises the Palindromic Quadratic Eigenvalue Problem (PQEP) [2, 9, 11]: find a scalar $\lambda \in \mathbb{C}$ and an $n$-dimensional vector $x \in \mathbb{C}^n \setminus \{0\}$ such that

    $Q(\lambda)x \equiv (A_0^{\mathrm T}\lambda^2 + A_1\lambda + A_0)x = 0,$  (1.1)

where $A_0$ and $A_1$ are $n \times n$ (real or complex) matrices with $A_1^{\mathrm T} = A_1$, and the superscript ${}^{\mathrm T}$ takes matrix transpose. The scalar $\lambda$ and the nonzero vector $x$ in (1.1) are an

Department of Mathematics, University of Texas at Arlington, P.O. Box 19408, Arlington, TX 76019, USA; rcli@uta.edu. Supported in part by the National Science Foundation under Grant DMS and by the National Center for Theoretical Sciences at National Tsing-Hua University, Taiwan, where this work was initiated while this author was visiting the center.
Department of Applied Mathematics, National Chiao Tung University, Hsinchu 300, Taiwan; wwlin@math.nctu.edu.tw.
Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan; cswang@math.ncku.edu.tw.

eigenvalue and its associated eigenvector of $Q(\lambda)$. The pair $\{\lambda, x\}$ is called an eigenpair of $Q(\lambda)$. Presently, PQEP is solved by certain kinds of structure-preserving linearization, such as the Jacobi-type method [6, 12], the QR-like algorithm [13], Patel's algorithm [8], and the URV-like algorithm [14], as well as the structure-preserving doubling algorithm [2]. Experience shows that the PQEP from the fast train application is notoriously difficult to solve accurately, partly because of the widely varying magnitudes of its eigenvalues. How to solve it accurately and robustly is still an active research topic.

PQEP is a particular case of the so-called Polynomial Eigenvalue Problem (PEP) of degree $d$:

    $\Big(\sum_{l=0}^{d} A_l \lambda^l\Big)x = 0,$  (1.2)

where $A_l$, $l = 0,1,\dots,d$, are $n \times n$ complex matrices. When $d = 2$, PEP (1.2) is called the Quadratic Eigenvalue Problem (QEP) [16]. For a PEP in its full generality there is no relation among the coefficient matrices, unlike for PQEP. Naturally one defines the Palindromic Polynomial Eigenvalue Problem (PPEP) to be a PEP (1.2) with $A_{d-l} = A_l^{\mathrm T}$, $l = 0,1,\dots,\lfloor d/2\rfloor$, namely

    $P(\lambda)x \equiv \Big(\sum_{l=0}^{d} A_l \lambda^l\Big)x = 0, \qquad A_{d-l} = A_l^{\mathrm T} \ \text{for } l = 0,1,\dots,\lfloor d/2\rfloor,$  (1.3)

where $\lfloor d/2\rfloor$ is the largest integer that is no bigger than $d/2$.

Let $\{\mu, z\}$ be a computed eigenpair of (1.2) or (1.3). Ideally, we would like to have $\big(\sum_{l=0}^{d} A_l \mu^l\big)z = 0$. But in practice we have $\big(\sum_{l=0}^{d} A_l \mu^l\big)z = r$ with residual $r \ne 0$ but tiny. Backward error analysis asks whether the computed eigenpair $\{\mu, z\}$ is an exact eigenpair of a nearby PEP, namely

    $\Big(\sum_{l=0}^{d} (A_l + \rho_l \Delta A_l)\mu^l\Big)z = 0$

with backward error matrices $\Delta A_l$ ($l = 0,1,\dots,d$) hopefully tiny in magnitude, where the $\rho_l$ ($l = 0,1,\dots,d$) are scaling parameters, usually taken to be some norms of the respective $A_l$. For a general PEP, no constraints on and/or among the $\Delta A_l$, $l = 0,1,\dots,d$, are imposed, except that the $\Delta A_l$ should have as tiny magnitude as possible. But when a PEP has worthy structure, like PPEP, it is natural to ask whether the computed eigenpair $\{\mu, z\}$ is an exact eigenpair of a nearby PPEP.
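The residual that drives the backward error analysis above is simple to form. The following Python sketch (not part of the report; the matrices and eigenpair are illustrative toys) computes $r = \big(\sum_l A_l\mu^l\big)z$ for a given approximate eigenpair:

```python
import numpy as np

def pep_residual(coeffs, mu, z):
    # r = (sum_l A_l mu^l) z for the PEP (1.2); coeffs = [A_0, A_1, ..., A_d]
    n = coeffs[0].shape[0]
    P = np.zeros((n, n), dtype=complex)
    for l, A in enumerate(coeffs):
        P = P + A * mu**l
    return P @ z

# toy check: for an exact eigenpair the residual vanishes
A0 = np.diag([2.0, 3.0])
A1 = -np.eye(2)
# (A0 + mu*A1) z = 0 with mu = 2 and z = e_1
r = pep_residual([A0, A1], 2.0, np.array([1.0, 0.0]))
print(np.linalg.norm(r))  # -> 0.0
```

For a computed eigenpair, $\|r\|$ is small but nonzero, and the analysis in the paper asks how small the structured perturbations $\Delta A_l$ can be made.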
This is the so-called structured backward error analysis for PPEP. Tisseur [15] thoroughly developed a backward error analysis for PEP in its full generality, where no structure is imposed upon $\Delta A_l$ because none is assumed on $A_l$. When $A_l$, $l = 0,1,\dots,d$, are Hermitian, naturally one would like to enforce that the $\Delta A_l$ be Hermitian,

too. This is the structured backward error analysis for that case. In the same article, Tisseur obtained a result for when $\mu$ is real. When $\mu$ is complex, the structured backward error analysis was completed by Liu and Wang [10], who also investigated the corresponding question for the generalized and multi-parameter eigenvalue problems. Two other related works are Higham and Higham [3] and Hochstenbach and Plestenjak [7]. None of the results from these papers apply to the structured backward error analysis of PPEP, however.

In this paper, we shall consider PPEP in a broader sense, with (1.3) being one of four different kinds. To present these PPEP compactly, we let the superscript $\star$ be either ${}^{\mathrm T}$ (transpose) or ${}^{\mathrm H}$ (conjugate transpose), and let $\varepsilon \in \{\pm 1\}$. Collectively, by a PPEP we mean a polynomial eigenvalue problem of the following form:

    $\Big(\sum_{l=0}^{d} A_l \lambda^l\Big)x = 0, \qquad A_{d-l} = \varepsilon A_l^\star \ \text{for } l = 0,1,\dots,\lfloor d/2\rfloor.$  (1.4)

We have mentioned the fast train application [2, 9, 11], which yields a problem of this form with $d = 2$, $\star = \mathrm T$, and $\varepsilon = 1$. It was first raised in the study of vibration in the structural analysis of fast trains [5, 6], and then in the study of the behavior of periodic surface acoustic wave (SAW) filters [18]. PQEP with $\star = \mathrm H$ arose in the computation of the Crawford number by the bisection and level set methods in [4]. A typical even degree PPEP of the form (1.4) can be derived from solving the linear quadratic discrete-time optimal control problem subject to a higher order discrete system [1, 17].

Let $\{\mu, z\}$ be a computed eigenpair and let $\|\cdot\|$ be either the spectral norm $\|\cdot\|_2$ or the Frobenius norm $\|\cdot\|_{\mathrm F}$. We are interested in knowing the optimal structured backward error

    $\min \sum_{l=0}^{\lfloor d/2\rfloor} \|\Delta A_l\|^2 \quad \text{subject to} \quad \Big(\sum_{l=0}^{d} (A_l + \rho_l \Delta A_l)\mu^l\Big)z = 0, \quad \rho_{d-l} = \rho_l \ge 0, \quad A_{d-l} + \rho_{d-l}\Delta A_{d-l} = \varepsilon (A_l + \rho_l \Delta A_l)^\star, \ \text{for } l = 0,1,\dots,\lfloor d/2\rfloor.$  (1.5)

In the usual PEP case, backward errors always exist.
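The structural constraint in (1.4) can be checked numerically. The following Python sketch (not from the paper; random matrices are used only for illustration) builds a $\mathrm T$-palindromic quadratic and verifies the reversal identity $\lambda^d\,P(1/\lambda)^\star = \varepsilon P(\lambda)$ that the constraint induces:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 4, 2, 1

# build a T-palindromic coefficient set: A_{d-l} = eps * A_l^T (star = transpose)
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = A1 + eps * A1.T          # enforce A1^T = eps * A1
As = [A0, A1, eps * A0.T]     # A2 = eps * A0^T

P = lambda lam: sum(A * lam**l for l, A in enumerate(As))

# (1.4) forces lam^d * P(1/lam)^T = eps * P(lam),
# which is why eigenvalues occur in (lam, 1/lam) pairs
lam = 0.7 - 1.3j
print(np.allclose(lam**d * P(1 / lam).T, eps * P(lam)))  # -> True
```

The analogous check for $\star = \mathrm H$ replaces `.T` with conjugate transposition and $1/\lambda$ with $1/\bar\lambda$.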
But as one might expect, forcing the backward perturbations to be structured creates critical cases where backward errors fail to exist, i.e., there are no $\Delta A_l$, $l = 0,1,\dots,d$, satisfying the constraints in (1.5). In fact, as we shall see, the critical cases occur when $|\mu| = 1$ if $\star = \mathrm H$ and when $\mu = \pm 1$ if $\star = \mathrm T$.

The rest of this paper is organized as follows. Section 2 significantly simplifies (1.5) to a problem of solving a 2-by-2 matrix equation, which is solved in Section 3 for $\star = \mathrm H$ and in Section 4 for $\star = \mathrm T$. It turns out that the case $\star = \mathrm H$ is considerably more complicated and interesting than the case $\star = \mathrm T$. Numerical examples are given in Section 5 to illustrate our results, and finally we present a few concluding remarks in Section 6.

Notation. Throughout this paper, $\mathbb{C}^{n\times m}$ is the set of all $n \times m$ complex matrices, $\mathbb{C}^n = \mathbb{C}^{n\times 1}$, and $\mathbb{C} = \mathbb{C}^1$. Similarly define $\mathbb{R}^{n\times m}$, $\mathbb{R}^n$, and $\mathbb{R}$, except replacing the word complex by real. $I_n$ (or simply $I$ if its dimension is clear from the context) is the $n \times n$ identity matrix. $\bar x$ is the complex conjugate of a scalar $x$, or the entry-wise complex conjugate

¹ The notation $\|\cdot\|_2$ is generic; on a vector it is its $\ell_2$-norm.

of a vector or matrix. $\Re(\alpha)$ and $\Im(\alpha)$ are the real and imaginary parts of $\alpha$, respectively, and $\iota$ is the imaginary unit. We shall also adopt MATLAB-like conventions to access the entries of vectors and matrices: $i:j$ is the set of integers from $i$ to $j$ inclusive, and $i:i = \{i\}$. For a matrix $X$, $X_{(i,j)}$ is $X$'s $(i,j)$th entry, and $X_{(k:l,i:j)}$ is the submatrix of $X$ consisting of the intersections of rows $k$ to $l$ and columns $i$ to $j$. $X^\dagger$ is $X$'s Moore-Penrose inverse.

2 Simplify the Problem

The constraints in (1.5) require $\rho_{d-l} = \rho_l$ and $\Delta A_{d-l} = \varepsilon \Delta A_l^\star$ for $l = 0,1,\dots,\lfloor d/2\rfloor$, together with $\big(\sum_{l=0}^{d}\rho_l\Delta A_l\mu^l\big)z = -r$. Since replacing every $\Delta A_l$ by $-\Delta A_l$ changes neither the structure constraints nor the norms, we may drop the minus sign and work with

    $\Big(\sum_{l=0}^{d} \rho_l \Delta A_l \mu^l\Big)z = r, \qquad r \stackrel{\mathrm{def}}{=} \Big(\sum_{l=0}^{d} A_l \mu^l\Big)z,$  (2.1)

where $\Delta A_l \in \mathbb{C}^{n\times n}$ ($l = 0,1,\dots,d$). We will investigate whether (2.1) has a solution $\{\Delta A_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ and, if it does, seek $\Delta A_l$ such that

    $\sum_{l=0}^{\lfloor d/2\rfloor} \|\Delta A_l\|^2 = \min,$  (2.2)

where $\|\cdot\|$ is either $\|\cdot\|_2$ or $\|\cdot\|_{\mathrm F}$. The two trivial cases, (i) when all $\rho_l = 0$ and (ii) when $r = 0$, are excluded from our consideration. Assume $n \ge 2$; later we shall comment on which part of our analysis covers the case $n = 1$, i.e., the scalar polynomial case.

Let $Q \in \mathbb{C}^{n\times n}$ be a unitary matrix, i.e., $QQ^{\mathrm H} = I_n$, such that

    $Q^{\mathrm H}\begin{pmatrix} z & \tilde r\end{pmatrix} = \begin{pmatrix} \alpha & \gamma \\ 0 & \beta \\ 0 & 0 \end{pmatrix},$  (2.3)

where

    $\tilde r = r \ \text{if } \star = \mathrm H, \quad \text{and} \quad \tilde r = \bar r \ \text{if } \star = \mathrm T.$  (2.4)

Such a $Q$ always exists, and necessarily (2.3) implies

    $\alpha = \|z\|_2, \qquad \gamma = \frac{z^{\mathrm H}\tilde r}{\bar\alpha},$  (2.5)

    $\beta = \frac{\sqrt{\|z\|_2^2\,\|\tilde r\|_2^2 - |z^{\mathrm H}\tilde r|^2}}{\|z\|_2}.$  (2.6)

It follows from (2.1) that

    $Q^\star\Big(\sum_{l=0}^{d} \rho_l \Delta A_l \mu^l\Big)Q\,Q^{\mathrm H} z = Q^\star r,$  (2.7)

or equivalently

    $\Big(\sum_{l=0}^{d} \rho_l B_l \mu^l\Big)y = w, \qquad \rho_{d-l} = \rho_l \ \text{and} \ B_{d-l} = \varepsilon B_l^\star \ \text{for } l = 0,1,\dots,\lfloor d/2\rfloor,$  (2.8)

where

    $B_l = Q^\star (\Delta A_l) Q, \qquad y = Q^{\mathrm H} z = \begin{pmatrix} \alpha \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad w = Q^\star r = \begin{pmatrix} \tilde\gamma \\ \tilde\beta \\ 0 \\ \vdots \\ 0 \end{pmatrix},$

with $\tilde\gamma = \gamma$ and $\tilde\beta = \beta$ if $\star = \mathrm H$, and $\tilde\gamma = \bar\gamma$ and $\tilde\beta = \bar\beta$ if $\star = \mathrm T$. This reduction technique of going from (2.1) to (2.8) by performing (2.3) is similar to that used in [10, p. 407], which came to our attention after this paper of ours was very much in its final stage. Since $Q$ is unitary, (2.1) and (2.8) have the same solvability property, i.e., if one is solvable, so is the other, and moreover

    $\sum_{l=0}^{\lfloor d/2\rfloor} \|B_l\|^2 = \sum_{l=0}^{\lfloor d/2\rfloor} \|\Delta A_l\|^2.$

Thus the optimal solution $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ to (2.8) in the sense of

    $\sum_{l=0}^{\lfloor d/2\rfloor} \|B_l\|^2 = \min$  (2.9)

gives rise to an optimal solution $\{\Delta A_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ to (2.1) in the sense of (2.2), and vice versa. Set

    $\delta_1 = \gamma/\alpha, \qquad \delta_2 = \beta/\alpha,$  (2.10)

and

    $\epsilon_p = \Big(\min\Big\{\sum_{l=0}^{\lfloor d/2\rfloor}\|B_l\|_p^2 : \text{(2.8) satisfied}\Big\}\Big)^{1/2} \quad \text{for } p = 2, \mathrm F.$  (2.11)

$\epsilon_p = +\infty$ means that (2.8) is not solvable. It follows from (2.5) and (2.6) that

    $|\delta_1| = \frac{|z^{\mathrm H}\tilde r|}{\|z\|_2^2}, \qquad \delta_2 = \frac{\sqrt{\|r\|_2^2\,\|z\|_2^2 - |z^{\mathrm H}\tilde r|^2}}{\|z\|_2^2}, \qquad |\delta_1|^2 + \delta_2^2 = \frac{\|r\|_2^2}{\|z\|_2^2}.$  (2.12)

In particular

    $\delta_1 = \frac{\gamma}{\alpha} = \frac{z^{\mathrm H} r}{\bar\alpha\alpha} = \frac{z^{\mathrm H} r}{\|z\|_2^2} \quad \text{for } \star = \mathrm H.$  (2.13)
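The scalars $\alpha$, $\gamma$, $\beta$, $\delta_1$, $\delta_2$ of the reduction can be computed directly from $z$ and $r$ via the closed forms (2.5)-(2.6), without forming $Q$ explicitly. A minimal Python sketch (not from the paper; random data for illustration, and shown for $\star = \mathrm H$, where $\tilde r = r$):

```python
import numpy as np

def reduce_pair(z, r, star='H'):
    # alpha, gamma, beta from (2.5)-(2.6); delta1, delta2 as in (2.10)
    rt = r if star == 'H' else r.conj()        # r~ as in (2.4)
    alpha = np.linalg.norm(z)                  # real and positive
    zr = z.conj() @ rt                         # z^H r~
    gamma = zr / alpha
    beta = np.sqrt(max(alpha**2 * np.linalg.norm(rt)**2 - abs(zr)**2, 0.0)) / alpha
    return alpha, gamma, beta, gamma / alpha, beta / alpha

rng = np.random.default_rng(1)
z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
r = rng.standard_normal(5) + 1j * rng.standard_normal(5)
alpha, gamma, beta, d1, d2 = reduce_pair(z, r, 'H')
# identity (2.12): |delta1|^2 + delta2^2 = ||r||^2 / ||z||^2
print(np.isclose(abs(d1)**2 + d2**2,
                 np.linalg.norm(r)**2 / np.linalg.norm(z)**2))  # -> True
```

For $\star = \mathrm T$ the returned first scalar corresponds to the conjugated quantity, which has the same modulus; only $|\delta_1|$ and $\delta_2$ enter the backward error formulas.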

Theorem 2.1. Suppose that (2.8) has a solution (and so does (2.1)). The optimal solution $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ to (2.8) in the sense of (2.9) for the Frobenius norm satisfies

    $(B_l)_{(i,j)} = 0 \quad \text{for } \max\{i,j\} > 2 \ \text{and for } l = 0,1,\dots,\lfloor d/2\rfloor.$  (2.14)

There is an optimal solution $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ in the sense of (2.9) for the spectral norm satisfying (2.14). Finally

    $\epsilon_2 \le \epsilon_{\mathrm F} \le \sqrt{2}\,\epsilon_2.$  (2.15)

Proof. No proof is needed for those $B_l$ with $\rho_l = 0$. Suppose all $\rho_l$ are positive. Equating the corresponding entries on both sides of (2.8) leads to $n$ linear equations in the entries of $B_l$, $l = 0,1,\dots,\lfloor d/2\rfloor$. The last $n-2$ equations are homogeneous and involve only entries $(B_l)_{(i,j)}$ with $\max\{i,j\} > 2$, and at the same time those entries do not appear in the first $2$ equations. Therefore the optimal solution $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ in the sense of (2.9) for the Frobenius norm must satisfy (2.14).

For any 2-by-2 block matrix

    $X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix},$

we have $\|X\|_2 \ge \|X_{11}\|_2$. Therefore if $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ is an optimal solution to (2.8) for the spectral norm, resetting $(B_l)_{(i,j)} = 0$ for $\max\{i,j\} > 2$ and leaving $(B_l)_{(i,j)}$ for $1 \le i,j \le 2$ untouched yields a solution to (2.8) and at the same time does not make $\sum_l \|B_l\|_2^2$ bigger. Since before the resetting $\{B_l\}$ is optimal, $\sum_l \|B_l\|_2^2$ does not change before and after the resetting. Thus the resulting $\{B_l\}$ is an optimal solution satisfying (2.14) for the spectral norm.

Let $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ be the optimal solution in the Frobenius norm. Then

    $\epsilon_2^2 \le \sum_{l=0}^{\lfloor d/2\rfloor}\|B_l\|_2^2 \le \sum_{l=0}^{\lfloor d/2\rfloor}\|B_l\|_{\mathrm F}^2 = \epsilon_{\mathrm F}^2.$

On the other hand, let $\{B_l,\ l = 0,1,\dots,\lfloor d/2\rfloor\}$ be an optimal solution satisfying (2.14) in the spectral norm. Then $\mathrm{rank}(B_l) \le 2$ for $l = 0,1,\dots,\lfloor d/2\rfloor$. Thus

    $\epsilon_{\mathrm F}^2 \le \sum_{l=0}^{\lfloor d/2\rfloor}\|B_l\|_{\mathrm F}^2 \le \sum_{l=0}^{\lfloor d/2\rfloor} 2\|B_l\|_2^2 = 2\epsilon_2^2,$

as expected.

Theorem 2.1 tells us that it suffices to consider the two equations coming from the first 2 entries on both sides of (2.8). In these two equations, only the entries $(B_l)_{(i,j)}$ for $1 \le i,j \le 2$ and $l = 0,1,\dots,\lfloor d/2\rfloor$ appear. We shall work them out in the next two sections for each combination of the superscript $\star \in \{\mathrm T, \mathrm H\}$ and $\varepsilon \in \{\pm 1\}$. For this purpose, let

    $(B_l)_{(1:2,1:2)} = \begin{pmatrix} b_{11}^{(l)} & b_{12}^{(l)} \\ b_{21}^{(l)} & b_{22}^{(l)} \end{pmatrix}.$  (2.16)

Constraints among the $b_{ij}^{(l)}$ arising from $B_{d-l} = \varepsilon B_l^\star$ for $l = 0,1,\dots,\lfloor d/2\rfloor$ will be imposed at the time we consider each case. Recall that in (2.8), $\rho_{d-l} = \rho_l$ and $B_{d-l} = \varepsilon B_l^\star$ for $l = 0,1,\dots,\lfloor d/2\rfloor$. For even $d$, we have

    $\sum_{l=0}^{d} \rho_l B_l \mu^l = \sum_{l=0}^{d/2-1} \rho_l B_l \mu^l + \rho_{d/2} B_{d/2}\mu^{d/2} + \sum_{l=d/2+1}^{d} \rho_l B_l \mu^l$
    $\qquad = \sum_{l=0}^{d/2-1} \rho_l B_l \mu^l + \rho_{d/2} B_{d/2}\mu^{d/2} + \sum_{j=0}^{d/2-1} \rho_{d-j} B_{d-j}\mu^{d-j}$
    $\qquad = \sum_{l=0}^{d/2-1} \rho_l B_l \mu^l + \rho_{d/2} B_{d/2}\mu^{d/2} + \sum_{j=0}^{d/2-1} \rho_j\,\varepsilon B_j^\star\,\mu^{d-j}$
    $\qquad = \sum_{l=0}^{d/2-1} \rho_l\big(B_l + \varepsilon B_l^\star\mu^{d-2l}\big)\mu^l + \rho_{d/2} B_{d/2}\mu^{d/2},$  (2.17)

and $B_{d/2} = \varepsilon B_{d/2}^\star$. For odd $d$,

    $\sum_{l=0}^{d} \rho_l B_l \mu^l = \sum_{l=0}^{(d-1)/2} \rho_l B_l \mu^l + \sum_{l=(d-1)/2+1}^{d} \rho_l B_l \mu^l$
    $\qquad = \sum_{l=0}^{(d-1)/2} \rho_l B_l \mu^l + \sum_{j=0}^{(d-1)/2} \rho_{d-j} B_{d-j}\mu^{d-j}$
    $\qquad = \sum_{l=0}^{(d-1)/2} \rho_l B_l \mu^l + \sum_{j=0}^{(d-1)/2} \rho_j\,\varepsilon B_j^\star\,\mu^{d-j}$
    $\qquad = \sum_{l=0}^{(d-1)/2} \rho_l\big(B_l + \varepsilon B_l^\star\mu^{d-2l}\big)\mu^l.$  (2.18)

Our detailed analysis in the next two sections is for the Frobenius norm only. Results for the spectral norm can then be deduced straightforwardly by using (2.15). So in the next two sections, $\|\cdot\|$ in (2.9) is always the Frobenius norm. Recall the assumption $r \ne 0$, which implies

    $|\delta_1| + \delta_2 > 0.$  (2.19)

For ease of analysis, we also assume that

    all $\rho_l > 0$.  (2.20)

This removes many special cases caused by one or more of the $\rho_l$ possibly being zero. It may seem that doing so limits the applicability of our analysis in cases where no backward errors are preferred for some of the $A_l$'s, which is usually realized by setting the associated $\rho_l$'s to zero. But we note that such cases can be dealt with satisfactorily in practice by assigning sufficiently tiny positive numbers to those $\rho_l$'s.

The following lemma will be needed in the later sections.

Lemma 2.1. (i) For $\tau = |\tau|e^{\iota\vartheta} \in \mathbb{C}$, $\vartheta \in \mathbb{R}$, and $\eta \in \mathbb{C}$, we have

    $\begin{pmatrix} \Re(\tau\eta) \\ \Im(\tau\eta) \end{pmatrix} = |\tau|\begin{pmatrix} \cos\vartheta & -\sin\vartheta \\ \sin\vartheta & \cos\vartheta \end{pmatrix}\begin{pmatrix} \Re(\eta) \\ \Im(\eta) \end{pmatrix}, \qquad \begin{pmatrix} \Re(\tau\bar\eta) \\ \Im(\tau\bar\eta) \end{pmatrix} = |\tau|\begin{pmatrix} \cos\vartheta & \sin\vartheta \\ \sin\vartheta & -\cos\vartheta \end{pmatrix}\begin{pmatrix} \Re(\eta) \\ \Im(\eta) \end{pmatrix}.$

(ii) For $\xi \in \mathbb{R}$ and $\vartheta \in \mathbb{R}$, we have

    $\Big[I_2 + \xi\begin{pmatrix} \cos\vartheta & \sin\vartheta \\ \sin\vartheta & -\cos\vartheta \end{pmatrix}\Big]^2 = (1+\xi^2)I_2 + 2\xi\begin{pmatrix} \cos\vartheta & \sin\vartheta \\ \sin\vartheta & -\cos\vartheta \end{pmatrix},$

    $\begin{pmatrix} \cos\vartheta \\ \sin\vartheta \end{pmatrix}\begin{pmatrix} \cos\vartheta & \sin\vartheta \end{pmatrix} = \frac12\Big[I_2 + \begin{pmatrix} \cos 2\vartheta & \sin 2\vartheta \\ \sin 2\vartheta & -\cos 2\vartheta \end{pmatrix}\Big],$

    $\begin{pmatrix} \sin\vartheta \\ -\cos\vartheta \end{pmatrix}\begin{pmatrix} \sin\vartheta & -\cos\vartheta \end{pmatrix} = \frac12\Big[I_2 - \begin{pmatrix} \cos 2\vartheta & \sin 2\vartheta \\ \sin 2\vartheta & -\cos 2\vartheta \end{pmatrix}\Big].$

3 Solve (2.8) when $\star = \mathrm H$

We recall (2.16), (2.17), (2.18), and Theorem 2.1, and keep in mind that $\star = \mathrm H$, $\varepsilon = \pm 1$, and $\|\cdot\| = \|\cdot\|_{\mathrm F}$ in (2.9). For even $d$, we need to solve

    $\Big[\sum_{l=0}^{d/2-1}\rho_l\Big(\begin{pmatrix} b_{11}^{(l)} & b_{12}^{(l)} \\ b_{21}^{(l)} & b_{22}^{(l)} \end{pmatrix} + \varepsilon\begin{pmatrix} \bar b_{11}^{(l)} & \bar b_{21}^{(l)} \\ \bar b_{12}^{(l)} & \bar b_{22}^{(l)} \end{pmatrix}\mu^{d-2l}\Big)\mu^l + \rho_{d/2}\begin{pmatrix} b_{11}^{(d/2)} & b_{12}^{(d/2)} \\ \varepsilon\bar b_{12}^{(d/2)} & b_{22}^{(d/2)} \end{pmatrix}\mu^{d/2}\Big]\begin{pmatrix} \alpha \\ 0 \end{pmatrix} = \begin{pmatrix} \gamma \\ \beta \end{pmatrix},$  (3.1)

where $b_{11}^{(d/2)} = \varepsilon\bar b_{11}^{(d/2)}$ and $b_{22}^{(d/2)} = \varepsilon\bar b_{22}^{(d/2)}$ (which means they are real for $\varepsilon = 1$ and purely imaginary for $\varepsilon = -1$). Componentwise, it gives

    $\sum_{l=0}^{d/2-1}\rho_l\big(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l}\big)\mu^l + \rho_{d/2}\,b_{11}^{(d/2)}\mu^{d/2} = \delta_1,$  (3.2)

    $\sum_{l=0}^{d/2-1}\rho_l\big(b_{21}^{(l)} + \varepsilon\bar b_{12}^{(l)}\mu^{d-2l}\big)\mu^l + \varepsilon\rho_{d/2}\,\bar b_{12}^{(d/2)}\mu^{d/2} = \delta_2.$  (3.3)

For odd $d$, we need to solve

    $\Big[\sum_{l=0}^{(d-1)/2}\rho_l\Big(\begin{pmatrix} b_{11}^{(l)} & b_{12}^{(l)} \\ b_{21}^{(l)} & b_{22}^{(l)} \end{pmatrix} + \varepsilon\begin{pmatrix} \bar b_{11}^{(l)} & \bar b_{21}^{(l)} \\ \bar b_{12}^{(l)} & \bar b_{22}^{(l)} \end{pmatrix}\mu^{d-2l}\Big)\mu^l\Big]\begin{pmatrix} \alpha \\ 0 \end{pmatrix} = \begin{pmatrix} \gamma \\ \beta \end{pmatrix}.$  (3.4)

Componentwise, it gives

    $\sum_{l=0}^{(d-1)/2}\rho_l\big(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l}\big)\mu^l = \delta_1,$  (3.5)

    $\sum_{l=0}^{(d-1)/2}\rho_l\big(b_{21}^{(l)} + \varepsilon\bar b_{12}^{(l)}\mu^{d-2l}\big)\mu^l = \delta_2.$  (3.6)

We observe that:

1. The $b_{22}^{(l)}$ for all $l$ do not show up. Thus $b_{22}^{(l)} = 0$ for all $l$ by (2.9).

2. Equations (3.2) and (3.3) are decoupled. Thus they can be solved separately for optimal solutions in the sense of

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \min, \qquad 2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min,$

respectively, to arrive at optimal solutions to (3.1) in the sense of (2.9).

3. Equations (3.5) and (3.6) are likewise decoupled. Thus they can also be solved separately for optimal solutions in the sense of

    $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \min, \qquad \sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min,$

respectively, to arrive at optimal solutions to (3.4) in the sense of (2.9).

We shall now solve each of (3.2), (3.3), (3.5), and (3.6) in terms of four lemmas. For this purpose, write

    $\mu = |\mu|e^{\iota\theta}, \qquad c_l = \cos(l\theta), \qquad s_l = \sin(l\theta).$  (3.7)

Then by Lemma 2.1(i), we have

    $\begin{pmatrix} \Re\big[(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l})\mu^l\big] \\ \Im\big[(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l})\mu^l\big] \end{pmatrix} = |\mu|^l\begin{pmatrix} c_l & -s_l \\ s_l & c_l \end{pmatrix}\begin{pmatrix} \Re\big(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l}\big) \\ \Im\big(b_{11}^{(l)} + \varepsilon\bar b_{11}^{(l)}\mu^{d-2l}\big) \end{pmatrix}$
    $\qquad = |\mu|^l\begin{pmatrix} c_l & -s_l \\ s_l & c_l \end{pmatrix}\Big[I_2 + \varepsilon|\mu|^{d-2l}\begin{pmatrix} c_{d-2l} & s_{d-2l} \\ s_{d-2l} & -c_{d-2l} \end{pmatrix}\Big]\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix}.$  (3.8)

Lemma 3.1. Equation (3.2), which is for even $d$, has a solution if and only if either $|\mu| \ne 1$, or $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \in \mathbb{R}$. Otherwise it is inconsistent. Let

    $\xi_{\mathrm{even}} = \frac{\rho_{d/2}^2|\mu|^d}{2} + \sum_{l=0}^{d/2-1}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big),$  (3.9)

    $\zeta_{\mathrm{even}} = \frac{\rho_{d/2}^2|\mu|^d}{2} + 2|\mu|^d\sum_{l=0}^{d/2-1}\rho_l^2,$  (3.10)

    $\Phi_{\mathrm{even}} \stackrel{\mathrm{def}}{=} \xi_{\mathrm{even}} + \zeta_{\mathrm{even}} = \sum_{l=0}^{d/2-1}\rho_l^2\big(|\mu|^l + |\mu|^{d-l}\big)^2 + \rho_{d/2}^2|\mu|^d,$  (3.11)

    $\varphi_{\mathrm{even}} \stackrel{\mathrm{def}}{=} \xi_{\mathrm{even}} - \zeta_{\mathrm{even}} = \sum_{l=0}^{d/2-1}\rho_l^2\big(|\mu|^l - |\mu|^{d-l}\big)^2,$  (3.12)

    $J_\varepsilon = \frac12\begin{pmatrix} 1+\varepsilon & -(1-\varepsilon) \\ 1-\varepsilon & 1+\varepsilon \end{pmatrix}.$²  (3.13)

When (3.2) has a solution, its optimal solution in the sense of $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \min$ is given, for $0 \le l \le d/2-1$, by³

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = \rho_l\begin{pmatrix} c_{d/2-l} & -s_{d/2-l} \\ s_{d/2-l} & c_{d/2-l} \end{pmatrix}\begin{pmatrix} \dfrac{|\mu|^l + \varepsilon|\mu|^{d-l}}{\xi_{\mathrm{even}} + \varepsilon\zeta_{\mathrm{even}}} & \\ & \dfrac{|\mu|^l - \varepsilon|\mu|^{d-l}}{\xi_{\mathrm{even}} - \varepsilon\zeta_{\mathrm{even}}} \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ -s_{d/2} & c_{d/2} \end{pmatrix}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.14)

    $\widetilde{\varepsilon b_{11}^{(d/2)}} = \frac{\rho_{d/2}|\mu|^{d/2}}{\xi_{\mathrm{even}} + \zeta_{\mathrm{even}}}\begin{pmatrix} c_{d/2} & s_{d/2} \end{pmatrix} J_\varepsilon \begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.15)

and the optimal value⁴

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \begin{cases} \dfrac{\Re(e^{-\iota\theta d/2}\delta_1)^2}{\Phi_{\mathrm{even}}} + \dfrac{\Im(e^{-\iota\theta d/2}\delta_1)^2}{\varphi_{\mathrm{even}}} & \text{for } \varepsilon = 1, \\[1ex] \dfrac{\Re(e^{-\iota\theta d/2}\delta_1)^2}{\varphi_{\mathrm{even}}} + \dfrac{\Im(e^{-\iota\theta d/2}\delta_1)^2}{\Phi_{\mathrm{even}}} & \text{for } \varepsilon = -1 \end{cases}$  (3.16)

    $\le \frac{|\delta_1|^2}{\varphi_{\mathrm{even}}}.$  (3.17)

² Making $J_1 = I_2$ and $J_{-1} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is all that is necessary.
³ The quantity $\widetilde{\varepsilon b_{11}^{(d/2)}}$ looks strange. But it is in fact $b_{11}^{(d/2)}$ when $\varepsilon = 1$ (for which $b_{11}^{(d/2)}$ is real), and it is $\Im(b_{11}^{(d/2)})$ when $\varepsilon = -1$ (for which $b_{11}^{(d/2)}$ is purely imaginary).
⁴ $e^{-\iota\theta d/2}\delta_1$ can also be expressed as $\Big(\dfrac{|\mu|}{\mu}\Big)^{d/2}\dfrac{z^{\mathrm H} r}{\|z\|_2^2}$.

When $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \in \mathbb{R}$, (3.14), (3.15), and (3.16) can be significantly simplified, for $0 \le l \le d/2-1$, to

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = \frac{2\rho_l}{\Phi_{\mathrm{even}}}\,J_\varepsilon\begin{pmatrix} c_{d/2-l} \\ s_{d/2-l} \end{pmatrix}\frac{\delta_1}{\sqrt{\varepsilon}\,\mu^{d/2}}, \qquad \widetilde{\varepsilon b_{11}^{(d/2)}} = \frac{\rho_{d/2}}{\Phi_{\mathrm{even}}}\,\frac{\delta_1}{\sqrt{\varepsilon}\,\mu^{d/2}},$  (3.18)

and

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \frac{|\delta_1|^2}{\Phi_{\mathrm{even}}}, \qquad \Phi_{\mathrm{even}} = 4\sum_{l=0}^{d/2-1}\rho_l^2 + \rho_{d/2}^2.$  (3.19)

Proof. Equation (3.2) needs to be split into two equations according to the real and imaginary parts, because $\widetilde{\varepsilon b_{11}^{(d/2)}}$ is real while both $b_{11}^{(l)}$ and $\bar b_{11}^{(l)}$ for $l = 0,1,\dots,d/2-1$ appear. By (3.8), we see that (3.2) is equivalent to

    $\big(Z_0\ Z_1\ \cdots\ Z_{d/2}\big)\begin{pmatrix} \Re(b_{11}^{(0)}) \\ \Im(b_{11}^{(0)}) \\ \vdots \\ \Re(b_{11}^{(d/2-1)}) \\ \Im(b_{11}^{(d/2-1)}) \\ \widetilde{\varepsilon b_{11}^{(d/2)}} \end{pmatrix} = \begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.20)

where, for $l = 0,1,\dots,d/2-1$,

    $Z_l = \rho_l|\mu|^l\begin{pmatrix} c_l & -s_l \\ s_l & c_l \end{pmatrix}\Big[I_2 + \varepsilon|\mu|^{d-2l}\begin{pmatrix} c_{d-2l} & s_{d-2l} \\ s_{d-2l} & -c_{d-2l} \end{pmatrix}\Big],$  (3.21)

and

    $Z_{d/2} = \rho_{d/2}|\mu|^{d/2}\,J_\varepsilon\begin{pmatrix} c_{d/2} \\ s_{d/2} \end{pmatrix}.$  (3.22)

Set $Z = (Z_0\ Z_1\ \cdots\ Z_{d/2}) \in \mathbb{R}^{2\times(d+1)}$. It can be computed, by Lemma 2.1(ii), that

    $Z_l Z_l^{\mathrm T} = \rho_l^2\Big[\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big)I_2 + 2\varepsilon|\mu|^d\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}\Big] \quad \text{for } 0 \le l \le d/2-1,$

    $Z_{d/2} Z_{d/2}^{\mathrm T} = \rho_{d/2}^2|\mu|^d\,J_\varepsilon\begin{pmatrix} c_{d/2} \\ s_{d/2} \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \end{pmatrix}J_\varepsilon^{\mathrm T} = \frac{\rho_{d/2}^2|\mu|^d}{2}\Big[I_2 + \varepsilon\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}\Big].$

Therefore

    $ZZ^{\mathrm T} = \sum_{l=0}^{d/2} Z_l Z_l^{\mathrm T} = \xi_{\mathrm{even}} I_2 + \varepsilon\zeta_{\mathrm{even}}\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}.$  (3.23)

The eigenvalues of $\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}$ are $\pm 1$. In fact, its eigendecomposition can be explicitly written out as

    $\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix} = \begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}.$  (3.24)

So the eigenvalues of $ZZ^{\mathrm T}$ are $\Phi_{\mathrm{even}}$ and $\varphi_{\mathrm{even}}$, and moreover we have an explicit eigendecomposition for $ZZ^{\mathrm T}$:

    $ZZ^{\mathrm T} = \begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}\begin{pmatrix} \xi_{\mathrm{even}} + \varepsilon\zeta_{\mathrm{even}} & \\ & \xi_{\mathrm{even}} - \varepsilon\zeta_{\mathrm{even}} \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}.$  (3.25)

If $|\mu| \ne 1$, then the smaller eigenvalue $\varphi_{\mathrm{even}} > 0$ and thus $ZZ^{\mathrm T}$ is invertible. The optimal solution in the sense of $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \min$ is

    $\begin{pmatrix} \Re(b_{11}^{(0)}) \\ \Im(b_{11}^{(0)}) \\ \vdots \\ \widetilde{\varepsilon b_{11}^{(d/2)}} \end{pmatrix} = Z^\dagger\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix} = Z^{\mathrm T}(ZZ^{\mathrm T})^{-1}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix}.$  (3.26)

To arrive at (3.14), (3.15), and (3.16), we notice from (3.26) that

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = Z_l^{\mathrm T}(ZZ^{\mathrm T})^{-1}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix}, \qquad \widetilde{\varepsilon b_{11}^{(d/2)}} = Z_{d/2}^{\mathrm T}(ZZ^{\mathrm T})^{-1}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix}.$  (3.27)

It can be verified that, for $0 \le l \le d/2-1$,

    $Z_l^{\mathrm T} = \rho_l|\mu|^l\begin{pmatrix} c_{d/2-l} & -s_{d/2-l} \\ s_{d/2-l} & c_{d/2-l} \end{pmatrix}\begin{pmatrix} 1 + \varepsilon|\mu|^{d-2l} & \\ & -(1 - \varepsilon|\mu|^{d-2l}) \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}.$  (3.28)

Equations (3.14) and (3.15) are consequences of (3.22), (3.25), (3.27), and (3.28), upon noticing that

    $\begin{pmatrix} c_{d/2} & s_{d/2} \\ -s_{d/2} & c_{d/2} \end{pmatrix}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix} = \begin{pmatrix} \Re(e^{-\iota\theta d/2}\delta_1) \\ \Im(e^{-\iota\theta d/2}\delta_1) \end{pmatrix}, \qquad \begin{pmatrix} c_{d/2} & s_{d/2} \end{pmatrix}J_\varepsilon\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix} = \begin{cases} \Re(e^{-\iota\theta d/2}\delta_1) & \text{for } \varepsilon = 1, \\ -\Im(e^{-\iota\theta d/2}\delta_1) & \text{for } \varepsilon = -1. \end{cases}$

Finally we have, by (3.14) and (3.15), the optimal value

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \sum_{l=0}^{d/2-1}\rho_l^2\,\frac{\big(|\mu|^l + \varepsilon|\mu|^{d-l}\big)^2}{\big(\xi_{\mathrm{even}} + \varepsilon\zeta_{\mathrm{even}}\big)^2}\,\Re(e^{-\iota\theta d/2}\delta_1)^2$

    $\qquad + \sum_{l=0}^{d/2-1}\rho_l^2\,\frac{\big(|\mu|^l - \varepsilon|\mu|^{d-l}\big)^2}{\big(\xi_{\mathrm{even}} - \varepsilon\zeta_{\mathrm{even}}\big)^2}\,\Im(e^{-\iota\theta d/2}\delta_1)^2 + \Big[\frac{\rho_{d/2}|\mu|^{d/2}}{\xi_{\mathrm{even}} + \zeta_{\mathrm{even}}}\Big]^2\Big[\begin{pmatrix} c_{d/2} & s_{d/2} \end{pmatrix}J_\varepsilon\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix}\Big]^2,$

which leads to (3.16).

If, however, $|\mu| = 1$, then $\mathrm{rank}(Z) = 1$ and (3.20) is not solvable unless $(\Re(\delta_1)\ \Im(\delta_1))^{\mathrm T}$ is parallel to $J_\varepsilon(c_{d/2}\ s_{d/2})^{\mathrm T}$, or equivalently

    $\frac{\delta_1}{\sqrt{\varepsilon}\,\mu^{d/2}} = \frac{z^{\mathrm H} r}{\sqrt{\varepsilon}\,\mu^{d/2}\,\|z\|_2^2} \in \mathbb{R}.$  (3.29)

When it does, the optimal solution can be obtained from either the first or the second component equation of (3.20). We shall do so now. When $|\mu| = 1$, we have, for $l = 0,1,\dots,d/2-1$,

    $Z_l = \rho_l\Big[\begin{pmatrix} c_l & -s_l \\ s_l & c_l \end{pmatrix} + \varepsilon\begin{pmatrix} c_{d-l} & s_{d-l} \\ s_{d-l} & -c_{d-l} \end{pmatrix}\Big] = 2\rho_l\,J_\varepsilon\begin{pmatrix} c_{d/2} \\ s_{d/2} \end{pmatrix}\Big[J_\varepsilon\begin{pmatrix} c_{d/2-l} \\ s_{d/2-l} \end{pmatrix}\Big]^{\mathrm T},$  (3.30)

and $Z_{d/2} = \rho_{d/2}\,J_\varepsilon\begin{pmatrix} c_{d/2} \\ s_{d/2} \end{pmatrix}$.

To see (3.18) and (3.19), we take the first component equation in (3.20) for example. With $|\mu| = 1$, it reads

    $\sum_{l=0}^{d/2-1}\rho_l\big[(c_l + \varepsilon c_{d-l})\Re(b_{11}^{(l)}) + (-s_l + \varepsilon s_{d-l})\Im(b_{11}^{(l)})\big] + \frac{\rho_{d/2}}{2}\big[(1+\varepsilon)c_{d/2} - (1-\varepsilon)s_{d/2}\big]\,\widetilde{\varepsilon b_{11}^{(d/2)}} = \Re(\delta_1).$  (3.31)

For $\varepsilon = 1$, it becomes

    $2c_{d/2}\sum_{l=0}^{d/2-1}\rho_l\big[c_{d/2-l}\Re(b_{11}^{(l)}) + s_{d/2-l}\Im(b_{11}^{(l)})\big] + \rho_{d/2}\,c_{d/2}\,\widetilde{\varepsilon b_{11}^{(d/2)}} = \Re(\delta_1),$

which, under the condition that $c_{d/2} \ne 0$, leads to (3.18) and (3.19) upon noticing that $\Re(\delta_1)/c_{d/2} = \delta_1/\mu^{d/2}$ because of (3.29). If $c_{d/2} = 0$, the second component equation of (3.20) leads to the same conclusion. The case $\varepsilon = -1$ is similar.

Remark 3.1. Inequality (3.17) becomes an equality when $(\Re(\delta_1)\ \Im(\delta_1))^{\mathrm T}$ is parallel to the eigenvector of $\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}$ associated with its eigenvalue $-\varepsilon$.

Remark 3.2. Equation (3.16) reveals what contributes the most to the structured backward error, while (3.17) fails to do so at the price of being the simplest. Namely, not all of $\delta_1$ contributes equally, in proportion to $1/\sqrt{\varphi_{\mathrm{even}}}$, to the structured backward error; in fact only one of $\Re(e^{-\iota\theta d/2}\delta_1)$ and $\Im(e^{-\iota\theta d/2}\delta_1)$, depending on what $\varepsilon$ is, does. Conceivably, there are circumstances where the extreme tininess of either $\Re(e^{-\iota\theta d/2}\delta_1)$ or $\Im(e^{-\iota\theta d/2}\delta_1)$ may offset the tininess of $\varphi_{\mathrm{even}}$ and render a reasonably tiny structured backward error, much smaller than (3.17) would indicate. A similar comment applies to part of the development in Lemma 3.3, too.

Lemma 3.2. Equation (3.3), which is for even $d$, always has a solution. Its optimal solution in the sense of $2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min$ is given by

    $b_{12}^{(d/2)} = \varepsilon\rho_{d/2}\,\mu^{d/2}\,\frac{\delta_2/2}{\Psi_{\mathrm{even}}},$  (3.32)

    $b_{12}^{(l)} = \varepsilon\rho_l\,\mu^{d-l}\,\frac{\delta_2}{\Psi_{\mathrm{even}}}, \qquad b_{21}^{(l)} = \rho_l\,\bar\mu^{l}\,\frac{\delta_2}{\Psi_{\mathrm{even}}} \quad \text{for } l = 0,1,\dots,d/2-1,$  (3.33)

satisfying

    $2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \frac{\delta_2^2}{\Psi_{\mathrm{even}}},$  (3.34)

where

    $\Psi_{\mathrm{even}} \stackrel{\mathrm{def}}{=} \sum_{l=0}^{d/2-1}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big) + \rho_{d/2}^2|\mu|^d/2.$  (3.35)

Proof. $\Psi_{\mathrm{even}} > 0$ always, since by assumption all $\rho_l > 0$. It can be verified that the $b_{ij}^{(l)}$'s given by (3.32) and (3.33) satisfy (3.3). On the other hand, (3.3) implies, by the Cauchy-Schwarz inequality,

    $\delta_2^2 \le \Psi_{\mathrm{even}}\Big[2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big)\Big]$

for any solution $\{b_{ij}^{(l)}\}$ to (3.3).

Lemma 3.3. Equation (3.5), which is for odd $d$, has a solution if and only if either $|\mu| \ne 1$, or $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \in \mathbb{R}$. Otherwise it is inconsistent. Let

    $\xi_{\mathrm{odd}} = \sum_{l=0}^{(d-1)/2}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big),$  (3.36)

    $\zeta_{\mathrm{odd}} = 2|\mu|^d\sum_{l=0}^{(d-1)/2}\rho_l^2,$  (3.37)

    $\Phi_{\mathrm{odd}} \stackrel{\mathrm{def}}{=} \xi_{\mathrm{odd}} + \zeta_{\mathrm{odd}} = \sum_{l=0}^{(d-1)/2}\rho_l^2\big(|\mu|^l + |\mu|^{d-l}\big)^2,$  (3.38)

    $\varphi_{\mathrm{odd}} \stackrel{\mathrm{def}}{=} \xi_{\mathrm{odd}} - \zeta_{\mathrm{odd}} = \sum_{l=0}^{(d-1)/2}\rho_l^2\big(|\mu|^l - |\mu|^{d-l}\big)^2.$  (3.39)

When (3.5) has a solution, its optimal solution in the sense of $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \min$ is given, for $0 \le l \le (d-1)/2$, by

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = \rho_l\begin{pmatrix} c_{d/2-l} & -s_{d/2-l} \\ s_{d/2-l} & c_{d/2-l} \end{pmatrix}\begin{pmatrix} \dfrac{|\mu|^l + \varepsilon|\mu|^{d-l}}{\xi_{\mathrm{odd}} + \varepsilon\zeta_{\mathrm{odd}}} & \\ & \dfrac{|\mu|^l - \varepsilon|\mu|^{d-l}}{\xi_{\mathrm{odd}} - \varepsilon\zeta_{\mathrm{odd}}} \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ -s_{d/2} & c_{d/2} \end{pmatrix}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.40)

and the optimal value

    $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \begin{cases} \dfrac{\Re(e^{-\iota\theta d/2}\delta_1)^2}{\Phi_{\mathrm{odd}}} + \dfrac{\Im(e^{-\iota\theta d/2}\delta_1)^2}{\varphi_{\mathrm{odd}}} & \text{for } \varepsilon = 1, \\[1ex] \dfrac{\Re(e^{-\iota\theta d/2}\delta_1)^2}{\varphi_{\mathrm{odd}}} + \dfrac{\Im(e^{-\iota\theta d/2}\delta_1)^2}{\Phi_{\mathrm{odd}}} & \text{for } \varepsilon = -1 \end{cases}$  (3.41)

    $\le \frac{|\delta_1|^2}{\varphi_{\mathrm{odd}}}.$  (3.42)

When $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \in \mathbb{R}$, (3.40) and (3.41) can be significantly simplified, for $0 \le l \le (d-1)/2$, to

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = \frac{2\rho_l}{\Phi_{\mathrm{odd}}}\,J_\varepsilon\begin{pmatrix} c_{d/2-l} \\ s_{d/2-l} \end{pmatrix}\frac{\delta_1}{\sqrt{\varepsilon}\,\mu^{d/2}},$  (3.43)

and

    $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \frac{|\delta_1|^2}{\Phi_{\mathrm{odd}}}, \qquad \Phi_{\mathrm{odd}} = 4\sum_{l=0}^{(d-1)/2}\rho_l^2.$  (3.44)

Proof. Similarly to the proof of Lemma 3.1 above, (3.5) is equivalent to

    $\big(Z_0\ \cdots\ Z_{(d-1)/2}\big)\begin{pmatrix} \Re(b_{11}^{(0)}) \\ \Im(b_{11}^{(0)}) \\ \vdots \\ \Re(b_{11}^{((d-1)/2)}) \\ \Im(b_{11}^{((d-1)/2)}) \end{pmatrix} = \begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.45)

where the $Z_l$ are as in (3.21). Let $Z = (Z_0\ \cdots\ Z_{(d-1)/2})$. Then, as in the proof of Lemma 3.1,

    $ZZ^{\mathrm T} = \xi_{\mathrm{odd}} I_2 + \varepsilon\zeta_{\mathrm{odd}}\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix} = \begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}\begin{pmatrix} \xi_{\mathrm{odd}} + \varepsilon\zeta_{\mathrm{odd}} & \\ & \xi_{\mathrm{odd}} - \varepsilon\zeta_{\mathrm{odd}} \end{pmatrix}\begin{pmatrix} c_{d/2} & s_{d/2} \\ s_{d/2} & -c_{d/2} \end{pmatrix}.$

The eigenvalues of $ZZ^{\mathrm T}$ are $\Phi_{\mathrm{odd}}$ and $\varphi_{\mathrm{odd}}$. If $|\mu| \ne 1$, then the smaller eigenvalue $\varphi_{\mathrm{odd}} > 0$ and thus $ZZ^{\mathrm T}$ is invertible. The optimal solution in the sense of $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \min$ is, for $l = 0,1,\dots,(d-1)/2$,

    $\begin{pmatrix} \Re(b_{11}^{(l)}) \\ \Im(b_{11}^{(l)}) \end{pmatrix} = Z_l^{\mathrm T}(ZZ^{\mathrm T})^{-1}\begin{pmatrix} \Re(\delta_1) \\ \Im(\delta_1) \end{pmatrix},$  (3.46)

which leads to (3.40) and (3.41).

If, however, $|\mu| = 1$, then $\mathrm{rank}(Z) = 1$ and (3.45) is not solvable unless $(\Re(\delta_1)\ \Im(\delta_1))^{\mathrm T}$ is parallel to $J_\varepsilon(c_{d/2}\ s_{d/2})^{\mathrm T}$, or equivalently (3.29) holds. When it does, the optimal solution can be obtained similarly as in the proof of Lemma 3.1.

Remark 3.3. Inequality (3.42) becomes an equality when $(\Re(\delta_1)\ \Im(\delta_1))^{\mathrm T}$ is parallel to the eigenvector of $\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}$ associated with its eigenvalue $-\varepsilon$.

Lemma 3.4. Equation (3.6), which is for odd $d$, always has a solution. Its optimal solution in the sense of $\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min$ is given by

    $b_{12}^{(l)} = \varepsilon\rho_l\,\mu^{d-l}\,\frac{\delta_2}{\Psi_{\mathrm{odd}}}, \qquad b_{21}^{(l)} = \rho_l\,\bar\mu^{l}\,\frac{\delta_2}{\Psi_{\mathrm{odd}}} \quad \text{for } l = 0,1,\dots,(d-1)/2,$  (3.47)

satisfying

    $\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \frac{\delta_2^2}{\Psi_{\mathrm{odd}}},$  (3.48)

where

    $\Psi_{\mathrm{odd}} = \sum_{l=0}^{(d-1)/2}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big).$  (3.49)

Proof. $\Psi_{\mathrm{odd}} > 0$ always, since by assumption all $\rho_l > 0$. It can be verified that the $b_{ij}^{(l)}$'s given by (3.47) satisfy (3.6). On the other hand, (3.6) implies, by the Cauchy-Schwarz inequality,

    $\delta_2^2 \le \Psi_{\mathrm{odd}}\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big)$

for any solution $\{b_{ij}^{(l)}\}$ to (3.6).
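The closed forms in Lemma 3.4 are easy to check numerically. The following Python sketch (not from the paper; the scalars are arbitrary illustrative values) verifies that the $b$'s of (3.47) satisfy equation (3.6) and attain the optimal value $\delta_2^2/\Psi_{\mathrm{odd}}$:

```python
import numpy as np

# numerical check of Lemma 3.4 (odd d, star = H)
d, eps = 3, -1
rho = [1.3, 0.7]                # rho_0, rho_1
mu = 0.8 + 0.5j
delta2 = 0.9                    # delta2 >= 0 by (2.6)

m = (d - 1) // 2
Psi = sum(rho[l]**2 * (abs(mu)**(2*l) + abs(mu)**(2*(d-l))) for l in range(m + 1))
b12 = [eps * rho[l] * mu**(d-l) * delta2 / Psi for l in range(m + 1)]
b21 = [rho[l] * np.conj(mu)**l * delta2 / Psi for l in range(m + 1)]

# left-hand side of (3.6) and the objective of the minimization
lhs = sum(rho[l] * (b21[l] + eps * np.conj(b12[l]) * mu**(d-2*l)) * mu**l
          for l in range(m + 1))
obj = sum(abs(b12[l])**2 + abs(b21[l])**2 for l in range(m + 1))

print(np.isclose(lhs, delta2))              # -> True
print(np.isclose(obj, delta2**2 / Psi))     # -> True
```

The analogous check for Lemma 3.2 adds the middle term with weight 2 and the $\rho_{d/2}^2|\mu|^d/2$ contribution to $\Psi$.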

We summarize our findings in Lemmas 3.1-3.4 in the following theorem.

Theorem 3.1. Suppose $\star = \mathrm H$ and $\varepsilon = \pm 1$ in (2.1). Let

    $\varphi = \begin{cases} \varphi_{\mathrm{even}} & \text{for even } d, \\ \varphi_{\mathrm{odd}} & \text{for odd } d, \end{cases} \qquad \Phi = \begin{cases} \Phi_{\mathrm{even}} & \text{for even } d, \\ \Phi_{\mathrm{odd}} & \text{for odd } d, \end{cases} \qquad \Psi = \begin{cases} \Psi_{\mathrm{even}} & \text{for even } d, \\ \Psi_{\mathrm{odd}} & \text{for odd } d, \end{cases}$

where $\varphi_{\mathrm{even}}$, $\varphi_{\mathrm{odd}}$, $\Phi_{\mathrm{even}}$, $\Phi_{\mathrm{odd}}$, $\Psi_{\mathrm{even}}$, and $\Psi_{\mathrm{odd}}$ are as defined in Lemmas 3.1-3.4, and let $\delta_1$ and $\delta_2$ be given as in (2.12) with $\tilde r = r$. We have:

1. If $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \notin \mathbb{R}$, then $\epsilon_{\mathrm F} = +\infty$.

2. If $|\mu| = 1$ and $z^{\mathrm H} r/(\sqrt{\varepsilon}\,\mu^{d/2}) \in \mathbb{R}$, then

    $\epsilon_{\mathrm F} = \sqrt{\frac{|\delta_1|^2}{\Phi} + \frac{\delta_2^2}{\Psi}}.$

3. If $|\mu| \ne 1$, then

    $\epsilon_{\mathrm F} \le \sqrt{\frac{|\delta_1|^2}{\varphi} + \frac{\delta_2^2}{\Psi}}.$  (3.50)

Remark 3.4. Inequality (3.50) in general cannot be improved, because it becomes an equality when $(\Re(\delta_1)\ \Im(\delta_1))^{\mathrm T}$ is parallel to the eigenvector of $\begin{pmatrix} c_d & s_d \\ s_d & -c_d \end{pmatrix}$ associated with its eigenvalue $-\varepsilon$. Besides bounding $\epsilon_{\mathrm F}$ as in (3.50), a closed formula for it in the case $|\mu| \ne 1$, at the price of more complicated expressions, can be obtained by combining (3.16), (3.34), (3.41), and (3.48). We omit the details.

Remark 3.5. Our solutions to (3.2) and (3.5) in Lemmas 3.1 and 3.3 are all that are needed for dealing with the scalar polynomial case, i.e., when $n = 1$.

4 Solve (2.8) when $\star = \mathrm T$

We again recall (2.16), (2.17), (2.18), and Theorem 2.1, and keep in mind that $\star = \mathrm T$, $\varepsilon = \pm 1$, and $\|\cdot\| = \|\cdot\|_{\mathrm F}$ in (2.9). For even $d$, we need to solve

    $\Big[\sum_{l=0}^{d/2-1}\rho_l\Big(\begin{pmatrix} b_{11}^{(l)} & b_{12}^{(l)} \\ b_{21}^{(l)} & b_{22}^{(l)} \end{pmatrix} + \varepsilon\begin{pmatrix} b_{11}^{(l)} & b_{21}^{(l)} \\ b_{12}^{(l)} & b_{22}^{(l)} \end{pmatrix}\mu^{d-2l}\Big)\mu^l + \rho_{d/2}\begin{pmatrix} \sigma_\varepsilon b_{11}^{(d/2)} & b_{12}^{(d/2)} \\ \varepsilon b_{12}^{(d/2)} & \sigma_\varepsilon b_{22}^{(d/2)} \end{pmatrix}\mu^{d/2}\Big]\begin{pmatrix} \alpha \\ 0 \end{pmatrix} = \begin{pmatrix} \gamma \\ \beta \end{pmatrix},$  (4.1)

where $\sigma_\varepsilon = \frac{1+\varepsilon}{2}$, i.e., $\sigma_\varepsilon$ equals $1$ for $\varepsilon = 1$ and $0$ for $\varepsilon = -1$. Componentwise, it gives

    $\sum_{l=0}^{d/2-1}\rho_l\big(b_{11}^{(l)} + \varepsilon b_{11}^{(l)}\mu^{d-2l}\big)\mu^l + \rho_{d/2}\,\sigma_\varepsilon b_{11}^{(d/2)}\mu^{d/2} = \delta_1,$  (4.2)

    $\sum_{l=0}^{d/2-1}\rho_l\big(b_{21}^{(l)} + \varepsilon b_{12}^{(l)}\mu^{d-2l}\big)\mu^l + \varepsilon\rho_{d/2}\,b_{12}^{(d/2)}\mu^{d/2} = \delta_2.$  (4.3)

For odd $d$, we need to solve

    $\Big[\sum_{l=0}^{(d-1)/2}\rho_l\Big(\begin{pmatrix} b_{11}^{(l)} & b_{12}^{(l)} \\ b_{21}^{(l)} & b_{22}^{(l)} \end{pmatrix} + \varepsilon\begin{pmatrix} b_{11}^{(l)} & b_{21}^{(l)} \\ b_{12}^{(l)} & b_{22}^{(l)} \end{pmatrix}\mu^{d-2l}\Big)\mu^l\Big]\begin{pmatrix} \alpha \\ 0 \end{pmatrix} = \begin{pmatrix} \gamma \\ \beta \end{pmatrix}.$  (4.4)

Componentwise, it gives

    $\sum_{l=0}^{(d-1)/2}\rho_l\big(b_{11}^{(l)} + \varepsilon b_{11}^{(l)}\mu^{d-2l}\big)\mu^l = \delta_1,$  (4.5)

    $\sum_{l=0}^{(d-1)/2}\rho_l\big(b_{21}^{(l)} + \varepsilon b_{12}^{(l)}\mu^{d-2l}\big)\mu^l = \delta_2.$  (4.6)

We observe that:

1. The $b_{22}^{(l)}$ for all $l$ do not show up. Thus $b_{22}^{(l)} = 0$ for all $l$ by (2.9).

2. Equations (4.2) and (4.3) are decoupled. Thus they can be solved separately for optimal solutions in the sense that

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \min, \qquad 2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min,$

respectively, to arrive at optimal solutions to (4.1) in the sense of (2.9).

3. Equations (4.5) and (4.6) are decoupled. Thus they can also be solved separately for optimal solutions in the sense that

    $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \min, \qquad \sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min,$

respectively, to arrive at optimal solutions to (4.4) in the sense of (2.9).

We shall now solve each of (4.2), (4.3), (4.5), and (4.6) in terms of four lemmas.

Lemma 4.1. Equation (4.2), which is for even $d$, has a solution if and only if $\varepsilon = 1$; or $\varepsilon = -1$ and $\mu \ne \pm 1$; or $\varepsilon = -1$, $\mu = \pm 1$, and $z^{\mathrm H}\tilde r = 0$. For $\varepsilon = 1$, or for $\varepsilon = -1$ and $\mu \ne \pm 1$, the optimal solution to (4.2) in the sense of $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \min$ is given by

    $b_{11}^{(d/2)} = \sigma_\varepsilon\,\rho_{d/2}\,\bar\mu^{d/2}\,\frac{\delta_1}{\Phi_{\mathrm{even}}},$  (4.7)

    $b_{11}^{(l)} = \rho_l\,\overline{\big(\mu^l + \varepsilon\mu^{d-l}\big)}\,\frac{\delta_1}{\Phi_{\mathrm{even}}} \quad \text{for } l = 0,1,\dots,d/2-1,$  (4.8)

satisfying

    $\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2 = \frac{|\delta_1|^2}{\Phi_{\mathrm{even}}},$  (4.9)

where

    $\Phi_{\mathrm{even}} \stackrel{\mathrm{def}}{=} \sum_{l=0}^{d/2-1}\rho_l^2\,\big|\mu^l + \varepsilon\mu^{d-l}\big|^2 + \sigma_\varepsilon\,\rho_{d/2}^2|\mu|^d.$  (4.10)

For $\varepsilon = -1$, $\mu = \pm 1$, and $z^{\mathrm H}\tilde r = 0$, the optimal solution is all $b_{11}^{(l)} = 0$.

Proof. Note all $\rho_l > 0$ by the assumption (2.20). For $\varepsilon = 1$, $\Phi_{\mathrm{even}} > 0$ always; for $\varepsilon = -1$, $\Phi_{\mathrm{even}} > 0$ if and only if $\mu \ne \pm 1$. It can be verified that the $b_{11}^{(l)}$'s given by (4.7) and (4.8) satisfy (4.2) whenever $\Phi_{\mathrm{even}} > 0$. On the other hand, (4.2) implies, by the Cauchy-Schwarz inequality,

    $|\delta_1|^2 \le \Phi_{\mathrm{even}}\sum_{l=0}^{d/2}|b_{11}^{(l)}|^2$

for any solution $\{b_{11}^{(l)}\}$ to (4.2). For $\varepsilon = -1$, if $\mu = \pm 1$, then (4.2) is consistent if and only if $\delta_1 = 0$ (or equivalently $z^{\mathrm H}\tilde r = 0$), for which case the optimal solution is $b_{11}^{(l)} = 0$ for all $l$.

Lemma 4.2. Equation (4.3), which is for even $d$, always has a solution. Its optimal solution in the sense of $2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min$ is given by

    $b_{12}^{(d/2)} = \varepsilon\rho_{d/2}\,\bar\mu^{d/2}\,\frac{\delta_2/2}{\Psi_{\mathrm{even}}},$  (4.11)

    $b_{12}^{(l)} = \varepsilon\rho_l\,\bar\mu^{d-l}\,\frac{\delta_2}{\Psi_{\mathrm{even}}}, \qquad b_{21}^{(l)} = \rho_l\,\bar\mu^{l}\,\frac{\delta_2}{\Psi_{\mathrm{even}}} \quad \text{for } l = 0,1,\dots,d/2-1,$  (4.12)

satisfying

    $2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \frac{\delta_2^2}{\Psi_{\mathrm{even}}},$  (4.13)

where

    $\Psi_{\mathrm{even}} \stackrel{\mathrm{def}}{=} \sum_{l=0}^{d/2-1}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big) + \rho_{d/2}^2|\mu|^d/2.$  (4.14)

Proof. $\Psi_{\mathrm{even}} > 0$ always. It can be verified that the $b_{ij}^{(l)}$'s given by (4.11) and (4.12) satisfy (4.3). On the other hand, (4.3) implies, by the Cauchy-Schwarz inequality,

    $\delta_2^2 \le \Psi_{\mathrm{even}}\Big[2|b_{12}^{(d/2)}|^2 + \sum_{l=0}^{d/2-1}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big)\Big]$

for any solution $\{b_{ij}^{(l)}\}$ to (4.3).

Lemma 4.3. Equation (4.5), which is for odd $d$, has a solution if and only if either $\mu \ne -\varepsilon$, or $\mu = -\varepsilon$ and $z^{\mathrm H}\tilde r = 0$. For $\mu \ne -\varepsilon$, the optimal solution to (4.5) in the sense of $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \min$ is given by

    $b_{11}^{(l)} = \rho_l\,\overline{\big(\mu^l + \varepsilon\mu^{d-l}\big)}\,\frac{\delta_1}{\Phi_{\mathrm{odd}}} \quad \text{for } l = 0,1,\dots,(d-1)/2,$  (4.15)

satisfying

    $\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2 = \frac{|\delta_1|^2}{\Phi_{\mathrm{odd}}},$  (4.16)

where

    $\Phi_{\mathrm{odd}} \stackrel{\mathrm{def}}{=} \sum_{l=0}^{(d-1)/2}\rho_l^2\,\big|\mu^l + \varepsilon\mu^{d-l}\big|^2.$  (4.17)

For $\mu = -\varepsilon$ and $z^{\mathrm H}\tilde r = 0$, the optimal solution is all $b_{11}^{(l)} = 0$.

Proof. Since $d$ is odd, $\Phi_{\mathrm{odd}} = 0$ if and only if $\mu = -\varepsilon$. It can be verified that the $b_{11}^{(l)}$'s given by (4.15) satisfy (4.5) when $\Phi_{\mathrm{odd}} > 0$. On the other hand, (4.5) implies, by the Cauchy-Schwarz inequality,

    $|\delta_1|^2 \le \Phi_{\mathrm{odd}}\sum_{l=0}^{(d-1)/2}|b_{11}^{(l)}|^2$

for any solution $\{b_{11}^{(l)}\}$ to (4.5). For $\mu = -\varepsilon$, (4.5) is consistent if and only if $\delta_1 = 0$ (or equivalently $z^{\mathrm H}\tilde r = 0$), for which case the optimal solution is $b_{11}^{(l)} = 0$ for all $l$.
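Lemma 4.3's closed form can also be confirmed numerically. A Python sketch (not from the paper; the scalars are arbitrary illustrative values) checks that the $b$'s of (4.15) satisfy equation (4.5) and attain $|\delta_1|^2/\Phi_{\mathrm{odd}}$:

```python
import numpy as np

# numerical check of Lemma 4.3 (odd d, star = T)
d, eps = 3, 1
rho = [1.1, 0.4]                # rho_0, rho_1
mu = -0.3 + 1.2j
delta1 = 0.8 - 0.2j

m = (d - 1) // 2
Phi = sum(rho[l]**2 * abs(mu**l + eps * mu**(d-l))**2 for l in range(m + 1))
b11 = [rho[l] * np.conj(mu**l + eps * mu**(d-l)) * delta1 / Phi
       for l in range(m + 1)]

# left-hand side of (4.5) and the objective of the minimization
lhs = sum(rho[l] * (b11[l] + eps * b11[l] * mu**(d-2*l)) * mu**l
          for l in range(m + 1))
obj = sum(abs(b)**2 for b in b11)

print(np.isclose(lhs, delta1))                 # -> True
print(np.isclose(obj, abs(delta1)**2 / Phi))   # -> True
```

Note how $\Phi_{\mathrm{odd}}$ collapses to $0$ when $\mu = -\varepsilon$, which is exactly the critical case of the lemma.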

Lemma 4.4. Equation (4.6), which is for odd $d$, always has a solution. Its optimal solution in the sense of $\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \min$ is given by

    $b_{12}^{(l)} = \varepsilon\rho_l\,\bar\mu^{d-l}\,\frac{\delta_2}{\Psi_{\mathrm{odd}}}, \qquad b_{21}^{(l)} = \rho_l\,\bar\mu^{l}\,\frac{\delta_2}{\Psi_{\mathrm{odd}}} \quad \text{for } l = 0,1,\dots,(d-1)/2,$  (4.18)

satisfying

    $\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big) = \frac{\delta_2^2}{\Psi_{\mathrm{odd}}},$  (4.19)

where

    $\Psi_{\mathrm{odd}} = \sum_{l=0}^{(d-1)/2}\rho_l^2\big(|\mu|^{2l} + |\mu|^{2(d-l)}\big).$  (4.20)

Proof. $\Psi_{\mathrm{odd}} > 0$ always. It can be verified that the $b_{ij}^{(l)}$'s given by (4.18) satisfy (4.6). On the other hand, (4.6) implies, by the Cauchy-Schwarz inequality,

    $\delta_2^2 \le \Psi_{\mathrm{odd}}\sum_{l=0}^{(d-1)/2}\big(|b_{12}^{(l)}|^2 + |b_{21}^{(l)}|^2\big)$

for any solution $\{b_{ij}^{(l)}\}$ to (4.6).

We summarize our findings in Lemmas 4.1-4.4 in the following theorem.

Theorem 4.1. Suppose $\star = \mathrm T$ and $\varepsilon = \pm 1$ in (2.1). Let

    $\Phi = \begin{cases} \Phi_{\mathrm{even}} & \text{for even } d, \\ \Phi_{\mathrm{odd}} & \text{for odd } d, \end{cases} \qquad \Psi = \begin{cases} \Psi_{\mathrm{even}} & \text{for even } d, \\ \Psi_{\mathrm{odd}} & \text{for odd } d, \end{cases}$

where $\Phi_{\mathrm{even}}$, $\Phi_{\mathrm{odd}}$, $\Psi_{\mathrm{even}}$, and $\Psi_{\mathrm{odd}}$ are given by (4.10), (4.17), (4.14), and (4.20), respectively, and let $\delta_1$ and $\delta_2$ be given as in (2.12) with $\tilde r = \bar r$. We have:

1. If $d$ is odd and $\mu = -\varepsilon$, or if $d$ is even, $\mu = \pm 1$, and $\varepsilon = -1$, then

    $\epsilon_{\mathrm F} = \begin{cases} +\infty & \text{if } z^{\mathrm H}\tilde r \ne 0, \\ \delta_2/\sqrt{\Psi} & \text{if } z^{\mathrm H}\tilde r = 0. \end{cases}$

2. If $d$ is odd and $\mu \ne -\varepsilon$, or if $d$ is even and $\mu \ne \pm 1$, or if $d$ is even and $\varepsilon = 1$, then

    $\epsilon_{\mathrm F} = \sqrt{\frac{|\delta_1|^2}{\Phi} + \frac{\delta_2^2}{\Psi}}.$  (4.21)

Remark 4.1. Our solutions to (4.2) and (4.5) in Lemmas 4.1 and 4.3 are all that are needed for dealing with the scalar polynomial case, i.e., when $n = 1$.
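Theorem 4.1 gives a directly computable $\epsilon_{\mathrm F}$ in the non-critical case. The following Python sketch (not from the paper; the function name, the random data, and the use of $d = 2$ are illustrative assumptions) assembles the pieces for a $\mathrm T$-palindromic quadratic:

```python
import numpy as np

def structured_backward_error_T(A0, A1, eps, mu, z):
    # epsilon_F from (4.21) for Q(lam) = eps*lam^2*A0^T + lam*A1 + A0
    # (d = 2, star = T), assuming the non-critical case; a sketch only
    rho0 = np.linalg.norm(A0, 2)            # rho_0 = rho_2 = ||A0||_2
    rho1 = np.linalg.norm(A1, 2)            # rho_1 = ||A1||_2
    r = (eps * mu**2 * A0.T + mu * A1 + A0) @ z
    rt = r.conj()                           # r~ = conj(r) for star = T, per (2.4)
    nz = np.linalg.norm(z)
    zr = z.conj() @ rt                      # z^H r~
    d1 = zr / nz**2
    d2 = np.sqrt(max(np.linalg.norm(r)**2 * nz**2 - abs(zr)**2, 0.0)) / nz**2
    sigma = (1 + eps) / 2
    # (4.10) and (4.14) specialized to d = 2
    Phi = rho0**2 * abs(1 + eps * mu**2)**2 + sigma * rho1**2 * abs(mu)**2
    Psi = rho0**2 * (1 + abs(mu)**4) + rho1**2 * abs(mu)**2 / 2
    return np.sqrt(abs(d1)**2 / Phi + d2**2 / Psi)

rng = np.random.default_rng(2)
n, eps = 6, 1
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n)); A1 = A1 + eps * A1.T
mu = 0.5 + 0.2j
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(structured_backward_error_T(A0, A1, eps, mu, z) >= 0.0)  # -> True
```

For an approximate eigenpair produced by an actual eigensolver, the returned value would be tiny; here the arbitrary $\{\mu, z\}$ simply exercises the formula.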

Figure 5.1: Backward errors for the case $\star = \mathrm H$; left: $\varepsilon = 1$, and right: $\varepsilon = -1$. Plotted against the index of the approximate eigenpairs are $\big||\mu| - 1\big|$, the bounds by (3.50), $\epsilon_{\mathrm F}$ by combining (3.16) and (3.34), and Tisseur's $\eta(\mu, z)$ in (5.1). The bounds by (3.50) and $\epsilon_{\mathrm F}$ are visually indistinguishable.

5 Numerical Examples

In this section we test some numerical examples to illustrate our structured backward errors in comparison with the existing (unstructured) backward errors. For convenience, we consider PQEP only:

    $Q(\lambda) \equiv \varepsilon\lambda^2 A_0^\star + \lambda A_1 + A_0 \quad \text{with } A_1^\star = \varepsilon A_1,$

and set $\rho_i = \|A_i\|_2$. By the existing (unstructured) backward error, we mean the one from Tisseur [15], who defined the (unstructured) backward error of an approximate eigenpair $\{\mu, z\}$ of $Q(\lambda)$ as

    $\eta(\mu, z) = \min\big\{\delta : [Q(\mu) + \Delta Q(\mu)]z = 0,\ \|\Delta A_2\|_2 \le \delta\rho_0,\ \|\Delta A_1\|_2 \le \delta\rho_1,\ \|\Delta A_0\|_2 \le \delta\rho_0\big\},$

where $\Delta Q(\lambda) = \lambda^2(\Delta A_2) + \lambda(\Delta A_1) + \Delta A_0$ with no constraints imposed on the $\Delta A_i$. An explicit expression for $\eta(\mu, z)$ in terms of $r = Q(\mu)z$ is given by [15]

    $\eta(\mu, z) = \frac{\|r\|_2}{\big(|\mu|^2\rho_0 + |\mu|\rho_1 + \rho_0\big)\|z\|_2}.$  (5.1)

Take $n = 10$ and choose $A_0, A_1 \in \mathbb{C}^{n\times n}$ randomly, as by this piece of MATLAB code for $\star = \mathrm H$:

    A0 = randn(n)+i*randn(n); A2 = varepsilon * A0';
    A1 = randn(n)+i*randn(n); A1 = A1+varepsilon*A1';

For $\star = \mathrm T$, all that is needed is to replace A0' and A1' by A0.' and A1.', respectively. We compute $2n$ approximate eigenpairs $\{\mu, z\}$ of $Q(\lambda)$ by MATLAB's eig( ) after linearizing $Q(\lambda)$. With $\{\mu, z\}$ running through all $2n$ approximate eigenpairs, we plot various quantities of interest in Figures 5.1 and 5.2.

Figure 5.1 is for $\star = \mathrm H$. It is worth pointing out that in our many numerical runs (with different random matrices constructed as such, and different $n$ as well), there are always

some eigenvalues with |µ| being 1 to the working (IEEE double) precision. As our analysis in Section 3 shows, if |µ| = 1, the matrix Z is singular, and a structured backward error may not exist unless z^H r/(εµ^{d/2}) is real. But this condition is nearly impossible to verify when the approximate eigenpair is so accurate in the working precision that r may have no correct significant digits. So, in the numerical results presented in Figure 5.1, we purposely perturb the computed µ by relative errors up to 10^{-8} to keep ||µ| − 1| away from 0. Doing so, however, does not prevent us from making our point with the numerical results, namely that η_F is very sensitive to whether or not ||µ| − 1| is near 0. Figure 5.1 clearly shows that when ||µ| − 1| is away from 0, η_F is not much worse than η(µ, z), and it appears that η_F is proportional to the reciprocal of ||µ| − 1|.

Figure 5.2: Backward errors for the case ∗ = T; Left: ε = 1 and Right: ε = −1. Plotted quantities are η_F by (4.21) and Tisseur's η(µ, z) in (5.1).

Figure 5.2 is for ∗ = T. We did not encounter the critical cases µ = ±1 revealed by our analysis in Section 4 during our many numerical runs, so the results presented in this figure unfortunately do not include any critical case. Because of the lack of critical cases, η_F is not much worse than η(µ, z) here.

6 Concluding Remarks

We have presented a detailed structured backward error analysis for an approximate eigenpair {µ, z} of the PPEP (1.4) by solving problem (1.5). Computable formulas and bounds for the optimal structured backward errors are obtained. These formulas and bounds exhibit distinctive features of PPEP compared with the usual PEP as investigated by Tisseur [15] and by Liu and Wang [10]; namely, there are critical cases (|µ| = 1 for ∗ = H and µ = ±1 for ∗ = T) for which structured backward errors may not exist.
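For readers who want to reproduce the experiment of Section 5 outside MATLAB, here is a Python sketch of the ∗ = H, ε = 1 case. The companion-form linearization below is one standard choice and is an assumption on our part; the text does not specify which linearization is passed to eig.

```python
import numpy as np

# Python analogue of the Section 5 experiment for * = H, eps = 1:
# Q(lambda) = lambda^2*A2 + lambda*A1 + A0 with A2 = eps*A0^H and A1^H = eps*A1.
rng = np.random.default_rng(1)
n, eps = 10, 1

A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A2 = eps * A0.conj().T                     # the MATLAB line A2 = varepsilon*A0'
A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = A1 + eps * A1.conj().T                # enforce A1^H = eps*A1
rho0, rho1 = np.linalg.norm(A0, 2), np.linalg.norm(A1, 2)

# One standard (companion-form) linearization:
#   lambda * [A2 0; 0 I] [y; x] = [-A1 -A0; I 0] [y; x],  with y = lambda*x.
zero = np.zeros((n, n))
I = np.eye(n)
M = np.block([[A2, zero], [zero, I]])
N = np.block([[-A1, -A0], [I, zero]])
w, V = np.linalg.eig(np.linalg.solve(M, N))

# Tisseur's unstructured backward error (5.1) for each of the 2n eigenpairs.
etas = []
for mu, v in zip(w, V.T):
    z = v[n:]                              # bottom half of the eigenvector is x
    r = mu**2 * (A2 @ z) + mu * (A1 @ z) + A0 @ z
    etas.append(np.linalg.norm(r)
                / ((abs(mu)**2 * rho0 + abs(mu) * rho1 + rho0) * np.linalg.norm(z)))
eta_max = max(etas)                        # typically near machine precision
```

Replacing the two .conj().T calls by plain .T (the MATLAB .' case) makes the same sketch cover ∗ = T.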
In the PEP case, Tisseur [15] asked and solved the question of what the optimal backward error of an approximate eigenvalue is, by minimizing the backward errors of approximate eigenpairs {µ, z} over all possible vectors z. We also attempted to answer a similar question, namely, what the optimal structured backward error of an approximate eigenvalue in

the PPEP case is. But it appears that this question is not as simple as in the PEP case, because our formulas for the structured backward errors of a PPEP depend on the approximate eigenvector z in a more complicated way than Tisseur's do. Also in the PEP case, Tisseur [15] asked and solved the question of what the optimal backward errors are for an approximate eigen-triplet (an eigenvalue together with its associated left and right eigenvectors). We attempted to answer a similar question, too, but did not succeed.

References

[1] R. Byers, D. S. Mackey, V. Mehrmann, and H. Xu. Symplectic, BVD, and palindromic approaches to discrete-time control problems. Technical report, Preprint, Institute of Mathematics, Technische Universität Berlin.

[2] E. K.-W. Chu, T.-M. Hwang, W.-W. Lin, and C.-T. Wu. Vibration of fast trains, palindromic eigenvalue problems and structure-preserving doubling algorithms. J. Comput. Appl. Math., 229.

[3] D. J. Higham and N. J. Higham. Structured backward error and condition of generalized eigenvalue problems. SIAM J. Matrix Anal. Appl., 20(2).

[4] N. J. Higham, F. Tisseur, and P. M. Van Dooren. Detecting a definite Hermitian pair and a hyperbolic or elliptic quadratic eigenvalue problem, and associated nearness problems. Linear Algebra Appl.

[5] A. Hilliges. Numerische Lösung von quadratischen Eigenwertproblemen mit Anwendungen in der Schienendynamik. Master's thesis, Technical University Berlin, Germany.

[6] A. Hilliges, C. Mehl, and V. Mehrmann. On the solution of palindromic eigenvalue problems. In Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), Jyväskylä, Finland.

[7] M. E. Hochstenbach and B. Plestenjak. Backward error, condition numbers, and pseudospectra for the multiparameter eigenvalue problem. Linear Algebra Appl., 375:63-81.

[8] T.-M. Hwang, W.-W. Lin, and J. Qian. Numerically stable, structure-preserving algorithms for palindromic quadratic eigenvalue problems.
Technical report, Preprints in Mathematics, National Center for Theoretical Sciences, National Tsing Hua University, Taiwan.

[9] I. C. F. Ipsen. Accurate eigenvalues for fast trains. SIAM News, 37.

[10] X.-G. Liu and Z.-X. Wang. A note on the backward errors for Hermite eigenvalue problems. Appl. Math. Comput., 165(2), 2005.

[11] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Structured polynomial eigenvalue problems: Good vibrations from good linearizations. SIAM J. Matrix Anal. Appl., 28.

[12] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Vector spaces of linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 28.

[13] C. Schröder. A QR-like algorithm for the palindromic eigenvalue problem. Technical report, Preprint 388, TU Berlin, MATHEON, Germany.

[14] C. Schröder. URV decomposition based structured methods for palindromic and even eigenvalue problems. Technical report, Preprint 375, TU Berlin, MATHEON, Germany.

[15] F. Tisseur. Backward error and condition of polynomial eigenvalue problems. Linear Algebra Appl., 309, 2000.

[16] F. Tisseur and K. Meerbergen. The quadratic eigenvalue problem. SIAM Rev., 43.

[17] H. Xu. On equivalence of pencils from discrete-time and continuous-time control. Linear Algebra Appl., 414:97-124.

[18] S. Zaglmayr. Eigenvalue problems in SAW-filter simulations. Diplomarbeit, Institute of Computational Mathematics, Johannes Kepler University Linz, Linz, Austria.


REPRESENTING HOMOLOGY AUTOMORPHISMS OF NONORIENTABLE SURFACES

REPRESENTING HOMOLOGY AUTOMORPHISMS OF NONORIENTABLE SURFACES REPRESENTING HOMOLOGY AUTOMORPHISMS OF NONORIENTABLE SURFACES JOHN D. MCCARTHY AND ULRICH PINKALL Abstract. In this paper, we prove that every automorphism of the first homology group of a closed, connected,

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,

More information

Error estimates for the ESPRIT algorithm

Error estimates for the ESPRIT algorithm Error estimates for the ESPRIT algorithm Daniel Potts Manfred Tasche Let z j := e f j j = 1,..., M) with f j [ ϕ, 0] + i [ π, π) and small ϕ 0 be distinct nodes. With complex coefficients c j 0, we consider

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

Cyclic reduction and index reduction/shifting for a second-order probabilistic problem

Cyclic reduction and index reduction/shifting for a second-order probabilistic problem Cyclic reduction and index reduction/shifting for a second-order probabilistic problem Federico Poloni 1 Joint work with Giang T. Nguyen 2 1 U Pisa, Computer Science Dept. 2 U Adelaide, School of Math.

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Singular-value-like decomposition for complex matrix triples

Singular-value-like decomposition for complex matrix triples Singular-value-like decomposition for complex matrix triples Christian Mehl Volker Mehrmann Hongguo Xu December 17 27 Dedicated to William B Gragg on the occasion of his 7th birthday Abstract The classical

More information

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that Chapter 3 Newton methods 3. Linear and nonlinear eigenvalue problems When solving linear eigenvalue problems we want to find values λ C such that λi A is singular. Here A F n n is a given real or complex

More information

ELA

ELA LEFT EIGENVALUES OF 2 2 SYMPLECTIC MATRICES E. MACíAS-VIRGÓS AND M.J. PEREIRA-SÁEZ Abstract. A complete characterization is obtained of the 2 2 symplectic matrices that have an infinite number of left

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

SOME INEQUALITIES FOR COMMUTATORS OF BOUNDED LINEAR OPERATORS IN HILBERT SPACES. S. S. Dragomir

SOME INEQUALITIES FOR COMMUTATORS OF BOUNDED LINEAR OPERATORS IN HILBERT SPACES. S. S. Dragomir Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Filomat 5: 011), 151 16 DOI: 10.98/FIL110151D SOME INEQUALITIES FOR COMMUTATORS OF BOUNDED LINEAR

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are

More information

Quantum NP - Cont. Classical and Quantum Computation A.Yu Kitaev, A. Shen, M. N. Vyalyi 2002

Quantum NP - Cont. Classical and Quantum Computation A.Yu Kitaev, A. Shen, M. N. Vyalyi 2002 Quantum NP - Cont. Classical and Quantum Computation A.Yu Kitaev, A. Shen, M. N. Vyalyi 2002 1 QMA - the quantum analog to MA (and NP). Definition 1 QMA. The complexity class QMA is the class of all languages

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Generic rank-k perturbations of structured matrices

Generic rank-k perturbations of structured matrices Generic rank-k perturbations of structured matrices Leonhard Batzke, Christian Mehl, André C. M. Ran and Leiba Rodman Abstract. This paper deals with the effect of generic but structured low rank perturbations

More information

A Block-Jacobi Algorithm for Non-Symmetric Joint Diagonalization of Matrices

A Block-Jacobi Algorithm for Non-Symmetric Joint Diagonalization of Matrices A Block-Jacobi Algorithm for Non-Symmetric Joint Diagonalization of Matrices ao Shen and Martin Kleinsteuber Department of Electrical and Computer Engineering Technische Universität München, Germany {hao.shen,kleinsteuber}@tum.de

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information