Nonlinear eigenvalue problems: Analysis and numerical solution


1 Nonlinear eigenvalue problems: Analysis and numerical solution
Volker Mehrmann, TU Berlin, Institut für Mathematik
DFG Research Center MATHEON "Mathematics for key technologies"
MATTRIAD 2011, July 2011

2 Outline
Lecture I:
1. Motivation and Applications
2. Normal and condensed forms
3. Linearization
4. Structure preservation
Lecture II:
1. Smith forms and Jordan Triples
2. Perturbation theory
3. Numerical methods for linear and polynomial problems
4. Numerical methods for general nonlinear problems
Nonlinear EVPs 2 / 203

3 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

4 Nonlinear Eigenvalue Problems
We discuss nonlinear eigenvalue problems: compute eigenvalues λ and right eigenvectors x of
f(λ, α)x = 0, x ∈ F^n, λ ∈ F,
where F is a field, typically F = R or F = C, f : F × F^p → F^{l,n}, and α denotes a set of p parameters. Sometimes we also want left eigenvectors y ∈ F^l such that y^H f(λ, α) = 0.

5 Typical functions
Generalized linear EVPs: f = λA_1 + A_0.
Quadratic EVPs: f = λ^2 A_2 + λA_1 + A_0.
Polynomial EVPs: f = Σ_{i=0}^k λ^i A_i with coefficient matrices A_i ∈ F^{l,n}.
Rational EVPs: f = Σ_{i=−j}^k λ^i A_i with coefficient matrices A_i ∈ F^{l,n}.
General nonlinear EVPs f, e.g. f = exp(Σ_{i=−j}^k λ^i A_i).
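In NumPy, evaluating such a polynomial matrix function at a given λ is a one-liner. The helper `poly_eval` and the random 3 × 3 coefficients below are my own illustration, not from the lectures:

```python
import numpy as np

def poly_eval(coeffs, lam):
    """Evaluate P(lam) = sum_i lam**i * A_i for coeffs = [A_0, ..., A_k]."""
    return sum(lam**i * A for i, A in enumerate(coeffs))

rng = np.random.default_rng(0)
A0, A1, A2 = (rng.standard_normal((3, 3)) for _ in range(3))

# quadratic EVP function f = lam^2 A_2 + lam A_1 + A_0, evaluated at lam = 1.5
P = poly_eval([A0, A1, A2], 1.5)
assert np.allclose(P, A0 + 1.5 * A1 + 1.5**2 * A2)
```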

6 Typical questions
Find all eigenvalues λ and associated eigenvectors x for a given parameter value α.
Find some important eigenvalues λ and associated eigenvectors x for a given parameter α.
Find all eigenvalues in a given subset of C for a given parameter α.
Optimize eigenvalue positions over the parameter set.
...

7 Resonances in rail tracks
Project with company SFE in Berlin 2004/2006.
Modeling of excitation of the tracks by the train.
Discretization of rail and gravel bed with finite elements.
Parametric eigenvalue problem for excitation frequencies.
Optimization of parameters.
Goal: higher safety and reduction of noise.

8 Why did SFE need help?
Even for a very coarse discretization (of the track and gravel bed), commercial programs needed several hours of CPU time to solve the EVP for the whole frequency range.
Commercial packages delivered no correct digit in the eigenvalues.
Accuracy went down when the discretization was made finer.
Optimization of parameters was not possible with the current tools.

9 Background
Under the assumption of an infinite rail, FEM in space leads to
M z̈ + D ż + K z = F,
with symmetric infinite block tridiagonal coefficient matrices (operators) M, D, K. Row j of M and the corresponding blocks of z and F are
[ ... M_{j,1}^T  M_{j,0}  M_{j+1,1} ... ],  z = (..., z_{j−1}, z_j, z_{j+1}, ...),  F = (..., F_{j−1}, F_j, F_{j+1}, ...).
Operators D, K have the same structure as M. Furthermore, M_{j,0} > 0 and D_{j,0}, K_{j,0} ≥ 0.

10 Solution ansatz
Fourier expansion F_j = F̂_j e^{iωt}, z_j = ẑ_j e^{iωt}, where ω is the excitation frequency. Using periodicity and combining l parts into one vector y_j = [ẑ_j^T, ẑ_{j+1}^T, ..., ẑ_{j+l}^T]^T gives a complex (singular) difference equation
A_1(ω) y_{j+1} + A_0(ω) y_j + A_1(ω)^T y_{j−1} = G_j,
with A_0(ω) = A_0(ω)^T block tridiagonal and A_1(ω) singular, of rank smaller than n/2.

11 The associated nonlinear EVP
Ansatz: y_{j+1} = κ y_j leads to the palindromic rational eigenvalue problem
R(κ)x = (κ A_1(ω) + A_0(ω) + (1/κ) A_1(ω)^T) x = 0.
Alternative representation as a palindromic polynomial eigenvalue problem
P(λ)x = (λ^2 A_1(ω) + λ A_0(ω) + A_1(ω)^T) x = 0.
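The reciprocal pairing of the eigenvalues of such a T-palindromic problem can be checked numerically. The sketch below (random coefficients and a first companion linearization, both my own assumptions, not the project data) solves λ²A₁ + λA₀ + A₁ᵀ on a small dense example:

```python
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(1)
n = 4
A1 = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T   # symmetric middle coefficient

# first companion linearization of P(kappa) = kappa^2 A1 + kappa A0 + A1^T:
# kappa*[A1 0; 0 I] + [A0 A1^T; -I 0]
X = np.block([[A1, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
Y = np.block([[A0, A1.T], [-np.eye(n), np.zeros((n, n))]])
kappas = eigvals(-Y, X)            # (kappa X + Y) v = 0

# T-palindromic structure: eigenvalues come in reciprocal pairs (kappa, 1/kappa)
for k in kappas:
    assert np.min(np.abs(kappas - 1.0 / k)) < 1e-6
```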

12 Acoustic field computation
Industrial project with company SFE in Berlin 2007/2011.
Computation of the acoustic field inside a car.
SFE has its own parameterized FEM model which allows geometry and topology changes.
Problem is needed within an optimization loop that changes geometry, topology, damping material, etc.
Model reduction and reduced order models for optimization.
Ultimate goal: minimize noise in important regions in the car interior.

13 Acoustic field
[Figure: SFE AKUSMOD, SFE GmbH 2007. Unit force = 1 N/mm; DLOAD 1 = symmetrical excitation, DLOAD 2 = antimetrical excitation. SFE GmbH, Berlin, CEO: Hans Zimmer, h.zimmer@sfe-berlin.de]

14 Tasks in the project
Numerical methods for large scale structured parameter dependent polynomial eigenvalue problems.
Compute eigenvalues in a trapezoidal region around 0.
Determine projectors onto important spectral subspaces for model reduction.
Model reduction for the parameterized model.
Optimization of frequencies.
Implementation of a parallel solver in SFE Concept.

15 Full model: summary
[M_s 0; D_sf^T M_f] [ü_d; p̈_d] + [D_s 0; 0 D_f] [u̇_d; ṗ_d] + [K_s(ω) D_sf; 0 K_f] [u_d; p_d] = [f_s; 0].
M_s, M_f, K_f are real symmetric positive semidefinite mass/stiffness matrices of structure and air; M_s is singular and diagonal; M_s is a factor larger than M_f.
K_s(ω) = K_s(ω)^T = K_1(ω) + ıK_2.
D_s, D_f are real symmetric damping matrices.
D_sf is the real coupling matrix between structure and air.
Parts depend on geometry, topology and material parameters.

16 Eigenvalue problem
(λ^2 [M_s 0; D_sf^T M_f] + λ [D_s 0; 0 D_f] + [K_s(ω) D_sf; 0 K_f]) [x_s; x_f] = 0,
or, after scaling the second block row with λ^{−1} and the second block column with λ, one has the complex symmetric quadratic EVP
(λ^2 [M_s 0; 0 M_f] + λ [D_s D_sf; D_sf^T D_f] + [K_s(ω) 0; 0 K_f]) [x_s; λ^{−1} x_f] = 0.

17 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

18 Linear EVPs
For linear EVPs (A_0 + λA_1)x = 0 we can analyze the properties via the normal form of the pair (A_0, A_1) under equivalence transformations (PA_0Q, PA_1Q) with nonsingular matrices P, Q.
For the special case A_1 = I, under similarity transformations Q^{−1}A_0Q with nonsingular Q.
For the numerical solution we want to use unitary (real orthogonal) P, Q.

19 Normal Forms
Problem class | Transformation | Normal form
λx + A_0x | Q^{−1}A_0Q, Q invertible | Jordan can. form
λA_1x + A_0x | PA_0Q, PA_1Q, P, Q inv. | Kronecker can. form
(Σ_{i=0}^k λ^i A_i)x = 0 | ? | ?

20 Jordan canonical form
Theorem (Jordan canonical form (JCF)) For every A_0 ∈ C^{n,n} there exists a nonsingular Q ∈ C^{n,n} such that
Q^{−1} A_0 Q = diag(J_{ρ_1}, ..., J_{ρ_v}),
where J_{ρ_j} ∈ C^{ρ_j,ρ_j} is an upper bidiagonal Jordan block with the eigenvalue λ_j on the diagonal and ones on the superdiagonal.
What can we learn from the JCF? Eigenvalues, algebraic and geometric multiplicities, minimal polynomial, characteristic polynomial, left and right eigenvectors and principal vectors, invariant subspaces.
In the real case, real Jordan form.
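For small matrices in exact arithmetic the JCF can be computed with SymPy. The 4 × 4 integer matrix below is an arbitrary illustration of mine, not an example from the slides:

```python
import sympy as sp

A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])
Q, J = A.jordan_form()        # A = Q J Q^{-1}, J block diagonal of Jordan blocks
assert sp.simplify(Q * J * Q.inv() - A) == sp.zeros(4, 4)
# J is upper bidiagonal: nonzeros only on the diagonal and superdiagonal
assert all(J[i, j] == 0 for i in range(4) for j in range(4) if j not in (i, i + 1))
```

Note that this works only in exact (or symbolic) arithmetic; slide 26 explains why no such computation is stable in floating point.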

21 Kronecker canonical form (KCF)
Theorem (Kronecker canonical form (KCF)) For every pair (A_1, A_0), A_i ∈ C^{l,n}, there exist nonsingular P ∈ C^{l,l}, Q ∈ C^{n,n} such that
P(λA_1 + A_0)Q = diag(L_{ɛ_1}, ..., L_{ɛ_p}, M_{η_1}, ..., M_{η_q}, J_{ρ_1}, ..., J_{ρ_v}, N_{σ_1}, ..., N_{σ_w}),
where the L_{ɛ_j} are ɛ_j × (ɛ_j + 1) right singular blocks, the M_{η_j} are (η_j + 1) × η_j left singular blocks, the J_{ρ_j} are Jordan blocks associated with finite eigenvalues λ_j, and the N_{σ_j} are nilpotent blocks λN + I associated with the eigenvalue ∞.

22 Regularity, index
Definition
A matrix pencil λA_1 + A_0, A_0, A_1 ∈ C^{l,n}, is called regular if l = n and if P(λ) = det(λA_1 + A_0) does not vanish identically, otherwise singular.
The size ν_d of the largest nilpotent (N-)blocks in the KCF is called the index of λA_1 + A_0.
Values λ ∈ C such that rank(λA_1 + A_0) < min(l, n) are called finite eigenvalues of λA_1 + A_0.
The eigenvalue µ = 0 of A_1 + µA_0 is called the infinite eigenvalue of λA_1 + A_0.

23 Deflating subspaces
Definition
A subspace L ⊆ C^n is called a deflating subspace for the pencil λA_1 + A_0 if for a matrix X_L ∈ C^{n,r} with full column rank and range X_L = L there exist matrices Y_L ∈ C^{n,r}, R_L ∈ C^{r,r}, and S_L ∈ C^{r,r} such that
A_1 X_L = Y_L R_L,  A_0 X_L = Y_L S_L.

24 Weierstraß canonical form (WCF)
Theorem (Weierstraß canonical form (WCF)) For every regular pair (A_1, A_0), A_i ∈ C^{n,n}, there exist nonsingular P, Q ∈ C^{n,n} such that
P(λA_1 + A_0)Q = diag(J_{ρ_1}, ..., J_{ρ_v}, N_{σ_1}, ..., N_{σ_w}),
where the J_{ρ_j} are Jordan blocks associated with the finite eigenvalues λ_j and the N_{σ_j} are nilpotent blocks λN + I associated with the eigenvalue ∞.

25 What do we learn from KCF, WCF?
Finite eigenvalues and infinite eigenvalues.
Algebraic and geometric multiplicities.
Index, Kronecker indices (sizes of singular blocks).
Regularity, non-regularity.
Eigenvectors, principal vectors, deflating subspaces, reducing subspaces (subspaces associated with singular blocks).

26 Evaluation of Normal Form Approach
The normal forms are essential for the theoretical analysis of EVPs.
They allow linear stability/bifurcation analysis; the points where ranks change are a superset of the set of critical points.
But they cannot be used in general for the development of numerical methods: they are typically not numerically stably computable, since the structure can be changed by arbitrarily small perturbations.

27 Schur form
Theorem (Schur form) For every matrix A_0 ∈ C^{n,n} there exists a unitary matrix Q ∈ C^{n,n} such that Q^H A_0 Q is upper triangular with the eigenvalues λ_1, λ_2, ..., λ_n on the diagonal.
What can we learn from the Schur form? Eigenvalues, algebraic multiplicity, invariant subspaces.
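SciPy's `schur` computes exactly this factorization. A quick sanity check on a random matrix (my own illustration):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

T, Q = schur(A, output="complex")      # A = Q T Q^H, Q unitary, T upper triangular
assert np.allclose(Q @ T @ Q.conj().T, A)
assert np.allclose(np.tril(T, -1), 0)  # strictly lower triangular part is zero

# the diagonal of T carries the eigenvalues of A (in some order)
for ev in np.linalg.eigvals(A):
    assert np.min(np.abs(np.diag(T) - ev)) < 1e-8
```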

28 Generalized upper triang. form
Theorem (GUPTRI form: Demmel/Kågström 1993) Given a matrix pair (A_1, A_0), there exist unitary matrices P, Q such that (PA_1Q, PA_0Q) are in the following generalized upper triangular form:
P(λA_1 + A_0)Q =
[λE_11 + A_11  λE_12 + A_12  λE_13 + A_13  λE_14 + A_14;
 0             λE_22 + A_22  λE_23 + A_23  λE_24 + A_24;
 0             0             λE_33 + A_33  λE_34 + A_34;
 0             0             0             λE_44 + A_44].
Here n_2 = l_2, n_3 = l_3; λE_11 + A_11 and λE_44 + A_44 contain all left and right singular Kronecker blocks of λA_1 + A_0, respectively. Furthermore, λE_22 + A_22 and λE_33 + A_33 are regular and contain the regular finite and infinite structure of λA_1 + A_0, respectively.

29 What can we learn from GUPTRI?
Eigenvalues.
Algebraic multiplicity of eigenvalues.
Left and right deflating and reducing subspaces.
With a lot of perturbation theory, also information on the lengths of chains.

30 Stably computable condensed forms
Problem class | Transformation | Stable cond. form
λx + A_0x = 0 | Q^{−1}A_0Q, Q unitary | Schur form
λA_1x + A_0x = 0 | PA_1Q, PA_0Q, P, Q unit. | gen. Schur form
(Σ_{i=0}^k λ^i A_i)x = 0 | ? | ?
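For the pencil row of this table, the QZ algorithm (generalized Schur form) is available in SciPy; the polynomial row is the open case. A minimal sketch with random matrices (my own illustration):

```python
import numpy as np
from scipy.linalg import qz, eigvals

rng = np.random.default_rng(3)
A1, A0 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

# generalized Schur form: A1 = Q S Z^H, A0 = Q T Z^H with S, T upper triangular
S, T, Q, Z = qz(A1, A0, output="complex")
assert np.allclose(Q @ S @ Z.conj().T, A1)
assert np.allclose(Q @ T @ Z.conj().T, A0)

# eigenvalues of the pencil lam*A1 + A0 from the triangular diagonals
lams = -np.diag(T) / np.diag(S)
for lam in eigvals(-A0, A1):         # cross-check against a direct solver
    assert np.min(np.abs(lams - lam)) < 1e-8
```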

31 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

32 Matrix polynomials
We now study polynomial EVPs
P(λ)x = (Σ_{i=0}^k λ^i A_i)x = 0, where A_i ∈ F^{l,n}.
We will extend concepts from linear to general matrix polynomials.

33 Regularity
Definition
A matrix polynomial P(λ) = Σ_{i=0}^k λ^i A_i, with A_0, ..., A_k ∈ F^{l,n}, A_k ≠ 0, is called regular if the coefficients are square matrices and if det P(λ) does not vanish identically in λ ∈ C; otherwise it is called singular.

34 Jordan Chains
Definition
Let P(λ) = Σ_{i=0}^k λ^i A_i with A_0, ..., A_k ∈ F^{l,n}. A right (left) Jordan chain of length r + 1 associated with a finite eigenvalue λ̂ of P(λ) is a sequence x_i (y_i), i = 0, ..., r, with x_0 ≠ 0 (y_0 ≠ 0) and
P(λ̂)x_0 = 0;
P(λ̂)x_1 + [(1/1!) (d/dλ)P(λ̂)] x_0 = 0;
⋮
P(λ̂)x_r + [(1/1!) (d/dλ)P(λ̂)] x_{r−1} + ... + [(1/r!) (d^r/dλ^r)P(λ̂)] x_0 = 0,
respectively
y_0^H P(λ̂) = 0;
y_1^H P(λ̂) + y_0^H [(1/1!) (d/dλ)P(λ̂)] = 0;
⋮
y_r^H P(λ̂) + y_{r−1}^H [(1/1!) (d/dλ)P(λ̂)] + ... + y_0^H [(1/r!) (d^r/dλ^r)P(λ̂)] = 0.

35 Kronecker Chains to the eigenvalue ∞
Definition
Let P(λ) = Σ_{i=0}^k λ^i A_i with A_0, ..., A_k ∈ F^{l,n}. The reversal of P(λ) is the polynomial
rev P(λ) := λ^k P(1/λ) = Σ_{i=0}^k λ^i A_{k−i}.
A right (left) Kronecker chain of length r + 1 associated with the eigenvalue infinity of P(λ) is a right (left) Jordan chain of length r + 1 associated with the eigenvalue λ = 0 of rev P(λ).

36 Singular Kronecker Chains
Definition
Let P(λ) = Σ_{i=0}^k λ^i A_i with A_0, ..., A_k ∈ F^{l,n}. A right singular Kronecker chain of length r + 1 associated with the right singular part of P(λ) is the sequence of coefficient vectors x_i, i = 0, ..., r, in a nonzero vector polynomial x(λ) = x_r λ^r + ... + x_1 λ + x_0 of minimal degree such that P(λ)x(λ) = 0, considered as a polynomial equation in λ.
A left singular Kronecker chain of length r + 1 associated with the left singular part of P(λ) is a sequence of coefficient vectors y_i, i = 0, ..., r, in a nonzero vector polynomial y(λ) = y_r λ^r + ... + y_1 λ + y_0 of minimal degree such that y^H(λ)P(λ) = 0. Here y^H(λ) = y_r^H λ^r + ... + y_1^H λ + y_0^H.

37 Linearization
Definition
For a matrix polynomial P(λ), a matrix pencil L(λ) = λX + Y is called a linearization of P(λ) if there exist unimodular matrices S(λ), T(λ) such that
S(λ)L(λ)T(λ) = diag(P(λ), I_n, ..., I_n).
If L(λ) is a linearization for P(λ) and rev L(λ) is a linearization for rev P(λ), then L(λ) is said to be a strong linearization for P(λ).

38 Properties of linearization
Linearization preserves the algebraic and geometric multiplicities of all finite eigenvalues, but not those of infinite eigenvalues.
Strong linearization preserves the algebraic and geometric multiplicities of all finite and infinite eigenvalues.
The lengths of singular chains are not all invariant.

39 Companion linearization
The classical companion linearization for polynomial eigenvalue problems
P(λ)x = Σ_{i=0}^k λ^i A_i x = 0
is to introduce new variables
y = [y_1^T, y_2^T, ..., y_k^T]^T = [x^T, λx^T, ..., λ^{k−1}x^T]^T
and to turn the problem into a generalized linear eigenvalue problem L(λ)y := (λX + Y)y = 0 of size nk × nk.
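A sketch of this construction for a cubic (k = 3), using the first companion form; the random coefficients and the residual check are my own illustration:

```python
import numpy as np
from scipy.linalg import eigvals, svdvals

rng = np.random.default_rng(4)
n = 3
A = [rng.standard_normal((n, n)) for _ in range(4)]    # A[0], ..., A[3]

# first companion form: L(lam) = lam*X + Y with
# X = diag(A_3, I, I),  Y = [A_2 A_1 A_0; -I 0 0; 0 -I 0]
X = np.block([[A[3], np.zeros((n, 2 * n))],
              [np.zeros((2 * n, n)), np.eye(2 * n)]])
Y = np.block([[A[2], A[1], A[0]],
              [-np.eye(n), np.zeros((n, n)), np.zeros((n, n))],
              [np.zeros((n, n)), -np.eye(n), np.zeros((n, n))]])
lams = eigvals(-Y, X)              # (lam X + Y) y = 0, 3n eigenvalues

# each computed lam makes P(lam) (numerically) singular
P = lambda lam: lam**3 * A[3] + lam**2 * A[2] + lam * A[1] + A[0]
for lam in lams:
    assert svdvals(P(lam))[-1] < 1e-6 * max(1.0, abs(lam) ** 3)
```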

40 Difficulties with companion form
Example
The even quadratic eigenvalue problem (λ^2 M + λG + K)x = 0 with M = M^T, K = K^T > 0, G = −G^T has a spectrum that is symmetric with respect to both axes, but the companion linearization
[0 I; −K −G] [x; y] = λ [I 0; 0 M] [x; y]
does not see this structure.
Numerical methods destroy eigenvalue symmetries in finite precision arithmetic!
Perturbation theory requires structured perturbations for stability near the imaginary axis.

41 Pros and cons of linearization
Is it a good idea to perform a linearization?
Pros:
Simpler analysis for linear eigenvalue problems.
Not many well studied methods for matrix polynomials.
No generalization of the Jordan/Kronecker canonical form for matrix polynomials.
Cons:
The condition number (sensitivity) may increase dramatically.
The size of the problem is increased.
Symmetry structures may be lost.

42 Optimal Linearizations
Goal: Find a large class of linearizations for which:
the linear pencil is easily constructed;
structure preserving linearizations exist;
the conditioning of the linear problem can be characterized and optimized;
eigenvalues/eigenvectors of the original problem are easily read off;
a structured perturbation analysis is possible.

43 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

44 Vector spaces of potential linearizations
Notation: Λ := [λ^{k−1}, λ^{k−2}, ..., λ, 1]^T, ⊗ the Kronecker product.
Definition (Mackey/Mackey/Mehl/Mehrmann)
For a given n × n matrix polynomial P(λ) of degree k define the sets:
V_P = {v ⊗ P(λ) : v ∈ F^k}, v is called right ansatz vector,
W_P = {w^T ⊗ P(λ) : w ∈ F^k}, w is called left ansatz vector,
L_1(P) = {L(λ) = λX + Y : X, Y ∈ F^{kn,kn}, L(λ)(Λ ⊗ I_n) ∈ V_P},
L_2(P) = {L(λ) = λX + Y : X, Y ∈ F^{kn,kn}, (Λ^T ⊗ I_n)L(λ) ∈ W_P},
DL(P) = L_1(P) ∩ L_2(P).

45 Properties of these sets
Lemma
For any n × n matrix polynomial P(λ) of degree k,
L_1(P) is a vector space of dimension k(k − 1)n^2 + k,
L_2(P) is a vector space of dimension k(k − 1)n^2 + k,
DL(P) is a vector space of dimension k.
These are not all linearizations, but they form a large class.

46 Example
Consider the cubic matrix polynomial (Antoniou and Vologiannidis 2004)
P(λ) = λ^3 A_3 + λ^2 A_2 + λA_1 + A_0.
Then the pencil
L(λ) = λ [0 A_3 0; I A_2 0; 0 0 I] + [−I 0 0; 0 A_1 A_0; 0 −I 0]
is a linearization for P, but is neither in L_1(P) nor in L_2(P).

47 The shifted sum
Definition (Column shifted sum)
Let X = (X_ij) and Y = (Y_ij) be block k × k matrices with blocks X_ij, Y_ij ∈ F^{n,n}. Then the column shifted sum of X and Y is defined to be
X ⊞ Y := [X_11 ... X_1k 0; ...; X_k1 ... X_kk 0] + [0 Y_11 ... Y_1k; ...; 0 Y_k1 ... Y_kk],
where the appended zero blocks are kn × n. The row shifted sum is defined analogously.

48 Example
For the first companion form C_1(λ) = λX_1 + Y_1 of P(λ) = Σ_{i=0}^k λ^i A_i, the column shifted sum X_1 ⊞ Y_1 is just
diag(A_k, I_n, ..., I_n) ⊞ [A_{k−1} A_{k−2} ... A_0; −I_n 0 ... 0; ...; 0 ... −I_n 0] = [A_k A_{k−1} ... A_0; 0 ... 0; ...; 0 ... 0],
i.e., the first block row carries [A_k A_{k−1} ... A_0] and all other block rows vanish.

49 Shifted sum property
Lemma
Let P(λ) = Σ_{i=0}^k λ^i A_i be an n × n matrix polynomial and L(λ) = λX + Y a kn × kn pencil. Then for v ∈ F^k,
(λX + Y)(Λ ⊗ I_n) = v ⊗ P(λ)  ⟺  X ⊞ Y = v ⊗ [A_k A_{k−1} ... A_0],
and so the space L_1(P) may be alternatively characterized as
L_1(P) = {λX + Y : X ⊞ Y = v ⊗ [A_k A_{k−1} ... A_0], v ∈ F^k}.
Analogously we can characterize L_2(P).
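The column shifted sum is easy to implement and gives a cheap membership test for L_1(P). The helper below is my own sketch; checked on the first companion form of a cubic it recovers the ansatz vector v = e_1:

```python
import numpy as np

def col_shifted_sum(X, Y, n):
    """Column shifted sum [X | 0] + [0 | Y] with an n-column zero block."""
    k = X.shape[1] // n
    Z = np.zeros((X.shape[0], (k + 1) * n))
    Z[:, : k * n] += X
    Z[:, n:] += Y
    return Z

rng = np.random.default_rng(5)
n, k = 2, 3
A = [rng.standard_normal((n, n)) for _ in range(k + 1)]

# first companion form of P = lam^3 A3 + lam^2 A2 + lam A1 + A0
X = np.block([[A[3], np.zeros((n, 2 * n))], [np.zeros((2 * n, n)), np.eye(2 * n)]])
Y = np.block([[A[2], A[1], A[0]],
              [-np.eye(n), np.zeros((n, n)), np.zeros((n, n))],
              [np.zeros((n, n)), -np.eye(n), np.zeros((n, n))]])

target = np.hstack([A[3], A[2], A[1], A[0]])      # [A_k ... A_0]
S = col_shifted_sum(X, Y, n)
assert np.allclose(S[:n], target)                 # first block row: v = e_1
assert np.allclose(S[n:], 0)                      # remaining block rows vanish
```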

50 Characterization of L_1(P)
Theorem
Let P(λ) = Σ_{i=0}^k λ^i A_i be an n × n matrix polynomial and v ∈ F^k any vector. Then the set of pencils in L_1(P) with right ansatz vector v consists of all L(λ) = λX + Y such that
X = [v ⊗ A_k | W]  and  Y = [W + (v ⊗ [A_{k−1} ... A_1]) | v ⊗ A_0],
with W ∈ F^{kn,(k−1)n} chosen arbitrarily.
Corollary: L_2(P) = [L_1(P^T)]^T.

51 A linearization check
Procedure for the linearization condition in L_1(P).
1) Suppose P(λ) is regular and L(λ) = λX + Y ∈ L_1(P) has 0 ≠ v ∈ F^k, i.e., L(λ)(Λ ⊗ I_n) = v ⊗ P(λ).
2) Select any nonsingular matrix M such that Mv = αe_1.
3) Form L̃(λ) := (M ⊗ I_n)L(λ), which must be of the form
L̃(λ) = λX̃ + Ỹ = λ [X̃_11 X̃_12; 0 −Z] + [Ỹ_11 Ỹ_12; Z 0],
where X̃_11 and Ỹ_12 are n × n.
4) Check det Z ≠ 0, the linearization condition for L(λ).
This procedure can be implemented as a numerical algorithm: choose M to be unitary, then use a rank revealing factorization to check whether Z is nonsingular.

52 Example
Consider a general regular P(λ) = λ^2 A + λB + C, and
L(λ) = λ [A B + C; A 2B − A] + [−C C; A − B C].
Since
[A B + C; A 2B − A] ⊞ [−C C; A − B C] = [A B C; A B C],
we have L(λ) ∈ L_1(P) with right ansatz vector v = [1 1]^T. Subtracting the first entry from the second reduces v to e_1, and the corresponding block-row operation on Y yields
Ỹ = [−C C; A − B + C 0].
Hence Z = A − B + C, and we have a linearization iff det(A − B + C) = det P(−1) ≠ 0, i.e., λ = −1 is not an eigenvalue of P.

53 Example
Consider the general regular cubic polynomial P(λ) = λ^3 A + λ^2 B + λC + D and a pencil λX + Y in L_1(P) with
X ⊞ Y = v ⊗ [A B C D],  v = [1 −2 0]^T.
Adding twice the first block-row of Y to the second block-row of Y gives
Z = [B + C D; A I],
and hence the linearization condition det Z = det(B + C − DA) ≠ 0.

54 Eigenvector Recovery Property
Theorem
Let P(λ) be an n × n matrix polynomial of degree k, and let L(λ) be any pencil in L_1(P) with ansatz vector v ≠ 0. Then x ∈ C^n is a right eigenvector for P(λ) with finite eigenvalue λ ∈ C if and only if Λ ⊗ x is a right eigenvector for L(λ) with eigenvalue λ. If in addition P is regular, i.e. det P(λ) ≢ 0, and L ∈ L_1(P) is a linearization, then every eigenvector of L with finite eigenvalue λ is of the form Λ ⊗ x for some eigenvector x of P.
A similar result holds for the space L_2(P) and also for the eigenvalues 0, ∞.

55 Strong Linearization Property
Theorem
Let P(λ) be a regular matrix polynomial and L(λ) ∈ L_1(P) (or L(λ) ∈ L_2(P)). Then the following statements are equivalent.
(i) L(λ) is a linearization for P(λ).
(ii) L(λ) is a regular pencil.
(iii) L(λ) is a strong linearization for P(λ).

56 Example
The first and second companion forms
C_1(λ) := λ diag(A_k, I_n, ..., I_n) + [A_{k−1} A_{k−2} ... A_0; −I_n 0 ... 0; ...; 0 ... −I_n 0],
C_2(λ) := λ diag(A_k, I_n, ..., I_n) + [A_{k−1} −I_n ... 0; A_{k−2} 0 ⋱ ; ⋮ ... −I_n; A_0 0 ... 0]
are strong linearizations in L_1(P), L_2(P), respectively.

57 Characterization of DL(P)
Lemma
Consider an n × n matrix polynomial P(λ) of degree k. Then, for v = (v_1, ..., v_k)^T and w = (w_1, ..., w_k)^T in F^k, the associated pencil L(λ) = λX + Y satisfies L(λ) ∈ DL(P) if and only if v = w.
Once an ansatz vector v has been chosen, a pencil from DL(P) is uniquely determined.

58 Are the classes large enough?
Theorem
For any regular n × n matrix polynomial P(λ) of degree k, almost every pencil in L_1(P) (L_2(P)) is a linearization for P(λ).
For any regular matrix polynomial P(λ), pencils in DL(P) are linearizations of P(λ) for almost all v ∈ F^k.
Almost every means for all but a closed, nowhere dense set of measure zero.

59 Linearization property for DL(P)
Theorem
Consider an n × n matrix polynomial P(λ) of degree k. Then for a given ansatz vector v = w = [v_1, ..., v_k]^T the associated linear pencil in DL(P) is a linearization if and only if no root of the v-polynomial
p(v; x) := v_1 x^{k−1} + ... + v_{k−1} x + v_k
is an eigenvalue of P.

60 Example
Consider P(λ) = λ^3 A + λ^2 B + λC + D and
L(λ) = λ [A 0 −A; 0 −A−C −B−D; −A −B−D −C] + [B A+C D; A+C B+D 0; D 0 −D]
in DL(P) with ansatz vector v = [1 0 −1]^T, p(v; x) = x^2 − 1. Using the check one easily finds that
det [A + C B + D; B + D A + C] ≠ 0
is the linearization condition for L(λ). This is equivalent to saying that neither −1 nor +1 is an eigenvalue of the matrix polynomial P(λ).

61 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

62 Polynomial EVPs with structure
P(λ)x = 0, where
P(λ) is a polynomial matrix valued function;
x is a real or complex eigenvector;
λ is a real or complex eigenvalue;
and P(λ) has some further structure.

63 Which structures?
Definition
A matrix polynomial P(λ) of degree k is called
T-even (H-even) if P(λ) = P^T(−λ) (P(λ) = P^H(−λ̄));
T-palindromic (H-palindromic) if P(λ) = λ^k P^T(λ^{−1}) (P(λ) = λ^k P^H(λ̄^{−1}));
T-symmetric (Hermitian) if P(λ) = P^T(λ) (P(λ)^H = P(λ)).

64 Examples
A T-even quadratic problem has the form λ^2 M + λG + K with M = M^T, K = K^T, G = −G^T.
A H-palindromic cubic problem has the form λ^3 A_3 + λ^2 A_2 + λA_1 + A_0, where A_3 = A_0^H, A_2 = A_1^H.
A quadratic symmetric problem has the form λ^2 M + λD + K with M = M^T, D = D^T, K = K^T.

65 Properties of even matrix functions
Lemma
Consider a T-even eigenvalue problem P(λ)x = 0. Then P(λ)x = 0 if and only if x^T P(−λ) = 0, i.e., the eigenvalues occur in pairs (λ, −λ), or quadruples (λ, λ̄, −λ, −λ̄) in the real case.
Consider a H-even eigenvalue problem P(λ)x = 0. Then P(λ)x = 0 if and only if x^H P(−λ̄) = 0, i.e., the eigenvalues occur in pairs (λ, −λ̄).
Even matrix functions have Hamiltonian spectrum; they generalize Hamiltonian problems λI + H, where H is Hamiltonian.
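The (λ, −λ) pairing can be observed numerically on a random T-even quadratic (gyroscopic) example; the companion linearization below ignores the structure, but the spectrum is still (numerically) Hamiltonian. All matrices are my own illustration:

```python
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(6)
n = 4
M = rng.standard_normal((n, n)); M = M + M.T + 5 * np.eye(n)   # symmetric, nonsingular
G = rng.standard_normal((n, n)); G = G - G.T                    # skew-symmetric
K = rng.standard_normal((n, n)); K = K + K.T                    # symmetric

# companion linearization of the T-even P(lam) = lam^2 M + lam G + K
X = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
Y = np.block([[G, K], [-np.eye(n), np.zeros((n, n))]])
lams = eigvals(-Y, X)

# Hamiltonian spectral symmetry: eigenvalues occur in pairs (lam, -lam)
for lam in lams:
    assert np.min(np.abs(lams + lam)) < 1e-7
```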

66 Properties of palindromic matrix functions
Lemma
Consider a T-palindromic eigenvalue problem P(λ)x = 0. Then P(λ)x = 0 if and only if x^T P(1/λ) = 0, i.e., the eigenvalues occur in pairs (λ, 1/λ), or quadruples (λ, λ̄, 1/λ, 1/λ̄) in the real case.
Consider a H-palindromic eigenvalue problem P(λ)x = 0. Then P(λ)x = 0 if and only if x^H P(1/λ̄) = 0, i.e., the eigenvalues occur in pairs (λ, 1/λ̄).
Palindromic matrix functions have symplectic spectrum; they generalize symplectic problems λI + S, where S is a symplectic matrix.

67 Properties of symmetric matrix functions
In general, symmetric/Hermitian matrix functions have no special symmetries in the spectrum. Left and right eigenvectors to an eigenvalue, however, are the same. In some special cases (definite, hyperbolic) it can be shown that all eigenvalues are real or purely imaginary.
Example
Consider the quadratic problem λ^2 M + K with M, K real symmetric, M positive definite, K positive semidefinite, and let M = LL^T be a Cholesky factorization. Then the eigenvalues are the square roots of the eigenvalues of −L^{−1}KL^{−T} and thus purely imaginary.
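This reduction is directly computable; a sketch with a random definite M and semidefinite K (my own construction), using the Cholesky factor L of M:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
M = rng.standard_normal((n, n)); M = M @ M.T + np.eye(n)   # symmetric positive definite
B = rng.standard_normal((n, n)); K = B @ B.T               # symmetric positive semidefinite

L = np.linalg.cholesky(M)                                  # M = L L^T
Linv = np.linalg.inv(L)
mu = np.linalg.eigvalsh(Linv @ K @ Linv.T)                 # eigenvalues of L^{-1} K L^{-T}
lams = np.concatenate([1j * np.sqrt(mu), -1j * np.sqrt(mu)])

# every lam is purely imaginary and makes lam^2 M + K singular
for lam in lams:
    s = np.linalg.svd(lam**2 * M + K, compute_uv=False)
    assert s[-1] < 1e-8 * s[0]
    assert abs(lam.real) < 1e-12
```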

68 Even Matrix Polynomials
Singularities (cracks) in anisotropic materials as functions of material or geometry parameters.
Optimal control of differential equations.
Gyroscopic systems.
Optimal waveguide design, H_∞ control.

69 Palindromic Matrix Polynomials
Excitation of rail tracks by high speed trains.
Periodic surface acoustic wave filters.
Control of (high order) difference equations.
Computation of the Crawford number.

70 Symmetric/Hermitian Matrix Polynomials
Mass, spring, damper systems; dynamic simulation of structures.
Acoustic field problem.
Quantum dot simulation.
...

71 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

72 Structured linearization
Lemma
Consider an n × n even matrix polynomial P(λ) of degree k. For an ansatz vector v = (v_1, ..., v_k)^T ∈ F^k the pencil L(λ) = λX + Y ∈ DL(P) is even, i.e. X = −X^T and Y = Y^T, if and only if p(v; x) is even.
Consider an n × n palindromic matrix polynomial P(λ) of degree k. Then, for a vector v = (v_1, ..., v_k)^T ∈ F^k, the pencil L(λ) = λX + Y ∈ DL(P) is (a permutation of) a palindromic pencil if and only if p(v; x) is palindromic.

73 Example
For the palindromic polynomial P(λ)y = (λ^2 A_1^T + λA_0 + A_1)y = 0, A_0 = A_0^T, a palindromic ansatz vector v = [α, α]^T, α ≠ 0, leads to a palindromic pencil
(κZ + Z^T)z = 0,  Z = [A_1^T A_0 − A_1; A_1^T A_1^T].
This is a linearization if and only if −1 is not an eigenvalue of P(λ).
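One can verify the defining identity (κZ + Z^T)(Λ ⊗ I_n) = v ⊗ P(κ) for this Z numerically at a few sample points κ; the random coefficients below are my own illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
A1 = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T          # A0 = A0^T

# palindromic pencil Z from the slide, ansatz vector v = [1, 1]^T
Z = np.block([[A1.T, A0 - A1], [A1.T, A1.T]])
P = lambda k: k**2 * A1.T + k * A0 + A1

# identity: (kappa Z + Z^T) (Lambda (x) I_n) = v (x) P(kappa), Lambda = [kappa, 1]^T
for kappa in (0.3, -2.0, 1.7):
    L = kappa * Z + Z.T
    W = L @ np.vstack([kappa * np.eye(n), np.eye(n)])
    assert np.allclose(W, np.vstack([P(kappa), P(kappa)]))
```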

74 Symmetric linearizations
For symmetric P, a simple argument shows that every pencil in DL(P) is also symmetric: L ∈ DL(P) with ansatz vector v implies that L^T is also in DL(P) with the same ansatz vector v, and then L = L^T follows from the uniqueness.

75 Deflation of bad eigenvalues
To have structure preserving linearizations we need to deflate bad eigenvalues and singular blocks first.
Open question: Can we linearize first and then deflate in the linear problem?
If so, then we need structure preserving procedures for even (palindromic) pencils. Do we have structured Kronecker forms?

76 Outline
1 Motivation and Applications
2 Normal and condensed forms
3 Matrix Polynomials
4 New Classes of Linearizations
5 Structured polynomial EVPs
6 Structured Linearization
7 Structured canonical forms
8 Structured Staircase Form
9 Smith form and invariant polynomials
10 Backward error analysis
11 Numerical methods

77 Structured Kronecker form
To preserve the even, palindromic, or symmetric structure, we use congruence transformations:
λÑ + M̃ = λU^T N U + U^T M U,  resp.  λÑ + M̃ = λU^H N U + U^H M U,
λÃ^T + Ã = λU^T A^T U + U^T A U,
λM̃ + K̃ = λU^T M U + U^T K U,  resp.  λM̃ + K̃ = λU^H M U + U^H K U,
with nonsingular (orthogonal, unitary) U.
What are the structured canonical and condensed forms under these transformations?

78 Structured KCF for even pencils
Theorem (Thompson 91)
If N, M ∈ R^{n,n} with N = −N^T, M = M^T, then there exists a nonsingular matrix X ∈ C^{n,n} such that
X^T(λN + M)X = diag(B_S, B_I, B_Z, B_F),
B_S = diag(O_η, S_{ξ_1}, ..., S_{ξ_k}),
B_I = diag(I_{2ɛ_1+1}, ..., I_{2ɛ_l+1}, I_{2δ_1}, ..., I_{2δ_m}),
B_Z = diag(Z_{2σ_1+1}, ..., Z_{2σ_r+1}, Z_{2ρ_1}, ..., Z_{2ρ_s}),
B_F = diag(R_{φ_1}, ..., R_{φ_t}, C_{ψ_1}, ..., C_{ψ_u}).
This structured Kronecker canonical form is unique up to permutation of the blocks.

79 Properties of blocks
1. O_η = λ0_η + 0_η;
2. Each S_{ξ_j} is a (2ξ_j + 1) × (2ξ_j + 1) block that combines a right singular block and a left singular block, both of minimal index ξ_j;

80
3. Each I_{2ɛ_j+1} is a (2ɛ_j + 1) × (2ɛ_j + 1) block that contains a single block corresponding to the eigenvalue ∞ with index 2ɛ_j + 1; its entries involve a sign s ∈ {1, −1}, the sign-index or sign-characteristic of the block;

81
4. Each I_{2δ_j} is a 4δ_j × 4δ_j block that combines two 2δ_j × 2δ_j infinite eigenvalue blocks of index δ_j;

82
5. Each Z_{2σ_j+1} is a (4σ_j + 2) × (4σ_j + 2) block that combines two (2σ_j + 1) × (2σ_j + 1) Jordan blocks corresponding to the eigenvalue 0;

83
6. Each Z_{2ρ_j} is a 2ρ_j × 2ρ_j block that contains a single Jordan block corresponding to the eigenvalue 0, where s ∈ {1, −1} is the sign characteristic of this block;

84
7. Each R_{φ_j} is a 2φ_j × 2φ_j block that combines two φ_j × φ_j Jordan blocks corresponding to the nonzero real eigenvalues a_j and −a_j;

85
8a. Either C_{ψ_j} is a 2ψ_j × 2ψ_j block combining two ψ_j × ψ_j Jordan blocks with purely imaginary eigenvalues ib_j, −ib_j (b_j > 0), where s ∈ {1, −1} is the sign characteristic;

86
8b. or C_{ψ_j} is a 4ψ_j × 4ψ_j block combining ψ_j × ψ_j Jordan blocks for each of the complex eigenvalues a_j + ib_j, a_j − ib_j, −a_j + ib_j, −a_j − ib_j (with a_j ≠ 0 and b_j ≠ 0). In this case it is built from 2 × 2 blocks
Λ_j = [b_j a_j; −a_j b_j]
arranged in a block anti-diagonal pattern Ω.

87 Structured KCF for palindromic pencils
Theorem (Horn/Sergejchuk 06, Schröder 06)
If A ∈ R^{n,n}, then there exists a nonsingular matrix X ∈ R^{n,n} such that
λX^T A^T X + X^T A X = diag(λA_1^T + A_1, ..., λA_l^T + A_l)
is in structured Kronecker form. This structured Kronecker canonical form is unique up to permutation of blocks, i.e., the kind, size and number of the blocks as well as the sign characteristics are characteristic of the pencil λA^T + A.

88 Properties of blocks
S_p ∈ R^{2p+1,2p+1}, p ∈ N_0: a combined singular block;
L_{1,p}(λ) ∈ R^{2p,2p}, where p ∈ N, λ ∈ R, |λ| < 1: a block associated with the real eigenvalue λ;

89
L_{2,p}(α, β) ∈ R^{4p,4p}, built from I_2 and Λ = [α β; −β α], where p ∈ N, α, β ∈ R \ {0}, β < 0, |α + iβ| < 1;
σU_{1,p} ∈ R^{p,p}, where p ∈ N is odd and σ ∈ {1, −1};

90
U_{2,p} = L_p(1) ∈ R^{2p,2p}, where p ∈ N is even;
U_{3,p} = L_p(−1) ∈ R^{2p,2p}, where p ∈ N is odd;

91
σU_{4,p} ∈ R^{p,p}, where p ∈ N is even and σ ∈ {1, −1};

92
σU_{5,p}(α, β) ∈ R^{2p,2p}, built from I_2, Λ = [α β; −β α] and Λ^{1/2}, where p ∈ N is odd, |α + iβ| = 1, β < 0, and σ ∈ {1, −1}; here Λ^{1/2} is the rotation matrix with rotation angle φ/2 ∈ (0, π/2), where φ = arctan(β/α) is the rotation angle of the rotation matrix Λ;

σU_{6,p}(α, β) ∈ R^{2p,2p}, where p ∈ N is even, |α + iβ| = 1, β < 0, Λ = [α, β; −β, α], σ ∈ {1, −1}.

Other structured KCF
Analogous results exist for symmetric pencils.
Analogous results exist for complex even pencils.
Analogous results exist for complex palindromic pencils.
Hermitian pencils and complex T-symmetric pencils can be treated like complex even pencils (set λ = iμ).

Consequences
Even, palindromic, and symmetric KCFs for pencils exist, but the transformation matrix X may be arbitrarily ill-conditioned.
The even, palindromic, and symmetric KCF cannot be computed reliably with finite precision algorithms.
Yet the information given in the even, palindromic, and symmetric KCF is essential for the understanding of the computational problems.
We need alternatives from which we can derive the information that allows the deflation of singular blocks and of blocks associated with the eigenvalues 0 and ∞ (respectively 1 and −1 in the palindromic case).


Why not just λQNU + QMU?
Example: Consider a 3 × 3 even pencil with N = Q N̂ Q^T, M = Q M̂ Q^T, where Q is a random real orthogonal matrix and λN̂ + M̂ is a fixed even pencil with degenerate eigenvalue structure. For different randomly generated orthogonal matrices Q, the QZ algorithm in MATLAB produced all variations of eigenvalues that are possible in a general 3 × 3 pencil.
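The explicit 3 × 3 matrices of this example are not recoverable from this transcription; the following numpy/scipy sketch reproduces the phenomenon with a hand-made singular even pencil (our own choice): unstructured QZ applied after a random orthogonal congruence returns essentially arbitrary eigenvalues.

```python
import numpy as np
from scipy.linalg import eig, qr

# A singular 3x3 even pencil lambda*N + M (N skew-symmetric, M symmetric):
# det(lambda*N + M) vanishes identically, so no eigenvalue is well defined.
N = np.array([[0., 1., 0.],
              [-1., 0., 0.],
              [0., 0., 0.]])
M = np.diag([1., 1., 0.])
sing_residual = max(abs(np.linalg.det(t * N + M)) for t in (0.3, -1.7, 2.5))

# Congruence with a random orthogonal Q, then unstructured QZ: the computed
# "eigenvalues" are determined purely by rounding errors and change with Q.
rng = np.random.default_rng(1)
results = []
for _ in range(2):
    Q, _ = qr(rng.standard_normal((3, 3)))
    w = eig(-(Q @ M @ Q.T), Q @ N @ Q.T, right=False)
    results.append(w)
print(results[0])
print(results[1])
```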

Structured staircase form for even pencils
Theorem (Byers/M./Xu 07). For λN + M with N = −N^T, M = M^T ∈ R^{n,n}, there exists a real orthogonal matrix U ∈ R^{n,n} such that U^T N U and U^T M U are simultaneously in block staircase form, partitioned with block sizes n_1, …, n_m, l, q_m, …, q_1 (the explicit staircase block patterns are omitted), where

q_1 ≥ n_1 ≥ q_2 ≥ n_2 ≥ … ≥ q_m ≥ n_m,
N_{j,2m+1−j} ∈ R^{n_j, q_{j+1}}, 1 ≤ j ≤ m − 1,
N_{m+1,m+1} = [Δ, 0; 0, 0] with Δ = −Δ^T ∈ R^{2p,2p},
M_{j,2m+2−j} = [Γ_j, 0] ∈ R^{n_j, q_j} with Γ_j ∈ R^{n_j, n_j}, 1 ≤ j ≤ m,
M_{m+1,m+1} = [Σ_11, Σ_12; Σ_12^T, Σ_22] with Σ_11 = Σ_11^T ∈ R^{2p,2p}, Σ_22 = Σ_22^T ∈ R^{l−2p,l−2p},

and the blocks Δ, Σ_22, and Γ_j, j = 1, …, m, are nonsingular.

The middle block

λN_{m+1,m+1} + M_{m+1,m+1} = λ [Δ, 0; 0, 0] + [Σ_11, Σ_12; Σ_12^T, Σ_22]

contains all the blocks associated with finite eigenvalues and the 1 × 1 blocks associated with the eigenvalue ∞.
The finite spectrum is obtained from the even pencil λΔ + Σ with Σ = Σ_11 − Σ_12 Σ_22^{−1} Σ_12^T and Δ invertible.
The matrix Δ has a skew-Cholesky factorization Δ = L J L^T with J = [0, I; −I, 0].
The spectral information can be obtained from the Hamiltonian matrix H = J L^{−1} Σ L^{−T}.

Consequences, other cases
Similar staircase forms exist for palindromic and symmetric pencils.
All the information about the invariants (Kronecker indices) can be read off (Byers/M./Xu 07).
Bad eigenvalues can be deflated first: singularities and higher order blocks associated with the eigenvalues 0 and ∞ can be deflated.
The best treatment of the infinite eigenvalues in the middle block λN_{m+1,m+1} + M_{m+1,m+1} is unclear.
Is the use of skew-Cholesky better than projecting out the nullspace with unitary (symplectic) transformations?

Computational procedure
The procedure consists of a recursive sequence of singular value decompositions.
The staircase form essentially determines a least generic even pencil within the rounding error cloud surrounding λN + M.
Rank decisions face the usual difficulties and have to be adapted to the recursive procedure: similar difficulties as in the standard staircase form, GUPTRI. What to do in case of doubt? In applications, assume the worst case.
Perturbation analysis is essentially open for singular blocks and higher order blocks associated with ∞.

Example revisited
Our MATLAB implementation of the structured staircase algorithm determined that, in the cloud of rounding-error small perturbations of each even pencil λN + M, there is an even pencil in structured staircase form with one block I_3 with sign characteristic 1. The algorithm successfully located a least generic even pencil within the cloud.


Equivalence for matrix polynomials
Definition. Two m × n matrix polynomials P(λ), Q(λ) are said to be equivalent, denoted P(λ) ∼ Q(λ), if there exist unimodular matrix polynomials E(λ) and F(λ) of size m × m and n × n, respectively, such that E(λ)P(λ)F(λ) = Q(λ).
The canonical form of a matrix polynomial P(λ) under equivalence transformations is the Smith form of P(λ).

The Smith form
Theorem (Smith form). Let P(λ) be an m × n matrix polynomial over an arbitrary field F. Then there exist r ∈ N and unimodular matrix polynomials E(λ) and F(λ) of size m × m and n × n, respectively, such that

E(λ)P(λ)F(λ) = diag(d_1(λ), …, d_{min{m,n}}(λ)) =: D(λ),

where d_1(λ), …, d_r(λ) are monic (i.e., the highest degree terms all have coefficient 1), d_{r+1}(λ), …, d_{min{m,n}}(λ) are identically zero, and d_j(λ) is a divisor of d_{j+1}(λ) for j = 1, …, r − 1. Moreover, D(λ) is unique.
The r nonzero diagonal elements d_j(λ) in the Smith form are called the invariant polynomials or invariant factors of P(λ).

Greatest common divisors
For an m × n matrix A, let α ⊆ {1, …, m} and β ⊆ {1, …, n} be arbitrary index sets of cardinality j ≤ min(m, n). Then A_αβ denotes the j × j submatrix of A in rows α and columns β; the determinant det A_αβ is called the αβ-minor of order j of A.
For d(x) ≠ 0, it is standard notation to write d(x) | p(x) to mean that d(x) is a divisor of p(x), i.e., there exists some q(x) such that p(x) = d(x)q(x). Note that d(x) | 0 is true for any d(x) ≠ 0.
Extending this notation to a set S of scalar polynomials, we write d | S to mean that d(x) divides each element of S, i.e., d(x) is a common divisor of the elements of S.
The greatest common divisor (or GCD) of a set S containing at least one nonzero polynomial is the unique monic polynomial g(x) such that g(x) | S, and if d(x) | S then d(x) | g(x).

Characterization of invariant polynomials
Theorem. Let P(λ) be an m × n matrix polynomial over a field F with a given Smith form. Set p_0(λ) ≡ 1. For 1 ≤ j ≤ min(m, n), let p_j(λ) ≡ 0 if all minors of P(λ) of order j are zero; otherwise, let p_j(λ) be the greatest common divisor of all minors of P(λ) of order j. Then the number r in the Smith form is the largest integer such that p_r(λ) ≢ 0. Furthermore, the invariant polynomials d_1(λ), …, d_r(λ) of P(λ) are ratios of GCDs given by

d_j(λ) = p_j(λ) / p_{j−1}(λ), j = 1, …, r,

and the remaining diagonal entries are given by d_j(λ) = p_j(λ) ≡ 0, j = r + 1, …, min{m, n}.
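This minors-and-GCDs characterization translates directly into a small symbolic computation; a sketch with sympy (function name and example matrix are our own choices):

```python
import sympy as sp
from itertools import combinations

lam = sp.symbols('lam')

def invariant_polynomials(P):
    """Invariant polynomials via the GCD-of-minors characterization:
    d_j = p_j / p_{j-1}, where p_j is the monic GCD of all j x j minors."""
    m, n = P.shape
    p_prev = sp.Integer(1)
    ds = []
    for j in range(1, min(m, n) + 1):
        minors = [sp.expand(P.extract(list(r), list(c)).det())
                  for r in combinations(range(m), j)
                  for c in combinations(range(n), j)]
        p_j = sp.Integer(0)
        for mnr in minors:
            p_j = sp.gcd(p_j, mnr)
        if p_j == 0:                       # all minors of order j vanish
            ds.append(sp.Integer(0))
            continue
        p_j = sp.Poly(p_j, lam).monic().as_expr()
        ds.append(sp.cancel(p_j / p_prev))  # exact division by theory
        p_prev = p_j
    return ds

P = sp.Matrix([[lam, 1], [0, lam**2]])
ds = invariant_polynomials(P)
print(ds)   # expect [1, lam**3]
```

Here p_1 = gcd(λ, 1, 0, λ²) = 1 and p_2 = det P = λ³, so the invariant polynomials are 1 and λ³.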

Elementary divisors
When the field F is algebraically closed, the invariant polynomials factor uniquely into powers of linear factors,

d_i(λ) = (λ − λ_{i,1})^{α_{i,1}} ⋯ (λ − λ_{i,k_i})^{α_{i,k_i}}, i = 1, …, r,

where λ_{i,1}, …, λ_{i,k_i} ∈ F are distinct and α_{i,1}, …, α_{i,k_i} ∈ N.
The factors (λ − λ_{i,j})^{α_{i,j}}, j = 1, …, k_i, i = 1, …, r, are called the elementary divisors of P(λ).
Some polynomials (λ − λ_0)^α may occur multiple times as elementary divisors of P(λ), because they may be factors in distinct invariant polynomials d_{i1}(λ) and d_{i2}(λ).
For a matrix A ∈ C^{n×n}, each elementary divisor (λ − λ_0)^α of the matrix pencil λI − A corresponds to a Jordan block of size α × α associated with the eigenvalue λ_0 of A.

Jordan structure
Definition (Jordan structure of a matrix polynomial). For an m × n matrix polynomial P(λ) over the field F, the Jordan structure of P(λ) is the collection of all the finite and infinite elementary divisors of P(λ), including repetitions, where P(λ) is viewed as a polynomial over the algebraic closure F̄ of F.

Companion linearization
The classical companion linearization for the polynomial eigenvalue problem P(λ)x = Σ_{i=0}^k λ^i A_i x = 0 is the pencil

L(λ) = λ [A_k, 0, …, 0; 0, I_n, …, 0; ⋮, ⋱, ⋮; 0, …, 0, I_n] + [A_{k−1}, A_{k−2}, …, A_0; −I_n, 0, …, 0; ⋮, ⋱, ⋮; 0, …, −I_n, 0].
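As a sketch of how the linearization is used in practice (scipy/numpy, with our own helper name): build the pencil L(λ) = λX + Y, solve the generalized eigenvalue problem, and check the residuals back in the original polynomial:

```python
import numpy as np
from scipy.linalg import eig

def companion_pencil(coeffs):
    """First companion pencil L(lambda) = lambda*X + Y for
    P(lambda) = sum_i lambda^i A_i, with coeffs = [A_0, ..., A_k]."""
    A = [np.asarray(a, dtype=float) for a in coeffs]
    k, n = len(A) - 1, A[0].shape[0]
    X = np.eye(k * n)
    X[:n, :n] = A[k]
    Y = np.zeros((k * n, k * n))
    for i in range(k):                    # first block row: A_{k-1}, ..., A_0
        Y[:n, i * n:(i + 1) * n] = A[k - 1 - i]
    for i in range(1, k):                 # -I_n on the block subdiagonal
        Y[i * n:(i + 1) * n, (i - 1) * n:i * n] = -np.eye(n)
    return X, Y

# quadratic example: P(lambda) = lambda^2 I + lambda C + K
n = 3
rng = np.random.default_rng(2)
C, K = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X, Y = companion_pencil([K, C, np.eye(n)])
lam = eig(-Y, X, right=False)             # det(lam*X + Y) = 0

# residual: smallest singular value of P at each computed eigenvalue
res = max(np.linalg.svd(l**2 * np.eye(n) + l * C + K,
                        compute_uv=False)[-1] for l in lam)
print(res)
```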

Companion matrix
If P(λ) is regular and A_k invertible, then we form the companion matrix

C_P = [A_k^{−1}A_{k−1}, A_k^{−1}A_{k−2}, …, A_k^{−1}A_0; −I_n, 0, …, 0; ⋮, ⋱, ⋮; 0, …, −I_n, 0].

We have det P(λ) = det(λI + C_P) det(A_k).

Resolvents
Theorem. For every complex number λ that is not an eigenvalue of the regular n × n matrix polynomial P(λ) with nonsingular leading coefficient,

P(λ)^{−1} = S_1 (λI + C_P)^{−1} T_1, where S_1 = [0, …, 0, I_n], T_1 = [A_k^{−1}; 0; ⋮; 0].

Any triple (S, C, T) such that S = S_1 M, C = M^{−1} C_P M, T = M^{−1} T_1 is called a standard triple, and we have P(λ)^{−1} = S(λI + C)^{−1} T.
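In this transcription the block structure of S_1 and T_1 is partly garbled; with the companion matrix C_P as defined on the preceding slide, the identity block of S_1 sits in the last block position. A numerical sanity check of the resolvent identity (random test matrices, our own choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 2, 3
A = [rng.standard_normal((n, n)) for _ in range(k + 1)]    # A_0, ..., A_k

Ak_inv = np.linalg.inv(A[k])
C = np.zeros((k * n, k * n))
for i in range(k):                    # first block row A_k^{-1} A_{k-1-i}
    C[:n, i * n:(i + 1) * n] = Ak_inv @ A[k - 1 - i]
for i in range(1, k):                 # -I_n on the block subdiagonal
    C[i * n:(i + 1) * n, (i - 1) * n:i * n] = -np.eye(n)

# standard triple (S1, C_P, T1): identity block of S1 in the LAST position
S1 = np.hstack([np.zeros((n, (k - 1) * n)), np.eye(n)])
T1 = np.vstack([Ak_inv, np.zeros(((k - 1) * n, n))])

mu = 1.3                              # sample point (generically no eigenvalue)
P_mu = sum(mu**i * A[i] for i in range(k + 1))
lhs = np.linalg.inv(P_mu)
rhs = S1 @ np.linalg.solve(mu * np.eye(k * n) + C, T1)
resolvent_gap = np.max(np.abs(lhs - rhs))
print(resolvent_gap)
```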

Canonical form of linearization
Lemma. Let (S, C, T) be a standard triple for an n × n matrix polynomial P(λ) with nonsingular leading coefficient. Then

Q = [S C^{k−1}; ⋮; S C; S]

is invertible and C_P = Q C Q^{−1}.
A standard triple that brings C_P to Jordan form is called a Jordan triple.

Jordan triples and matrix equations
The blocks of Jordan triples are constructed from the Jordan chains of P(λ), and if (S, C, T) is a Jordan triple, then the matrix equation

S(−C)^k + A_k^{−1}A_{k−1} S(−C)^{k−1} + ⋯ + A_k^{−1}A_1 S(−C) + A_k^{−1}A_0 S = 0

holds.

Lemma. Suppose that D(λ) = diag(d_1(λ), d_2(λ), …, d_n(λ)) is the Smith form of the T-even n × n matrix polynomial P(λ). Then the following statements hold:
a) Each d_l(λ) is alternating.
b) If P(λ) is regular, and ν is the number of indices l for which the invariant polynomial d_l(λ) is odd, then ν is even.

Jordan structure of even mat. polys
Theorem (Jordan struct. of T-altern. matrix poly's). Let P(λ) be an n × n T-alternating matrix polynomial of degree k. Then the Jordan structure of P(λ) is as follows.
(a) If (λ − λ_0)^{α_1}, …, (λ − λ_0)^{α_l} are the elem. div. associated with λ_0 ≠ 0, then the elem. div. associated with −λ_0 are (λ + λ_0)^{α_1}, …, (λ + λ_0)^{α_l}.
(b) Zero elementary divisors λ^β:
(i) if P(λ) is T-even, then for each odd β ∈ N, λ^β has even multiplicity;
(ii) if P(λ) is T-odd, then for each even β ∈ N, λ^β has even multiplicity.
(c) Infinite elementary divisors:
(i) Suppose P(λ) is T-even and k is even, or P(λ) is T-odd and k is odd. Then rev P(λ) is T-even, and for each odd γ ∈ N, the inf. elem. div. of P(λ) of degree γ has even multiplicity.
(ii) Suppose P(λ) is T-even and k is odd, or P(λ) is T-odd and k is even. Then rev P(λ) is T-odd, and for each even γ ∈ N, the inf. elem. div. of P(λ) of degree γ has even multiplicity.

Jordan structure of T-even pencils
Corollary. Let L(λ) = λX + Y be an n × n T-alternating pencil. Then the Jordan structure of L(λ) has the following properties:
(a) Nonzero elementary divisors occur in pairs: if (λ − λ_0)^{α_1}, …, (λ − λ_0)^{α_l} are the elem. div. of L(λ) associated with λ_0 ≠ 0, then the elem. div. of L(λ) associated with −λ_0 are (λ + λ_0)^{α_1}, …, (λ + λ_0)^{α_l}.
(b) If L(λ) is T-even, then the following elem. div. occur with even multiplicity:
(i) for each odd β ∈ N, the elem. div. λ^β, and
(ii) for each even γ ∈ N, the elem. div. at ∞ of degree γ.
(c) If L(λ) is T-odd, then the following elem. div. occur with even multiplicity:
(i) for each even β ∈ N, the elem. div. λ^β, and
(ii) for each odd γ ∈ N, the elem. div. at ∞ of degree γ.
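The eigenvalue pairing in part (a) is easy to observe numerically for a T-even pencil λX + Y with X skew-symmetric and Y symmetric (random test matrices, our own choice):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(5)
n = 6
Z = rng.standard_normal((n, n))
X = Z - Z.T        # skew-symmetric
Y = Z + Z.T        # symmetric
# L(lambda) = lambda*X + Y satisfies L(-lambda)^T = L(lambda): T-even

lam = eig(-Y, X, right=False)
# the spectrum must be symmetric under lambda -> -lambda
max_pair_gap = max(np.min(np.abs(lam + mu)) for mu in lam)
print(max_pair_gap)
```

With n even, X is generically nonsingular, so all eigenvalues are finite and the ± pairing is visible directly in the computed spectrum.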

Example
Consider the T-even matrix polynomial P(λ) = λ² diag(1, 0) + diag(0, 1) = diag(λ², 1).
Both P(λ) and rev P(λ) = diag(1, λ²) have the same Smith form diag(1, λ²); thus P(λ) has the elementary divisor λ² with odd multiplicity, and also an even degree elementary divisor at ∞ with odd multiplicity. But this Jordan structure is incompatible with every T-even pencil and with every T-odd pencil. Thus we see a reason why P(λ) can have no T-alternating strong linearization.
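The Smith forms in this example can be checked with the minors/GCD characterization from the previous section; a small sympy sketch (helper name is ours):

```python
import sympy as sp

lam = sp.symbols('lam')

def smith_2x2(P):
    """Smith form diag(d1, d2) of a regular 2x2 matrix polynomial via
    minors: d1 = monic GCD of the entries, d2 = monic det divided by d1."""
    p1 = sp.gcd(sp.gcd(P[0, 0], P[0, 1]), sp.gcd(P[1, 0], P[1, 1]))
    d1 = sp.Poly(p1, lam).monic().as_expr()
    d2 = sp.cancel(sp.Poly(sp.expand(P.det()), lam).monic().as_expr() / d1)
    return [d1, d2]

P    = sp.Matrix([[lam**2, 0], [0, 1]])   # P(lam)  = diag(lam^2, 1)
revP = sp.Matrix([[1, 0], [0, lam**2]])   # rev P   = diag(1, lam^2)
s1, s2 = smith_2x2(P), smith_2x2(revP)
print(s1, s2)   # both [1, lam**2]
```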

Consequences, Questions
The Jordan structure of any odd degree T-alternating matrix polynomial P(λ) is completely compatible with that of a T-alternating pencil of the same parity. This strongly suggests that it should be possible to construct a structure-preserving strong linearization for any such P(λ).
A question is whether compatibility of Jordan structures is also sufficient to imply the existence of a T-alternating strong linearization.
A more refined question concerns the existence of T-alternating linearizations that preserve all the spectral information of P(λ), comprising not only its finite and infinite elementary divisors, but also (when P(λ) is singular) its Kronecker indices. See De Teran, Dopico, Mackey 201x.

Strong linearizations
Generalization of Antoniou and Vologiannidis (2004).
Lemma. Let P(λ) be any n × n matrix polynomial of odd degree, regular or singular, over an arbitrary field F. Then S_P(λ) is a strong linearization for P(λ).
Theorem. Every T-alternating polynomial P(λ) of odd degree has a T-alternating strong linearization with the same parity as P.

Consequences, Questions
We know already that there are even degree T-alternating matrix polynomials that, because of Jordan structure incompatibilities, do not have any T-alternating strong linearization. If we put such cases aside, however, and consider only T-alternating matrix polynomials whose Jordan structure is compatible with at least some type of T-alternating pencil, is that compatibility sufficient to guarantee the existence of a T-alternating strong linearization? This is partially open.

The Regular Case
Theorem. Let P(λ) be a regular n × n T-alternating matrix polynomial of even degree k ≥ 2, over the field F = R or F = C.
(a) If P(λ) is T-even, then P(λ) has
a T-even strong linearization if and only if for each even γ ∈ N, the infinite elementary divisor of degree γ occurs with even multiplicity;
a T-odd strong linearization if and only if for each even β ∈ N, the elementary divisor λ^β occurs with even multiplicity.
(b) If P(λ) is T-odd, then P(λ) has
a T-even strong linearization if and only if for each odd β ∈ N, the elementary divisor λ^β occurs with even multiplicity;
a T-odd strong linearization if and only if for each odd γ ∈ N, the infinite elementary divisor of degree γ occurs with even multiplicity.

Palindromic polynomials (Mackey, Mackey, Mehl, M., 2010)
Many of the results for the even case, including the Smith form of a T-palindromic matrix polynomial, carry over to the palindromic case. But there are differences; these have to do with the difference between grade and degree: λ²I + λI is not palindromic viewed as a polynomial of grade 2, but it is palindromic viewed as one of grade 3. We do not have if-and-only-if results. Many open questions.


Wilkinson Backward Error Analysis
Consider a set of data D, a computational problem, and a solution space S. Suppose that the exact solution is described by a mapping φ : D → S, d ↦ s, that maps the data d ∈ D to a solution s ∈ S. In finite precision arithmetic we have an inaccurate mapping φ̃ and get an inaccurate answer s̃.

Wilkinson Backward Error Analysis
Classical Wilkinson backward error analysis:
Show that s̃ = φ(d̃), i.e., the computed result is the exact result for perturbed data d̃, with equivalent backward error η = d̃ − d.
Then analyse via perturbation analysis what the effect of a perturbation of the data on the result is. The amplification factor for the error in the data is called the condition number.

Backward error analysis for P(λ)x = 0
Data: coefficients A_i, i = 0, …, k.
Solution: λ, x.
Method: eigenvalue solver.
For the perturbation analysis we have to analyze how the eigenvalues λ and the eigenvectors x behave under perturbations of the coefficients A_i, i = 0, …, k, i.e., we have to determine the condition number.

Linearization
We do not solve P(λ)x = 0, but L(λ)z = 0, where the last part of the vector z is x.
Are the condition numbers of P(λ)x = 0 and L(λ)z = 0 different?
If yes, can we make the difference smaller by choosing different linearizations?
What about structure preservation: does that help?

Conditioning of polyn. evps
Definition. Let λ be a simple nonzero finite eigenvalue of P(λ) = Σ_{i=0}^k λ^i A_i and let x, y be the associated right and left eigenvectors. The norm-wise condition number (with respect to perturbations ΔP = Σ_{i=0}^k λ^i ΔA_i) is defined as

κ_P(λ) = lim_{ε→0} sup { |Δλ| / (ε |λ|) : (P + ΔP)(λ + Δλ)(x + Δx) = 0, ‖ΔA_i‖ ≤ ε ω_i, i = 0, 1, …, k },

where the weights ω_i allow one to put different measures on the coefficients.
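For a simple finite nonzero eigenvalue there is a well-known explicit formula for this quantity (Tisseur 2000): κ_P(λ) = (Σ_i ω_i |λ|^i) ‖y‖ ‖x‖ / (|λ| |y* P′(λ) x|). A sketch that evaluates it for a quadratic polynomial via its companion pencil; the weights ω_i = ‖A_i‖, the helper name, and the test matrices are all our own choices:

```python
import numpy as np
from scipy.linalg import eig

def polyeig_condition(A, lam, x, y, weights):
    """Condition number of the simple eigenvalue lam of P(z) = sum z^i A[i],
    via kappa = (sum_i w_i |lam|^i) ||y|| ||x|| / (|lam| |y^* P'(lam) x|).
    Assumes lam is simple, finite, and nonzero."""
    k = len(A) - 1
    Pprime = sum(i * lam**(i - 1) * A[i] for i in range(1, k + 1))
    num = sum(w * abs(lam)**i for i, w in enumerate(weights))
    num *= np.linalg.norm(y) * np.linalg.norm(x)
    return num / (abs(lam) * abs(np.conj(y) @ Pprime @ x))

# quadratic P(lambda) = lambda^2 A2 + lambda A1 + A0 via its companion pencil
n = 2
rng = np.random.default_rng(6)
A = [rng.standard_normal((n, n)) for _ in range(3)]      # A_0, A_1, A_2
X = np.block([[A[2], np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
Y = np.block([[A[1], A[0]], [-np.eye(n), np.zeros((n, n))]])
w, VL, VR = eig(-Y, X, left=True, right=True)

j = 0
lam0 = w[j]
x = VR[n:, j]     # right eigenvector of P: last block of the pencil vector
y = VL[:n, j]     # left eigenvector of P: first block of the left vector

P0 = lam0**2 * A[2] + lam0 * A[1] + A[0]
res_x = np.linalg.norm(P0 @ x) / np.linalg.norm(x)
res_y = np.linalg.norm(np.conj(y) @ P0) / np.linalg.norm(y)

kappa = polyeig_condition(A, lam0, x, y, [np.linalg.norm(Ai) for Ai in A])
print(kappa)
```

The two residual checks confirm that the eigenvector blocks extracted from the pencil really are right and left eigenvectors of P, before they are fed into the condition number formula.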


More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 430 (2009) 579 586 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Low rank perturbation

More information

Triangularizing matrix polynomials. Taslaman, Leo and Tisseur, Francoise and Zaballa, Ion. MIMS EPrint:

Triangularizing matrix polynomials. Taslaman, Leo and Tisseur, Francoise and Zaballa, Ion. MIMS EPrint: Triangularizing matrix polynomials Taslaman, Leo and Tisseur, Francoise and Zaballa, Ion 2012 MIMS EPrint: 2012.77 Manchester Institute for Mathematical Sciences School of Mathematics The University of

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control

Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Electronic Journal of Linear Algebra Volume 34 Volume 34 08) Article 39 08 Structured eigenvalue/eigenvector backward errors of matrix pencils arising in optimal control Christian Mehl Technische Universitaet

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics.

An Algorithm for. Nick Higham. Françoise Tisseur. Director of Research School of Mathematics. An Algorithm for the Research Complete Matters Solution of Quadratic February Eigenvalue 25, 2009 Problems Nick Higham Françoise Tisseur Director of Research School of Mathematics The School University

More information

Linearizations of singular matrix polynomials and the recovery of minimal indices

Linearizations of singular matrix polynomials and the recovery of minimal indices Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 32 2009 Linearizations of singular matrix polynomials and the recovery of minimal indices Fernando de Teran fteran@math.uc3m.es Froilan

More information

PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS

PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D STEVEN MACKEY, NILOUFER MACKEY, CHRISTIAN MEHL, AND VOLKER MEHRMANN Abstract Palindromic polynomial eigenvalue problems

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

ELA

ELA SHARP LOWER BOUNDS FOR THE DIMENSION OF LINEARIZATIONS OF MATRIX POLYNOMIALS FERNANDO DE TERÁN AND FROILÁN M. DOPICO Abstract. A standard way of dealing with matrixpolynomial eigenvalue problems is to

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 436 (2012) 3954 3973 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Hermitian matrix polynomials

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

On the computation of the Jordan canonical form of regular matrix polynomials

On the computation of the Jordan canonical form of regular matrix polynomials On the computation of the Jordan canonical form of regular matrix polynomials G Kalogeropoulos, P Psarrakos 2 and N Karcanias 3 Dedicated to Professor Peter Lancaster on the occasion of his 75th birthday

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Lecture notes: Applied linear algebra Part 2. Version 1

Lecture notes: Applied linear algebra Part 2. Version 1 Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least

More information

Quadratic Realizability of Palindromic Matrix Polynomials

Quadratic Realizability of Palindromic Matrix Polynomials Quadratic Realizability of Palindromic Matrix Polynomials Fernando De Terán a, Froilán M. Dopico a, D. Steven Mackey b, Vasilije Perović c, a Departamento de Matemáticas, Universidad Carlos III de Madrid,

More information

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Fakultät für Mathematik TU Chemnitz, Germany Peter Benner benner@mathematik.tu-chemnitz.de joint work with Heike Faßbender

More information

Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification. Al-Ammari, Maha and Tisseur, Francoise

Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification. Al-Ammari, Maha and Tisseur, Francoise Hermitian Matrix Polynomials with Real Eigenvalues of Definite Type. Part I: Classification Al-Ammari, Maha and Tisseur, Francoise 2010 MIMS EPrint: 2010.9 Manchester Institute for Mathematical Sciences

More information

arxiv: v2 [math.na] 8 Aug 2018

arxiv: v2 [math.na] 8 Aug 2018 THE CONDITIONING OF BLOCK KRONECKER l-ifications OF MATRIX POLYNOMIALS JAVIER PÉREZ arxiv:1808.01078v2 math.na] 8 Aug 2018 Abstract. A strong l-ification of a matrix polynomial P(λ) = A i λ i of degree

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis Volume 31, pp 306-330, 2008 Copyright 2008, ISSN 1068-9613 ETNA STRUCTURED POLYNOMIAL EIGENPROBLEMS RELATED TO TIME-DELAY SYSTEMS HEIKE FASSBENDER, D STEVEN

More information

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015 Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal

More information

(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y).

(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y). Exercise 71 We have L( x) = x 1 L( v 1 ) + x 2 L( v 2 ) + + x n L( v n ) n = x i (a 1i w 1 + a 2i w 2 + + a mi w m ) i=1 ( n ) ( n ) ( n ) = x i a 1i w 1 + x i a 2i w 2 + + x i a mi w m i=1 Therefore y

More information

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers.. EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

CHAPTER 5. Basic Iterative Methods

CHAPTER 5. Basic Iterative Methods Basic Iterative Methods CHAPTER 5 Solve Ax = f where A is large and sparse (and nonsingular. Let A be split as A = M N in which M is nonsingular, and solving systems of the form Mz = r is much easier than

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 28, No. 4, pp. 029 05 c 2006 Society for Industrial and Applied Mathematics STRUCTURED POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D. STEVEN

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Accurate eigenvalue decomposition of arrowhead matrices and applications

Accurate eigenvalue decomposition of arrowhead matrices and applications Accurate eigenvalue decomposition of arrowhead matrices and applications Nevena Jakovčević Stor FESB, University of Split joint work with Ivan Slapničar and Jesse Barlow IWASEP9 June 4th, 2012. 1/31 Introduction

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Linear System Theory

Linear System Theory Linear System Theory Wonhee Kim Lecture 4 Apr. 4, 2018 1 / 40 Recap Vector space, linear space, linear vector space Subspace Linearly independence and dependence Dimension, Basis, Change of Basis 2 / 40

More information