A note on companion pencils

Contemporary Mathematics

A note on companion pencils

Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins

Abstract. Various generalizations of companion matrices to companion pencils are presented. Companion matrices link to monic polynomials, whereas companion pencils do not require monicity of the corresponding polynomial. In the classical companion pencil case (A, B) only the coefficient of the highest degree appears in B's lower right corner. We will show, however, that all coefficients of the polynomial can be distributed over both A and B, creating additional flexibility. Companion matrices admit a so-called Fiedler factorization into essentially 2 x 2 matrices. These Fiedler factors can be reordered without affecting the eigenvalues (which equal the polynomial roots) of the assembled matrix. We will propose a generalization of this factorization and of the reshuffling property for companion pencils. Special examples of the factorizations and extensions to matrix polynomials and product eigenvalue problems are included.

Mathematics Subject Classification. Primary 65F15; Secondary 15A23.
Key words and phrases. Companion pencil, polynomial, matrix polynomial, rootfinding, product eigenvalue problems.
The research was partially supported by the Research Council KU Leuven, projects CREA Can Unconventional Eigenvalue Algorithms Supersede the State of the Art, OT/11/055 Spectral Properties of Perturbed Normal Matrices and their Applications, and CoE EF/05/006 Optimization in Engineering (OPTEC); by the Fund for Scientific Research Flanders (Belgium), project G03422N Reestablishing Smoothness for Matrix Manifold Optimization via Resolution of Singularities; by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office, Belgian Network DYSCO (Dynamical Systems, Control, and Optimization); and by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement. The views expressed in this article are not those of the ERC or the European Commission, and the European Union is not liable for any use that may be made of the information contained here.

(c) 0000 (copyright holder)

1. Introduction

It is well known that the polynomial

(1)    p(z) = c_n z^n + c_{n-1} z^{n-1} + ... + c_1 z + c_0

equals the determinant det(zB - A), with

              [ 0                  -c_0     ]        [ 1                ]
              [ 1   0              -c_1     ]        [    1             ]
(2)       A = [     ...  ...         :      ],   B = [       ...        ].
              [          1   0   -c_{n-2}   ]        [           1      ]
              [              1   -c_{n-1}   ]        [              c_n ]

The matrix pencil (A, B) is called the companion pencil of p(z); it can be used to compute the roots of p(z), as these roots coincide with the eigenvalues of the pencil (A, B).

At the moment, generalizing Frobenius companion matrices and/or pencils [3, 7, 9] is an active research topic, as new matrix forms and/or factorizations open up possibilities to develop new, possibly faster or more accurate algorithms for both the classical rootfinding problem and matrix polynomial eigenvalue problems [6, 8, 12]. Even though the companion pencil exhibits favorable numerical behavior [10] with respect to the companion matrix, only few structured QZ algorithms for companion pencils are available [4, 5].

The backward error bound for polynomial rootfinding based on companion matrices is

    ||ĉ - c|| / ||c|| < k_1 (c_max / |c_n|) ε_m + O(ε_m²),

with ĉ the coefficients of a polynomial having the computed roots, k_1 and k_2 modest constants, c_max the coefficient of largest absolute value, and ε_m the machine precision. The backward error for polynomial rootfinding based on companion pencils, however, is only

    ||ĉ - c|| / ||c|| < k_2 ε_m,

where a scaling to ||c|| = 1 is possible [8, 10, 12]. Thus polynomial rootfinding with companion pencils is advantageous for polynomials with large c_max / |c_n|.

In this article no new rootfinders will be proposed; only theoretical extensions of the existing companion pencil are introduced. First, in Section 2 we generalize the companion pencil (2) by distributing not only the highest order coefficient, but all polynomial coefficients over both A and B. Moreover, one does not have to choose whether to put a specific coefficient c_i entirely in A or in B; one can also write c_i = v_{i+1} + w_i, where the term v_{i+1} appears in A and w_i in B. For matrix polynomials a related approach has been investigated in [11]. Second, in Section 3 we adjust Fiedler's factorization of a companion matrix to obtain a similar factorization of the companion pencil, and we prove that also in this case one can reorder all the factors without altering the pencil's eigenvalues. Finally, in Sections 4 and 5 we link some of the results to matrix polynomials and product eigenvalue problems.

2. Generalized companion pencils

The following theorem shows that the coefficients can be distributed over the last column of A and the last column of B.
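As an illustrative aside, the classical pencil (2) can be assembled and sanity-checked numerically. The following minimal pure-Python sketch (the helper names are ours, chosen for illustration) verifies det(zB - A) = p(z) at a few sample points:

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting (complex-safe)."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0 + 0.0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) == 0.0:
            return 0.0 + 0.0j
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            mult = A[r][k] / A[k][k]
            for j in range(k, n):
                A[r][j] -= mult * A[k][j]
    return d

def companion_pencil(c):
    """Pencil (2) for p(z) = c[n] z^n + ... + c[1] z + c[0]:
    A has ones on the subdiagonal and -c[0..n-1] in its last column;
    B is the identity except for B[n-1][n-1] = c[n]."""
    n = len(c) - 1
    A = [[0.0 + 0.0j] * n for _ in range(n)]
    B = [[0.0 + 0.0j] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            A[i + 1][i] = 1.0
        A[i][n - 1] = -c[i]
        B[i][i] = 1.0
    B[n - 1][n - 1] = c[n]
    return A, B

def char_poly_at(A, B, z):
    """Evaluate det(zB - A)."""
    n = len(A)
    return det([[z * B[i][j] - A[i][j] for j in range(n)] for i in range(n)])

# p(z) = 2z^3 - 3z + 5; its roots equal the eigenvalues of the pencil (A, B)
c = [5.0, -3.0, 0.0, 2.0]
A, B = companion_pencil(c)
for z in (0.3, -1.2, 2.0 + 1.0j):
    p = sum(ci * z**i for i, ci in enumerate(c))
    assert abs(char_poly_at(A, B, z) - p) < 1e-9
```

Replacing B's trailing entry c_n by 1 and dividing the last column of A by c_n recovers the classical monic companion matrix.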

Theorem 2.1. Consider p(z), a polynomial with complex coefficients, p(z) = c_n z^n + c_{n-1} z^{n-1} + ... + c_1 z + c_0. Let v, w be vectors of length n with

(3)    v_1 = c_0,    v_{i+1} + w_i = c_i for i = 1, ..., n-1,    and    w_n = c_n.

Then the determinant det(zB - A) of the pencil (A, B), with

(4)    A = Z - v e_n^T,    B = Z^H Z + w e_n^T,

e_n the nth standard unit vector, and Z the downshift matrix

    Z = [ 0             ]
        [ 1   0         ]
        [    ...  ...   ]
        [         1   0 ],

equals p(z).

Proof. We have

              [  z                      zw_1 + v_1         ]
              [ -1    z                 zw_2 + v_2         ]
    zB - A =  [      ...   ...              :              ].
              [            -1    z     zw_{n-1} + v_{n-1}  ]
              [                  -1     zw_n + v_n         ]

Applying Laplace's formula to the first row yields

                          [  z                  zw_2 + v_2 ]                              [ -1   z           ]
    det(zB - A) = z det   [ -1    z             zw_3 + v_3 ]  + (-1)^{n+1} (zw_1 + v_1) det [     ...  ...   ],
                          [      ...  ...          :       ]                              [          -1   z  ]
                          [           -1        zw_n + v_n ]                              [              -1  ]

where the second determinant equals (-1)^{n-1}. The proof is completed by applying Laplace's formula recursively, giving det(zB - A) = Σ_{i=1}^{n} (w_i z^i + v_i z^{i-1}) = p(z). □

The coefficients c_0 in A and c_n in B are fixed. All other coefficients can be freely distributed between parts in A and parts in B, as long as v_{i+1} and w_i sum up to c_i. The following example demonstrates that the additional freedom can be used to create additional structure in A and B.
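Theorem 2.1 can be verified numerically for an arbitrary admissible split of the coefficients. The sketch below is illustrative only (the helper names are ours): it draws a random split c_i = v_{i+1} + w_i and checks det(zB - A) = p(z) at sample points.

```python
import random

def det(M):
    """Determinant by Gaussian elimination with partial pivoting (complex-safe)."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0 + 0.0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) == 0.0:
            return 0.0 + 0.0j
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            mult = A[r][k] / A[k][k]
            for j in range(k, n):
                A[r][j] -= mult * A[k][j]
    return d

def split_pencil(c, t):
    """Pencil (A, B) of Theorem 2.1 for p(z) = sum c[i] z^i.
    Using 0-based lists: v[0] = c[0], w[n-1] = c[n], and for i = 1..n-1
    the coefficient is split as c[i] = v[i] + w[i-1], with v[i] = t[i-1]
    a free parameter."""
    n = len(c) - 1
    v = [c[0]] + list(t)                                   # v_1, ..., v_n
    w = [c[i + 1] - t[i] for i in range(n - 1)] + [c[n]]   # w_1, ..., w_n
    A = [[0.0 + 0.0j] * n for _ in range(n)]   # A = Z - v e_n^T
    B = [[0.0 + 0.0j] * n for _ in range(n)]   # B = Z^H Z + w e_n^T
    for i in range(n):
        if i + 1 < n:
            A[i + 1][i] = 1.0
            B[i][i] = 1.0
        A[i][n - 1] -= v[i]
        B[i][n - 1] += w[i]
    return A, B

random.seed(0)
c = [1.0, -2.0, 0.5, 3.0, 4.0]       # p(z) = 4z^4 + 3z^3 + 0.5z^2 - 2z + 1
t = [random.uniform(-1.0, 1.0) for _ in range(len(c) - 2)]
A, B = split_pencil(c, t)
n = len(c) - 1
for z in (0.7, -1.1, 0.5 + 2.0j):
    p = sum(ci * z**i for i, ci in enumerate(c))
    M = [[z * B[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    assert abs(det(M) - p) < 1e-9
```

Choosing t[i] = c[i+1] puts all free coefficients in A and reduces to the pencil (2); choosing t[i] = 0 puts them all in B.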

Example 2.2. Let p(z) = 5z^3 - z^2 + 2z - 1. Then both A and B can be chosen to consist solely of non-negative entries, e.g.,

    A = [ 0  0  1 ]        B = [ 1  0  2 ]
        [ 1  0  0 ]   and      [ 0  1  2 ],
        [ 0  1  3 ]            [ 0  0  5 ]

corresponding to v = (-1, 0, -3)^T and w = (2, 2, 5)^T.

Another option is to always choose either w_i = 0 or v_{i+1} = 0:

Corollary 2.3. Consider again p(z) from (1). Take two disjoint subsets I ⊆ {0, ..., n-1} and J ⊆ {1, ..., n}, I ∩ J = ∅, such that their union satisfies I ∪ J = {0, ..., n}. Define two vectors v and w as

    v_{i+1} = { c_i  if i ∈ I,        w_j = { c_j  if j ∈ J,
              { 0    if i ∉ I,              { 0    if j ∉ J.

Then the eigenvalues of the pencil (A, B), with A = Z - v e_n^T, B = Z^H Z + w e_n^T, and Z the downshift matrix, coincide with the polynomial roots of p(z).

Example 2.4. Let p(z) = c_5 z^5 + c_4 z^4 + ... + c_1 z + c_0. Then (A, B), with

    A = [ 0  0  0  0  -c_0 ]        B = [ 1  0  0  0   0  ]
        [ 1  0  0  0  -c_1 ]            [ 0  1  0  0   0  ]
        [ 0  1  0  0  -c_2 ]   and      [ 0  0  1  0  c_3 ],
        [ 0  0  1  0   0   ]            [ 0  0  0  1  c_4 ]
        [ 0  0  0  1   0   ]            [ 0  0  0  0  c_5 ]

is a companion pencil of p(z), corresponding to I = {0, 1, 2} and J = {3, 4, 5}.

Remark 2.5. When considering the existing QZ-algorithms by Boito, Eidelman, and Gemignani [4, 5], we notice that these algorithms are based on the property that both A and B in (2) are of unitary-plus-rank-one form, a structure maintained under QZ-iterations. This property also holds for the new pencil matrices in Theorem 2.1, implying that with modest modifications the existing algorithms could compute the eigenvalues of the new pencils as well.

3. Fiedler companion factorization

The matrix A of the pencil (A, B) is the companion matrix of the monic polynomial whose coefficients are determined by v. As shown by Fiedler [9], companion matrices can be factored into n core transformations. A core transformation X_i is a modified identity matrix in which a 2 x 2 submatrix on the diagonal (for i = 0 and i = n a single diagonal entry) is replaced by an arbitrary matrix; for all core transformations the subindex i indicates that the submatrix X_i(i : i+1, i : i+1) differs from the identity. For the factorization of the companion matrix, the core transformations A_0, A_1, ..., A_{n-1}, typically named Fiedler factors, have the form A_0(1, 1) = -v_1 (note that A_0's active part is restricted to the upper left matrix entry; later, B_n's active part will be restricted to the lower right matrix entry)

and A_i(i : i+1, i : i+1) = [ 0  1 ; 1  -v_{i+1} ] for i = 1, ..., n-1. For example, the factorization of A with n = 5 equals

        [ 0  0  0  0  -v_1 ]
        [ 1  0  0  0  -v_2 ]
    A = [ 0  1  0  0  -v_3 ] = A_0 A_1 A_2 A_3 A_4
        [ 0  0  1  0  -v_4 ]
        [ 0  0  0  1  -v_5 ]

      = [ -v_1 ] [ 0    1  ] [ 0    1  ] [ 0    1  ] [ 0    1  ],
                 [ 1  -v_2 ] [ 1  -v_3 ] [ 1  -v_4 ] [ 1  -v_5 ]

where only the essential parts of the core transformations are shown. Moreover, Fiedler also proved [9] that reordering the Fiedler factors arbitrarily has no effect on the eigenvalues of their product matrix. Thus for all permutations σ of {0, ..., n-1} the eigenvalues of A_{σ(0)} ⋯ A_{σ(n-1)} equal those of A_0 ⋯ A_{n-1}. The product A_{σ(0)} ⋯ A_{σ(n-1)} is often referred to as a Fiedler companion matrix. In the setting of companion pencils, Fiedler factorizations have been used, e.g., in [1, 2].

A similar factorization exists for B, but since B is upper triangular, two sequences of core transformations are required. For n = 5 we have

    B = B_5 B_4 B_3 B_2 B_1 G_1 G_2 G_3 G_4,

with B_i(i : i+1, i : i+1) = [ w_i  1 ; 1  0 ] for i = 1, ..., n-1, B_n(n, n) = w_n, and G_i(i : i+1, i : i+1) = [ 0  1 ; 1  0 ], where again only the essential parts of the core transformations are shown.

We will now generalize Fiedler's reordering identity from the companion matrix factorization to the companion pencil case.

Theorem 3.1. Let (A, B) be a companion pencil of p(z), with A and B factored into core transformations A = A_0 A_1 ⋯ A_{n-1} and B = B_n B_{n-1} ⋯ B_1 G_1 ⋯ G_{n-1} as above, with v and w as in Theorem 2.1. Then, for every permutation σ : {1, ..., n-1} → {1, ..., n-1}, the pencil (A_σ, B_σ), with

    A_σ = A_0 A_{σ(1)} ⋯ A_{σ(n-1)},
    B_σ = B_n B_{σ(n-1)} ⋯ B_{σ(1)} G_{σ(1)} ⋯ G_{σ(n-1)},

satisfies p(z) = det(zB_σ - A_σ).

Proof. We will form F_σ = A_σ B_{σ(1)}^{-1} ⋯ B_{σ(n-1)}^{-1} = A_σ B_σ^{-1} B_n and show that F_σ is a Fiedler companion matrix determined by σ and related to the monic polynomial z^n + c_{n-1} z^{n-1} + ... + c_1 z + c_0, i.e., p(z) with the leading coefficient c_n replaced by 1. Note that all the B_i for i < n are invertible, as is B_n, since otherwise the original polynomial would have had a leading coefficient 0 (in fact, B_n does not need to be inverted, so the proof still holds even for a singular B_n). The resulting pencil (F_σ, B_n) is equivalent to the companion pencil (2), as one can prove by using the similarity transformations described in [9, Lemma 2.2] and altering the reasoning slightly so as not to touch the transformation F_{n-1}.

The Fiedler companion matrix F_σ can be factored into core transformations, the Fiedler factors F_0 F_{σ(1)} ⋯ F_{σ(n-1)}, with F_0(1, 1) = -c_0 and F_i(i : i+1, i : i+1) = [ 0  1 ; 1  -c_i ]. The matrix F_{n-1} does not commute with B_n; consequently the necessary transformations involve only F_1, ..., F_{n-2}.

To construct F_σ = A_σ B_σ^{-1} B_n we multiply the pencil (A_σ, B_σ) on the right with factors that eliminate parts of B_σ and move them to A_σ's side. Core transformations X_i and X_j commute if |i - j| > 1, so we can reorder A_σ as

    A_σ = A_0 (A_{j_1} A_{j_1-1} ⋯ A_1)(A_{j_2} A_{j_2-1} ⋯ A_{j_1+1}) ⋯ (A_{n-1} A_{n-2} ⋯ A_{j_s+1}),

where we define j_0 = 0. This reordering appeared also in [9]; it contains factors (A_{j_{l+1}} A_{j_{l+1}-1} ⋯ A_{j_l+1}) in which each core transformation A_{j-1} is positioned to the right of A_j, for j = j_{l+1}, j_{l+1}-1, ..., j_l+2. The ordering between these larger factors is, however, different: A_{j_l} is positioned to the left of A_{j_{l+1}}, with l = 0, ..., s-1. We can also reorder the factors B_i and G_i to get

    B_σ = B_n (B_{j_s+1} ⋯ B_{n-1}) ⋯ (B_1 ⋯ B_{j_1})(G_{j_1} ⋯ G_1) ⋯ (G_{n-1} ⋯ G_{j_s+1}).

Let us now form the product of the two inner factors,

    S = (B_1 ⋯ B_{j_1})(G_{j_1} ⋯ G_1),

an elimination matrix carrying w_1, ..., w_{j_1} in its first row, columns 2 through j_1 + 1. Let r be the largest index with j_r = j_1 + (r - 1), which is in fact the number of factors (A_{j_{l+1}} ⋯ A_{j_l+1}) containing only a single matrix. This means that the matrix A_σ actually equals

    A_σ = A_0 (A_{j_1} A_{j_1-1} ⋯ A_1) A_{j_2} ⋯ A_{j_r} (A_{j_{r+1}} ⋯ A_{j_r+1}) ⋯ (A_{n-1} ⋯ A_{j_s+1}),

where after the first group there are r - 1 factors consisting of a single matrix. One can show that S commutes with (G_{j_2} ⋯ G_{j_1+1}) ⋯ (G_{n-1} ⋯ G_{j_s+1}) except for G_{j_2}, ..., G_{j_r}; these factors, when applied from the right to S, each move the w-entries one column to the right. The matrix

    S' = ((G_{j_2} ⋯ G_{j_1+1}) ⋯ (G_{n-1} ⋯ G_{j_s+1}))^{-1} S ((G_{j_2} ⋯ G_{j_1+1}) ⋯ (G_{n-1} ⋯ G_{j_s+1}))

is again an elimination matrix, and inverting S' amounts to changing all its entries w_i into -w_i. By multiplying the pencil with S'^{-1} from the right, we can change it to

    ( A_σ S'^{-1},  B_n (B_{j_s+1} ⋯ B_{n-1}) ⋯ (B_{j_1+1} ⋯ B_{j_2})(G_{j_2} ⋯ G_{j_1+1}) ⋯ (G_{n-1} ⋯ G_{j_s+1}) ).

Again one can show that S'^{-1} commutes with (A_{j_2} ⋯ A_{j_1+1}) ⋯ (A_{n-1} ⋯ A_{j_s+1}) except that A_{j_2}, ..., A_{j_r} each move the w-entries one column back to the left, so that

    (A_{j_2} ⋯ A_{j_1+1}) ⋯ (A_{n-1} ⋯ A_{j_s+1}) S'^{-1} ((A_{j_2} ⋯ A_{j_1+1}) ⋯ (A_{n-1} ⋯ A_{j_s+1}))^{-1} = S^{-1}.

We can now combine (A_{j_1} A_{j_1-1} ⋯ A_1) S^{-1}, yielding

    (A_{j_1} A_{j_1-1} ⋯ A_1) S^{-1} = F_{j_1} F_{j_1-1} ⋯ F_1,

with active parts [ 0  1 ; 1  -(v_2 + w_1) ], ..., [ 0  1 ; 1  -(v_{j_1+1} + w_{j_1}) ], i.e., carrying the coefficients c_1, ..., c_{j_1}. By repeating this procedure, the remaining factors in B_σ can be moved to the left, and

    F_σ = A_0 (F_{j_1} F_{j_1-1} ⋯ F_1)(F_{j_2} F_{j_2-1} ⋯ F_{j_1+1}) ⋯ (F_{n-1} F_{n-2} ⋯ F_{j_s+1}). □
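Theorem 3.1 lends itself to a direct numerical check. The sketch below is illustrative (helper names are ours, and the core-transformation conventions follow the definitions above): it assembles A_σ and B_σ for a random permutation σ and compares det(zB_σ - A_σ) with p(z).

```python
import random

def det(M):
    """Determinant by Gaussian elimination with partial pivoting (complex-safe)."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0 + 0.0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) == 0.0:
            return 0.0 + 0.0j
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            mult = A[r][k] / A[k][k]
            for j in range(k, n):
                A[r][j] -= mult * A[k][j]
    return d

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def core(n, i, block):
    """Core transformation: identity with a 2x2 active part in
    rows/columns i, i+1 (1-based)."""
    C = eye(n)
    C[i - 1][i - 1], C[i - 1][i] = block[0]
    C[i][i - 1], C[i][i] = block[1]
    return C

random.seed(1)
c = [1.0, 2.0, -0.5, 1.5, 0.5, 3.0]           # p(z) = 3z^5 + 0.5z^4 + ...
n = len(c) - 1
v = [c[0]] + [random.uniform(-1.0, 1.0) for _ in range(n - 1)]
w = [c[i + 1] - v[i + 1] for i in range(n - 1)] + [c[n]]

A0 = eye(n); A0[0][0] = -v[0]                                    # Fiedler factor A_0
Ai = {i: core(n, i, ((0.0, 1.0), (1.0, -v[i]))) for i in range(1, n)}
Bn = eye(n); Bn[n - 1][n - 1] = w[n - 1]                         # core transformation B_n
Bi = {i: core(n, i, ((w[i - 1], 1.0), (1.0, 0.0))) for i in range(1, n)}
Gi = {i: core(n, i, ((0.0, 1.0), (1.0, 0.0))) for i in range(1, n)}

sigma = list(range(1, n))
random.shuffle(sigma)

As = A0
for i in sigma:                   # A_sigma = A_0 A_sigma(1) ... A_sigma(n-1)
    As = matmul(As, Ai[i])
Bs = Bn
for i in reversed(sigma):         # B_sigma = B_n B_sigma(n-1) ... B_sigma(1) ...
    Bs = matmul(Bs, Bi[i])
for i in sigma:                   # ... G_sigma(1) ... G_sigma(n-1)
    Bs = matmul(Bs, Gi[i])

for z in (0.4, -1.3, 1.0 + 1.0j):
    p = sum(ci * z**i for i, ci in enumerate(c))
    M = [[z * Bs[i][j] - As[i][j] for j in range(n)] for i in range(n)]
    assert abs(det(M) - p) < 1e-8
```

Taking sigma to be the identity reproduces the pencil (A, B) of Theorem 2.1.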

Remark 3.2. We would like to emphasize that in Fiedler factorizations of companion matrices it is possible to reposition the A_0 term, so that it can appear anywhere in the factorization. In the pencil case, however, this is in general not possible: neither A_0 nor B_n can be moved freely, except in some specific configurations, of which the Fiedler factorization with all B_i equal to the identity is one.

Let us now look at some examples of different shapes for companion pencils.

Example 3.3. Let p(z) = 2z^4 + (sin(t) + cos(t))z^3 + (3 + i)z^2 - 4z - 2, and take v = (-2, -4, 3, sin(t))^T and w = (0, i, cos(t), 2)^T. A standard Hessenberg-upper-triangular pencil (A, B) then looks like

    A = A_0 A_1 A_2 A_3 = [ 0  0  0    2     ]        B = B_4 B_3 B_2 B_1 G_1 G_2 G_3 = [ 1  0  0    0    ]
                          [ 1  0  0    4     ]                                          [ 0  1  0    i    ]
                          [ 0  1  0   -3     ]   and                                    [ 0  0  1  cos(t) ].
                          [ 0  0  1  -sin(t) ]                                          [ 0  0  0    2    ]

Another option is a CMV-like ordering,

    A = A_0 A_2 A_1 A_3 = [ 0  2  0    0     ]        B = B_4 B_3 B_1 B_2 G_2 G_1 G_3 = [ 1  0  0    i    ]
                          [ 0  0  0    1     ]                                          [ 0  1  0    0    ]
                          [ 1  4  0   -3     ]   and                                    [ 0  0  1  cos(t) ].
                          [ 0  0  1  -sin(t) ]                                          [ 0  0  0    2    ]

The advantage is that A is pentadiagonal and B is the product of a pentadiagonal matrix (B_4 B_3 B_1 B_2) and a permutation matrix (G_2 G_1 G_3), properties that can be exploited when developing QR or QZ algorithms.

Example 3.4. In this example we demonstrate the effect of the permutation σ on the shapes of the factors. Let p(z) = c_8 z^8 + c_7 z^7 + ... + c_1 z + c_0 and let σ be a permutation of {1, ..., 7}. Writing the factorizations A_0 A_{σ(1)} ⋯ A_{σ(7)}, B_8 B_{σ(7)} ⋯ B_{σ(1)}, and G_{σ(1)} ⋯ G_{σ(7)} in a stacking notation, where only the active parts of the Fiedler factors are shown, one sees that the zigzag shape traced by the active parts is the same for the A- and G-factors but mirrored for the B-factors. In the stacking notation one should replace each active matrix part by a full matrix; the notation is unambiguous, as two blocks positioned on top of each other commute, so their ordering does not play any role.
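The CMV-like ordering of Example 3.3 can be checked in the same spirit. The sketch below is illustrative (the split (v, w) is one admissible choice and the helper names are ours): it builds A = A_0 A_2 A_1 A_3 and B = B_4 B_3 B_1 B_2 G_2 G_1 G_3, verifies that A is pentadiagonal, and confirms det(zB - A) = p(z).

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting (complex-safe)."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0 + 0.0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) == 0.0:
            return 0.0 + 0.0j
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            mult = A[r][k] / A[k][k]
            for j in range(k, n):
                A[r][j] -= mult * A[k][j]
    return d

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def core(n, i, block):
    """Identity with a 2x2 active part in rows/columns i, i+1 (1-based)."""
    C = eye(n)
    C[i - 1][i - 1], C[i - 1][i] = block[0]
    C[i][i - 1], C[i][i] = block[1]
    return C

t = 0.7
# p(z) = 2z^4 + (sin t + cos t) z^3 + (3 + i) z^2 - 4z - 2
c = [-2.0, -4.0, 3.0 + 1.0j, math.sin(t) + math.cos(t), 2.0]
v = [-2.0, -4.0, 3.0, math.sin(t)]     # v_1, ..., v_4 as in Example 3.3
w = [0.0, 1.0j, math.cos(t), 2.0]      # w_1, ..., w_4
n = 4

A0 = eye(n); A0[0][0] = -v[0]
Ai = {i: core(n, i, ((0.0, 1.0), (1.0, -v[i]))) for i in range(1, n)}
Bn = eye(n); Bn[n - 1][n - 1] = w[n - 1]
Bi = {i: core(n, i, ((w[i - 1], 1.0), (1.0, 0.0))) for i in range(1, n)}
Gi = {i: core(n, i, ((0.0, 1.0), (1.0, 0.0))) for i in range(1, n)}

# CMV-like ordering, sigma = (2, 1, 3)
A = matmul(matmul(matmul(A0, Ai[2]), Ai[1]), Ai[3])
B = matmul(matmul(matmul(Bn, Bi[3]), Bi[1]), Bi[2])
for i in (2, 1, 3):
    B = matmul(B, Gi[i])

# A is pentadiagonal: entries with |row - col| > 2 vanish
for i in range(n):
    for j in range(n):
        if abs(i - j) > 2:
            assert abs(A[i][j]) < 1e-12

for z in (0.5, -1.4, 0.3 + 1.0j):
    p = sum(ci * z**i for i, ci in enumerate(c))
    M = [[z * B[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    assert abs(det(M) - p) < 1e-9
```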

Remark 3.5. We focused on a specific factorization linked to the B-matrix. However, the matrix B could also be factored with the two sequences of core transformations in the opposite order (e.g., for n = 5, with the G-type factors in front of the w-carrying factors), with suitably adapted core transformations. The factorization differs, but results similar to Theorem 3.1 can be proved.

4. Factoring companion pencils and product eigenvalue problems

Even though Theorem 2.1 is probably the most useful result in practice, it is still only a special case of a general product setting, where the polynomial rootfinding problem is written as a product eigenvalue problem [13].

Theorem 4.1. Consider p(z), a polynomial with complex coefficients, p(z) = c_n z^n + c_{n-1} z^{n-1} + ... + c_1 z + c_0. Define A, B, and F^{(k)} (for k = 1, ..., m) based on the n-tuples v, w, and f^{(k)} with

    v_1 = c_0,      v_i = 0 for i = 2, ..., n,
    w_n = c_n,      w_i = 0 for i = 1, ..., n-1,
    f_n^{(k)} = 0,  and  Σ_{k=1}^m f_i^{(k)} = c_i for i = 1, ..., n-1,

as follows:

                       [ 0              -c_0 ]                         [ 1               ]
                       [ 1   0            0  ]                         [    1            ]
    A = Z - v e_n^T =  [    ...  ...      :  ],   B = Z^H Z + w e_n^T = [      ...        ],
                       [          1  0    0  ]                         [          1      ]
                       [             1    0  ]                         [             c_n ]

                                  [ 1              -f_1^{(k)}     ]
                                  [     1          -f_2^{(k)}     ]
    F^{(k)} = I - f^{(k)} e_n^T = [        ...          :          ],
                                  [             1  -f_{n-1}^{(k)} ]
                                  [                     1          ]

with Z the downshift matrix. Then the pencil (A Π_{k=1}^m F^{(k)}, B) coincides with the companion pencil (2); equivalently, the product A (Π_{k=1}^m F^{(k)}) B_n^{-1}, with B_n = B, equals the companion matrix of the monic polynomial p(z)/c_n.

Proof. Straightforward multiplication of the elimination matrices provides the result; the factors F^{(k)} commute, and thus the product notation is unambiguous. □

Theorem 2.1 is a simple consequence of Theorem 4.1. The inverse of F^{(k)} is again an elimination matrix, where only a sign change of the elements determined by f^{(k)} needs to be effected; as a result, we have for all l < m that the pencil

    ( A Π_{k=1}^l F^{(k)},  B Π_{k=l+1}^m (F^{(k)})^{-1} )

shares the eigenvalues of the companion matrix. Moreover, instead of explicitly computing the product, one could also use the factorization itself to retrieve the eigenvalues of the pencil.

Remark 4.2. Each of the factors F^{(k)} can be written in terms of its Fiedler factorization as proposed in Section 3. Moreover, Theorem 3.1 can be generalized to the product setting, where each of the factors F^{(k)} obeys the ordering imposed by the permutation σ. The proof proceeds similarly to that of Theorem 3.1; instead of having to move only one matrix B from the right side of the pencil to the left, one now has to move each of the factors, one by one, to the left.

5. Matrix polynomials

In this section we generalize the results for classical polynomials to matrix polynomials. Let P(z) = C_n z^n + C_{n-1} z^{n-1} + ... + C_1 z + C_0 be a matrix polynomial having coefficients C_i ∈ C^{m x m}. Then det P(z) matches the determinant det(zB - A), with

            [ 0_m                     -V_1     ]        [ I_m                   W_1     ]
            [ I_m  0_m                -V_2     ]        [      I_m              W_2     ]
    (5) A = [       ...   ...           :      ],   B = [            ...          :      ],
            [            I_m  0_m  -V_{n-1}    ]        [                 I_m  W_{n-1}  ]
            [                  I_m   -V_n      ]        [                        W_n    ]

where V_{i+1} + W_i = C_i, V_1 = C_0, W_n = C_n, I_m equals the identity matrix of size m x m, and 0_m stands for the zero matrix of size m x m. In the following corollary it is shown that the additional freedom in the splitting can be used to form block matrices A and B with structured blocks.

Corollary 5.1. Let P(z) = C_n z^n + C_{n-1} z^{n-1} + ... + C_1 z + C_0 be a matrix polynomial with coefficients C_i ∈ C^{m x m}. Let further C_0 be skew-Hermitian (C_0^H = -C_0) and C_n Hermitian (C_n = C_n^H). Then there are V = [V_1, V_2, ..., V_n]^T and W = [W_1, W_2, ..., W_n]^T as in (5), so that all V_i are skew-Hermitian and all W_i are Hermitian.

Proof. V_1 is skew-Hermitian and W_n is Hermitian. For all other coefficients C_i we can use the splitting V_{i+1} = (C_i - C_i^H)/2 and W_i = (C_i + C_i^H)/2. □

In Section 4 we already showed that we can distribute the coefficients over many more factors to obtain a product eigenvalue problem. If C_0 and C_n are of special form, one can even obtain a matrix product with structured matrices.

Corollary 5.2. Let P(z) = C_n z^n + C_{n-1} z^{n-1} + ... + C_1 z + C_0 be
a matrix polynomial with coefficients C_i ∈ C^{m x m}, C_0 unitary-plus-rank-one, and C_n = I_m. Then there exist m upper-triangular and unitary-plus-rank-one matrices F^{(k)} and a unitary-plus-rank-one matrix A, so that the eigenvalues of the matrix polynomial coincide with those of the product eigenvalue problem A (Π_{k=1}^m F^{(k)}) B_n^{-1}.

Proof. Adapting the notation from Theorem 4.1 to fit matrix polynomials as in (5), let V_i and W_i be as in (5) and let the F_i^{(k)} of Theorem 4.1 now be matrices. The desired property holds for the m(n-1) matrices F_i^{(k)}, with k = 1, ..., m, i = 1, ..., n-1, and s and t general indices, given by

    F_i^{(k)}(s, t) = C_i(s, t) if t = k,    and    F_i^{(k)}(s, t) = 0 if t ≠ k,

as every F^{(k)} takes one column out of each of the coefficient matrices. □

6. Conclusions and future work

A generalization of Fiedler's factorization of companion matrices to companion pencils was presented. It was shown that the pencil approach can be seen as a specific case of a product eigenvalue problem, and it was noted that all results are applicable to the matrix polynomial case. Forthcoming investigations focus on exploiting these splittings in rootsolvers based on companion pencils, to obtain reliable solutions quickly.

Acknowledgments. We would like to thank Vanni Noferini and Leonardo Robol for the fruitful discussions on the conditioning of these pencils during their visit to the Dept. of Computer Science, November 2014. Furthermore we would like to express our gratitude to the referees, who pinpointed some shortcomings and errors; their careful reading has significantly improved the presentation and readability.

References

[1] E. N. Antoniou and S. Vologiannidis, A new family of companion forms of polynomial matrices, Electronic Journal of Linear Algebra 11 (2004), 78-87.
[2] E. N. Antoniou, S. Vologiannidis, and N. Karampetakis, Linearizations of polynomial matrices with symmetries and their applications, Proceedings of the 2005 IEEE International Symposium on Intelligent Control / Mediterranean Conference on Control and Automation, June 2005.
[3] R. Bevilacqua, G. M. Del Corso, and L. Gemignani, A CMV-based eigensolver for companion matrices, arXiv preprint (2014).
[4] P. Boito, Y. Eidelman, and L. Gemignani, Implicit QR for rank-structured matrix pencils, BIT 54 (2014), 85-111.
[5] P. Boito, Y. Eidelman, and L. Gemignani, Implicit QR for companion-like pencils, arXiv preprint (2014).
[6] F. De Terán, F. M. Dopico, and J. Pérez, Backward stability of polynomial root-finding using Fiedler companion matrices, IMA Journal of Numerical Analysis (2014).
[7] B. Eastman, I.-J. Kim, B. L. Shader, and K. N. Vander Meulen, Companion matrix patterns, Linear Algebra and its Applications 463 (2014), 255-272.
[8] A. Edelman and H. Murakami, Polynomial roots from companion matrix eigenvalues, Mathematics of Computation 64 (1995), no. 210, 763-776.
[9] M. Fiedler, A note on companion matrices, Linear Algebra and its Applications 372 (2003), 325-331.
[10] G. F. Jónsson and S. Vavasis, Solving polynomials with small leading coefficients, SIAM Journal on Matrix Analysis and Applications 26 (2004), no. 2, 400-414.
[11] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM Journal on Matrix Analysis and Applications 28 (2006), no. 4, 971-1004.
[12] P. Van Dooren and P. Dewilde, The eigenstructure of an arbitrary polynomial matrix: computational aspects, Linear Algebra and its Applications 50 (1983), 545-579.
[13] D. S. Watkins, Product eigenvalue problems, SIAM Review 47 (2005), no. 1, 3-40.

Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Road, OX2 6GG Oxford, UK

Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven (Heverlee), Belgium

Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven (Heverlee), Belgium

Department of Mathematics, Washington State University, Pullman, WA, USA


More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Fiedler Companion Linearizations and the Recovery of Minimal Indices. De Teran, Fernando and Dopico, Froilan M. and Mackey, D.

Fiedler Companion Linearizations and the Recovery of Minimal Indices. De Teran, Fernando and Dopico, Froilan M. and Mackey, D. Fiedler Companion Linearizations and the Recovery of Minimal Indices De Teran, Fernando and Dopico, Froilan M and Mackey, D Steven 2009 MIMS EPrint: 200977 Manchester Institute for Mathematical Sciences

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM OLGA SLYUSAREVA AND MICHAEL TSATSOMEROS Abstract. The principal pivot transform (PPT) is a transformation of a matrix A tantamount to exchanging

More information

A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem

A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana Slovenia Ellen Van Camp and Marc Van

More information

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION.

PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. M.I. BUENO AND S. FURTADO Abstract. Many applications give rise to structured, in particular

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Generic degree structure of the minimal polynomial nullspace basis: a block Toeplitz matrix approach

Generic degree structure of the minimal polynomial nullspace basis: a block Toeplitz matrix approach Generic degree structure of the minimal polynomial nullspace basis: a block Toeplitz matrix approach Bhaskar Ramasubramanian, Swanand R Khare and Madhu N Belur Abstract This paper formulates the problem

More information

Algorithms for Solving the Polynomial Eigenvalue Problem

Algorithms for Solving the Polynomial Eigenvalue Problem Algorithms for Solving the Polynomial Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey

More information

Key words. Companion matrix, quasiseparable structure, QR iteration, eigenvalue computation,

Key words. Companion matrix, quasiseparable structure, QR iteration, eigenvalue computation, A FAST IMPLICIT QR EIGENVALUE ALGORITHM FOR COMPANION MATRICES D. A. BINI, P. BOITO, Y. EIDELMAN, L. GEMIGNANI, AND I. GOHBERG Abstract. An implicit version of the QR eigenvalue algorithm given in [D.

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

KU Leuven Department of Computer Science

KU Leuven Department of Computer Science Compact rational Krylov methods for nonlinear eigenvalue problems Roel Van Beeumen Karl Meerbergen Wim Michiels Report TW 651, July 214 KU Leuven Department of Computer Science Celestinenlaan 2A B-31 Heverlee

More information

On the reduction of matrix polynomials to Hessenberg form

On the reduction of matrix polynomials to Hessenberg form Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu

More information

Optimal Scaling of Companion Pencils for the QZ-Algorithm

Optimal Scaling of Companion Pencils for the QZ-Algorithm Optimal Scaling of Companion Pencils for the QZ-Algorithm D Lemonnier, P Van Dooren 1 Introduction Computing roots of a monic polynomial may be done by computing the eigenvalues of the corresponding companion

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

1. Introduction. Throughout this work we consider n n matrix polynomials with degree k 2 of the form

1. Introduction. Throughout this work we consider n n matrix polynomials with degree k 2 of the form FIEDLER COMPANION LINEARIZATIONS AND THE RECOVERY OF MINIMAL INDICES FERNANDO DE TERÁN, FROILÁN M DOPICO, AND D STEVEN MACKEY Abstract A standard way of dealing with a matrix polynomial P (λ) is to convert

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Structure preserving stratification of skew-symmetric matrix polynomials. Andrii Dmytryshyn

Structure preserving stratification of skew-symmetric matrix polynomials. Andrii Dmytryshyn Structure preserving stratification of skew-symmetric matrix polynomials by Andrii Dmytryshyn UMINF 15.16 UMEÅ UNIVERSITY DEPARTMENT OF COMPUTING SCIENCE SE- 901 87 UMEÅ SWEDEN Structure preserving stratification

More information

Invariance properties in the root sensitivity of time-delay systems with double imaginary roots

Invariance properties in the root sensitivity of time-delay systems with double imaginary roots Invariance properties in the root sensitivity of time-delay systems with double imaginary roots Elias Jarlebring, Wim Michiels Department of Computer Science, KU Leuven, Celestijnenlaan A, 31 Heverlee,

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

Sparse spectrally arbitrary patterns

Sparse spectrally arbitrary patterns Electronic Journal of Linear Algebra Volume 28 Volume 28: Special volume for Proceedings of Graph Theory, Matrix Theory and Interactions Conference Article 8 2015 Sparse spectrally arbitrary patterns Brydon

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Matrices. Chapter Definitions and Notations

Matrices. Chapter Definitions and Notations Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

THE STABLE EMBEDDING PROBLEM

THE STABLE EMBEDDING PROBLEM THE STABLE EMBEDDING PROBLEM R. Zavala Yoé C. Praagman H.L. Trentelman Department of Econometrics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands Research Institute for Mathematics

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. J.C. Zúñiga and D. Henrion Abstract Four different algorithms are designed

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

Katholieke Universiteit Leuven Department of Computer Science

Katholieke Universiteit Leuven Department of Computer Science A convex optimization method to solve a filter design problem Thanh Hieu Le Marc Van Barel Report TW 599, August 2011 Katholieke Universiteit Leuven Department of Computer Science Celestijnenlaan 200A

More information

Inverses of regular Hessenberg matrices

Inverses of regular Hessenberg matrices Proceedings of the 10th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2010 27 30 June 2010. Inverses of regular Hessenberg matrices J. Abderramán

More information

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3 Chasing the Bulge Sebastian Gant 5/9/207 Contents Precursers and Motivation 2 The Reduction to Hessenberg Form 3 3 The Algorithm 5 4 Concluding Remarks 8 5 References 0 ntroduction n the early days of

More information

PARAMETERIZATION OF STATE FEEDBACK GAINS FOR POLE PLACEMENT

PARAMETERIZATION OF STATE FEEDBACK GAINS FOR POLE PLACEMENT PARAMETERIZATION OF STATE FEEDBACK GAINS FOR POLE PLACEMENT Hans Norlander Systems and Control, Department of Information Technology Uppsala University P O Box 337 SE 75105 UPPSALA, Sweden HansNorlander@ituuse

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 2: block displacement structure algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 2: block displacement structure algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices Part 2: block displacement structure algorithms JC Zúñiga and D Henrion Abstract Motivated by some control

More information

Spectrally arbitrary star sign patterns

Spectrally arbitrary star sign patterns Linear Algebra and its Applications 400 (2005) 99 119 wwwelseviercom/locate/laa Spectrally arbitrary star sign patterns G MacGillivray, RM Tifenbach, P van den Driessche Department of Mathematics and Statistics,

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today

More information

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line Karl Meerbergen en Raf Vandebril Karl Meerbergen Department of Computer Science KU Leuven, Belgium

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Generalized Shifted Inverse Iterations on Grassmann Manifolds 1

Generalized Shifted Inverse Iterations on Grassmann Manifolds 1 Proceedings of the Sixteenth International Symposium on Mathematical Networks and Systems (MTNS 2004), Leuven, Belgium Generalized Shifted Inverse Iterations on Grassmann Manifolds 1 J. Jordan α, P.-A.

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

SECTIONS 5.2/5.4 BASIC PROPERTIES OF EIGENVALUES AND EIGENVECTORS / SIMILARITY TRANSFORMATIONS

SECTIONS 5.2/5.4 BASIC PROPERTIES OF EIGENVALUES AND EIGENVECTORS / SIMILARITY TRANSFORMATIONS SECINS 5/54 BSIC PRPERIES F EIGENVUES ND EIGENVECRS / SIMIRIY RNSFRMINS Eigenvalues of an n : there exists a vector x for which x = x Such a vector x is called an eigenvector, and (, x) is called an eigenpair

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study

More information

Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond

Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond Froilán M. Dopico Departamento de Matemáticas Universidad Carlos III de Madrid, Spain Colloquium in honor of Paul Van Dooren on becoming Emeritus

More information

Matrix Computations and Semiseparable Matrices

Matrix Computations and Semiseparable Matrices Matrix Computations and Semiseparable Matrices Volume I: Linear Systems Raf Vandebril Department of Computer Science Catholic University of Louvain Marc Van Barel Department of Computer Science Catholic

More information

Unitary rank structured matrices

Unitary rank structured matrices Journal of Computational and Applied Mathematics 25 (28) 49 78 www.elsevier.com/locate/cam Unitary rank structured matrices Steven Delvaux, Marc Van Barel Department of Computer Science, Katholieke Universiteit

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Contour integral solutions of Sylvester-type matrix equations

Contour integral solutions of Sylvester-type matrix equations Contour integral solutions of Sylvester-type matrix equations Harald K. Wimmer Mathematisches Institut, Universität Würzburg, 97074 Würzburg, Germany Abstract The linear matrix equations AXB CXD = E, AX

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

MATRIX AND LINEAR ALGEBR A Aided with MATLAB

MATRIX AND LINEAR ALGEBR A Aided with MATLAB Second Edition (Revised) MATRIX AND LINEAR ALGEBR A Aided with MATLAB Kanti Bhushan Datta Matrix and Linear Algebra Aided with MATLAB Second Edition KANTI BHUSHAN DATTA Former Professor Department of Electrical

More information