A factorization of the inverse of the shifted companion matrix


Electronic Journal of Linear Algebra, Volume 26 (2013).

A factorization of the inverse of the shifted companion matrix

Jared L. Aurentz (jaurentz@math.wsu.edu)

Recommended Citation: Aurentz, Jared L. (2013), "A factorization of the inverse of the shifted companion matrix", Electronic Journal of Linear Algebra, Volume 26.

A FACTORIZATION OF THE INVERSE OF THE SHIFTED COMPANION MATRIX

JARED L. AURENTZ

Abstract. A common method for computing the zeros of an $n$th degree polynomial is to compute the eigenvalues of its companion matrix $C$. A method for factoring the shifted companion matrix $C - \rho I$ is presented. The factorization is a product of $2n - 1$ essentially $2 \times 2$ matrices and requires $O(n)$ storage. Once the factorization is computed it immediately yields factorizations of both $(C - \rho I)^{-1}$ and $(C - \rho I)^{-*}$. The cost of multiplying a vector by $(C - \rho I)^{-1}$ is shown to be $O(n)$, and therefore, any shift and invert eigenvalue method can be implemented efficiently. This factorization is generalized to arbitrary forms of the companion matrix.

Key words. Polynomial, Root, Companion matrix, Shift and invert.

AMS subject classifications. 65F18, 15A18.

1. Introduction. Finding the roots of an $n$th degree polynomial is equivalent to finding the eigenvalues of the corresponding companion matrix. It has been shown in [1] that the companion matrix can be factored into a product of essentially $2 \times 2$ matrices. Moreover, these factors can be reordered arbitrarily to produce different forms of the companion matrix. Recently, methods have been proposed in [2] that exploit this structure to compute the entire set of roots in $O(n^2)$ time. For large $n$, computing the entire set of roots may be infeasible. One option for computing a subset of the roots is to use a shift and invert strategy. We will show that the shifted companion matrix, the inverse of the shifted companion matrix, and the adjoints of both these matrices can be written in a factored form that requires $O(n)$ storage and can be applied to a vector in $O(n)$ operations. This means that the inverse power method or any subspace iteration method can be used to compute a modest subset of the spectrum in $O(n)$ time.

In Section 2, we introduce the idea of core transformation and describe the Fiedler factorization. In Section 3, we state and prove the main factorization theorems. In Section 4, we extend the factorization to related operators. In Section 5, we give some numerical examples that illustrate the use of the factorization to compute a few roots of a polynomial of high degree.

Received by the editors on August 23, 2012. Accepted for publication on May 4, 2013. Handling Editor: Bryan L. Shader. Department of Mathematics, Washington State University, Pullman, WA, USA (jaurentz@math.wsu.edu).

2. The factored companion matrix. In order to describe the Fiedler factorization we introduce the idea of a core transformation. A core transformation is any matrix $G_i$ that is the identity matrix everywhere except at the intersection of rows and columns $i$ and $i+1$. These matrices are essentially $2 \times 2$ and can be viewed as transformations that act on either two adjacent rows or two adjacent columns. Graphically we represent a core transformation with an arrowed bracket, where the arrowheads represent the rows that are transformed. It is easy to see that two core transformations $G_i$ and $G_j$ commute, $G_i G_j = G_j G_i$, as long as $|i - j| > 1$. This can be seen graphically as two brackets that do not overlap. If $|i - j| = 1$, then the factors do not commute in general. This appears graphically as overlapping brackets.

Fiedler showed that for an $n$th degree, monic polynomial $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$ the associated upper-Hessenberg companion matrix

$$C = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & & & \\ & 1 & \ddots & & \\ & & \ddots & 0 & \\ & & & 1 & 0 \end{bmatrix}$$

can be written as a product of $n$ core transformations, $C = C_1 C_2 \cdots C_{n-1} C_n$. For $1 \le i \le n-1$, the $C_i$ have the form

$$C_i = \begin{bmatrix} I_{i-1} & & \\ & \begin{smallmatrix} -a_i & 1 \\ 1 & 0 \end{smallmatrix} & \\ & & I_{n-i-1} \end{bmatrix}, \tag{2.1}$$

where $I_k$ is the $k \times k$ identity matrix. For $i = n$, there is a special factor, $C_n = \mathrm{diag}\{1, \ldots, 1, -a_n\}$.

Fiedler also showed that if the factors $\{C_1, C_2, \ldots, C_n\}$ are multiplied together in an arbitrary order, the resulting product is similar to $C$ and therefore can also be considered a companion matrix for $p(x)$. In any such product, many of the factors commute, so we only need to keep track of the ones that don't. To do this we introduce the position vector $p$. This is an $(n-1)$-tuple that takes the value $p_i = 0$ if $C_i$ is to the left of $C_{i+1}$, and $p_i = 1$ if $C_{i+1}$ is to the left of $C_i$. The products of the $C_i$ are not unique to the position vector $p$, since $C_i$ and $C_j$ commute if $|i - j| > 1$. This means the position vector $p$ represents an equivalence class of companion matrices. For example, when $n = 4$, the product $C = C_1 C_3 C_2 C_4$ is equivalent to the products $C_3 C_1 C_2 C_4$, $C_1 C_3 C_4 C_2$ and $C_3 C_1 C_4 C_2$, since they are all consistent with the same position vector $p = \{0, 1, 0\}$.

Definition 2.1. Let $\{G_1, G_2, \ldots, G_n\}$ be a set of core transformations and let $p$ be a position vector that describes the relative order of the $G_i$. Any product of these core transformations that is consistent with $p$ will be written as $\{G_1, G_2, \ldots, G_n\}_p$.

[Fig. 2.1: Graphical representation of some factorizations: (a) upper-Hessenberg, (b) lower-Hessenberg, (c) CMV-pattern. Bracket diagrams omitted.]

Example 2.2 (Upper-Hessenberg matrix). For the upper-Hessenberg companion matrix the factorization has position vector $p = \{0, 0, \ldots, 0\}$. This means that $C$ can be written as $C = \{C_1, C_2, \ldots, C_n\}_p = C_1 C_2 \cdots C_n$. This is represented graphically in Figure 2.1a as a descending sequence of core transformations.

Example 2.3 (Lower-Hessenberg matrix). For the lower-Hessenberg companion matrix the factorization has position vector $p = \{1, 1, \ldots, 1\}$. This means that $C$ can be written as $C = \{C_1, C_2, \ldots, C_n\}_p = C_n C_{n-1} \cdots C_1$. This is represented graphically in Figure 2.1b as an ascending sequence of core transformations.
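To make the core transformations and position vectors concrete, the following Python/numpy sketch (the function name and sample polynomial are ours, not from the paper) builds the Fiedler factors $C_1, \ldots, C_n$ of (2.1), forms the upper-Hessenberg product of Example 2.2, and checks that a product consistent with $p = \{0, 1, 0\}$ has the same spectrum:

    import numpy as np

    def fiedler_factors(a):
        # Fiedler factors C_1,...,C_n for the monic polynomial
        # p(x) = x^n + a[0] x^(n-1) + ... + a[n-1].
        n = len(a)
        factors = []
        for i in range(n - 1):                      # C_1,...,C_{n-1}, eq. (2.1)
            Ci = np.eye(n)
            Ci[i:i+2, i:i+2] = [[-a[i], 1.0],
                                [ 1.0,  0.0]]
            factors.append(Ci)
        Cn = np.eye(n)                              # special factor C_n
        Cn[-1, -1] = -a[-1]
        factors.append(Cn)
        return factors

    a = np.array([4.0, -2.0, 7.0, 3.0])             # p(x) = x^4 + 4x^3 - 2x^2 + 7x + 3
    C1, C2, C3, C4 = fiedler_factors(a)

    C_hess = C1 @ C2 @ C3 @ C4                      # Example 2.2: upper-Hessenberg
    C_p = C1 @ C3 @ C2 @ C4                         # consistent with p = {0,1,0}
    assert np.allclose(np.sort_complex(np.linalg.eigvals(C_hess)),
                       np.sort_complex(np.linalg.eigvals(C_p)))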

Example 2.4 (CMV matrix). For the CMV companion matrix the factorization has position vector $p = \{0, 1, 0, 1, \ldots\}$. This means that $C$ can be written as $C = \{C_1, C_2, \ldots, C_n\}_p = (C_1 C_3 \cdots)(C_2 C_4 \cdots)$. This is represented graphically in Figure 2.1c as a zigzag sequence of core transformations.

3. Factorization theorems. In this section, we will show that for the upper-Hessenberg companion matrix $C$ there exists a sequence of $2n - 1$ core transformations whose product equals $C - \rho I$. We will then extend the result to companion matrices with arbitrary position vector $p$. Before we state and prove the main results we introduce the Horner vector, $h(\rho) \in \mathbb{C}^n$. This vector contains all the intermediate steps used when applying Horner's rule to the polynomial $p$ at the shift $\rho$. The entries are defined recursively,

$$h_1(\rho) = \rho + a_1, \qquad h_{i+1}(\rho) = \rho\, h_i(\rho) + a_{i+1}. \tag{3.1}$$

Notice that $h_n(\rho) = p(\rho)$. We also introduce two new types of core transformations:

$$R_i = \begin{bmatrix} I_{i-1} & & \\ & \begin{smallmatrix} 1 & -\rho \\ 0 & 1 \end{smallmatrix} & \\ & & I_{n-i} \end{bmatrix} \tag{3.2}$$

for $i = 1, \ldots, n-1$,

$$H_i = \begin{bmatrix} I_{i-1} & & \\ & \begin{smallmatrix} -h_i(\rho) & 1 \\ 1 & 0 \end{smallmatrix} & \\ & & I_{n-i-1} \end{bmatrix} \tag{3.3}$$

for $i = 1, \ldots, n-1$, and $H_n = \mathrm{diag}\{I_{n-1}, -h_n(\rho)\}$. Here $I_k$ is still the $k \times k$ identity matrix.

Theorem 3.1. For an $n$th degree, monic polynomial, $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$, with $a_n \neq 0$, the shifted, upper-Hessenberg companion matrix $C$ associated with $p(x)$ can be written as the product of $2n - 1$ core transformations,

$$C - \rho I = H_1 H_2 \cdots H_{n-2} H_{n-1} H_n R_{n-1} R_{n-2} \cdots R_2 R_1,$$

with $R_i$ and $H_i$ defined as in (3.2) and (3.3), respectively.

Proof. The case $n = 1$ is trivial, $C - \rho I = [-a_1 - \rho] = [-h_1(\rho)] = H_n$. For $n > 1$, assume that after $k - 1$ steps we have $C - \rho I = H_1 \cdots H_{k-1} M_k R_{k-1} \cdots R_1$, where

$$M_k = \begin{bmatrix} I_{k-1} & & & & \\ & -h_k(\rho) & -a_{k+1} & \cdots & -a_n \\ & 1 & -\rho & & \\ & & \ddots & \ddots & \\ & & & 1 & -\rho \end{bmatrix}.$$

We compute the next factor by first performing an interchange of rows $k$ and $k+1$ (showing only the active $2 \times 2$ block in rows and columns $k$ and $k+1$; the trailing entries $-a_{k+2}, \ldots, -a_n$ ride along in the interchanged row),

$$\begin{bmatrix} -h_k(\rho) & -a_{k+1} \\ 1 & -\rho \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & -\rho \\ -h_k(\rho) & -a_{k+1} \end{bmatrix}.$$

Then we use the correct Gauss transformation to eliminate the $(k+1, k)$ entry,

$$\begin{bmatrix} 1 & -\rho \\ -h_k(\rho) & -a_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -h_k(\rho) & 1 \end{bmatrix} \begin{bmatrix} 1 & -\rho \\ 0 & -h_{k+1}(\rho) \end{bmatrix}.$$

Combining the row interchange with the Gauss transform we get the factor $H_k$,

$$\begin{bmatrix} -h_k(\rho) & -a_{k+1} \\ 1 & -\rho \end{bmatrix} = \begin{bmatrix} -h_k(\rho) & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & -\rho \\ 0 & -h_{k+1}(\rho) \end{bmatrix}.$$

The last step is to eliminate the $(k, k+1)$ entry using another Gauss transform on the right,

$$\begin{bmatrix} 1 & -\rho \\ 0 & -h_{k+1}(\rho) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -h_{k+1}(\rho) \end{bmatrix} \begin{bmatrix} 1 & -\rho \\ 0 & 1 \end{bmatrix}.$$

This core transformation is exactly $R_k$. Putting it all together we get $M_k = H_k M_{k+1} R_k$, where $M_{k+1}$ agrees with $M_k$ except that the leading identity block has grown to $I_k$ and the trailing companion-like block now begins with $-h_{k+1}(\rho), -a_{k+2}, \ldots, -a_n$, which implies that

$$C - \rho I = H_1 \cdots H_k M_{k+1} R_k \cdots R_1.$$

After $n - 1$ steps, $M_n = \mathrm{diag}\{I_{n-1}, -h_n(\rho)\} = H_n$, which completes the proof. $\square$

Example 3.2 (Upper-Hessenberg factors). When $n = 3$ the shifted, upper-Hessenberg companion matrix admits the factorization $C - \rho I = H_1 H_2 H_3 R_2 R_1$,

$$\begin{bmatrix} -h_1(\rho) & -a_2 & -a_3 \\ 1 & -\rho & 0 \\ 0 & 1 & -\rho \end{bmatrix} = \begin{bmatrix} -h_1(\rho) & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & -h_2(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -h_3(\rho) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -\rho \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -\rho & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
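The Horner recursion (3.1) and the factorization of Theorem 3.1 are easy to check numerically. The following sketch (function names ours) builds $h(\rho)$, assembles the $H_i$ and $R_i$ of (3.2)-(3.3) as dense matrices purely for verification, and confirms that their product reproduces $C - \rho I$; in practice one would store only $h(\rho)$ and $\rho$, never the dense factors:

    import numpy as np

    def horner_vector(a, rho):
        # Intermediate Horner values h_1(rho),...,h_n(rho) of (3.1);
        # note h[-1] = h_n(rho) = p(rho).
        h = np.empty(len(a), dtype=complex)
        h[0] = rho + a[0]
        for i in range(1, len(a)):
            h[i] = rho * h[i - 1] + a[i]
        return h

    def shifted_factors(a, rho):
        # H_1..H_n and R_1..R_{n-1} of Theorem 3.1, as dense matrices
        # (for checking only; each factor is essentially 2x2).
        n = len(a)
        h = horner_vector(a, rho)
        H, R = [], []
        for i in range(n - 1):
            Hi = np.eye(n, dtype=complex)
            Hi[i:i+2, i:i+2] = [[-h[i], 1.0], [1.0, 0.0]]   # eq. (3.3)
            H.append(Hi)
            Ri = np.eye(n, dtype=complex)
            Ri[i, i+1] = -rho                               # eq. (3.2)
            R.append(Ri)
        Hn = np.eye(n, dtype=complex)
        Hn[-1, -1] = -h[-1]                                 # H_n = diag(I, -p(rho))
        H.append(Hn)
        return H, R

    a = np.array([4.0, -2.0, 7.0, 3.0])
    rho = 0.5 + 0.25j
    n = len(a)
    C = np.eye(n, k=-1) + 0j
    C[0, :] = -a                                            # upper-Hessenberg companion
    H, R = shifted_factors(a, rho)
    prod = np.linalg.multi_dot(H + R[::-1])                 # H_1...H_n R_{n-1}...R_1
    assert np.allclose(prod, C - rho * np.eye(n))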

It should be noted that this factorization is a natural extension of the Fiedler factorization in the sense that when $\rho = 0$, $H_i = C_i$ and $R_i = I$ for all $i$,

$$C - 0I = H_1 \cdots H_{n-1} H_n R_{n-1} \cdots R_1 = C_1 \cdots C_{n-1} C_n.$$

In order to extend the factorization to companion matrices with arbitrary position vectors $p$, we need the following lemma.

Lemma 3.3. For an $n$th degree, monic polynomial, $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$, with $a_n \neq 0$, the companion matrix $C$ associated with $p(x)$ and position vector $p$ has only one non-zero diagonal entry, namely $C_{1,1} = -a_1$.

Proof. The cases $n = 1$ and $n = 2$ are trivial. For the induction step it is sufficient to note that the $k$th diagonal entry of $C$ is uniquely determined by the factors $C_{k-1}$ and $C_k$ and the $(k-1)$ entry of $p$. This is true because core transformations only act locally. For $p_{k-1} = 0$, the $k$th diagonal entry of $C$ is the $k$th diagonal entry of $C_{k-1} C_k$,

$$C_{k-1} C_k = \begin{bmatrix} I_{k-2} & & & & \\ & -a_{k-1} & -a_k & 1 & \\ & 1 & 0 & 0 & \\ & 0 & 1 & 0 & \\ & & & & I_{n-k-1} \end{bmatrix}.$$

The $k$th diagonal entry is $0$. For $p_{k-1} = 1$, the $k$th diagonal entry of $C$ is the $k$th diagonal entry of $C_k C_{k-1}$,

$$C_k C_{k-1} = \begin{bmatrix} I_{k-2} & & & & \\ & -a_{k-1} & 1 & 0 & \\ & -a_k & 0 & 1 & \\ & 1 & 0 & 0 & \\ & & & & I_{n-k-1} \end{bmatrix}.$$

The $k$th diagonal entry is again $0$. $\square$

The importance of Lemma 3.3 is that every companion matrix formed via the Fiedler factors has the same main diagonal.

Definition 3.4. Let $P_i \in \mathbb{C}^{n \times n}$ for $i = 1, \ldots, n$ be the following diagonal matrix,

$$P_i = \begin{bmatrix} 0_i & \\ & I_{n-i} \end{bmatrix},$$

where $0_i$ is the $i \times i$ zero matrix. When $i = 0$, $P_i = I_n$.

Corollary 3.5. For an $n$th degree, monic polynomial, $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$, with $a_n \neq 0$, the shifted companion matrix $C - \rho I$ associated with $p(x)$ and position vector $p$ can be written as $\{H_1, C_2, \ldots, C_n\}_p - \rho P_1$.

Proof. By Lemma 3.3 we know the main diagonal of $C - \rho I$,

$$\mathrm{diag}(C - \rho I) = \{-a_1 - \rho, -\rho, \ldots, -\rho\} = \{-h_1(\rho), -\rho, \ldots, -\rho\}.$$

Now we subtract off $-\rho P_1$ and leave the rest of $C - \rho I$ in the matrix $D$,

$$C - \rho I = D - \rho P_1, \qquad \mathrm{diag}(D) = \{-h_1(\rho), 0, \ldots, 0\}.$$

Since $C$ and $D$ only differ in the $(1,1)$ entry, $D$ has the same off-diagonal entries as $C$. The $(1,1)$ entry of $C$ was determined by the factor $C_1$. Replacing the $(1,1)$ entry of $C$ with $-h_1(\rho)$ transforms $C_1$ into $H_1$. Therefore, the matrix $D$ can be written as $D = \{H_1, C_2, \ldots, C_n\}_p$, and therefore, $C - \rho I = \{H_1, C_2, \ldots, C_n\}_p - \rho P_1$. $\square$

Theorem 3.6. For an $n$th degree, monic polynomial, $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$, with $a_n \neq 0$, the shifted companion matrix $C$ associated with $p(x)$ and position vector $p$ can be written as the product of $2n - 1$ core transformations,

$$C - \rho I = A_1 A_2 \cdots A_{n-2} A_{n-1} H_n B_{n-1} B_{n-2} \cdots B_2 B_1,$$

where $A_i = H_i$ and $B_i = R_i$ when $p_i = 0$, and $A_i = R_i^T$ and $B_i = H_i$ when $p_i = 1$, with $R_i$ and $H_i$ defined as in (3.2) and (3.3), respectively.

Proof. The case $n = 1$ is the same as in Theorem 3.1. For $n > 1$, the induction step has two cases depending on the value of $p_k$.

For the first case, we start by assuming that $p_k = 0$ and that after $k - 1$ steps $C - \rho I$ has been factored as

$$C - \rho I = A_1 \cdots A_{k-1} \left( \{H_k, C_{k+1}, \ldots, C_n\}_p - \rho P_k \right) B_{k-1} \cdots B_1,$$

where the $A_i$ and $B_i$ are chosen exactly as in the statement of the theorem. We begin by removing the factor $H_k$ exactly as in Theorem 3.1. This does not affect the rest of the $C_i$ for $i > k + 1$ because non-intersecting core transformations commute,

$$A_1 \cdots A_{k-1} H_k \left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho H_k^{-1} P_k \right) B_{k-1} \cdots B_1.$$

Let's take a closer look at the product $\rho H_k^{-1} P_k$. Its only non-zero entries are $\rho$ in position $(k, k+1)$, $\rho h_k(\rho)$ in position $(k+1, k+1)$, and $\rho$ in the trailing diagonal positions $(k+2, k+2), \ldots, (n, n)$. This product can be written as $\rho H_k^{-1} P_k = F_k + \rho P_{k+1}$, where $F_k$ is zero except for $\rho$ in position $(k, k+1)$ and $\rho h_k(\rho)$ in position $(k+1, k+1)$. Since $F_k$ is non-zero only at the intersection of rows and columns $k$ and $k+1$, the only $C_i$ in the product $\{I, C_{k+1}, \ldots, C_n\}_p$ that $F_k$ interacts with is $C_{k+1}$. Computing the difference we get (showing rows and columns $k$ through $k+2$)

$$C_{k+1} - F_k = \begin{bmatrix} 1 & -\rho & 0 \\ 0 & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$

Multiplying by the correct Gauss transform on the right to eliminate the $(k, k+1)$ entry we get

$$\begin{bmatrix} 1 & -\rho & 0 \\ 0 & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & -\rho & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

which is exactly $C_{k+1} - F_k = H_{k+1} R_k$. Putting this all together we obtain

$$H_k \left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho H_k^{-1} P_k \right) = H_k \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p R_k - \rho P_{k+1} \right),$$

and since $P_{k+1} R_k = P_{k+1}$, we obtain

$$H_k \left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho H_k^{-1} P_k \right) = H_k \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p - \rho P_{k+1} \right) R_k.$$

Using the induction hypothesis we get our result,

$$C - \rho I = A_1 \cdots A_{k-1} H_k \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p - \rho P_{k+1} \right) R_k B_{k-1} \cdots B_1.$$

For the second case we start by assuming that $p_k = 1$ and that after $k - 1$ steps $C - \rho I$ has been factored as

$$C - \rho I = A_1 \cdots A_{k-1} \left( \{H_k, C_{k+1}, \ldots, C_n\}_p - \rho P_k \right) B_{k-1} \cdots B_1.$$

We begin by removing the factor $H_k$, but this time we factor on the right, since $p_k = 1$ tells us that $H_k$ is to the right of $C_{k+1}$,

$$A_1 \cdots A_{k-1} \left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho P_k H_k^{-1} \right) H_k B_{k-1} \cdots B_1.$$

Again taking a closer look at the product $\rho P_k H_k^{-1}$: its only non-zero entries are $\rho$ in position $(k+1, k)$, $\rho h_k(\rho)$ in position $(k+1, k+1)$, and $\rho$ in the trailing diagonal positions. This product can be written as $\rho P_k H_k^{-1} = F_k^T + \rho P_{k+1}$, where $F_k^T$ is zero except for $\rho$ in position $(k+1, k)$ and $\rho h_k(\rho)$ in position $(k+1, k+1)$. Since $F_k^T$ is non-zero only at the intersection of rows and columns $k$ and $k+1$, the only $C_i$ in the product $\{I, C_{k+1}, \ldots, C_n\}_p$ that $F_k^T$ interacts with is $C_{k+1}$. Computing the difference we get (rows and columns $k$ through $k+2$)

$$C_{k+1} - F_k^T = \begin{bmatrix} 1 & 0 & 0 \\ -\rho & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$

Multiplying by the correct Gauss transform on the left to eliminate the $(k+1, k)$ entry we get

$$\begin{bmatrix} 1 & 0 & 0 \\ -\rho & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -\rho & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & -h_{k+1}(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix},$$

which is exactly $C_{k+1} - F_k^T = R_k^T H_{k+1}$. Putting this all together we get

$$\left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho P_k H_k^{-1} \right) H_k = R_k^T \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p - \rho (R_k^T)^{-1} P_{k+1} \right) H_k,$$

and since $(R_k^T)^{-1} P_{k+1} = P_{k+1}$, we obtain

$$\left( \{I, C_{k+1}, \ldots, C_n\}_p - \rho P_k H_k^{-1} \right) H_k = R_k^T \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p - \rho P_{k+1} \right) H_k.$$

Using the induction hypothesis we get our result,

$$C - \rho I = A_1 \cdots A_{k-1} R_k^T \left( \{H_{k+1}, C_{k+2}, \ldots, C_n\}_p - \rho P_{k+1} \right) H_k B_{k-1} \cdots B_1. \qquad \square$$

It should be noted that Theorem 3.1 is a special case of Theorem 3.6.

Example 3.7 (CMV factors). When $n = 3$ the shifted, CMV companion matrix with $p = \{1, 0\}$ admits the factorization $C - \rho I = R_1^T H_2 H_3 R_2 H_1$,

$$\begin{bmatrix} -h_1(\rho) & 1 & 0 \\ -a_2 & -\rho & -a_3 \\ 1 & 0 & -\rho \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -\rho & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & -h_2(\rho) & 1 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -h_3(\rho) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -\rho \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -h_1(\rho) & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

Schematically, when $n = 5$ the factorization of $C - \rho I$ is a V-shaped (descending then ascending) sequence of core transformations (bracket diagram omitted). The only difference from one shifted companion matrix to another is which side of the V a particular $H_i$, $R_i$ or $R_i^T$ resides on.
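Theorem 3.6 can be checked the same way for any position vector. The sketch below (ours; it reuses fiedler_factors and shifted_factors from the earlier sketches) assembles the factors $A_1 \cdots A_{n-1} H_n B_{n-1} \cdots B_1$ for the CMV-style case $n = 3$, $p = \{1, 0\}$ of Example 3.7 and compares the product against $C - \rho I$:

    import numpy as np

    def shifted_fiedler_factorization(a, rho, p):
        # Factors of C - rho*I for position vector p (Theorem 3.6):
        # returns [A_1,...,A_{n-1}, H_n, B_{n-1},...,B_1].
        n = len(a)
        H, R = shifted_factors(a, rho)
        A = [H[i] if p[i] == 0 else R[i].T for i in range(n - 1)]
        B = [R[i] if p[i] == 0 else H[i] for i in range(n - 1)]
        return A + [H[-1]] + B[::-1]

    a = np.array([4.0, -2.0, 7.0])
    rho = 0.5
    p = [1, 0]                                       # CMV pattern, Example 3.7
    C1, C2, C3 = fiedler_factors(a)
    C = C2 @ C1 @ C3                                 # a product consistent with p
    F = shifted_fiedler_factorization(a, rho, p)     # R_1^T H_2 H_3 R_2 H_1
    assert np.allclose(np.linalg.multi_dot(F), C - rho * np.eye(3))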

4. Extensions. In this section, we will see that this factorization gives easy access to some other operators. Remember what $C - \rho I$ looks like,

$$C - \rho I = A_1 \cdots A_{n-1} H_n B_{n-1} \cdots B_1.$$

In order to use a shift and invert strategy to compute the roots it is necessary to have access to the inverse of the shifted operator. Here we get the factored inverse of the shifted matrix for free,

$$(C - \rho I)^{-1} = B_1^{-1} \cdots B_{n-1}^{-1} H_n^{-1} A_{n-1}^{-1} \cdots A_1^{-1}.$$

Example 4.1 (CMV factors). When $n = 3$ the inverse of the shifted, CMV companion matrix with $p = \{1, 0\}$ admits the factorization $(C - \rho I)^{-1} = H_1^{-1} R_2^{-1} H_3^{-1} H_2^{-1} (R_1^T)^{-1}$,

$$-\frac{1}{h_3(\rho)} \begin{bmatrix} \rho^2 & \rho & -a_3 \\ -(\rho a_2 + a_3) & \rho h_1(\rho) & -h_1(\rho) a_3 \\ \rho & 1 & h_2(\rho) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & h_1(\rho) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & \rho \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -h_3(\rho)^{-1} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & h_2(\rho) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ \rho & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

Some sparse eigenvalue methods, for instance unsymmetric Lanczos, require access to the adjoint. We also get the factored adjoint for free,

$$(C - \rho I)^* = B_1^* \cdots B_{n-1}^* H_n^* A_{n-1}^* \cdots A_1^*.$$

The adjoint of the inverse is available as well,

$$\left( (C - \rho I)^{-1} \right)^* = (A_1^{-1})^* \cdots (A_{n-1}^{-1})^* (H_n^{-1})^* (B_{n-1}^{-1})^* \cdots (B_1^{-1})^*.$$

It is interesting to note that all of these operators have a similar sparsity structure. Moreover, the cost of applying any of these operators to a vector is the same. The cost to compute the Horner vector is roughly $2n$ flops. Once the Horner vector is computed, each of the $H_i$ and $R_i$, or their inverses, transposes or adjoints, must be applied to the vector. Since each of these core transformations requires 2 flops, except $H_n$, the flop count sums to roughly $4n$. So the total flop count for applying the operator is roughly $6n$, and only $4n$ if the Horner vector has already been computed.
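To make the $O(n)$ cost concrete, here is a sketch (ours, not the paper's code) of applying $(C - \rho I)^{-1}$ to a vector for the upper-Hessenberg case (all $p_i = 0$), where the inverse factors act directly on the vector and are never formed; it reuses horner_vector from the earlier sketch. A plain inverse power iteration built on top of it converges to the root of $p$ nearest the shift:

    import numpy as np

    def apply_shifted_inverse(a, rho, x):
        # y = (C - rho*I)^{-1} x in O(n) flops, using
        # (C - rho*I)^{-1} = R_1^{-1}...R_{n-1}^{-1} H_n^{-1}...H_1^{-1}.
        h = horner_vector(a, rho)             # h[-1] = p(rho), assumed nonzero
        y = np.array(x, dtype=complex)
        n = len(y)
        for i in range(n - 1):                # H_i^{-1}: swap plus Gauss update
            y[i], y[i+1] = y[i+1], y[i] + h[i] * y[i+1]
        y[-1] /= -h[-1]                       # H_n^{-1} = diag(I, -1/p(rho))
        for i in range(n - 2, -1, -1):        # R_i^{-1}: y_i += rho * y_{i+1}
            y[i] += rho * y[i+1]
        return y

    # Shift-and-invert power iteration for the root of p nearest rho.
    a = np.array([4.0, -2.0, 7.0, 3.0])
    rho = 0.5 + 0.25j
    v = np.random.default_rng(0).standard_normal(len(a)) + 0j
    for _ in range(50):
        v = apply_shifted_inverse(a, rho, v)
        v /= np.linalg.norm(v)
    Cv = np.empty_like(v)                     # C*v in O(n): companion action
    Cv[0] = -(a @ v)
    Cv[1:] = v[:-1]
    mu = (v.conj() @ Cv) / (v.conj() @ v)     # Rayleigh quotient: root estimate
    print(mu, abs(np.polyval(np.r_[1.0, a], mu)))   # residual |p(mu)|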

5. Numerical experiments. In this section, we give two examples of using a shift and invert strategy to compute subsets of the roots of a polynomial of high degree. To compute the roots we used the shifted inverse of the upper-Hessenberg companion matrix in conjunction with a version of the unsymmetric Lanczos algorithm as outlined in [3]. The scaling coefficients were chosen such that the resulting tridiagonal matrix was complex symmetric. Its eigenvalues could then be computed using the complex symmetric eigensolver described in [4]. Implicit restarts were performed using a Krylov-Schur-like method described in [3].

For the first example, we computed the 10 roots of the polynomial $p(x) = x^{10000} - i$ that are closest to 1. Since these roots are known, we can compute the relative error exactly.

[Table 5.1: Error table for the computed roots of $x^{10000} - i$; for each computed root $\lambda_1, \ldots, \lambda_{10}$ it lists the condition number $\kappa(\lambda)$ and the relative error $|\lambda - \mu|/|\mu|$ against the exact root $\mu$. Numerical entries lost in extraction.]

Table 5.1 shows the relative errors of the computed roots $\lambda$ against the exact roots $\mu$. Since we are running unsymmetric Lanczos, we can compute a condition number for the roots. If $v$ is the right eigenvector and $w$ is the left eigenvector associated with $\lambda$, then $\kappa(\lambda)$ is defined as

$$\kappa(\lambda) = \frac{\|v\|_2 \|w\|_2}{|w^* v|}.$$

Figure 5.1 shows the 10 computed roots of $x^{10000} - i = 0$ closest to 1 against the 20 actual roots of $x^{10000} - i = 0$ closest to 1. In this example, the shift and invert strategy computed the 10 closest roots without leaving any gaps.

[Fig. 5.1: Computed roots of $x^{10000} - i$ against actual roots. Plot omitted.]

For the second example, we compute the 10 roots closest to $i = \sqrt{-1}$ of a millionth degree, complex polynomial with coefficients whose real and imaginary parts are both normally distributed with mean 0 and variance 1.

[Table 5.2: Computed roots near $i = \sqrt{-1}$ for a millionth degree random polynomial. Numerical entries lost in extraction.]

Table 5.2 shows the estimates of the 10 roots nearest $i = \sqrt{-1}$ that were computed using our implementation. Table 5.3 shows several residuals for the computed roots.

[Table 5.3: Error table for computed roots of a millionth degree random polynomial; for each root it lists $\kappa(\lambda)$, the error estimate $|p(\lambda)/(\lambda p'(\lambda))|$, and the infinity norm residual of the Ritz pair $(\lambda, v)$. Numerical entries lost in extraction.]

The value $|p(\lambda)/(\lambda p'(\lambda))|$ comes from the first order Taylor expansion of $p(\mu)$, where $\mu$ is the exact root, centered at the computed root $\lambda$,

$$p(\mu) = p(\lambda) + p'(\lambda)(\mu - \lambda) + O\left( (\mu - \lambda)^2 \right).$$

Since $p(\mu) = 0$, this implies that

$$\mu - \lambda = -\frac{p(\lambda)}{p'(\lambda)} + O\left( |\mu - \lambda|^2 \right).$$

If $\mu - \lambda$ is small, then $|p(\lambda)/p'(\lambda)|$ is a good error estimate. To get a relative error estimate we simply divide by the magnitude of $\lambda$, giving $|p(\lambda)/(\lambda p'(\lambda))|$. The last column of Table 5.3 is the infinity norm residual of the Ritz pair $(\lambda, v)$.

6. Conclusions. We have shown that arbitrary shifted Fiedler companion matrices can be factored into $2n - 1$ essentially $2 \times 2$ matrices and therefore require $O(n)$ storage. This factorization gives immediate access to the factorization of the inverse of the shifted companion matrix as well. We have also shown that the cost of applying either of these factorizations to a vector is $O(n)$, making it possible to employ any shift and invert eigenvalue method efficiently.

Acknowledgment. I thank the editor and referee for their careful reading of the manuscript and for their questions, criticisms, and suggestions, which led to significant improvements in the paper.

REFERENCES

[1] M. Fiedler. A note on companion matrices. Linear Algebra and its Applications, 372:325-331, 2003.
[2] J.L. Aurentz, R. Vandebril, and D.S. Watkins. Fast computation of the zeros of a polynomial via factorization of the companion matrix. SIAM Journal on Scientific Computing, 35(1), 2013.
[3] D.S. Watkins. The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods. SIAM, Philadelphia, 2007.
[4] J.K. Cullum and R.A. Willoughby. Lanczos Algorithms for Large Symmetric Eigenvalue Computations, Vol. 1: Theory. Birkhäuser, Boston, 1985.
