Computing the pth Roots of a Matrix with Repeated Eigenvalues


Applied Mathematical Sciences, Vol. 5, 2011, no. 53

Computing the pth Roots of a Matrix with Repeated Eigenvalues

Amir Sadeghi, Ahmad Izani Md. Ismail and Azhana Ahmad
School of Mathematical Sciences, Universiti Sains Malaysia, USM, Penang, Malaysia
E-mail addresses: sadeghi.usm@gmail.com (A. Sadeghi, corresponding author), izani@cs.usm.my (A. I. Ismail), azhana@usm.my (A. Ahmad)

Abstract

The computation of pth roots of matrices arises in certain applications. In this paper, we describe a method for computing the pth roots of an arbitrary n x n real matrix with repeated eigenvalues and no eigenvalues in R^-, based on the fundamental formula, i.e. a linear combination of constituent matrices. Examples are provided to illustrate the strengths and weaknesses of the presented method in comparison with other standard methods for computing the roots of a matrix A.

Mathematics Subject Classifications: 15A24, 65F30

Keywords: Matrix pth roots, fundamental formula, repeated eigenvalue, constituent matrix

1 Introduction

Suppose that p \geq 2 is a positive integer and that A \in C^{n \times n} has no negative real eigenvalues. A solution of the matrix equation

X^p - A = 0,

is called a pth root of the matrix A [7]. For the scalar case, n = 1, it is known that every nonzero complex number has p distinct pth roots. For n > 1, a matrix pth root may not exist, or there may be infinitely many solutions of X^p = A [16]. A particularly important issue is the computation of the principal pth root of A. The unique matrix X such that X^p = A and whose eigenvalues are either zero or lie in the sector {z \in C \ {0} : |arg z| < \pi/p} is called the principal pth root and is denoted by A^{1/p} [8]. Hereafter, in this paper, pth root shall mean principal pth root.

Applications requiring the computation of matrix pth roots arise in systems theory, for example in computing the matrix sector function [16], which is defined by

sect(A) = A (A^p)^{-1/p}.

Other applications are in matrix differential equations, Markov processes and some nonlinear matrix equations [11].

Many authors have investigated methods for computing the pth root of a matrix. The methods are normally based on iteration or on the Schur normal form. According to Greco and Iannazzo [7], for the case p = 2 some stable iterations based on Newton's method have been proposed; the first was the Denman and Beavers iteration, and many others have followed. For p > 2, some kind of preprocessing of the matrix A is required. Greco and Iannazzo [7] list some general and stable algorithms that are available. A family of rational iterations, as well as its application to the computation of matrix pth roots, has been proposed by Iannazzo [14]. Furthermore, a Pade family of iterations for computing the pth root has been presented by Laszkiewicz and Zietak [15]. In the Schur normal form approach, the Schur form of A is computed, say Q*AQ = R, where Q is unitary and R is upper triangular; one then solves the equation Y^p - R = 0 and deduces X = QYQ*. As Y is upper triangular, the equation Y^p - R = 0 is solved by a recursion on the elements of Y [7].
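For a diagonalizable matrix the principal pth root can be obtained directly from an eigendecomposition. The sketch below (Python/NumPy; this is a baseline, not the paper's method, which targets the repeated-eigenvalue case) applies the principal scalar pth root to each eigenvalue:

```python
import numpy as np

def principal_pth_root_diag(A, p):
    """Principal pth root via eigendecomposition.

    Works only for diagonalizable A with no eigenvalues on the
    closed negative real axis; defective (repeated-eigenvalue)
    matrices need the constituent-matrix approach instead.
    """
    w, V = np.linalg.eig(A)
    # principal scalar pth root: arg of each root lies in (-pi/p, pi/p)
    roots = np.exp(np.log(w.astype(complex)) / p)
    X = V @ np.diag(roots) @ np.linalg.inv(V)
    return X.real if np.allclose(X.imag, 0) else X

A = np.array([[4.0, 1.0], [0.0, 9.0]])   # distinct eigenvalues 4 and 9
X = principal_pth_root_diag(A, 2)
print(np.allclose(X @ X, A))  # True
```
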
A Schur method for computing pth roots of a matrix, generalizing earlier methods, was proposed by Smith [16]. In addition, a Schur-Newton method for computing the matrix pth root and its inverse has been developed by Guo and Higham [9]. Recently, Greco and Iannazzo [7] presented a Schur algorithm, using only real arithmetic, for computing the principal pth root of a real matrix having no nonpositive real eigenvalues. In this paper we describe and study a method for computing the pth roots of a matrix that has repeated eigenvalues, by employing series expansions and linear combinations of constituent matrices. Chang [6] used this method to compute functions of square

matrices with repeated eigenvalues, such as e^A, A^{1/2} and (I + A)^{-1}. The method involves some special matrices, such as the Jordan, Vandermonde, modal, and Krylov matrices. Some formulas for evaluating constituent matrices and matrix pth roots are derived.

The outline of the paper is as follows. In Section 2 some basic relations and formulas are given. Useful identities for computing constituent matrices are proposed in Section 3. In Section 4, a general formula for computing matrix pth roots is presented. Numerical examples are provided in Section 5 and the conclusions are in Section 6.

2 Preliminaries

In this section, some of the basic notation and relations for computing the roots of matrices by the fundamental formula are recalled.

2.1 Fundamental formula

Suppose that the n x n matrix A has m distinct eigenvalues \lambda_k with multiplicities m_k, determined by solving the degree-n characteristic polynomial p(\lambda),

p(\lambda) = \det(\lambda I - A) = \sum_{l=0}^{n} a_l \lambda^{n-l} = \prod_{k=0}^{m-1} (\lambda - \lambda_k)^{m_k}, \qquad \sum_{k=0}^{m-1} m_k = n.   (2.1)

It was observed in several articles [2], [4] and [6] that one approach to computing the a_l is to use the recurrent formulas

a_l = -\frac{1}{l} \mathrm{Tr}(A B_{l-1}), \qquad l = 1, ..., n,   (2.2)

B_l = A B_{l-1} + a_l I, \qquad l = 1, ..., n,   (2.3)

with a_0 = 1, B_0 = I, and B_n = 0.

It is known that an analytic function of a matrix can be expressed as a linear combination of constituent matrices [4]. Let f(z) = z^{1/p} be analytic in a simply connected domain of the complex variable z. Then, for a square matrix A with several multiple eigenvalues, the fundamental formula takes the form

A^{1/p} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{f^{(j)}(\lambda_k)}{j!} Z_{kj}   (2.4)

for computing matrix pth roots, where the constituent matrices Z_{kj} depend only on the matrix A and not on the function f(z) = z^{1/p}. In the remainder of this paper we use the fundamental formula in the form (2.4). In the special case where the matrix A has distinct eigenvalues, the relation (2.4) reduces to

A^{1/p} = \sum_{k=0}^{m-1} \lambda_k^{1/p} Z_{k0},   (2.5)

where \lambda_k, k = 0, ..., m-1, are the distinct eigenvalues of A and Z_{k0}, k = 0, ..., m-1, are the corresponding constituent idempotent matrices. Thus the constituent matrices associated with repeated eigenvalues must be evaluated in order to compute roots of a matrix by the fundamental formula. There are many different options for computing constituent matrices. The relationship between certain special matrices and the constituent matrices will next be discussed.

2.2 Special matrices

In this part the role of special matrices in computing pth roots is explained. It is based on matrix decompositions for computing a function of a matrix, given in [6]. Suppose that the matrix A has repeated eigenvalues. Then A has the Jordan canonical form

A = Z J Z^{-1} = Z \,\mathrm{diag}(J_1, ..., J_m)\, Z^{-1},   (2.6)

where Z is a nonsingular matrix and each Jordan block is

J_k = \begin{pmatrix} \lambda_k & 1 & & \\ & \lambda_k & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_k \end{pmatrix} \in C^{m_k \times m_k}, \qquad k = 1, ..., m.   (2.7)

Another important matrix for computing constituent matrices is the companion matrix C. A relationship exists between the Jordan matrix J and the companion matrix C, given by the similarity transformations [3, 6]:

C = V J V^{-1} = V \,\mathrm{diag}(J_1, ..., J_m)\, V^{-1},   (2.8)

C^T = W J W^{-1} = W \,\mathrm{diag}(J_1, ..., J_m)\, W^{-1},   (2.9)

with V and W being the generalized Vandermonde matrix and the modal matrix, respectively. For given \lambda_k and m_k, the generalized Vandermonde matrix V can be partitioned as

V = [V_0, ..., V_k, ..., V_{m-1}],   (2.10)

where the sub-matrices V_k are of order n x m_k with elements given in [2, 4]:

(V_k)_{ij} = \binom{i}{j} \lambda_k^{i-j}, \qquad k = 0, ..., m-1; \; i = 0, ..., n-1; \; j = 0, ..., m_k-1,   (2.11)

and, by setting i = m_k-1, ..., 0 in the following recurrent formula:

(V_k^{-1})_{ij} = \frac{1}{d_0}\Big[ (W_k)_{(m_k-1-i)j} - \sum_{p=i+1}^{m_k-1} d_{p-i} (V_k^{-1})_{pj} \Big], \qquad k = 0, ..., m-1; \; i = 0, ..., n-1; \; j = 0, ..., m_k-1,   (2.12)

where

d_l = \sum_{p=0}^{n-m_k-l} \binom{n-p}{m_k+l} a_p \lambda_k^{n-m_k-l-p}, \qquad k = 0, ..., m-1; \; l = 0, ..., m_k-1.   (2.13)

If the modal matrix W is partitioned as

W = [W_0, ..., W_k, ..., W_{m-1}],   (2.14)

then the sub-matrices W_k are of order n x m_k with the elements [3]

(W_k)_{ij} = \sum_{p=0}^{n-1-i-j} \binom{n-1-j-p}{i} a_p \lambda_k^{n-1-i-j-p}, \qquad k = 0, ..., m-1; \; i = 0, ..., n-1; \; j = 0, ..., m_k-1.   (2.15)

It can be observed that if i = m_k-1, then the summations in (2.12) and (2.15) are zero. In addition, the elements of W_k^{-1} can be obtained as follows:

(W_k^{-1})_{ij} = \frac{1}{d_0}\Big[ (V_k)_{(m_k-1-i)j} - \sum_{p=i+1}^{m_k-1} d_{p-i} (W_k^{-1})_{pj} \Big], \qquad k = 0, ..., m-1; \; i = 0, ..., n-1; \; j = 0, ..., m_k-1.   (2.16)
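The coefficients a_l entering (2.13) and (2.15) can be generated by the recurrence (2.2)-(2.3) (the Faddeev-LeVerrier scheme). A minimal Python/NumPy sketch (the function name is ours):

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients a_l of det(lambda*I - A) and the matrices B_l,
    via (2.2)-(2.3): a_l = -(1/l) tr(A B_{l-1}),
    B_l = A B_{l-1} + a_l I, with a_0 = 1, B_0 = I."""
    n = A.shape[0]
    a = [1.0]
    B = [np.eye(n)]
    for l in range(1, n + 1):
        M = A @ B[-1]
        a.append(-np.trace(M) / l)
        B.append(M + a[-1] * np.eye(n))
    return np.array(a), B   # B[n] should be the zero matrix

A = np.array([[3.0, 1.0], [0.0, 2.0]])
a, B = faddeev_leverrier(A)
print(a)                      # [ 1. -5.  6.]  i.e. lambda^2 - 5 lambda + 6
print(np.allclose(B[-1], 0))  # True
```

The trailing check B_n = 0 is exactly the consistency condition stated after (2.3).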

According to [6], the relationship between the given matrix A and its companion matrix C, which share a common characteristic polynomial, is given by the similarity transformation

A = K^{-1} C K,   (2.17)

where the similarity matrix K, which transforms C into A (and vice versa), is called the Krylov matrix. We are now ready to present some relations for computing constituent matrices.

3 Computing constituent matrices

When the multiplicities m_k of the eigenvalues \lambda_k are known, the constituent matrices N_{kj} associated with the Jordan matrix can readily be written down. For this purpose, suppose that N_k = J_k - \lambda_k I. Then N_k^j, the jth power of N_k, has ones on the jth superdiagonal and zeros elsewhere; it is the identity matrix when j = 0 and the null matrix when j > m_k - 1. Therefore we have [6]

N_{kj} = \mathrm{diag}(0, ..., N_k^j, ..., 0).   (3.1)

It follows from (2.8) and (2.9) that the constituent matrices X_{kj} associated with the companion matrix C can be found by similarity transformation from the constituent matrices N_{kj} of the Jordan matrix J, once either V and V^{-1} or W and W^{-1} have been computed:

X_{kj} = V N_{kj} V^{-1}, \qquad k = 0, ..., m-1; \; j = 0, ..., m_k-1,   (3.2)

X_{kj}^T = W N_{kj} W^{-1}, \qquad k = 0, ..., m-1; \; j = 0, ..., m_k-1.   (3.3)

From (2.17), the constituent matrices Z_{kj} of A are then computed by the similarity transformation [6]:

Z_{kj} = K^{-1} X_{kj} K, \qquad k = 0, ..., m-1; \; j = 0, ..., m_k-1.   (3.4)
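The superdiagonal structure of N_k^j described above is easy to verify numerically. The following NumPy sketch checks that the jth power of the nilpotent part of an m_k x m_k Jordan block has ones exactly on the jth superdiagonal and vanishes for j > m_k - 1:

```python
import numpy as np

m = 4                         # block size m_k
N = np.eye(m, k=1)            # N = J_k - lambda_k I for an m x m Jordan block
for j in range(m + 1):
    Nj = np.linalg.matrix_power(N, j)
    # N^j has ones on the j-th superdiagonal; N^0 = I, N^m = 0
    print(j, np.allclose(Nj, np.eye(m, k=j)))
```

Every line prints True, since np.eye(m, k=j) is the zero matrix once j reaches m.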

It can be seen that, in the special case where the given matrix A is itself a companion matrix, the constituent matrices can be written directly as outer products of the columns of the generalized Vandermonde matrix (or of the modal matrix) with the corresponding rows of its inverse [4].

Several alternative derivations of the constituent matrices Z_{kj} for a given matrix A can now be expressed. The Z_{kj} may be evaluated directly by one of the following linear combinations of matrices [6]:

Z_{kj} = \sum_{l=0}^{n-1} (V_k^{-1})_{jl} A^l, \qquad k = 0, ..., m-1; \; j = 0, ..., m_k-1,   (3.5)

Z_{kj} = \sum_{l=0}^{n-1} (W_k^{-1})_{jl} B_{n-1-l}, \qquad k = 0, ..., m-1; \; j = 0, ..., m_k-1,   (3.6)

where the entries (V_k^{-1})_{ij} and (W_k^{-1})_{ij} are given by (2.12) and (2.16), respectively, A^l is the lth power of the matrix A, and the matrix coefficients B_{n-1-l} of the adjoint matrix Q(\lambda) = \mathrm{adj}(\lambda I - A) are derived from

B_{n-1-l} = \sum_{p=0}^{n-1-l} a_p A^{n-1-l-p}, \qquad l = 0, ..., n-1.   (3.7)

4 Computing the pth roots of the matrix

In this section the method for computing the pth roots of matrices with repeated eigenvalues by utilizing linear combinations of constituent matrices is described. Differentiating the analytic function f(z) = z^{1/p} j times and substituting into the fundamental formula (2.4) yields the following relation for computing the pth roots of the matrix A:

A^{1/p} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{1}{j!} \cdot \frac{1}{p}\Big(\frac{1}{p}-1\Big) \cdots \Big(\frac{1}{p}-j+1\Big) \lambda_k^{1/p-j} Z_{kj}.   (4.1)
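Restricted to a single Jordan block, (4.1) becomes a finite Taylor expansion of f(z) = z^{1/p}, since the nilpotent part N_k truncates the series after m_k terms. A minimal Python/NumPy sketch (the function name is ours; it assumes \lambda_k > 0 so the principal root is real):

```python
import numpy as np
from math import factorial

def jordan_block_pth_root(lam, m, p):
    """pth root of an m x m Jordan block J = lam*I + N via the
    fundamental formula: sum_j f^(j)(lam)/j! * N^j, f(z) = z**(1/p).
    N^j is the matrix with ones on the j-th superdiagonal."""
    J = lam * np.eye(m) + np.eye(m, k=1)
    X = np.zeros((m, m))
    coeff = 1.0                       # running product (1/p)(1/p-1)...(1/p-j+1)
    for j in range(m):
        X += coeff / factorial(j) * lam ** (1.0 / p - j) * np.eye(m, k=j)
        coeff *= (1.0 / p - j)
    return J, X

J, X = jordan_block_pth_root(2.0, 3, 5)
print(np.allclose(np.linalg.matrix_power(X, 5), J))  # True
```

Because N_k is nilpotent the truncated series is exact, so X^p recovers J up to roundoff.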

The special cases p = 2 and p = 3 have several applications in matrix computations, so the relation (4.1) can be reformulated as follows.

Case p = 2 [6]:

A^{1/2} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{(-1)^{j+1} (2j)!}{2^{2j} (2j-1) (j!)^2} \lambda_k^{1/2-j} Z_{kj},   (4.2)

and case p = 3:

A^{1/3} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{1}{j!} \cdot \frac{1}{3}\Big(\frac{1}{3}-1\Big) \cdots \Big(\frac{1}{3}-j+1\Big) \lambda_k^{1/3-j} Z_{kj}.   (4.3)

For the special matrices J and C, the relation (4.1) can be expressed as

J^{1/p} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{1}{j!} \cdot \frac{1}{p}\Big(\frac{1}{p}-1\Big) \cdots \Big(\frac{1}{p}-j+1\Big) \lambda_k^{1/p-j} N_{kj},   (4.4)

C^{1/p} = \sum_{k=0}^{m-1} \sum_{j=0}^{m_k-1} \frac{1}{j!} \cdot \frac{1}{p}\Big(\frac{1}{p}-1\Big) \cdots \Big(\frac{1}{p}-j+1\Big) \lambda_k^{1/p-j} X_{kj}.   (4.5)

For checking our numerical computations, the following identities for the constituent matrices can be used [6]:

Z_{k0}^2 = Z_{k0},   (4.6)

Z_{kj} Z_{rs} = 0 \quad \text{for } k \neq r,   (4.7)

A^l = \sum_{k=0}^{m-1} \sum_{j=0}^{\min(l, m_k-1)} \binom{l}{j} \lambda_k^{l-j} Z_{kj}.   (4.8)

For the special cases l = 0, 1 and 2, (4.8) can be expressed as

\sum_{k=0}^{m-1} Z_{k0} = I,   (4.9)

A = \sum_{k=0}^{m-1} (\lambda_k Z_{k0} + Z_{k1}),   (4.10)

A^2 = \sum_{k=0}^{m-1} (\lambda_k^2 Z_{k0} + 2\lambda_k Z_{k1} + Z_{k2}).   (4.11)

The computation of the pth roots of a matrix can be written as Algorithm 1.

Algorithm 1: Computing the pth roots of a matrix
1. Compute the characteristic polynomial and find the multiplicities of the eigenvalues of the matrix A;
2. Compute the related generalized Vandermonde matrix or modal matrix by using equation (2.11) or (2.15);
3. Compute the constituent matrices by employing equation (3.5) or (3.6);
4. Compute the matrix pth roots by using relation (4.1);
5. End.

In this algorithm, the Vandermonde and modal matrices and their inverses can easily be computed.

5 Numerical experiments

In this section some numerical examples are given. All computations have been carried out in MATLAB 7.6, in IEEE double precision arithmetic (unit roundoff u = 2^{-53} \approx 1.1 \times 10^{-16}). The matrices used have repeated eigenvalues, with equal and with different multiplicities. Higham's Matrix Function Toolbox [10] has also been used. Numerical experiments are presented to compare the behavior of the presented method with five other methods for computing the pth root of a matrix: the method based on A^{1/p} = \exp(\frac{1}{p} \log A) (method 1), the Schur method (method 2) [16], the Newton iteration method (method 3) [14], the matrix sign method (method 4) [1] and the Schur-Newton method (method 5) [9]. In addition, the accuracy is estimated in terms of the errors

e_X = \| X^p - A \|,   (5.1)

Res_X = \frac{\| X^p - A \|}{\| A \|},   (5.2)

and

\rho_X = \frac{\| A - X^p \|}{\| X \| \, \big\| \sum_{i=0}^{p-1} (X^{p-1-i})^T \otimes X^i \big\|},   (5.3)

where X is the computed pth root of A and \| \cdot \| is any norm. We use the Frobenius norm, defined by \| A \|_F = \big( \sum_i \sum_j |a_{ij}|^2 \big)^{1/2}. Note that \rho_X was presented by Guo and Higham in [9]. The results are summarized in Tables 1 to 6, where p is an integer with p \geq 2 and e_X, Res_X and \rho_X are defined in (5.1), (5.2) and (5.3).

Example 1: For the first example, a 4 x 4 matrix A is considered, whose pth roots for p = 5, 17, 52, 128, 625, 1001 will be computed by (4.1). The characteristic polynomial of A is obtained as

p(\lambda) = \lambda^4 - 10\lambda^3 + 37\lambda^2 - 60\lambda + 36 = (\lambda - 2)^2 (\lambda - 3)^2.

The eigenvalues and their multiplicities are

\lambda_0 = 2, \; m_0 = 2, \qquad \lambda_1 = 3, \; m_1 = 2.

It is clear that m_0 + m_1 = 4 = n. From the fundamental formula (4.1), the pth root of A can be computed by

A^{1/p} = \lambda_0^{1/p} Z_{00} + \frac{1}{p} \lambda_0^{1/p-1} Z_{01} + \lambda_1^{1/p} Z_{10} + \frac{1}{p} \lambda_1^{1/p-1} Z_{11},

where the constituent matrices Z_{00}, Z_{01}, Z_{10}, Z_{11} can be evaluated by (3.5) or (3.6) once the inverse matrices V^{-1} or W^{-1} have been evaluated. Chang [6] showed that both of the relations (3.5) and (3.6) give the same result for computing constituent matrices. So,

without loss of generality, the first formula is chosen, and the generalized Vandermonde matrix V and its inverse V^{-1} are computed. The constituent matrices Z_{00}, Z_{01}, Z_{10} and Z_{11} are then obtained, and A^{1/p} is computed for p = 5, 17, 52, 128, 625, 1001. We can check these computations by the relations (4.9) to (4.11). It can be concluded that

(A^{1/5})^5 = \big( \lambda_0^{1/5} Z_{00} + \tfrac{1}{5} \lambda_0^{-4/5} Z_{01} + \lambda_1^{1/5} Z_{10} + \tfrac{1}{5} \lambda_1^{-4/5} Z_{11} \big)^5 = \lambda_0 Z_{00} + Z_{01} + \lambda_1 Z_{10} + Z_{11} = A.

In a similar manner, for p = 17, 52, 128, 625, 1001 we have

(A^{1/p})^p = \big( \lambda_0^{1/p} Z_{00} + \tfrac{1}{p} \lambda_0^{1/p-1} Z_{01} + \lambda_1^{1/p} Z_{10} + \tfrac{1}{p} \lambda_1^{1/p-1} Z_{11} \big)^p = \lambda_0 Z_{00} + Z_{01} + \lambda_1 Z_{10} + Z_{11} = A.

Tables 1-3 show the errors for all the methods considered. It can be concluded that our method computes the pth roots of matrices successfully and efficiently. In addition, the errors are comparable with those of methods 1, 2, 3 and 5, and certainly better than those of the Newton

iteration method (method 3). Furthermore, for large p the matrix sign method (method 4) breaks down, while the presented method can compute the pth root of the matrix even for large p.

Table 1: Comparing the error e_X among the different methods for Example 1.

Table 2: Comparing the error Res_X among the different methods for Example 1.

Example 2: For the next example, an upper triangular 8 x 8 matrix A is considered, which has characteristic polynomial

p(\lambda) = (\lambda - 2)^6 (\lambda - 3)^2.

The eigenvalues and their multiplicities are

\lambda_0 = 2, \; m_0 = 6,

Table 3: Comparing the error \rho_X among the different methods for Example 1.

\lambda_1 = 3, \; m_1 = 2.

We observe that m_0 + m_1 = 8 = n. By using the fundamental formula (4.1), the pth root of A can be expressed as

A^{1/p} = \sum_{j=0}^{5} \frac{1}{j!} \cdot \frac{1}{p}\Big(\frac{1}{p}-1\Big) \cdots \Big(\frac{1}{p}-j+1\Big) 2^{1/p-j} Z_{0j} + \sum_{j=0}^{1} \frac{1}{j!} \cdot \frac{1}{p}\Big(\frac{1}{p}-1\Big) \cdots \Big(\frac{1}{p}-j+1\Big) 3^{1/p-j} Z_{1j}.

Now the constituent matrices associated with the matrix A can be computed. After forming the generalized Vandermonde matrix V, computing its inverse and the constituent matrices, A^{1/p} is computed for p = 12, 52, 365, 1000.

Table 4: Comparing the error e_X among the different methods for Example 2.

Table 5: Comparing the error Res_X among the different methods for Example 2.

Table 6: Comparing the error \rho_X among the different methods for Example 2.
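The error measures (5.1) and (5.2) reported in Tables 1-6 can be computed as in the following Python/NumPy sketch (the function name and the trivial diagonal test case are ours):

```python
import numpy as np

def errors(A, X, p):
    """Error measures (5.1)-(5.2) in the Frobenius norm:
    e_X = ||X^p - A||_F,  Res_X = ||X^p - A||_F / ||A||_F."""
    R = np.linalg.matrix_power(X, p) - A
    e = np.linalg.norm(R, 'fro')
    return e, e / np.linalg.norm(A, 'fro')

A = np.diag([4.0, 9.0])
X = np.diag([2.0, 3.0])           # exact square root of A
e, res = errors(A, X, 2)
print(e, res)                     # both 0.0 for an exact root
```
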

Tables 4-6 show the errors for all the methods considered. It can be observed that the presented method computes the pth roots of matrices successfully and efficiently. Furthermore, the errors are comparable with those of methods 1, 2, 3 and 5, and definitely better than those of the Newton iteration method (method 3). Also, while the presented method can compute the pth root of a matrix successfully even for large p, the matrix sign method (method 4) breaks down for large p.

6 Conclusion

We have utilized the theory of matrix functions and the work of [6] to develop a method for computing the pth roots of matrices. The method that has been described can compute the pth roots of a matrix for large p with good accuracy. In addition, this method does not have any convergence problems, since it is not iterative. However, it should be noted that the method only works for matrices with repeated eigenvalues and no negative real eigenvalues.

References

[1] D. A. Bini, N. J. Higham and B. Meini, Algorithms for the matrix pth root, Numer. Algorithms, 39.

[2] F. C. Chang, Evaluation of an analytical function of an arbitrary matrix with multiple eigenvalues, Proc. IEEE Lett., 65.

[3] F. C. Chang, Evaluation of constituent matrices of a companion matrix with repeated eigenvalues, Proc. IEEE Lett., 65.

[4] F. C. Chang, Evaluation of an analytical function of a companion matrix with multiple eigenvalues, Proc. IEEE Lett., 63.

[5] F. C. Chang, A direct approach to the constituent matrices of an arbitrary matrix with multiple eigenvalues, Proc. IEEE Lett., 65.

[6] F. C. Chang, Function of a square matrix with repeated eigenvalues, Appl. Math. Comput., 160.

[7] F. Greco and B. Iannazzo, A binary powering Schur algorithm for computing primary matrix roots, Numer. Algorithms.

[8] C.-H. Guo, On Newton's method and Halley's method for the principal pth root of a matrix, Linear Algebra Appl., 432(11).

[9] C.-H. Guo and N. J. Higham, A Schur-Newton method for the matrix pth root and its inverse, SIAM J. Matrix Anal. Appl., 28(3).

[10] N. J. Higham, The Matrix Function Toolbox, higham/mctoolbox.

[11] N. J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.

[12] W. D. Hoskins and D. J. Walton, A faster, more stable method for computing the pth roots of positive definite matrices, Linear Algebra Appl., 26.

[13] B. Iannazzo, On the Newton method for the matrix pth root, SIAM J. Matrix Anal. Appl., 28(2), 2006.

[14] B. Iannazzo, A family of rational iterations and its application to the computation of the matrix pth root, SIAM J. Matrix Anal. Appl., 30(4).

[15] B. Laszkiewicz and K. Zietak, A Pade family of iterations for the matrix sector function and the matrix pth root, Numer. Linear Algebra Appl., 16.

[16] M. I. Smith, A Schur algorithm for computing matrix pth roots, SIAM J. Matrix Anal. Appl., 24(4).

Received: February, 2011


More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

BASIC MATRIX ALGEBRA WITH ALGORITHMS AND APPLICATIONS ROBERT A. LIEBLER CHAPMAN & HALL/CRC

BASIC MATRIX ALGEBRA WITH ALGORITHMS AND APPLICATIONS ROBERT A. LIEBLER CHAPMAN & HALL/CRC BASIC MATRIX ALGEBRA WITH ALGORITHMS AND APPLICATIONS ROBERT A. LIEBLER CHAPMAN & HALL/CRC A CRC Press Company Boca Raton London New York Washington, D.C. Contents Preface Examples Major results/proofs

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3 Chasing the Bulge Sebastian Gant 5/9/207 Contents Precursers and Motivation 2 The Reduction to Hessenberg Form 3 3 The Algorithm 5 4 Concluding Remarks 8 5 References 0 ntroduction n the early days of

More information

CAAM 335 Matrix Analysis

CAAM 335 Matrix Analysis CAAM 335 Matrix Analysis Solutions to Homework 8 Problem (5+5+5=5 points The partial fraction expansion of the resolvent for the matrix B = is given by (si B = s } {{ } =P + s + } {{ } =P + (s (5 points

More information

Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants

Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/

More information

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 Sherod Eubanks HOMEWORK 2 2.1 : 2, 5, 9, 12 2.3 : 3, 6 2.4 : 2, 4, 5, 9, 11 Section 2.1: Unitary Matrices Problem 2 If λ σ(u) and U M n is unitary, show that

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

A more accurate Briggs method for the logarithm

A more accurate Briggs method for the logarithm Numer Algor (2012) 59:393 402 DOI 10.1007/s11075-011-9496-z ORIGINAL PAPER A more accurate Briggs method for the logarithm Awad H. Al-Mohy Received: 25 May 2011 / Accepted: 15 August 2011 / Published online:

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Quadratic Matrix Polynomials

Quadratic Matrix Polynomials Research Triangularization Matters of Quadratic Matrix Polynomials February 25, 2009 Nick Françoise Higham Tisseur Director School of of Research Mathematics The University of Manchester School of Mathematics

More information

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English.

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. Chapter 4 EIGENVALUE PROBLEM The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. 4.1 Mathematics 4.2 Reduction to Upper Hessenberg

More information

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

More information

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row

More information

I = i 0,

I = i 0, Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example

More information

LA BUDDE S METHOD FOR COMPUTING CHARACTERISTIC POLYNOMIALS

LA BUDDE S METHOD FOR COMPUTING CHARACTERISTIC POLYNOMIALS LA BUDDE S METHOD FOR COMPUTING CHARACTERISTIC POLYNOMIALS RIZWANA REHMAN AND ILSE C.F. IPSEN Abstract. La Budde s method computes the characteristic polynomial of a real matrix A in two stages: first

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm. Al-Mohy, Awad H. and Higham, Nicholas J. MIMS EPrint: 2011.

Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm. Al-Mohy, Awad H. and Higham, Nicholas J. MIMS EPrint: 2011. Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm Al-Mohy, Awad H. and Higham, Nicholas J. 2011 MIMS EPrint: 2011.83 Manchester Institute for Mathematical Sciences School of Mathematics

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Matrix functions that preserve the strong Perron- Frobenius property

Matrix functions that preserve the strong Perron- Frobenius property Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu

More information

The restarted QR-algorithm for eigenvalue computation of structured matrices

The restarted QR-algorithm for eigenvalue computation of structured matrices Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

Eigenvalues, Eigenvectors. Eigenvalues and eigenvector will be fundamentally related to the nature of the solutions of state space systems.

Eigenvalues, Eigenvectors. Eigenvalues and eigenvector will be fundamentally related to the nature of the solutions of state space systems. Chapter 3 Linear Algebra In this Chapter we provide a review of some basic concepts from Linear Algebra which will be required in order to compute solutions of LTI systems in state space form, discuss

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 430 (2009) 579 586 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Low rank perturbation

More information

CSL361 Problem set 4: Basic linear algebra

CSL361 Problem set 4: Basic linear algebra CSL361 Problem set 4: Basic linear algebra February 21, 2017 [Note:] If the numerical matrix computations turn out to be tedious, you may use the function rref in Matlab. 1 Row-reduced echelon matrices

More information

Eigenvalue Problems and Singular Value Decomposition

Eigenvalue Problems and Singular Value Decomposition Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices A Divide-and-Conquer Algorithm for Functions of Triangular Matrices Ç. K. Koç Electrical & Computer Engineering Oregon State University Corvallis, Oregon 97331 Technical Report, June 1996 Abstract We propose

More information

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. #07 Jordan Canonical Form Cayley Hamilton Theorem (Refer Slide Time:

More information

Algorithms for Solving the Polynomial Eigenvalue Problem

Algorithms for Solving the Polynomial Eigenvalue Problem Algorithms for Solving the Polynomial Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey

More information

The kernel structure of rectangular Hankel and Toeplitz matrices

The kernel structure of rectangular Hankel and Toeplitz matrices The ernel structure of rectangular Hanel and Toeplitz matrices Martin H. Gutnecht ETH Zurich, Seminar for Applied Mathematics ØØÔ»»ÛÛÛºÑ Ø º Ø Þº» Ñ 3rd International Conference on Structured Matrices

More information

GENERAL ARTICLE Realm of Matrices

GENERAL ARTICLE Realm of Matrices Realm of Matrices Exponential and Logarithm Functions Debapriya Biswas Debapriya Biswas is an Assistant Professor at the Department of Mathematics, IIT- Kharagpur, West Bengal, India. Her areas of interest

More information

NP-hardness of the stable matrix in unit interval family problem in discrete time

NP-hardness of the stable matrix in unit interval family problem in discrete time Systems & Control Letters 42 21 261 265 www.elsevier.com/locate/sysconle NP-hardness of the stable matrix in unit interval family problem in discrete time Alejandra Mercado, K.J. Ray Liu Electrical and

More information

RANK REDUCTION AND BORDERED INVERSION

RANK REDUCTION AND BORDERED INVERSION Mathematical Notes, Misolc, Vol. 2., No. 2., (21), pp. 117 126 RANK REDUCTION AND BORDERED INVERSION Aurél Galántai Institute of Mathematics, University of Misolc 3515 Misolc Egyetemváros, Hungary matgal@gold.uni-misolc.hu

More information

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University

More information

Computing the Action of the Matrix Exponential

Computing the Action of the Matrix Exponential Computing the Action of the Matrix Exponential Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with Awad H. Al-Mohy 16th ILAS

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

Chap 4. State-Space Solutions and

Chap 4. State-Space Solutions and Chap 4. State-Space Solutions and Realizations Outlines 1. Introduction 2. Solution of LTI State Equation 3. Equivalent State Equations 4. Realizations 5. Solution of Linear Time-Varying (LTV) Equations

More information

Compound matrices and some classical inequalities

Compound matrices and some classical inequalities Compound matrices and some classical inequalities Tin-Yau Tam Mathematics & Statistics Auburn University Dec. 3, 04 We discuss some elegant proofs of several classical inequalities of matrices by using

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

INTEGER POWERS OF ANTI-BIDIAGONAL HANKEL MATRICES

INTEGER POWERS OF ANTI-BIDIAGONAL HANKEL MATRICES Indian J Pure Appl Math, 49: 87-98, March 08 c Indian National Science Academy DOI: 0007/s36-08-056-9 INTEGER POWERS OF ANTI-BIDIAGONAL HANKEL MATRICES João Lita da Silva Department of Mathematics and

More information

SQUARE ROOTS OF 2x2 MATRICES 1. Sam Northshield SUNY-Plattsburgh

SQUARE ROOTS OF 2x2 MATRICES 1. Sam Northshield SUNY-Plattsburgh SQUARE ROOTS OF x MATRICES Sam Northshield SUNY-Plattsburgh INTRODUCTION A B What is the square root of a matrix such as? It is not, in general, A B C D C D This is easy to see since the upper left entry

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

Incomplete exponential sums over finite fields and their applications to new inversive pseudorandom number generators

Incomplete exponential sums over finite fields and their applications to new inversive pseudorandom number generators ACTA ARITHMETICA XCIII.4 (2000 Incomplete exponential sums over finite fields and their applications to new inversive pseudorandom number generators by Harald Niederreiter and Arne Winterhof (Wien 1. Introduction.

More information

ON THE MATRIX EQUATION XA AX = X P

ON THE MATRIX EQUATION XA AX = X P ON THE MATRIX EQUATION XA AX = X P DIETRICH BURDE Abstract We study the matrix equation XA AX = X p in M n (K) for 1 < p < n It is shown that every matrix solution X is nilpotent and that the generalized

More information

Testing matrix function algorithms using identities. Deadman, Edvin and Higham, Nicholas J. MIMS EPrint:

Testing matrix function algorithms using identities. Deadman, Edvin and Higham, Nicholas J. MIMS EPrint: Testing matrix function algorithms using identities Deadman, Edvin and Higham, Nicholas J. 2014 MIMS EPrint: 2014.13 Manchester Institute for Mathematical Sciences School of Mathematics The University

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis (Numerical Linear Algebra for Computational and Data Sciences) Lecture 14: Eigenvalue Problems; Eigenvalue Revealing Factorizations Xiangmin Jiao Stony Brook University Xiangmin

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Linear perturbations of general disconjugate equations

Linear perturbations of general disconjugate equations Trinity University From the SelectedWorks of William F. Trench 1985 Linear perturbations of general disconjugate equations William F. Trench, Trinity University Available at: https://works.bepress.com/william_trench/53/

More information

DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS

DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS The correct choice of a coordinate system (or basis) often can simplify the form of an equation or the analysis of a particular problem. For example, consider

More information

Lecture notes: Applied linear algebra Part 2. Version 1

Lecture notes: Applied linear algebra Part 2. Version 1 Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information