The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix


Chun-Yueh Chiang
Center for General Education, National Formosa University, Huwei 632, Taiwan

Matthew M. Lin
Department of Mathematics, National Chung Cheng University, Chia-Yi 621, Taiwan

(Corresponding author addresses: chiang@nfu.edu.tw (Chun-Yueh Chiang), mlin@math.ccu.edu.tw (Matthew M. Lin). The first author was supported by the National Science Council of Taiwan under grant NSC-25-M. The second author was supported by the National Science Council of Taiwan under grant NSC99-25-M-94--MY2.)

Abstract

The eigenvalue shift technique is one of the most well-known and fundamental tools of matrix computation. Its applications include the search for eigeninformation, the acceleration of numerical algorithms, and the study of Google's PageRank. The shift strategy arises from a concept investigated by Brauer [1] for changing one eigenvalue of a matrix to a desired value, while keeping the remaining eigenvalues and the original eigenvectors unchanged. Brauer's idea generalizes easily to shifting several distinct eigenvalues. Shifting an eigenvalue of multiplicity greater than one, however, is a challenging issue that is worthy of investigation. In this work, we propose a new way of updating an eigenvalue of multiple multiplicity and thoroughly analyze the corresponding Jordan canonical form after the update procedure.

Keywords: eigenvalue shift technique, Jordan canonical form, rank-k updates

1. Introduction

The eigenvalue shift technique is a well-established mathematical method in science and engineering. It arises from the study of eigenvalue computations. As is well known, the power method is the most common and easiest algorithm for finding the dominant eigenvalue of a given matrix A ∈ C^{n×n}, but it is impracticable for locating a specific eigenvalue of A. In order to compute one specific eigenvalue, we apply the power method to the inverse of the shifted matrix (A − µI_n), with an appropriately chosen shift value µ ∈ C. Here, I_n is the n × n identity matrix. This is the well-known shifted inverse power method [2].
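
The following NumPy sketch illustrates the shifted inverse power method just described; the test matrix, the shift, and the tolerances are illustrative choices, not anything prescribed in this paper.

    import numpy as np

    def shifted_inverse_power(A, mu, tol=1e-12, max_iter=500):
        """Power method applied to (A - mu*I)^{-1}: converges to the
        eigenpair of A whose eigenvalue is closest to the shift mu."""
        n = A.shape[0]
        x = np.random.default_rng(0).standard_normal(n)
        x /= np.linalg.norm(x)
        B = np.linalg.inv(A - mu * np.eye(n))  # in practice, factor once and solve
        lam = x @ A @ x
        for _ in range(max_iter):
            x = B @ x
            x /= np.linalg.norm(x)
            lam = x @ A @ x                    # Rayleigh quotient for A itself
            if np.linalg.norm(A @ x - lam * x) < tol:
                break
        return lam, x

    A = np.diag([1.0, 3.0, 7.0]) + 0.01 * np.ones((3, 3))
    lam, v = shifted_inverse_power(A, mu=2.9)
    print(lam)  # the eigenvalue of A nearest to 2.9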

2 shifted matrix (A µi n ) with an appropriately chosen shift value µ C. Here, I n is an n n identity matrix. This is the well-known shifted inverse power method [2]. Early applications of eigenvalue shift techniques are focused on the stability analysis of dynamical systems [3], the computation of the root of a linear or non-linear equation by using preconditioning approaches [4]. Only recently, the study of eigenvalue shift approaches has been widely applied to the study of inverse eigenvalue problems for reconstructing a structured model from prescribed spectral data [5, 6, 7, 8], the structure-preserving double algorithm for solving nonsymmetric algebraic matrix Riccati equation [9, ], and the Google s PageRank problem [, 2]. Note that the common goal of the applications of the shift techniques stated above is to replace some unwanted eigenvalues so that the stability or acceleration of the prescribed algorithms can be obtained. Hence, a natural idea, the main contribution of our work, is to propose a method for updating the eigenvalue of a given matrix and provide the precise Jordan structures of this updated matrix. For a matrix A C n n and a number µ, if (λ, v) is an eigenpair of A, then matrices A + µi n and µa have the same eigenvector v corresponding to the eigenvalue λ + µ and µλ, respectively. However, these two processes are to translate or scale all eigenvalues of A. Our work here is to change a particular eigenvalue, which is enlightened through the following important result given by Brauer []. Theorem.. (Brauer). Let A be a matrix with Av = λ v for some nonzero vector v. If r is a vector so that r v =, then for any scalar λ, the eigenvalues of the matrix  = A + (λ λ )vr, consist of those of A, except that one eigenvalue λ of A is replaced by λ. Moreover, the eigenvector v is unchanged, that is, Âv = λ v. To demonstrate our motivation, consider an n n complex matrix A. Let A = V JV be a Jordan matrix decomposition of A, where J is the Jordan normal form of A and V is a matrix consisting of generalized eigenvectors of A. Denote the matrix J as Jk (λ) J =, J 2 where J k (λ) is a k k Jordan block corresponding to eigenvalue λ given by λ. λ J k (λ) C k k.... λ λ and J 2 is an (n k) (n k) matrix composed of a finite direct sum of Jordan blocks. Then, partition matrices V and V conformally as V = [ V V 2 ], (V ) = [ V 3 V 4 ], 2

To demonstrate our motivation, consider an n × n complex matrix A. Let A = V J V^{−1} be a Jordan matrix decomposition of A, where J is the Jordan normal form of A and V is a matrix consisting of generalized eigenvectors of A. Write the matrix J as

    J = \begin{bmatrix} J_k(λ) & 0 \\ 0 & J_2 \end{bmatrix},

where J_k(λ) is the k × k Jordan block corresponding to the eigenvalue λ,

    J_k(λ) = \begin{bmatrix} λ & 1 & & \\ & λ & ⋱ & \\ & & ⋱ & 1 \\ & & & λ \end{bmatrix} ∈ C^{k×k},

and J_2 is an (n − k) × (n − k) matrix composed of a finite direct sum of Jordan blocks. Then, partition the matrices V and V^{−1} conformally as

    V = [ V_1  V_2 ],   (V^{−1})^* = [ V_3  V_4 ],

with V_1, V_3 ∈ C^{n×k} and V_2, V_4 ∈ C^{n×(n−k)}. A natural idea for updating the eigenvalue appearing in the top left corner of the matrix J, while keeping V and V^{−1} unchanged, is to add a rank-k matrix ΔA = V_1 ΔJ V_3^* to the original matrix A, so that

    A + ΔA = V \begin{bmatrix} J_k(λ) + ΔJ & 0 \\ 0 & J_2 \end{bmatrix} V^{−1}

has the desired eigenvalue. Here, ΔJ is a particular k × k matrix which makes J_k(λ) + ΔJ a new Jordan matrix. Observe that the updated matrix A + ΔA preserves the structure of the matrices V and V^{−1}, but changes the unwanted eigenvalue via the new Jordan matrix J_k(λ) + ΔJ. In other words, if we know the Jordan matrix decomposition a priori, we can update the eigenvalue of A without difficulty. Note that V_3^* and V_1 are composed of generalized left and right eigenvectors, respectively, and V_3^* V_1 = I_k. In practice, given arbitrary generalized left and right eigenvectors corresponding to J_k(λ), the condition V_3^* V_1 = I_k is not true in general. Hence, the eigenvalue shift approach stated above cannot be applied directly. In this paper, we investigate an eigenvalue shift technique for updating an eigenvalue of the matrix A, provided that partial generalized left and right eigenvectors are given. We show that after the shift technique the first half of the generalized eigenvectors are kept the same. Indeed, the invariance of the first half of the generalized eigenvectors is essential for finding the stabilizing solution of an algebraic Riccati equation, and underlies the so-called Schur method or invariant subspace method [13, 14].

To advance our research we organize this paper as follows. In Section 2, several useful features of generalized left and right eigenvectors are discussed. In particular, we investigate the principle of generalized biorthogonality of generalized eigenvectors. This principle is then applied to the study of the eigenvalue shift technique in Sections 3 and 4. Finally, conclusions and some open problems are given in Section 5.

2. The Principle of Generalized Biorthogonality

For a given n × n square matrix A, let σ(A) be the set of all eigenvalues of A. We say that two vectors u and v in C^n are orthogonal if u^* v = 0. In our study, we are concerned with the orthogonality of eigenvectors of a given matrix. This feature is known as the principle of biorthogonality and is discussed in [15, Theorem 1.4.7] as follows:

Theorem 2.1. If A ∈ C^{n×n} and λ_1, λ_2 ∈ σ(A) with λ_1 ≠ λ_2, then any left eigenvector of A corresponding to λ_1 is orthogonal to any right eigenvector of A corresponding to λ_2.

Theorem 2.1 gives the principle of biorthogonality with respect to two distinct eigenvalues. Next, we want to extend this feature to generalized left and right eigenvectors.

Theorem 2.2 (Generalized Biorthogonality Property). Let A ∈ C^{n×n} and let λ_1, λ_2 ∈ σ(A). Suppose {u_i}_{i=1}^p and {v_i}_{i=1}^q are the generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ_1) and J_q(λ_2), respectively. Then

(a) If λ_1 ≠ λ_2,

    u_i^* v_j = 0,   1 ≤ i ≤ p, 1 ≤ j ≤ q.

(b) If λ_1 = λ_2,

    u_i^* v_j = u_{i−1}^* v_{j+1},   2 ≤ i ≤ p, 1 ≤ j ≤ q − 1,      (1a)
    u_i^* v_j = 0,                   2 ≤ i + j ≤ max{p, q}.         (1b)

Proof. Let U and V be the two matrices

    U = [ u_1 ... u_p ] ∈ C^{n×p},   V = [ v_1 ... v_q ] ∈ C^{n×q}.

By the definitions of {u_i}_{i=1}^p and {v_i}_{i=1}^q, we have

    U^* A = J_p(λ_1)^⊤ U^*,   A V = V J_q(λ_2).

That is,

    U^* A V = (U^* V) J_q(λ_2) = J_p(λ_1)^⊤ (U^* V).      (2)

Define the components x_{i,j} = u_i^* v_j, together with x_{i,0} = 0 and x_{0,j} = 0, for i = 1, ..., p and j = 1, ..., q. Then (2) implies that

    (λ_1 − λ_2) x_{i,j} = x_{i,j−1} − x_{i−1,j},   1 ≤ i ≤ p, 1 ≤ j ≤ q.      (3)

It follows from (3) and the assumptions on x_{i,j} that if λ_1 ≠ λ_2, then x_{i,j} = 0 for 1 ≤ i ≤ p and 1 ≤ j ≤ q, by induction on i + j. This proves (a). By (3), we see that if λ_1 = λ_2, then x_{i,j−1} = x_{i−1,j} for 1 ≤ i ≤ p and 1 ≤ j ≤ q. It follows that x_{i,j} = 0 for 2 ≤ i + j ≤ max{p, q}, and that the elements x_{i,j} coincide along the matrix antidiagonals i + j = s for max{p, q} + 1 ≤ s ≤ p + q. This completes the proof of (b).

Note that the values x_{i,j} defined in the proof of Theorem 2.2 imply that if λ_1 = λ_2, the matrix X = [x_{i,j}]_{p×q} is in fact a lower triangular Hankel matrix. Also, by (1b), we have the following useful results.

Corollary 2.1. Let A ∈ C^{n×n} and let λ ∈ σ(A). Suppose {u_i}_{i=1}^p and {v_i}_{i=1}^q are the generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ) and J_q(λ), respectively. Then, if p and q are even,

    u_i^* v_j = 0,   1 ≤ i ≤ p/2, 1 ≤ j ≤ q/2;      (4)

and if p and q are odd,

    u_i^* v_j = 0,   u_{(p+1)/2}^* v_j = 0,   u_i^* v_{(q+1)/2} = 0,   1 ≤ i ≤ (p − 1)/2, 1 ≤ j ≤ (q − 1)/2.      (5)
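
The lower triangular Hankel structure noted after Theorem 2.2 can be observed numerically. The sketch below builds a matrix with two Jordan blocks for the same eigenvalue through a random similarity P (all choices illustrative) and prints the matrix X = [u_i^* v_j]:

    import numpy as np

    def jordan(lam, k):
        return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

    # A with two Jordan blocks J_3(2) and J_2(2) for the same eigenvalue
    p, q, lam = 3, 2, 2.0
    n = p + q
    J = np.zeros((n, n)); J[:p, :p] = jordan(lam, p); J[p:, p:] = jordan(lam, q)
    rng = np.random.default_rng(2)
    P = rng.standard_normal((n, n))
    A = P @ J @ np.linalg.inv(P)
    Pinv = np.linalg.inv(P)

    V = P[:, p:]            # right chain v_1, ..., v_q of the second block
    U = Pinv[:p][::-1].T    # left chain u_1, ..., u_p of the first block

    X = U.T @ V             # x_{ij} = u_i^* v_j (real data here)
    print(np.round(X, 10))  # zero for i + j <= max(p, q); constant along
                            # antidiagonals: a lower triangular Hankel pattern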

Note that Corollary 2.1 provides only necessary conditions for two generalized left and right eigenvectors to be orthogonal. It is possible for two generalized eigenvectors to be orthogonal even if they do not satisfy the constraints given in (4) and (5). This phenomenon can be observed in the following two examples. We first write down all possible generalized eigenvectors of a Jordan matrix with one and with two Jordan blocks, respectively. We then exhibit two extreme cases for each Jordan matrix under discussion: one with the smallest number of orthogonal generalized left and right eigenvector pairs, and one with the largest number of orthogonal ones.

Example 2.1. Let A = J_{2k}(λ). Then all of the generalized left and right eigenvectors {u_i}_{i=1}^{2k} ⊂ C^{2k} and {v_i}_{i=1}^{2k} ⊂ C^{2k}, respectively, can be written as

    u_i = Σ_{j=1}^{i} ā_{i−j+1} e_{2k−j+1},   v_i = Σ_{j=1}^{i} b_{i−j+1} e_j,   1 ≤ i ≤ 2k,

where the a_i and b_i are arbitrary complex numbers with a_1 b_1 ≠ 0. Moreover, if a_i = b_i = 1 for all i, then

    u_i^* v_j = 0,  1 ≤ i, j ≤ k,     u_i^* v_j ≠ 0,  k + 1 ≤ i, j ≤ 2k,

and if a_1 = b_1 = 1 and a_i = b_i = 0 for all i > 1, then

    u_i^* v_j = 0,  i + j ≠ 2k + 1,     u_i^* v_j ≠ 0,  i + j = 2k + 1.

The second example demonstrates the orthogonality properties of generalized eigenvectors between two Jordan blocks.

Example 2.2. Let A = J_k(λ) ⊕ J_k(λ). Then all of the generalized left eigenvectors of the first Jordan block J_k(λ) (the upper left corner) and all of the generalized right eigenvectors of the second Jordan block J_k(λ) (the lower right corner) can be written as

    u_i = Σ_{j=1}^{i} ( ā_{i−j+1} e_{k−j+1} + b̄_{i−j+1} e_{2k−j+1} ),   v_i = Σ_{j=1}^{i} ( c_{i−j+1} e_j + d_{i−j+1} e_{k+j} ),   1 ≤ i ≤ k,

respectively, where the a_i, b_i, c_i and d_i are arbitrary complex numbers with (|a_1|^2 + |b_1|^2)(|c_1|^2 + |d_1|^2) > 0. Moreover, if a_i = b_i = c_i = d_i = 1 for all i, then for all 1 ≤ i, j ≤ k

    u_i^* v_j = 0,  i + j < k + 1,     u_i^* v_j ≠ 0,  i + j ≥ k + 1,

and if a_i = d_i = 1 and b_i = c_i = 0 for all i, then

    u_i^* v_j = 0,  1 ≤ i, j ≤ k.
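
A quick numerical check of the first claim in Example 2.1, with the two chains coded directly (the eigenvalue λ = 2 and the size k = 3 are arbitrary choices):

    import numpy as np

    k = 3; n = 2 * k; lam = 2.0
    I = np.eye(n)
    A = lam * I + np.diag(np.ones(n - 1), 1)        # J_{2k}(lam)

    # chains from Example 2.1 with a_i = b_i = 1 for all i
    U = np.cumsum(I[::-1], axis=0).T                # u_i = e_{2k} + ... + e_{2k-i+1}
    V = np.cumsum(I, axis=0).T                      # v_i = e_1 + ... + e_i
    assert np.allclose((A - lam * I) @ V[:, 1:], V[:, :-1])      # right chain
    assert np.allclose(U[:, 1:].T @ (A - lam * I), U[:, :-1].T)  # left chain

    X = U.T @ V
    print((np.abs(X) > 1e-12).astype(int))  # 0 exactly where i + j <= 2k, as claimed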

Given a matrix A ∈ C^{n×n}, let us close this section with a study of the action of the resolvent operator (A − λI_n)^{−1} on the generalized left and right eigenvectors. Evidently, the orthogonality between two generalized left and right eigenvectors is affected by this mapping. This effect plays a crucial role in the eigenvalue shift technique proposed later on.

Theorem 2.3. Let A ∈ C^{n×n} and let λ_1 ∈ σ(A). Suppose {u_i}_{i=1}^p and {v_i}_{i=1}^p are the generalized left and right eigenvectors corresponding to J_p(λ_1). Let λ be a complex number with λ ∉ σ(A). Then

(a) For 1 ≤ i ≤ p,

    (A − λI_n)^{−1} v_i = Σ_{j=1}^{i} (−1)^{i−j} v_j / (λ_1 − λ)^{i−j+1},
    u_i^* (A − λI_n)^{−1} = Σ_{j=1}^{i} (−1)^{i−j} u_j^* / (λ_1 − λ)^{i−j+1}.

(b) For i + j ≤ p and i, j ≥ 1,

    u_i^* (A − λI_n)^{−1} v_j = 0.

Proof. Let λ ∈ C with λ ∉ σ(A). Since (A − λI_n)v_1 = (λ_1 − λ)v_1, we have

    (A − λI_n)^{−1} v_1 = v_1 / (λ_1 − λ).

Also, (A − λI_n)v_2 = (λ_1 − λ)v_2 + v_1. It follows that

    (A − λI_n)^{−1} v_2 = ( v_2 − (A − λI_n)^{−1} v_1 ) / (λ_1 − λ) = Σ_{j=1}^{2} (−1)^{2−j} v_j / (λ_1 − λ)^{2−j+1}.

Continuing in the same manner for the remaining generalized left and right eigenvectors establishes (a). Now apply Theorem 2.2 and part (a): each product u_i^* (A − λI_n)^{−1} v_j is a linear combination of the products u_s^* v_t with s + t ≤ i + j, and by (1b) these all vanish when i + j ≤ p. Hence u_i^* (A − λI_n)^{−1} v_j = 0 for i + j ≤ p and i, j ≥ 1, which establishes (b).
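
The resolvent formula of Theorem 2.3(a) can be confirmed directly on a single Jordan block; the sizes and values below are illustrative:

    import numpy as np

    k, lam1, lam = 4, 2.0, 0.5         # lam is not an eigenvalue
    A = lam1 * np.eye(k) + np.diag(np.ones(k - 1), 1)   # J_k(lam1)
    V = np.eye(k)                      # standard right chain v_i = e_i
    Rinv = np.linalg.inv(A - lam * np.eye(k))

    i = 3                              # test the formula for v_i
    lhs = Rinv @ V[:, i - 1]
    rhs = sum((-1) ** (i - j) * V[:, j - 1] / (lam1 - lam) ** (i - j + 1)
              for j in range(1, i + 1))
    print(np.linalg.norm(lhs - rhs))   # ~0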

Now we have enough tools to analyze the eigenvalue shift problem. We first establish a result for eigenvalues of even algebraic multiplicity and then generalize it to eigenvalues of odd algebraic multiplicity. We show how to shift a desired eigenvalue of a given matrix without changing the remaining ones.

3. Eigenvalues of Even Algebraic Multiplicity

In this section, we propose a shift technique that moves an eigenvalue of even algebraic multiplicity to a desired value. We show that, with our approach, part of the generalized eigenvectors can be kept unchanged. We also investigate all of the possible Jordan structures after the shift at the end of this section.

Theorem 3.1. Let A ∈ C^{n×n}, let λ_1 ∈ σ(A) have algebraic multiplicity 2k and geometric multiplicity 1, and let {u_i}_{i=1}^{2k} and {v_i}_{i=1}^{2k} be the left and right Jordan chains for λ_1. Define the two matrices

    U_1 := [ u_1 u_2 ... u_k ],   V_1 := [ v_1 v_2 ... v_k ].      (6)

If the matrices R ∈ C^{k×n} and L ∈ C^{n×k} are, respectively, a left inverse of V_1 and a right inverse of U_1^*, satisfying

    U_1^* L = R V_1 = I_k,      (7)

then the shifted matrix

    Â := A + (λ − λ_1) R_1 R_2^*,      (8)

where R_1 := [ V_1  L ] and R_2 := [ R^*  U_1 ], has the following properties:

(a) The eigenvalues of Â consist of those of A, except that the eigenvalue λ_1 of A is replaced by λ.

(b) Âv_1 = λ v_1, u_1^* Â = λ u_1^*, and Âv_{i+1} = λ v_{i+1} + v_i, u_{i+1}^* Â = λ u_{i+1}^* + u_i^*, for i = 1, ..., k − 1. That is,

    Â V_1 = V_1 J_k(λ),   U_1^* Â = J_k(λ) U_1^*.

Proof. The proof of (a) is obtained by considering the characteristic polynomial of Â. For µ ∉ σ(A),

    det(Â − µI_n) = det(A − µI_n) det( I_n + (λ − λ_1) R_1 R_2^* (A − µI_n)^{−1} )      (9)
                  = det(A − µI_n) det( I_{2k} + (λ − λ_1) R_2^* (A − µI_n)^{−1} R_1 )
                  = det(A − µI_n) ( (λ − µ)/(λ_1 − µ) )^{2k}   (by Theorem 2.2 and Theorem 2.3).

Since det(A − µI_n) is a polynomial with a finite number of zeros, for µ ∈ σ(A) we may choose a small perturbation ϵ > 0 such that det(A − (µ + ϵ)I_n) ≠ 0, that is, A − (µ + ϵ)I_n is invertible. This implies that for µ ∈ σ(A) we can consider the characteristic polynomial at µ + ϵ, so that

    det(Â − (µ + ϵ)I_n) = det(A − (µ + ϵ)I_n) ( (λ − (µ + ϵ))/(λ_1 − (µ + ϵ)) )^{2k}.

Observe that both sides of the above equation are continuous functions of ϵ. By the so-called continuity argument, letting ϵ → 0 gives

    det(Â − µI_n) = det(A − µI_n) ( (λ − µ)/(λ_1 − µ) )^{2k}.

It follows that (a) holds.

To prove (b), note that by Theorem 2.2 we have R_2^* v_{i+1} = [ e_{i+1}^⊤  0 ]^⊤, since R V_1 = I_k and U_1^* V_1 = 0. Hence

    Â v_1 = A v_1 + (λ − λ_1) R_1 R_2^* v_1 = λ_1 v_1 + (λ − λ_1) v_1 = λ v_1,
    Â v_{i+1} = A v_{i+1} + (λ − λ_1) R_1 R_2^* v_{i+1} = λ v_{i+1} + v_i,   i = 1, ..., k − 1.

The same approach yields u_1^* Â = λ u_1^* and u_{i+1}^* Â = λ u_{i+1}^* + u_i^* for i = 1, ..., k − 1.
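
A small numerical illustration of Theorem 3.1 follows; the data are illustrative, and R and L are taken as Moore-Penrose pseudoinverses, which is one admissible choice satisfying (7):

    import numpy as np

    def jordan(lam, k):
        return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

    k, lam1, lam = 3, 2.0, 5.0
    n = 2 * k
    rng = np.random.default_rng(3)
    P = rng.standard_normal((n, n))
    A = P @ jordan(lam1, n) @ np.linalg.inv(P)   # single Jordan block J_{2k}(lam1)
    Pinv = np.linalg.inv(P)

    V1 = P[:, :k]                     # v_1, ..., v_k
    U1 = Pinv[::-1][:k].T             # u_1, ..., u_k (left chain)
    R = np.linalg.pinv(V1)            # R V1 = I_k
    L = np.linalg.pinv(U1.T)          # U1^* L = I_k (real case)

    A_hat = A + (lam - lam1) * (V1 @ R + L @ U1.T)

    # eigenvalues cluster tightly around lam; a long Jordan chain is
    # numerically sensitive, so small scatter around 5 is expected
    print(np.linalg.eigvals(A_hat))
    print(np.linalg.norm(A_hat @ V1 - V1 @ jordan(lam, k)))      # ~0
    print(np.linalg.norm(U1.T @ A_hat - jordan(lam, k) @ U1.T))  # ~0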

Next, it is natural to ask about the eigenstructure of the updated matrix Â defined in Theorem 3.1. To this end, we use the notation of Theorem 3.1. Assume without loss of generality that A ∈ C^{2k×2k} and that P = [ v_1 ... v_{2k} ] is a nonsingular matrix such that

    P^{−1} A P = J_{2k}(λ_1).

By Theorem 2.2, we know that the matrix product [ u_1 ... u_{2k} ]^* P ∈ C^{2k×2k} is a lower triangular Hankel matrix. This suggests a way of expressing the matrices U_1 and V_1 in (6), namely

    V_1 = P \begin{bmatrix} I_k \\ 0 \end{bmatrix},   U_1^* = [ 0  Q ] P^{−1},

where Q is some nonsingular lower triangular Hankel matrix in C^{k×k}. Also, the right and left inverses in (7) can be written as

    R = [ I_k  S_2 ] P^{−1},   L = P \begin{bmatrix} S_1 \\ Q^{−1} \end{bmatrix},

where S_1 and S_2 are arbitrary k × k matrices. It follows from (8) that

    Â = A + (λ − λ_1)(V_1 R + L U_1^*)
      = P ( J_{2k}(λ_1) + (λ − λ_1) \begin{bmatrix} I_k & S_2 + S_1 Q \\ 0 & I_k \end{bmatrix} ) P^{−1}
      = P \begin{bmatrix} J_k(λ) & e_k e_1^⊤ + (λ − λ_1)(S_2 + S_1 Q) \\ 0 & J_k(λ) \end{bmatrix} P^{−1}.

That is,

    Â ∼ T := \begin{bmatrix} J_k(λ) & C \\ 0 & J_k(λ) \end{bmatrix},      (10)

for some matrix C ∈ C^{k×k}. Based on (10), the next two results discuss the possible eigenspaces and eigenstructure types of the matrix T (i.e., of Â). To facilitate the discussion below, we use {e_i}_{1≤i≤k} to denote the standard basis of R^k from now on, and we write [x_1; x_2] for the column vector obtained by stacking x_1 on top of x_2.

Lemma 3.1. Let C = [c_{i,j}] ∈ C^{k×k}. If T is the matrix

    T = \begin{bmatrix} J_k(λ) & C \\ 0 & J_k(λ) \end{bmatrix},      (11)

then, corresponding to the eigenvalue λ, T has the eigenspace

    E_λ = span{ [e_1; 0], [0; e_1] },            if c_{k,1} = 0 and Ce_1 = 0,
          span{ [e_1; 0], [N_k^† Ce_1; e_1] },   if c_{k,1} = 0 and Ce_1 ≠ 0,
          span{ [e_1; 0] },                      if c_{k,1} ≠ 0,

where N_k = λI_k − J_k(λ) and N_k^† denotes its Moore-Penrose inverse.

Proof. Given two vectors x_1 and x_2 in C^k, let v = [x_1; x_2] be an eigenvector of T with respect to λ. It follows that Tv = λv or, equivalently,

    (J_k(λ) − λI_k) x_1 = −C x_2,   (J_k(λ) − λI_k) x_2 = 0.      (12)

The possible types of eigenspaces are obtained by directly solving (12).

Lemma 3.1 suggests a characterization of all possible Jordan canonical forms of T. Note that if x is an eigenvector of the matrix T associated with a Jordan block J_p(λ), then a corresponding Jordan canonical basis γ, a cycle of generalized eigenvectors of T, can be expressed as γ = { v_1, v_2, ..., v_p }, where v_i = (T − λI)^{p−i} v_p for 1 ≤ i < p. In this sense, we classify the possible eigenstructure types, using the notation of Lemma 3.1, as follows:

Case 1. c_{k,1} = 0 and Ce_1 = 0. Then T ∼ J_k(λ) ⊕ J_k(λ), with two cycles of generalized eigenvectors taken as

    γ_1 = { [e_1; 0], ..., [e_k; 0] },
    γ_2 = { [0; e_1], [N_k^† Ce_2; e_2], ..., [ Σ_{i=2}^{k} (N_k^†)^{k−i+1} C e_i ; e_k ] }.

Case 2. c_{k,1} = 0 and Ce_1 ≠ 0. Then T ∼ J_k(λ) ⊕ J_k(λ), with two cycles of generalized eigenvectors taken as

    γ_1 = { [e_1; 0], ..., [e_k; 0] },
    γ_2 = { [N_k^† Ce_1; e_1], ..., [ Σ_{i=1}^{k} (N_k^†)^{k−i+1} C e_i ; e_k ] }.

Case 3. c_{k,1} ≠ 0. Then T ∼ J_{2k}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [c_{k,1} e_1; 0], ..., [c_{k,1} e_k; 0], [N_k^† Ce_1; e_1], ..., [ Σ_{i=1}^{k} (N_k^†)^{k−i+1} C e_i ; e_k ] }.

Here we have identified all possible eigenstructure types of the matrix T. This observation leads directly to the following result.

Theorem 3.2. Let C ∈ C^{k×k}. Then the Jordan canonical form of the matrix Â defined by equation (8) is either J_k(λ) ⊕ J_k(λ) or J_{2k}(λ).

Note that Theorem 3.2 also implies that the geometric multiplicity of λ for the matrix T in (10), and hence for Â, is at most two.
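
The dichotomy of Theorem 3.2 can be probed numerically through the block matrix T of (10). The helper below (an illustrative routine, not from the paper) recovers Jordan block sizes for λ from the rank drops of powers of T − λI, the Weyr characteristic; the sample C, concentrated in its (k,1) entry, is likewise an illustrative choice:

    import numpy as np

    def jordan_sizes(T, lam, tol=1e-9):
        """Jordan block sizes of T at eigenvalue lam, via ranks of (T-lam*I)^m."""
        n = T.shape[0]
        N = T - lam * np.eye(n)
        r = [n] + [np.linalg.matrix_rank(np.linalg.matrix_power(N, m), tol)
                   for m in range(1, n + 1)]
        geq = [r[m - 1] - r[m] for m in range(1, n + 1)]  # # blocks of size >= m
        sizes = []
        for m in range(1, n + 1):
            exact = geq[m - 1] - (geq[m] if m < n else 0)
            sizes += [m] * exact
        return sorted(sizes, reverse=True)

    k, lam = 3, 4.0
    Jk = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)
    for c_k1 in (0.0, 1.0):
        C = np.zeros((k, k)); C[k - 1, 0] = c_k1
        T = np.block([[Jk, C], [np.zeros((k, k)), Jk]])
        print(c_k1, jordan_sizes(T, lam))   # [3, 3] for c_k1 = 0, [6] otherwise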

On the other hand, an analogous approach can seemingly be used to derive a shift technique and characterize the corresponding eigenstructure for an eigenvalue of odd algebraic multiplicity. In fact, the analysis for the odd case is considerably more complicated, due to the orthogonality property caused by the middle eigenvector (see Theorem 2.2), and a different approach is needed.

4. Eigenvalues of Odd Algebraic Multiplicity

Let us now consider an eigenvalue of odd algebraic multiplicity. From (1a), we know that if p = q and p, q are odd integers, then the inner product u_{(p+1)/2}^* v_{(p+1)/2} is a constant, but it is not immediately clear whether this product is zero or not. Since [ u_1 ... u_p ] and [ v_1 ... v_p ] are two nonsingular matrices, the lower triangular Hankel matrix X = [u_i^* v_j] must be nonsingular, and therefore the inner product u_{(p+1)/2}^* v_{(p+1)/2} ≠ 0. The following result shows how to change an eigenvalue of odd algebraic multiplicity.

Theorem 4.1. Let A ∈ C^{n×n}, let λ_1 be an eigenvalue of A having algebraic multiplicity 2k + 1 and geometric multiplicity 1, and let {u_i}_{i=1}^{2k+1} and {v_i}_{i=1}^{2k+1} be the left and right Jordan chains for λ_1. Define the two matrices

    U_1 := [ u_1 u_2 ... u_k ],   V_1 := [ v_1 v_2 ... v_k ]      (13)

and the vector r = u_{k+1} / (v_{k+1}^* u_{k+1}). If the matrices R ∈ C^{k×n} and L ∈ C^{n×k} are, respectively, a left inverse of V_1 and a right inverse of U_1^*, satisfying

    R V_1 = U_1^* L = I_k,      (14)

then the shifted matrix

    Â := A + (λ − λ_1) R_1 R_2^*,      (15)

where R_1 := [ V_1  v_{k+1}  L ] and R_2 := [ R^*  r  U_1 ], has the following properties:

(a) The eigenvalues of Â consist of those of A, except that the eigenvalue λ_1 of A is replaced by λ.

(b) Âv_1 = λ v_1, u_1^* Â = λ u_1^*, and Âv_{i+1} = λ v_{i+1} + v_i, u_{i+1}^* Â = λ u_{i+1}^* + u_i^*, for i = 1, ..., k − 1. That is to say,

    Â V_1 = V_1 J_k(λ),   U_1^* Â = J_k(λ) U_1^*.

Proof. Since

    R_2^* (A − µI_n)^{−1} R_1 = \begin{bmatrix} R(A − µI_n)^{−1}V_1 & R(A − µI_n)^{−1}v_{k+1} & R(A − µI_n)^{−1}L \\ r^*(A − µI_n)^{−1}V_1 & 1/(λ_1 − µ) & r^*(A − µI_n)^{−1}L \\ U_1^*(A − µI_n)^{−1}V_1 & U_1^*(A − µI_n)^{−1}v_{k+1} & U_1^*(A − µI_n)^{−1}L \end{bmatrix}

is block upper triangular with triangular diagonal blocks, it follows from Theorem 2.2 and Theorem 2.3 that the characteristic polynomial of Â satisfies

    det(Â − µI_n) = det(A − µI_n) det( I_n + (λ − λ_1) R_1 R_2^* (A − µI_n)^{−1} )
                  = det(A − µI_n) det( I_{2k+1} + (λ − λ_1) R_2^* (A − µI_n)^{−1} R_1 )
                  = det(A − µI_n) ( (λ − µ)/(λ_1 − µ) )^{2k+1},

for µ ∉ σ(A). For the case µ ∈ σ(A), a small perturbation of µ is applied to make A − µI_n nonsingular, and a proof similar to that of Theorem 3.1 follows. This proves (a).

By direct computation and Theorem 2.2, R_2^* v_{i+1} = [ e_{i+1}^⊤  0  0 ]^⊤, so that

    Â v_1 = A v_1 + (λ − λ_1) R_1 R_2^* v_1 = λ_1 v_1 + (λ − λ_1) v_1 = λ v_1,
    Â v_{i+1} = A v_{i+1} + (λ − λ_1) R_1 R_2^* v_{i+1} = λ v_{i+1} + v_i,   i = 1, ..., k − 1.

In an analogous way, we obtain u_1^* Â = λ u_1^* and u_{i+1}^* Â = λ u_{i+1}^* + u_i^*, for i = 1, ..., k − 1, and (b) follows.
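
An analogous numerical illustration of Theorem 4.1 (again with pseudoinverses as one admissible choice of R and L, and real data so that conjugations reduce to transposes):

    import numpy as np

    def jordan(lam, k):
        return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

    k, lam1, lam = 2, 2.0, 5.0
    n = 2 * k + 1
    rng = np.random.default_rng(4)
    P = rng.standard_normal((n, n))
    A = P @ jordan(lam1, n) @ np.linalg.inv(P)   # single block J_{2k+1}(lam1)
    Pinv = np.linalg.inv(P)

    V1 = P[:, :k]                      # v_1, ..., v_k
    vk1 = P[:, k]                      # middle vector v_{k+1}
    U1 = Pinv[::-1][:k].T              # u_1, ..., u_k
    uk1 = Pinv[n - k - 1]              # u_{k+1}
    r = uk1 / (vk1 @ uk1)              # r^* v_{k+1} = 1

    R = np.linalg.pinv(V1)             # R V1 = I_k
    L = np.linalg.pinv(U1.T)           # U1^* L = I_k

    A_hat = A + (lam - lam1) * (V1 @ R + np.outer(vk1, r) + L @ U1.T)

    print(np.linalg.eigvals(A_hat))                          # clustered at lam
    print(np.linalg.norm(A_hat @ V1 - V1 @ jordan(lam, k)))  # ~0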

Theorem 4.1 provides a way to update the eigenvalue of a given Jordan block of odd size. Consequently, we are interested in analyzing the possible eigenstructure types of the shifted matrix Â. We start the discussion analogously to Section 3, using the notation of Theorem 4.1. Assume for simplicity that the given matrix A is of size (2k + 1) × (2k + 1) and that P = [ v_1 ... v_{2k+1} ] is a nonsingular matrix such that

    P^{−1} A P = J_{2k+1}(λ_1).

It follows that V_1 = P [ I_k ; 0 ]. Set U_1^* = [ 0  Q ] P^{−1} for some nonsingular lower triangular Hankel matrix Q ∈ C^{k×k}, and let v_{k+1} = P ê_{k+1}, where ê_{k+1} is the unit vector in R^{2k+1} with a 1 in position k + 1 and 0s elsewhere. Corresponding to formula (14) and the vector r given in Theorem 4.1, we write

    R = [ I_k  S_2^* ] P^{−1},   L = P \begin{bmatrix} S_1 \\ Q^{−1} \end{bmatrix},   r = P^{−*} \begin{bmatrix} 0 \\ w̄ \end{bmatrix} / w̄_1,

where S_1 and S_2 are arbitrary matrices in C^{(k+1)×k}, w = [w_i] is a vector in C^{k+1} with w_1 = u_{k+1}^* v_{k+1} ≠ 0, and w̄ is the complex conjugate of w (so that r^* v_{k+1} = 1). The shifted matrix (15) then satisfies

    Â = A + (λ − λ_1)( V_1 R + v_{k+1} r^* + L U_1^* )
      = P \begin{bmatrix} J_k(λ) & (λ − λ_1) ŝ_1 & C_0 \\ 0 & λ & (λ − λ_1)( ŝ_2 + [ w_2 ⋯ w_{k+1} ]/w_1 ) \\ 0 & 0 & J_k(λ) \end{bmatrix} P^{−1},

where ŝ_1 is the first column of S_1, ŝ_2 is the last row of S_2^*, and C_0 is some k × k matrix determined by S_1, S_2 and Q. This implies that after the shift the matrix Â is similar to an upper triangular matrix

    S := \begin{bmatrix} J_k(λ) & a & C \\ 0 & λ & b^* \\ 0 & 0 & J_k(λ) \end{bmatrix},      (16)

for some matrix C ∈ C^{k×k} and vectors a, b ∈ C^k. This similarity allows us to study the eigeninformation of the shifted matrix Â indirectly, by considering the upper triangular matrix S of (16).

Lemma 4.1. Let C = [c_{i,j}] ∈ C^{k×k} and let a, b ∈ C^k. If S is given by (16), then, corresponding to the eigenvalue λ, S has the eigenspace

    E_λ = span{ [e_1; 0; 0], [N_k^† Ce_1; 0; e_1] },                                    if b_1 = 0, a_k ≠ 0, c_{k,1} = 0,
          span{ [e_1; 0; 0], [N_k^†( Ce_1 − (c_{k,1}/a_k) a ); −c_{k,1}/a_k; e_1] },    if b_1 = 0, a_k ≠ 0, c_{k,1} ≠ 0,
          span{ [e_1; 0; 0], [N_k^† a; 1; 0] },                                         if b_1 = 0, a_k = 0, c_{k,1} ≠ 0, or b_1 ≠ 0, a_k = 0,
          span{ [e_1; 0; 0], [N_k^†(a + Ce_1); 1; e_1], [N_k^† Ce_1; 0; e_1] },         if b_1 = 0, a_k = 0, c_{k,1} = 0,
          span{ [e_1; 0; 0] },                                                          if b_1 ≠ 0, a_k ≠ 0,      (17)

where N_k = λI_k − J_k(λ), b_1 = e_1^* b, and a_k = e_k^* a.

Proof. Let v = [x_1; x_2; x_3] be an eigenvector of S corresponding to the eigenvalue λ, where x_1, x_3 ∈ C^k and x_2 ∈ C. Then Sv = λv or, equivalently,

    (J_k(λ) − λI_k) x_1 = −x_2 a − C x_3,   b^* x_3 = 0,   (J_k(λ) − λI_k) x_3 = 0.

We then obtain all possible types of eigenspaces by considering the cases given in (17) one by one.

Similarly, we proceed to discuss all possible Jordan canonical forms of the shifted matrix (15) through this special block upper triangular matrix S given in (16). One point should be made clear first: due to the effect of the vectors a and b in S, it is too complicated to evaluate the generalized eigenvectors in the analogous way we did in Section 3.

We therefore seek to determine the Jordan canonical forms by means of a matrix transformation that simplifies the structure of S while leaving its eigeninformation unchanged. With this in mind, we start by choosing an invertible matrix

    Y = \begin{bmatrix} I_k & y & W \\ 0 & 1 & z^* \\ 0 & 0 & I_k \end{bmatrix} ∈ C^{(2k+1)×(2k+1)}.

It is easy to check that

    Y^{−1} = \begin{bmatrix} I_k & −y & −W + y z^* \\ 0 & 1 & −z^* \\ 0 & 0 & I_k \end{bmatrix}.

We then transform the matrix S of (16) by Y and Y^{−1}:

    Y^{−1} S Y = \begin{bmatrix} J_k(λ) & a + (J_k(λ) − λI)y & C̃ \\ 0 & λ & b^* − z^*(J_k(λ) − λI) \\ 0 & 0 & J_k(λ) \end{bmatrix},

where

    C̃ = [c̃_{i,j}]_{k×k} = ( J_k(λ) W − W J_k(λ) ) + C + a z^* − y b^* + y z^* ( J_k(λ) − λI ).

Specify the vectors a = [a_i] and b = [b_i] by their components, and let D = [d_{i,j}]_{k×k} = C + a z^* − y b^* + y z^* (J_k(λ) − λI). Notice first that if we choose

    y = [ y_1  −a_1  −a_2  ⋯  −a_{k−1} ]^⊤   and   z = [ b_2  b_3  ⋯  b_k  z_k ]^⊤,

then for any y_1 and z_k ∈ C we have

    ã := a + (J_k(λ) − λI)y = a_k e_k   and   b̃^* := b^* − z^*(J_k(λ) − λI) = b̄_1 e_1^⊤.      (18)

Next, observe that

    ( J_k(λ) W − W J_k(λ) ) e_1 = [ w_{2,1}, w_{3,1}, ..., w_{k,1}, 0 ]^⊤,
    ( J_k(λ) W − W J_k(λ) ) e_i = [ w_{2,i} − w_{1,i−1}, ..., w_{k,i} − w_{k−1,i−1}, −w_{k,i−1} ]^⊤,   2 ≤ i ≤ k,

where W = [w_{i,j}]_{k×k}. We can then eliminate the first k − 1 rows of C̃ by choosing

    w_{i+1,1} = −d_{i,1},   1 ≤ i ≤ k − 1,
    w_{i+1,j} = w_{i,j−1} − d_{i,j},   1 ≤ i ≤ k − 1, 2 ≤ j ≤ k,

so that

    c̃_{i,j} = 0,   1 ≤ i < k, 1 ≤ j ≤ k,      (19a)
    c̃_{k,j} = Σ_{l=0}^{j−1} d_{k−l, j−l},   1 ≤ j ≤ k.      (19b)

From formulae (18) and (19), it follows that by choosing appropriate vectors y, z and a matrix W, we can simplify the matrix S to

    S ∼ S̃ := \begin{bmatrix} J_k(λ) & ã & C̃ \\ 0 & λ & b̃^* \\ 0 & 0 & J_k(λ) \end{bmatrix},

where ã = a_k e_k and b̃ = b_1 e_1 are vectors in C^k, and C̃ = [c̃_{i,j}]_{k×k} satisfies c̃_{i,j} = 0 for 1 ≤ i ≤ k − 1 and 1 ≤ j ≤ k. Note that C̃ is a matrix with all entries equal to zero except possibly those in the last row; such a matrix is said to be in lower concentrated form [16]. This observation can be used to classify all possible eigenstructure types of S through the discussion of S̃. We list the classification as follows:

Case 1. b_1 ≠ 0, a_k ≠ 0.

1a. If c̃_{k,1} = ⋯ = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0 for some 1 ≤ i ≤ k − 1, then S ∼ J_{2k+1}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [(b_1/a_k) e_1; 0; 0], ..., [(b_1/a_k) e_k; 0; 0], [0; b_1; 0], [0; 0; e_1], ..., [0; 0; e_i], [0; τ_1; f_1], ..., [0; τ_{k−i}; f_{k−i}] },

where

    f_1 = e_{i+1} ∈ C^k,   f_j = e_{i+j} + (1/b_1) Σ_{s=1}^{j−1} τ_s e_{j−s} ∈ C^k,   2 ≤ j ≤ k − i,
    τ_j = −( e_k^* C̃ f_j ) / a_k ∈ C,   1 ≤ j ≤ k − i.

1b. If c̃_{k,1} = ⋯ = c̃_{k,k} = 0, then S ∼ J_{2k+1}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [(b_1/a_k) e_1; 0; 0], ..., [(b_1/a_k) e_k; 0; 0], [0; b_1; 0], [0; 0; e_1], ..., [0; 0; e_k] }.

Case 2. b_1 = 0, a_k ≠ 0.

2a. If c̃_{k,1} = ⋯ = c̃_{k,k−1} = 0 and c̃_{k,k} ≠ 0, then S ∼ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [a_k e_1; 0; 0], ..., [a_k e_k; 0; 0], [0; 1; 0] },
    γ_2 = { [0; 0; e_1], ..., [0; 0; e_{k−1}], [0; −c̃_{k,k}/a_k; e_k] }.

2b. If c̃_{k,1} = ⋯ = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0 for some 1 ≤ i < k − 1, then S ∼ J_{2k−i}(λ) ⊕ J_{i+1}(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; µ_1; n_1], ..., [m_{2k−i}; µ_{2k−i}; n_{2k−i}] },
    γ_2 = { [0; 0; e_1], ..., [0; 0; e_i], [0; −c̃_{k,i+1}/a_k; e_{i+1}] },

where

    m_j = c̃_{k,i+1} e_j for 1 ≤ j ≤ k,   m_j = 0 for k + 1 ≤ j ≤ 2k − i;
    µ_j = 0 for j < 2k − i,   µ_{2k−i} = −(1/a_k) Σ_{s=0}^{k−i−1} α_s c̃_{k,k−s};
    n_j = 0,   j ≤ k − i,
    n_j = Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s},   k − i + 1 ≤ j < 2k − 2i,
    n_j = Σ_{s=0}^{k−i−1} α_s e_{j−k+i−s},   2k − 2i ≤ j ≤ 2k − i,

with α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+1+j−s} ∈ C for 1 ≤ j ≤ k − i − 1. Here, if k = i + 2, the second case in the definition of n_j is vacuous and the third applies directly.

2c. If c̃_{k,1} = ⋯ = c̃_{k,k} = 0, then S ∼ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [a_k e_1; 0; 0], ..., [a_k e_k; 0; 0], [0; 1; 0] },
    γ_2 = { [0; 0; e_1], ..., [0; 0; e_k] }.

Case 3. b_1 ≠ 0, a_k = 0.

3a. If c̃_{k,1} = ⋯ = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0 for some 1 ≤ i ≤ k − 1, then S ∼ J_{2k−i}(λ) ⊕ J_{i+1}(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; µ_1; n_1], ..., [m_{2k−i}; µ_{2k−i}; n_{2k−i}] },
    γ_2 = { [0; b_1; 0], [0; 0; e_1], ..., [0; 0; e_i] },

where m_j and n_j are as in Case 2b, scaled in the third block by 1/b_1, and the middle components are now crossed inside the chain rather than at its top:

    µ_j = b̄_1 (n_{j+1})_1 for 1 ≤ j < 2k − i,   µ_{2k−i} = 0,

with the same coefficients α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+1+j−s}. Here, if k = i + 1, the vacuous cases in the definition of n_j are skipped and the last case applies directly.

3b. If c̃_{k,1} = ⋯ = c̃_{k,k} = 0, then S ∼ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [0; b_1; 0], [0; 0; e_1], ..., [0; 0; e_k] },
    γ_2 = { [e_1; 0; 0], ..., [e_k; 0; 0] }.

Case 4. b_1 = 0, a_k = 0.

4a. If c̃_{k,1} ≠ 0, then S ∼ J_{2k}(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [c̃_{k,1} e_1; 0; 0], ..., [c̃_{k,1} e_k; 0; 0], [0; 0; ψ_1 e_1], ..., [0; 0; Σ_{s=1}^{k} ψ_s e_{k−s+1}] },
    γ_2 = { [0; 1; 0] },

where ψ_1 = 1 and ψ_j = −(1/c̃_{k,1}) Σ_{s=1}^{j−1} ψ_s c̃_{k,j−s+1} ∈ C for j = 2, ..., k.

4b. If c̃_{k,1} = ⋯ = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0 for some 1 ≤ i ≤ k − 1, then S ∼ J_{2k−i}(λ) ⊕ J_i(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; 0; n_1], ..., [m_{2k−i}; 0; n_{2k−i}] },
    γ_2 = { [0; 0; e_1], ..., [0; 0; e_i] },
    γ_3 = { [0; 1; 0] },

where

    m_j = c̃_{k,i+1} e_j for 1 ≤ j ≤ k,   m_j = 0 for k + 1 ≤ j ≤ 2k − i;
    n_j = 0,   j ≤ k − i,
    n_j = Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s},   k − i + 1 ≤ j < 2k − 2i,
    n_j = Σ_{s=0}^{k−i−1} α_s e_{j−k+i−s},   2k − 2i ≤ j ≤ 2k − i,

with α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+1+j−s} ∈ C for j = 1, ..., k − i − 1.

Here, if k = i + 1, the second case in the definition of n_j is vacuous and the third applies directly.

4c. If c̃_{k,1} = ⋯ = c̃_{k,k} = 0, then S ∼ J_k(λ) ⊕ J_k(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [e_1; 0; 0], ..., [e_k; 0; 0] },
    γ_2 = { [0; 0; e_1], ..., [0; 0; e_k] },
    γ_3 = { [0; 1; 0] }.

Now we have all possible eigenstructure types of the matrix S. In view of the classification above, it turns out that the geometric multiplicity of λ for the shifted matrix Â defined in (15) is at most three.
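
A quick randomized check of this bound on the geometric multiplicity, sampling small integer choices of a, b and C in (16) (an illustrative spot check, not an exhaustive verification):

    import numpy as np

    k, lam = 3, 1.0
    n = 2 * k + 1
    Jk = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)
    rng = np.random.default_rng(5)

    worst = 0
    for trial in range(200):
        a = rng.integers(-1, 2, k)          # entries in {-1, 0, 1}
        b = rng.integers(-1, 2, k)
        C = rng.integers(-1, 2, (k, k)).astype(float)
        S = np.zeros((n, n))
        S[:k, :k] = Jk; S[k, k] = lam; S[k + 1:, k + 1:] = Jk
        S[:k, k] = a; S[k, k + 1:] = b; S[:k, k + 1:] = C
        gm = n - np.linalg.matrix_rank(S - lam * np.eye(n))
        worst = max(worst, gm)
    print(worst)   # never exceeds 3, consistent with the classification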

5. Conclusions and Open Problems

The methods developed in this paper shift an eigenvalue of algebraic multiplicity greater than one, in such a way that the first half of the corresponding generalized eigenvectors is kept unchanged. From the point of view of applications, the approach has the advantage that one can apply the shift technique to speed up or stabilize a given numerical algorithm. There are, of course, many other kinds of eigenvalue shift problems arising in a wide range of applications in science and engineering. In many cases, one is required to shift only some (not all) of the eigenvalues of a given Jordan block, to change multiple eigenvalues simultaneously, or to replace complex conjugate eigenvalues of a real matrix. Such questions are under investigation and will be reported elsewhere.

Acknowledgement

The authors wish to thank Professor Wen-Wei Lin (National Chiao Tung University) for many interesting and valuable suggestions on the manuscript. This research work is partially supported by the National Science Council and the National Center for Theoretical Sciences in Taiwan.

References

[1] A. Brauer, Limits for the characteristic roots of a matrix. IV. Applications to stochastic matrices, Duke Math. J. 19 (1952) 75-91.

[2] G. H. Golub, C. F. Van Loan, Matrix Computations, 3rd Edition, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 1996.

[3] R. Bellman, Introduction to Matrix Analysis, Vol. 19 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997. Reprint of the second (1970) edition, with a foreword by Gene Golub.

[4] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, Vol. 16 of Frontiers in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1995. With separately available software.

[5] M. T. Chu, G. H. Golub, Structured inverse eigenvalue problems, Acta Numer. 11 (2002) 1-71.

[6] M. T. Chu, G. H. Golub, Structured inverse eigenvalue problems, Acta Numer. 11 (2002) 1-71.

[7] D. Chu, M. T. Chu, W.-W. Lin, Quadratic model updating with symmetry, positive definiteness, and no spill-over, SIAM J. Matrix Anal. Appl. 31 (2) (2009).

[8] M. M. Lin, B. Dong, M. T. Chu, Semi-definite programming techniques for structured quadratic inverse eigenvalue problems, Numer. Algorithms 53 (4) (2010).

[9] C.-H. Guo, B. Iannazzo, B. Meini, On the doubling algorithm for a (shifted) nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 29 (4) (2007) 1083-1100.

[10] D. A. Bini, B. Iannazzo, F. Poloni, A fast Newton's method for a nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 30 (1) (2008).

[11] R. A. Horn, S. Serra-Capizzano, A general setting for the parametric Google matrix, Internet Math. 3 (4) (2006).

[12] G. Wu, Eigenvalues and Jordan canonical form of a successively rank-one updated complex matrix with applications to Google's PageRank problem, J. Comput. Appl. Math. 216 (2) (2008).

[13] A. J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Trans. Automat. Control 24 (6) (1979) 913-921.

[14] C.-H. Guo, Efficient methods for solving a nonsymmetric algebraic Riccati equation arising in stochastic fluid models, J. Comput. Appl. Math. 192 (2) (2006).

[15] R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.

[16] C. R. Johnson, E. A. Schreiner, Explicit Jordan form for certain block triangular matrices, in: Proceedings of the First Conference of the International Linear Algebra Society (Provo, UT, 1989), Linear Algebra Appl. 150 (1991).


LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Problems for M 10/26:

Problems for M 10/26: Math, Lesieutre Problem set # November 4, 25 Problems for M /26: 5 Is λ 2 an eigenvalue of 2? 8 Why or why not? 2 A 2I The determinant is, which means that A 2I has 6 a nullspace, and so there is an eigenvector

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

On a quadratic matrix equation associated with an M-matrix

On a quadratic matrix equation associated with an M-matrix Article Submitted to IMA Journal of Numerical Analysis On a quadratic matrix equation associated with an M-matrix Chun-Hua Guo Department of Mathematics and Statistics, University of Regina, Regina, SK

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 27, No. 2, pp. 305 32 c 2005 Society for Industrial and Applied Mathematics JORDAN CANONICAL FORM OF THE GOOGLE MATRIX: A POTENTIAL CONTRIBUTION TO THE PAGERANK COMPUTATION

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

MATH 1553 PRACTICE MIDTERM 3 (VERSION B)

MATH 1553 PRACTICE MIDTERM 3 (VERSION B) MATH 1553 PRACTICE MIDTERM 3 (VERSION B) Name Section 1 2 3 4 5 Total Please read all instructions carefully before beginning. Each problem is worth 10 points. The maximum score on this exam is 50 points.

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

Lecture 11: Eigenvalues and Eigenvectors

Lecture 11: Eigenvalues and Eigenvectors Lecture : Eigenvalues and Eigenvectors De nition.. Let A be a square matrix (or linear transformation). A number λ is called an eigenvalue of A if there exists a non-zero vector u such that A u λ u. ()

More information

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

Linear Algebra Practice Final

Linear Algebra Practice Final . Let (a) First, Linear Algebra Practice Final Summer 3 3 A = 5 3 3 rref([a ) = 5 so if we let x 5 = t, then x 4 = t, x 3 =, x = t, and x = t, so that t t x = t = t t whence ker A = span(,,,, ) and a basis

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information