Solving the Sylvester Equation AX-XB=C when $\sigma(A)\cap\sigma(B)\neq\emptyset$


Electronic Journal of Linear Algebra, Volume 35, Article. Solving the Sylvester Equation AX-XB=C when $\sigma(A)\cap\sigma(B)\neq\emptyset$. Nebojša Č. Dinčić, Faculty of Sciences and Mathematics, University of Niš, ndincic@hotmail.com. Recommended Citation: Dinčić, Nebojša Č. (2019), "Solving the Sylvester Equation AX-XB=C when $\sigma(A)\cap\sigma(B)\neq\emptyset$", Electronic Journal of Linear Algebra, Volume 35, pp. DOI: This Article is brought to you for free and open access by Wyoming Scholars Repository. It has been accepted for inclusion in Electronic Journal of Linear Algebra by an authorized editor of Wyoming Scholars Repository. For more information, please contact scholcom@uwyo.edu.

SOLVING THE SYLVESTER EQUATION AX − XB = C WHEN σ(A) ∩ σ(B) ≠ ∅

NEBOJŠA Č. DINČIĆ

Abstract. A method for solving the Sylvester equation AX − XB = C in the complex matrix case, when σ(A) ∩ σ(B) ≠ ∅, by using the Jordan normal form, is given. Also, the approach via the Schur decomposition is presented.

Key words. Sylvester equation, Jordan normal form, Schur decomposition.

AMS subject classifications. 15A24.

1. An introduction. The Sylvester equation

(1.1) AX − XB = C,

where A ∈ C^{m×m}, B ∈ C^{n×n} and C ∈ C^{m×n} are given and X ∈ C^{m×n} is an unknown matrix, is a very popular topic in linear algebra, with numerous applications in control theory, signal processing, image restoration, engineering, the solving of ordinary and partial differential equations, etc. It is named for the famous mathematician J. J. Sylvester, who was the first to prove that this equation in the matrix case has a unique solution if and only if σ(A) ∩ σ(B) = ∅ [17]. One important special case of the Sylvester equation is the continuous-time Lyapunov matrix equation (AX + XA* = C).

In 1952, Roth [15] proved that for operators A, B on finite-dimensional spaces the matrix [A C; 0 B] is similar to [A 0; 0 B] if and only if the equation AX − XB = C has a solution X. Rosenblum [14] proved that if A and B are operators such that σ(A) ∩ σ(B) = ∅, then the equation AX − XB = Y has a unique solution X for every operator Y. If we define the linear operator T on the space of operators by T(X) = AX − XB, the conclusion of the Rosenblum theorem can be restated: T is invertible if σ(A) ∩ σ(B) = ∅. Kleinecke showed that when A and B are operators on the same space and σ(A) ∩ σ(B) ≠ ∅, then the operator T is not invertible, see [14]. For an exhaustive survey on these topics, please see [2] and [18] and the references therein. In the recent paper of Li and Zhou [12], an extensive review of the literature can be found, with a brief classification of existing methods for solving the Sylvester equation. Drazin [6] recently gave another equivalent condition.
Theorem 1.1. For any field F, any r, s ∈ N and any square matrices A ∈ M_r(F), B ∈ M_s(F) whose eigenvalues all lie in F, the following three properties of the pair A, B are equivalent:
i) for any given polynomials f, g ∈ F[t], there exists h ∈ F[t] such that f(A) = h(A) and g(B) = h(B);
ii) A and B share no eigenvalue in common;
iii) for r × s matrices X over F, AX = XB implies X = 0.

Received by the editors on January 1, 2018. Accepted for publication on December 2, 2018. Handling Editor: Froilán Dopico. Faculty of Sciences and Mathematics, University of Niš, PO Box 224, 18000 Niš, Serbia (ndincic@hotmail.com). The author was supported by the Ministry of Science, Republic of Serbia, grant no.

We recall some results where the solution to the Sylvester equation is given in various forms.

Proposition 1.2. ([2, p. 9]) Let A and B be operators such that σ(B) ⊂ {z : |z| < ρ} and σ(A) ⊂ {z : |z| > ρ} for some ρ > 0. Then the solution of the equation AX − XB = Y is

X = Σ_{n≥0} A^{−(n+1)} Y B^n.

We will often use the following special form of the previous result.

Proposition 1.3. In the complex matrix case, the Sylvester equation AX − XB = C has a unique solution if and only if σ(A) ∩ σ(B) = ∅, and the solution is given by

X = Σ_{k≥0} A^{−(k+1)} C B^k

for invertible A, and by

X = −Σ_{k≥0} A^k C B^{−(k+1)}

for an invertible matrix B.

Proposition 1.4. Let A and B be operators whose spectra are contained in the open right half plane and the open left half plane, respectively. Then the solution of the equation AX − XB = Y can be expressed as

X = ∫_0^∞ e^{−tA} Y e^{tB} dt.

Proposition 1.5. Let Γ be a union of closed contours in the plane, with total winding numbers 1 around σ(A) and 0 around σ(B). Then the solution of the equation AX − XB = Y can be expressed as

X = (1/2πi) ∫_Γ (A − ζ)^{−1} Y (B − ζ)^{−1} dζ.

Solving the Sylvester equation in the case when σ(A) ∩ σ(B) ≠ ∅ is rather complicated and has not been thoroughly studied yet. According to [3], only Ma [13], and Datta and Datta [4] treated the particular cases when B(= A) has only simple eigenvalues or C and B(= A^T) are unreduced Hessenberg matrices, respectively. In the paper [3], two special cases, when both A and B are symmetric or skew-symmetric matrices, were considered, by using the notion of the eigenprojection. In [11], it is shown that in the case σ(A) ∩ σ(B) = ∅ (A ∈ R^{m×m}, B ∈ R^{n×n}), the solution of the Sylvester equation is X = q(A)^{−1} η(A, C, B), which is a polynomial in the matrices A, B and C. Also, it is shown that when σ(A) ∩ σ(B) ≠ ∅, the solution set is contained in that of the linear algebraic equation q(A)X = η(A, C, B). Recall that q(s) = s^n + β_{n−1}s^{n−1} + ⋯ + β_1 s + β_0 is the characteristic polynomial of B and η(A, C, B) = Σ_{k=1}^{n} β_k η(k−1, A, C, B) (with β_n = 1), where η(k, A, C, B) = Σ_{i=0}^{k} A^{k−i} C B^i.
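As a quick numerical illustration (not from the paper), the power series of Proposition 1.3 can be checked with numpy. To keep the sum finite without any convergence argument, we take B nilpotent, so that only finitely many terms of the series are nonzero; the matrices here are arbitrary choices of ours.

```python
import numpy as np

# Proposition 1.3 (sketch): for invertible A, the solution of AX - XB = C
# is X = sum_{k>=0} A^{-(k+1)} C B^k.  We take B nilpotent (B^2 = 0),
# so the series terminates after two terms.
A = np.array([[3.0, 1.0], [0.0, 3.0]])   # sigma(A) = {3}, invertible
B = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma(B) = {0}, B^2 = 0
C = np.array([[1.0, 2.0], [3.0, 4.0]])

Ainv = np.linalg.inv(A)
X = sum(np.linalg.matrix_power(Ainv, k + 1) @ C @ np.linalg.matrix_power(B, k)
        for k in range(2))               # terms vanish for k >= 2

assert np.allclose(A @ X - X @ B, C)     # X indeed solves AX - XB = C
```

Since σ(A) ∩ σ(B) = ∅ here, this X is the unique solution.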
In [5], the equation AX̃ = XB is considered by using the Kronecker canonical form of the matrix pencil A + λB, where X̃ denotes either the transpose or the conjugate transpose of X. Recently, Li and Zhou [12] used the method based on spectral decompositions of the matrices A and B. Note that this method also works in the case when the spectra of A and B are not disjoint.

In the paper [18], homogeneous and nonhomogeneous Sylvester-conjugate matrix equations of the forms AX̄ + BY = XF and AX̄ + BY = XF + R are considered.

In this paper, we deal with the Sylvester equation in the complex matrix case, when σ(A) ∩ σ(B) ≠ ∅. Because the operator T(X) = AX − XB is not invertible in this case, as Kleinecke showed, we must find the consistency condition, and the general solution in the case of consistency. We will use the Jordan normal form for the matrices A and B (a similar method as Gantmacher used in [7, Ch. VIII], but for the homogeneous case only!), so the Sylvester equation will become a set of simpler equations of the form J_m(λ)Z − ZJ_n(µ) = W, where W ∈ C^{m×n} is a given matrix, Z ∈ C^{m×n} is an unknown matrix, and λ, µ are complex numbers. In the paper, we deal mainly with such simpler equations, discussing the consistency condition and describing the algorithm for finding the general solution. Finally, we give the result characterizing the most general case, as well as the approach via the Schur decomposition.

2. Reducing the problem to the most elementary equations. Let the matrices A ∈ C^{m×m} and B ∈ C^{n×n} be given, and suppose that W = σ(A) ∩ σ(B) = {λ_1, ..., λ_s}. It is a well-known fact (see e.g., [1]) that any square complex matrix can be reduced to the Jordan canonical form. Therefore, there exist regular matrices S ∈ C^{m×m} and T ∈ C^{n×n} such that A = S J_A S^{−1} and B = T J_B T^{−1}, or, more precisely,

(2.2) A = S [J_{A1} 0; 0 J_{A2}] S^{−1},  B = T [J_{B1} 0; 0 J_{B2}] T^{−1},

where J_{A1} (respectively, J_{B1}) consists of just those Jordan matrices corresponding to the eigenvalues from σ(A) \ W (respectively, σ(B) \ W), and J_{A2} and J_{B2} are those Jordan matrices corresponding to the eigenvalues from the set W:

J_{A2} = diag{J(λ_1; p_{11}, p_{12}, ..., p_{1,k_1}), ..., J(λ_s; p_{s1}, p_{s2}, ..., p_{s,k_s})},
J_{B2} = diag{J(λ_1; q_{11}, q_{12}, ..., q_{1,l_1}), ..., J(λ_s; q_{s1}, q_{s2}, ..., q_{s,l_s})}.
Here p_{ij}, j = 1, ..., k_i, i = 1, ..., s, and q_{ij}, j = 1, ..., l_i, i = 1, ..., s, are natural numbers, and k_i and l_i are the geometric multiplicities of the eigenvalue λ_i, i = 1, ..., s, of A and B, respectively; their algebraic multiplicities are given by m_i = p_{i1} + p_{i2} + ⋯ + p_{i,k_i} and n_i = q_{i1} + q_{i2} + ⋯ + q_{i,l_i}, where i = 1, ..., s. We will use the notation

J(λ; t_1, ..., t_k) = diag{J_{t_1}(λ), ..., J_{t_k}(λ)} = J_{t_1}(λ) ⊕ ⋯ ⊕ J_{t_k}(λ),

where J_{t_i}(λ), i = 1, ..., k, is the t_i × t_i Jordan block

J_{t_i}(λ) = [λ 1; λ 1; ⋱ ⋱; λ 1; λ] ∈ C^{t_i × t_i},

with λ on the main diagonal and 1 on the superdiagonal. Recall that a matrix A is called non-derogatory if every eigenvalue of A has geometric multiplicity 1, which holds if and only if to each distinct eigenvalue there corresponds exactly one Jordan block (see [1]).

If we substitute (2.2) in (1.1), and denote S^{−1}XT = Y and S^{−1}CT = D, the equation becomes

(2.3) [J_{A1} 0; 0 J_{A2}] Y − Y [J_{B1} 0; 0 J_{B2}] = D.

We will partition the matrices Y = [Y_ij]_{2×2} and D = [D_ij]_{2×2} in accordance with the partition of J_A and J_B. Now (2.3) becomes

[J_{A1} 0; 0 J_{A2}] [Y_11 Y_12; Y_21 Y_22] − [Y_11 Y_12; Y_21 Y_22] [J_{B1} 0; 0 J_{B2}] = [D_11 D_12; D_21 D_22],

which reduces to the following four simpler Sylvester equations:

J_{A1}Y_11 − Y_11 J_{B1} = D_11,  J_{A1}Y_12 − Y_12 J_{B2} = D_12,
J_{A2}Y_21 − Y_21 J_{B1} = D_21,  J_{A2}Y_22 − Y_22 J_{B2} = D_22.

By Proposition 1.3, the first three equations have unique solutions, because σ(A_1) ∩ σ(B_1) = ∅, σ(A_1) ∩ σ(B_2) = ∅, σ(A_2) ∩ σ(B_1) = ∅, given by the appropriate power series. The fourth equation does not have a unique solution (when it is consistent), because σ(A_2) = σ(B_2) = W, so we turn our attention to this case. Therefore, throughout the paper, we can, without loss of generality, consider the Sylvester equation in the form AX − XB = C, where A and B are square complex matrices (in general of different size) already reduced to their Jordan forms and such that σ(A) = σ(B). If we partition X = [X_ij]_{s×s} and C = [C_ij]_{s×s} in accordance with the already given decompositions for A and B, and put them into the equation AX − XB = C, we get s² simpler Sylvester equations of the form

(2.4) J(λ_i; p_{i1}, ..., p_{i,k_i})X_ij − X_ij J(λ_j; q_{j1}, ..., q_{j,l_j}) = C_ij,  i, j = 1, ..., s.

For i ≠ j we have σ(J(λ_i; p_{i1}, ..., p_{i,k_i})) ∩ σ(J(λ_j; q_{j1}, ..., q_{j,l_j})) = {λ_i} ∩ {λ_j} = ∅, and by Proposition 1.3, this case also has a unique solution. Therefore, among the s² equations in (2.4) there are s² − s which are uniquely solvable, and the remaining s equations (for i = j), after an appropriate translation, reduce to Σ_{i=1}^{s} k_i l_i simple Sylvester equations of the form

J_{p_{iu}}(0)X^(ii)_{u,v} − X^(ii)_{u,v} J_{q_{iv}}(0) = C^(ii)_{u,v},  u = 1, ..., k_i, v = 1, ..., l_i, i = 1, ..., s.

Because of that, we will investigate this important particular case in detail.

2.1. Solving the equation J_m(0)X − XJ_n(0) = C. From the previous considerations, we see that Jordan matrices play an important role in the paper.
Recall that multiplying a matrix C ∈ C^{m×n} by the Jordan block (or transposed Jordan block) corresponding to 0 acts like shifting the rows (or columns) and inserting a row (or a column) of all zeros where necessary: J_m(0)C shifts up, CJ_n(0) shifts to the right, J_m(0)^T C shifts down, while CJ_n(0)^T shifts to the left. More precisely, the following lemma holds.

Lemma 2.1. For a given matrix C = [c_ij] ∈ C^{m×n} we have:
i) [J_m(0)C]_ij = c_{i+1,j} for i = 1, ..., m−1, and 0 for i = m.
ii) [CJ_n(0)]_ij = c_{i,j−1} for j = 2, ..., n, and 0 for j = 1.
iii) [J_m(0)^T C]_ij = c_{i−1,j} for i = 2, ..., m, and 0 for i = 1.

iv) [CJ_n(0)^T]_ij = c_{i,j+1} for j = 1, ..., n−1, and 0 for j = n.
Moreover, for α = 0, ..., m−1 and β = 0, ..., n−1,
v) [J_m(0)^α C J_n(0)^β]_ij = c_{i+α,j−β} for i = 1, ..., m−α and j = β+1, ..., n, and 0 for i = m−α+1, ..., m or j = 1, ..., β.
vi) [(J_m(0)^T)^α C J_n(0)^β]_ij = c_{i−α,j−β} for i = α+1, ..., m and j = β+1, ..., n, and 0 for i = 1, ..., α or j = 1, ..., β.

Example 2.2. We consider the case A = J_4(0), B = J_3(0), i.e., the equation

(2.5) J_4(0)X − XJ_3(0) = C.

Because σ(A) ∩ σ(B) = {0}, we expect no unique solution. Let us see for which C there are solutions, and then let us characterize all of them. The matrix equation (2.5) (with C = [c_ij], X = [x_ij] ∈ C^{4×3}) gives us the following linear system:

x_21 = c_11, x_22 − x_11 = c_12, x_23 − x_12 = c_13,
x_31 = c_21, x_32 − x_21 = c_22, x_33 − x_22 = c_23,
x_41 = c_31, x_42 − x_31 = c_32, x_43 − x_32 = c_33,
0 = c_41, −x_41 = c_42, −x_42 = c_43.

We see that some conditions must be imposed on the entries of the matrix C:

c_41 = 0,  c_31 + c_42 = 0,  c_21 + c_32 + c_43 = 0.

Therefore, any matrix C for which the equation (2.5) is consistent is of the form

C = [c_11 c_12 c_13; c_21 c_22 c_23; c_31 c_32 c_33; 0 −c_31 −c_21−c_32].

The solutions of (2.5) are

X = [x_11 x_12 x_13; c_11 x_11+c_12 x_12+c_13; c_21 c_11+c_22 x_11+c_12+c_23; c_31 c_21+c_32 c_11+c_22+c_33],

where x_11, x_12, x_13 are arbitrary complex numbers. We can rewrite this general solution X in the following more informative form:

X = [x_11 x_12 x_13; 0 x_11 x_12; 0 0 x_11; 0 0 0] + [0 0 0; c_11 c_12 c_13; c_21 c_11+c_22 c_12+c_23; c_31 c_21+c_32 c_11+c_22+c_33].

By Lemma 2.1, we can express this as

X = [p_2(J_3(0)); 0_{1×3}] + J_4(0)^T Σ_{k=0}^{2} (J_4(0)^T)^k C J_3(0)^k,

where p_2(J_3(0)) = x_11 I_3 + x_12 J_3(0) + x_13 J_3(0)^2 is an arbitrary polynomial in the matrix J_3(0), with complex coefficients in general, and degree at most 2.
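The shift actions of Lemma 2.1 and the closed form of Example 2.2 are easy to replicate in numpy; the following sketch (our own illustration, with a hypothetical helper `Jz` for the nilpotent Jordan block) enforces the three consistency conditions on C and then verifies the particular solution.

```python
import numpy as np

# Nilpotent Jordan block J_m(0): ones on the first superdiagonal.
Jz = lambda m: np.eye(m, k=1)

# Lemma 2.1 (illustration): J_m(0)C shifts the rows of C up,
# C J_n(0) shifts the columns of C to the right.
C = np.arange(1.0, 13.0).reshape(4, 3)          # generic 4x3 matrix
assert np.allclose(Jz(4) @ C, np.vstack([C[1:], np.zeros((1, 3))]))
assert np.allclose(C @ Jz(3), np.hstack([np.zeros((4, 1)), C[:, :2]]))

# Example 2.2: J_4(0)X - XJ_3(0) = C is consistent iff
# c41 = 0, c31 + c42 = 0, c21 + c32 + c43 = 0.  Enforce that:
C[3, 0] = 0.0
C[3, 1] = -C[2, 0]
C[3, 2] = -(C[1, 0] + C[2, 1])

# Particular part of the general solution:
# X_p = sum_{k=0}^{2} (J_4(0)^T)^{k+1} C J_3(0)^k.
Xp = sum(np.linalg.matrix_power(Jz(4).T, k + 1) @ C
         @ np.linalg.matrix_power(Jz(3), k) for k in range(3))
assert np.allclose(Jz(4) @ Xp - Xp @ Jz(3), C)
```

Adding any stacked polynomial [p_2(J_3(0)); 0] to Xp gives the remaining solutions.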

This example motivates us to prove a general result concerning this topic.

Theorem 2.3. The Sylvester equation

(2.6) J_m(0)X − XJ_n(0) = C,

when m ≥ n, is consistent if and only if

(2.7) Σ_{k=0}^{n−1} J_m(0)^{m−1−k} C J_n(0)^k = 0,

or, equivalently,

(2.8) Σ_{k=0}^{p−1} c_{m−k,p−k} = 0,  p = 1, ..., n,

and its general solution (which depends on n complex parameters) is given as

(2.9) X = [p_{n−1}(J_n(0)); 0_{(m−n)×n}] + J_m(0)^T Σ_{k=0}^{n−1} (J_m(0)^T)^k C J_n(0)^k,

where p_{n−1} is an arbitrary polynomial of degree at most n−1.

Proof. First, we show that the conditions (2.7) and (2.8) are equivalent. Let us denote S = Σ_{k=0}^{n−1} S_k, where S_k = J_m(0)^{m−1−k} C J_n(0)^k, k = 0, ..., n−1. By Lemma 2.1.v), for α = m−k−1 and β = k, we have

[S_k]_ij = c_{i+m−1−k,j−k} for i = 1, ..., k+1 and j = k+1, ..., n, and 0 for i = k+2, ..., m or j = 1, ..., k.

It is easy to see that [S_k]_ij = 0 for i > j and [S_k]_ij = [S_{k+1}]_{i+1,j+1}; therefore, it is enough to consider only the first row (or the n-th column). We have

[S_k]_1j = c_{m−k,j−k} for j = k+1, ..., n, and 0 for j = 1, ..., k.

Therefore,

[S]_1j = Σ_{k=0}^{j−1} [S_k]_1j = Σ_{k=0}^{j−1} c_{m−k,j−k},  j = 1, ..., n.

Note that the summation index is changed to k = 0, ..., j−1 because of k+1 ≤ j ≤ n and k ≤ n−1. Hence, we proved (2.7) ⟺ (2.8).

(⇒): Suppose that the equation (2.6) is consistent, which means that for some Ĉ there is X̂ such that J_m(0)X̂ − X̂J_n(0) = Ĉ. We have:

Σ_{k=0}^{n−1} J_m(0)^{m−1−k} Ĉ J_n(0)^k = Σ_{k=0}^{n−1} J_m(0)^{m−1−k} (J_m(0)X̂ − X̂J_n(0)) J_n(0)^k
= Σ_{k=0}^{n−1} (J_m(0)^{m−k} X̂ J_n(0)^k − J_m(0)^{m−1−k} X̂ J_n(0)^{k+1})
= J_m(0)^m X̂ − J_m(0)^{m−n} X̂ J_n(0)^n = 0,

since the sum telescopes and J_m(0)^m = 0, J_n(0)^n = 0,

so the consistency condition (2.7) holds.

(⇐): Now we prove, by direct checking, that X given by (2.9) is indeed a solution under the consistency condition (2.7). For the first term in (2.9), writing P = p_{n−1}(J_n(0)), we have

J_m(0)[P; 0_{(m−n)×n}] − [P; 0_{(m−n)×n}]J_n(0) = [J_n(0)P − PJ_n(0); 0_{(m−n)×n}] = 0,

because P is a polynomial in J_n(0) and so commutes with it. Hence it remains to check that the particular part X_p = J_m(0)^T Σ_{k=0}^{n−1} (J_m(0)^T)^k C J_n(0)^k satisfies the equation. Denoting T = Σ_{k=0}^{n−1} T_k, T_k = (J_m(0)^T)^k C J_n(0)^k, we have:

J_m(0)X_p − X_p J_n(0) = Σ_{k=0}^{n−1} J_m(0)(J_m(0)^T)^{k+1} C J_n(0)^k − Σ_{k=0}^{n−1} (J_m(0)^T)^{k+1} C J_n(0)^{k+1}
= (J_m(0)J_m(0)^T − I_m) Σ_{k=0}^{n−1} (J_m(0)^T)^k C J_n(0)^k + C,

since (J_n(0))^n = 0. Because J_m(0)J_m(0)^T − I_m = −e_m e_m^T, if we obtain that all entries in the m-th row of the matrix T are zero, we have the proof. By Lemma 2.1.vi), for α = β = k, we have:

[T_k]_ij = c_{i−k,j−k} for i = k+1, ..., m and j = k+1, ..., n, and 0 for i = 1, ..., k or j = 1, ..., k.

We proceed:

[T_k]_mj = c_{m−k,j−k} for j = k+1, ..., n, and 0 for j = 1, ..., k,

and therefore,

[T]_mj = Σ_{k=0}^{j−1} [T_k]_mj = Σ_{k=0}^{j−1} c_{m−k,j−k},  j = 1, ..., n.

Note that again the summation index is changed to k = 0, ..., j−1 because of k+1 ≤ j ≤ n and k ≤ n−1. The condition (2.7), or equivalently (2.8), ensures that [T]_mj = 0, j = 1, ..., n, so X given by (2.9) is, under the condition (2.7), or equivalently (2.8), indeed a solution, and the equation (2.6) is consistent.

Remark 2.4. For a given matrix C ∈ C^{m×n}, m ≥ n, the set of its entries c_ij such that i − j = m − p for given p = 1, ..., n will be called the p-th small subdiagonal of the matrix C. The condition (2.8) actually means that

the sum over each of the n small subdiagonals must be zero, which is easy to check. However, for constructing the solutions of more general Sylvester equations from the most simple ones, the condition (2.7) is more appropriate.

We remark that the particular solution in (2.9),

X_p = J_m(0)^T Σ_{k=0}^{n−1} (J_m(0)^T)^k C J_n(0)^k,

is completely determined by the matrix C, more precisely, by the independent part of C (the upper (m−1) × n submatrix), and it can be expressed in an equivalent, but for computational purposes more practical, form as:

[X_p]_ij = 0 for i = 1, and [X_p]_ij = Σ_{k=0}^{min{i−1,j}−1} c_{i−1−k,j−k} for i = 2, ..., m.

Let us step away a bit, and consider solving the equation (2.6) by the well-known Kronecker product method. We recall the vectorization operator vec : C^{m×n} → C^{mn}, vec(A) = [a_1^T a_2^T ⋯ a_n^T]^T, where a_k denotes the k-th column of the matrix A. Also, recall that the Kronecker product of A ∈ C^{m×n} and B ∈ C^{p×q} is the mp × nq block matrix A ⊗ B = [a_ij B]. If we solve the equation J_m(0)X − XJ_n(0) = C, m ≥ n, by using the well-known method with the operator vec and the Kronecker product, we have

(2.10) (I_n ⊗ J_m(0) − J_n(0)^T ⊗ I_m) vec(X) = vec(C);

this linear system is consistent if and only if r([I_n ⊗ J_m(0) − J_n(0)^T ⊗ I_m, vec(C)]) = r(I_n ⊗ J_m(0) − J_n(0)^T ⊗ I_m). The system (2.10) in matrix form (zero submatrices are omitted) is

[J_m(0); −I_m J_m(0); ⋱ ⋱; −I_m J_m(0)] [X_1; X_2; ⋮; X_n] = [C_1; C_2; ⋮; C_n],

i.e., J_m(0)X_1 = C_1, J_m(0)X_2 = C_2 + X_1, ..., J_m(0)X_n = C_n + X_{n−1}.
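The entrywise consistency test (2.8) and the particular solution from (2.9) can be packaged as a small function; this is our own sketch (helper names `Jz`, `solve_jordan_zero` are not from the paper), restricted to the m ≥ n case of Theorem 2.3.

```python
import numpy as np

Jz = lambda m: np.eye(m, k=1)  # J_m(0)

def solve_jordan_zero(C):
    """Particular solution of J_m(0)X - XJ_n(0) = C for m >= n (Theorem 2.3).

    Returns X_p if condition (2.8) holds (the sum over each of the n small
    subdiagonals of C is zero), otherwise None.
    """
    m, n = C.shape
    assert m >= n
    # (2.8): sum_{k=0}^{p-1} c_{m-k, p-k} = 0 for p = 1, ..., n (1-based)
    for p in range(1, n + 1):
        if abs(sum(C[m - 1 - k, p - 1 - k] for k in range(p))) > 1e-10:
            return None
    # (2.9), particular part: X_p = sum_k (J_m(0)^T)^{k+1} C J_n(0)^k
    return sum(np.linalg.matrix_power(Jz(m).T, k + 1) @ C
               @ np.linalg.matrix_power(Jz(n), k) for k in range(n))

# A consistent right-hand side: C = J_5(0)X0 - X0 J_3(0) for some X0.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((5, 3))
C = Jz(5) @ X0 - X0 @ Jz(3)
Xp = solve_jordan_zero(C)
assert Xp is not None and np.allclose(Jz(5) @ Xp - Xp @ Jz(3), C)

# An inconsistent one: a single entry on the 1st small subdiagonal.
Cbad = np.zeros((5, 3)); Cbad[4, 0] = 1.0
assert solve_jordan_zero(Cbad) is None
```

The returned X_p differs from any other solution only by the stacked polynomial block [p_{n−1}(J_n(0)); 0].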

We can infer the consistency condition in the following way:

0 = J_m(0)^m X_n = J_m(0)^{m−1}(J_m(0)X_n) = J_m(0)^{m−1}(C_n + X_{n−1}) = ⋯ = Σ_{k=1}^{n} J_m(0)^{m−n+k−1} C_k.

We shall not pursue this approach further, because the obtained matrix of the system is an mn × mn sparse matrix, and for the consistency of the equation we would need to check rank conditions and to use some generalized inverses. For Example 2.2, where m = 4, n = 3, the previous formula gives:

Σ_{k=1}^{3} J_4(0)^k C_k = [c_21 + c_32 + c_43; c_31 + c_42; c_41; 0] = 0,

which is in accordance with the conclusion from Example 2.2.

We return to the main flow of the paper.

Theorem 2.5. The Sylvester equation

(2.11) J_m(0)X − XJ_n(0) = C,

when m ≤ n, is consistent if and only if

(2.12) Σ_{k=0}^{m−1} J_m(0)^k C J_n(0)^{n−1−k} = 0,

and its general solution is

(2.13) X = [0_{m×(n−m)} | q_{m−1}(J_m(0))] − Σ_{k=0}^{m−1} J_m(0)^k C (J_n(0)^T)^{k+1},

where q_{m−1} is an arbitrary polynomial of degree at most m−1.

Proof. If we take the transpose of the original equation (2.11), and then multiply it by −1, we have

J_n(0)^T X^T − X^T J_m(0)^T = −C^T,

which is similar to, but not the same as, the equation (2.6), so we cannot directly apply Theorem 2.3. But if we notice that for any k ∈ N

J_k(0)^T = P_k^{−1} J_k(0) P_k, where P_k = P_k^{−1} = [0 ⋯ 0 1; 0 ⋯ 1 0; ⋮ ⋰ ⋮; 1 0 ⋯ 0] ∈ C^{k×k}

is the so-called exchange matrix, the equation becomes

P_n J_n(0)^T X^T P_m − P_n X^T J_m(0)^T P_m = −P_n C^T P_m,
(P_n J_n(0)^T P_n)(P_n X^T P_m) − (P_n X^T P_m)(P_m J_m(0)^T P_m) = −P_n C^T P_m,
J_n(0)Y − YJ_m(0) = D,

where Y = P_n X^T P_m and D = −P_n C^T P_m. Only now can we apply Theorem 2.3 to the equation J_n(0)Y − YJ_m(0) = D. We have the consistency condition (2.12) because of

Σ_{k=0}^{m−1} J_n(0)^{n−1−k} D J_m(0)^k = −Σ_{k=0}^{m−1} J_n(0)^{n−1−k} P_n C^T P_m J_m(0)^k
= −P_n (Σ_{k=0}^{m−1} (P_n J_n(0) P_n)^{n−1−k} C^T (P_m J_m(0) P_m)^k) P_m
= −P_n (Σ_{k=0}^{m−1} (J_n(0)^T)^{n−1−k} C^T (J_m(0)^T)^k) P_m
= −P_n (Σ_{k=0}^{m−1} J_m(0)^k C J_n(0)^{n−1−k})^T P_m.

The general solution of J_n(0)Y − YJ_m(0) = D is, by Theorem 2.3,

Y = [p_{m−1}(J_m(0)); 0_{(n−m)×m}] + Σ_{k=0}^{m−1} (J_n(0)^T)^{k+1} D J_m(0)^k.

By the substitutions X = (P_n Y P_m)^T, D = −P_n C^T P_m, we obtain the solution (2.13):

X = P_m [p_{m−1}(J_m(0)); 0_{(n−m)×m}]^T P_n + P_m (Σ_{k=0}^{m−1} (J_n(0)^T)^{k+1} D J_m(0)^k)^T P_n
= [0_{m×(n−m)} | P_m p_{m−1}(J_m(0)^T) P_m] − Σ_{k=0}^{m−1} (P_m (J_m(0)^T)^k P_m) C (P_n J_n(0)^{k+1} P_n)
= [0_{m×(n−m)} | p_{m−1}(J_m(0))] − Σ_{k=0}^{m−1} J_m(0)^k C (J_n(0)^T)^{k+1}.

The condition (2.12) actually means that the sum of the elements over each of the m small subdiagonals (c_ij, i − j = p − 1, where p = 1, ..., m) must be zero. We also remark that the particular solution in (2.13),

X_p = −Σ_{k=0}^{m−1} J_m(0)^k C (J_n(0)^T)^{k+1},

is completely determined by the matrix C, more precisely, by the independent part of C (the rightmost m × (n−1) submatrix), and it can be expressed in an equivalent, but for computational purposes, a more

practical form as:

[X_p]_ij = −Σ_{k=0}^{min{m−i,n−1−j}} c_{i+k,j+1+k} for j = 1, ..., n−1, and 0 for j = n.

Corollary 2.6. The homogeneous Sylvester equation J_m(0)X − XJ_n(0) = 0 is consistent and its general solution is

X = [p_{n−1}(J_n(0)); 0_{(m−n)×n}], m ≥ n,    X = [0_{m×(n−m)} | p_{m−1}(J_m(0))], m ≤ n,

where p_{s−1} is an arbitrary polynomial of degree at most s−1, s = min{m, n}. The rank of the solution X is given by

r(X) = r(p_{s−1}(J_s(0))) = s − min_{k=0,...,s−1} {k : p_{s−1}^{(k)}(0) ≠ 0}, s = min{m, n}.

Corollary 2.7. All matrices commuting with J_m(0) are given by X = p_{m−1}(J_m(0)). Any polynomial such that p_{m−1}(0) ≠ 0 gives a nonsingular X. Also, σ(X) = {p_{m−1}(0)}.

From Theorem 2.3 and Theorem 2.5 in the case m = n, we have the following result.

Corollary 2.8. The Sylvester equation J_n(0)X − XJ_n(0) = C is consistent if and only if

Σ_{k=0}^{n−1} J_n(0)^{n−1−k} C J_n(0)^k = 0,

and its general solution is

X = p_{n−1}(J_n(0)) − Σ_{k=0}^{n−2} J_n(0)^k C (J_n(0)^T)^{k+1} = q_{n−1}(J_n(0)) + Σ_{k=0}^{n−2} (J_n(0)^T)^{k+1} C J_n(0)^k,

where p_{n−1} and q_{n−1} are arbitrary polynomials of degree at most n−1.

2.2. Solving the equation J_m(λ)X − XJ_n(µ) = C for λ, µ ∈ C. If we observe that J_k(λ) = λI_k + J_k(0), k ∈ N, the equation

(2.14) J_m(λ)X − XJ_n(µ) = C

becomes

(2.15) (λ − µ)X + J_m(0)X − XJ_n(0) = C.

Case λ = µ: this reduces to the already solved equation (2.6) or (2.11).

Case λ ≠ µ: Suppose, without loss of generality, that λ ≠ 0. By Proposition 1.3, we have the solution

X = Σ_{k≥0} J_m(λ)^{−(k+1)} C J_n(µ)^k.

We will try to avoid this infinite sum, as well as finding all k-th powers of the inverse of a Jordan matrix. If we rewrite (2.15) as

(J_m(0) + (λ − µ)I_m)X − XJ_n(0) = C,  i.e.,  J_m(λ − µ)X − XJ_n(0) = C,

then J_m(λ − µ) is nonsingular, so we have a unique solution by Proposition 1.3.

Theorem 2.9. The Sylvester equation (2.14) for λ ≠ µ has the unique solution

X = Σ_{k=0}^{n−1} J_m(λ − µ)^{−(k+1)} C J_n(0)^k = −Σ_{k=0}^{m−1} J_m(0)^k C J_n(µ − λ)^{−(k+1)}.

Remark 2.10. Finding the inverses of the powers of nonsingular Jordan blocks appearing in the previous theorem is not hard. Indeed, by using some matrix functional calculus, it is easy to prove that

J_m(α)^{−k} = Σ_{s=0}^{m−1} (−1)^s C(k+s−1, s) α^{−(k+s)} J_m(0)^s,  k ∈ N, α ≠ 0,

where C(k+s−1, s) denotes the binomial coefficient.

3. The case when σ(A) = σ(B) = {0}. In this section, we solve the Sylvester equation AX − XB = C in the case A = diag[J_{m_1}(0), J_{m_2}(0), ..., J_{m_p}(0)] and B = diag[J_{n_1}(0), J_{n_2}(0), ..., J_{n_q}(0)], where m_1 ≥ m_2 ≥ ⋯ ≥ m_p > 0 and n_1 ≥ n_2 ≥ ⋯ ≥ n_q > 0, and m_1 + ⋯ + m_p = m, n_1 + ⋯ + n_q = n.

Lemma 3.1. Let A = J_{m_1}(0) ⊕ ⋯ ⊕ J_{m_p}(0), m_1 ≥ ⋯ ≥ m_p > 0, and define

A_k := J_{m_1}(0)^{m_1−1−k} ⊕ ⋯ ⊕ J_{m_p}(0)^{m_p−1−k},  k = −1, ..., m_p − 1.

We have:
i) A_{−1} = 0,
ii) A_{k+1}A = AA_{k+1} = A_k, k = −1, ..., m_p − 2,
iii) A_k = I ⟺ (∀i = 1, ..., p) m_i = k + 1,
iv) (I ± A_k)^{−1} = I ∓ A_k, k = −1, ..., ⌊m_p/2⌋ − 1.

The proof follows from the definition of A_k. Note that from ii) it particularly follows that A_0 A = AA_0 = 0 and A_1 A = AA_1 = A_0.

Remark 3.2. Recall that J_1(0)^0 = I_1 (we take 0^0 = 1). In the sequel, N_m denotes the set {1, 2, ..., m}.
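Both the inverse-power formula of Remark 2.10 and the finite sum of Theorem 2.9 are directly checkable in numpy. The sketch below is our own (the helpers `J` and `jordan_inv_power` are illustrative names, with arbitrarily chosen sizes and eigenvalues).

```python
import numpy as np
from math import comb

def J(m, lam=0.0):
    """Jordan block J_m(lam)."""
    return lam * np.eye(m) + np.eye(m, k=1)

# Remark 2.10: J_m(a)^{-k} = sum_{s=0}^{m-1} (-1)^s C(k+s-1, s) a^{-(k+s)} J_m(0)^s
def jordan_inv_power(m, a, k):
    return sum((-1) ** s * comb(k + s - 1, s) * a ** -(k + s)
               * np.linalg.matrix_power(J(m), s) for s in range(m))

m, a = 4, 2.5
for k in range(1, 4):
    direct = np.linalg.matrix_power(np.linalg.inv(J(m, a)), k)
    assert np.allclose(jordan_inv_power(m, a, k), direct)

# Theorem 2.9: for lam != mu, the unique solution of
# J_m(lam)X - XJ_n(mu) = C is the finite sum
# X = sum_{k=0}^{n-1} J_m(lam - mu)^{-(k+1)} C J_n(0)^k.
mm, n, lam, mu = 3, 4, 1.0, -0.5
C = np.arange(12.0).reshape(mm, n)
X = sum(jordan_inv_power(mm, lam - mu, k + 1) @ C @ np.linalg.matrix_power(J(n), k)
        for k in range(n))
assert np.allclose(J(mm, lam) @ X - X @ J(n, mu), C)
```

The sum terminates at k = n−1 because J_n(0)^n = 0, which is exactly why the infinite series of Proposition 1.3 is avoided.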

Definition 3.3. Let M and N be two disjoint subsets of N_m × N_n such that M ∪ N = N_m × N_n. The matrix mask given by the set M is the mapping Π_M : C^{m×n} → C^{m×n}, Π_M(A) = A^M, which maps any matrix A = [a_ij] ∈ C^{m×n} to the matrix A^M whose entries are given by

[A^M]_ij = a_ij for (i, j) ∈ M, and [A^M]_ij = 0 for (i, j) ∈ N.

If A ∈ C^{m×n} is partitioned into submatrices as A = [A_ij]_{p×q}, and the set M ⊂ N_p × N_q is given (therefore, N is also known), then Π_M(A) = A^M, where

[A^M]_ij = A_ij for (i, j) ∈ M, and [A^M]_ij = 0 for (i, j) ∈ N.

Note that the mapping Π_M is linear and idempotent. Also, when M is known, we know N too, because N is the complement of M. The binary relation described by the set M will frequently be represented by a matrix also denoted by M, with entries m_ij given as m_ij = 1 if (i, j) ∈ M, and m_ij = 0 if (i, j) ∈ N. Note that M by construction has an upper quasitriangular structure. The matrix mask method is the subject of another paper of the author, which is still in preparation, so we will not pursue the topic here more than is necessary.

We will be interested in the case when A = diag[J_{m_1}(0), J_{m_2}(0), ..., J_{m_p}(0)] and B = diag[J_{n_1}(0), J_{n_2}(0), ..., J_{n_q}(0)] are given; then the sets M and N depend only on the sizes of the appropriate Jordan blocks:

(3.16) M = {(i, j) ∈ N_p × N_q : m_i ≥ n_j},  N = {(i, j) ∈ N_p × N_q : m_i < n_j}.

Lemma 3.4. Suppose that A = diag[J_{m_1}(0), ..., J_{m_p}(0)] ∈ C^{m×m} and B = diag[J_{n_1}(0), ..., J_{n_q}(0)] ∈ C^{n×n}, where m_1 ≥ m_2 ≥ ⋯ ≥ m_p > 0 and n_1 ≥ n_2 ≥ ⋯ ≥ n_q > 0, and C ∈ C^{m×n}. For the set M ⊂ N_p × N_q (equivalently, the matrix M ∈ C^{p×q}) given by (3.16), we have:

i) ((A^T)^s C B^t)^M = (A^T)^s C^M B^t and (A^s C (B^T)^t)^M = A^s C^M (B^T)^t, s, t ∈ N,
ii) (AC − CB)^M = AC^M − C^M B.

Proof. Suppose C is decomposed in accordance with A and B as C = [C_ij]_{p×q}.

i) The element in the position (i, j) of both the left and the right hand matrix is the same, namely (J_{m_i}(0)^T)^s C_ij J_{n_j}(0)^t for (i, j) ∈ M, and 0 for (i, j) ∈ N, so the equality holds. The second equality is proven analogously.
ii) We have the desired equality because the element in the position (i, j) of the left and the right hand matrix is the same: J_{m_i}(0)C_ij − C_ij J_{n_j}(0) for (i, j) ∈ M, and 0 for (i, j) ∈ N.

Let us split the matrix C as C = C^M + C^N, where C^M = Π_M(C) and C^N = Π_N(C); in the analogous way, X = X^M + X^N. The reason for such a splitting is the fact that solving the simple Sylvester equation (2.6) differs according to whether m ≥ n or m < n, as shown in Theorem 2.3 and Theorem 2.5.
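The key point of Lemma 3.4, that masking by (3.16) commutes with X ↦ AX − XB for block-diagonal nilpotent A and B, can be illustrated as follows (our own sketch; `blkdiag` and the block sizes are illustrative choices).

```python
import numpy as np

Jz = lambda m: np.eye(m, k=1)  # J_m(0)

def blkdiag(*bs):
    out = np.zeros((sum(b.shape[0] for b in bs), sum(b.shape[1] for b in bs)))
    r = c = 0
    for b in bs:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]; c += b.shape[1]
    return out

# Block sizes ms = (3, 1), ns = (2, 2); the mask (3.16) keeps block (i, j)
# iff m_i >= n_j, so here M = {(1,1), (1,2)} and N = {(2,1), (2,2)}.
ms, ns = (3, 1), (2, 2)
A = blkdiag(*(Jz(m) for m in ms))
B = blkdiag(*(Jz(n) for n in ns))
mask = np.block([[np.full((mi, nj), float(mi >= nj)) for nj in ns] for mi in ms])

# Lemma 3.4(ii): (AC - CB)^M = A C^M - C^M B, because block-diagonal A and B
# never mix different blocks of C.
rng = np.random.default_rng(1)
C = rng.standard_normal((4, 4))
CM = mask * C
assert np.allclose(mask * (A @ C - C @ B), A @ CM - CM @ B)
```

This is what justifies treating the M-part and the N-part of C independently in the next theorem.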

Theorem 3.5. Let A = J_{m_1}(0) ⊕ ⋯ ⊕ J_{m_p}(0), m_1 ≥ ⋯ ≥ m_p > 0, and B = J_{n_1}(0) ⊕ ⋯ ⊕ J_{n_q}(0), n_1 ≥ ⋯ ≥ n_q > 0. Suppose M and N are as in (3.16), C^M = Π_M(C), C^N = Π_N(C) and d = min{m_1, n_1}. The equation AX − XB = C is consistent if and only if

(3.17) Σ_{k=0}^{d−1} A_k C^M B^k = 0 and Σ_{k=0}^{d−1} A^k C^N B_k = 0,

or, in more condensed form,

Σ_{k=0}^{d−1} (A_k C^M B^k + A^k C^N B_k) = 0,

where B_k = J_{n_1}(0)^{n_1−1−k} ⊕ ⋯ ⊕ J_{n_q}(0)^{n_q−1−k} is defined for B as A_k is defined for A in Lemma 3.1. The particular solution is given as X_p = X^M + X^N, where

(3.18) X^M = Σ_{k=0}^{n_1−1} (A^T)^{k+1} C^M B^k,  X^N = −Σ_{k=0}^{m_1−1} A^k C^N (B^T)^{k+1},

and the solutions of the homogeneous equation AX − XB = 0 are given by X_h = [X^h_ij]_{p×q}, where

X^h_ij = [p_{n_j−1}(J_{n_j}(0)); 0_{(m_i−n_j)×n_j}] for (i, j) ∈ M, and X^h_ij = [0_{m_i×(n_j−m_i)} | p_{m_i−1}(J_{m_i}(0))] for (i, j) ∈ N.

Proof. (⇒): Let X̂ be a solution, so AX̂ − X̂B = C. By Lemma 3.4, we have:

C^M = (AX̂ − X̂B)^M = AX̂^M − X̂^M B,  C^N = (AX̂ − X̂B)^N = AX̂^N − X̂^N B.

Now we have

Σ_{k=0}^{d−1} A_k C^M B^k = Σ_{k=0}^{d−1} (A_k A X̂^M B^k − A_k X̂^M B^{k+1}) = Σ_{k=0}^{d−1} (A_{k−1} X̂^M B^k − A_k X̂^M B^{k+1}) = A_{−1}X̂^M − A_{d−1}X̂^M B^d = 0,

because of Lemma 3.1. In the analogous way, one can prove the second equality in (3.17).

(⇐): We show that X_p = X^M_p + X^N_p, where X^M_p and X^N_p are given by (3.18), is a solution of AX − XB = C provided that (3.17) holds. By using Lemma 3.4, we have:

[X^M_p]_ij = [Σ_{k=0}^{n_1−1} ((A^T)^{k+1} C B^k)^M]_ij = Σ_{k=0}^{n_j−1} (J_{m_i}(0)^T)^{k+1} C_ij J_{n_j}(0)^k, (i, j) ∈ M.

The first condition in (3.17) says

Σ_{k=0}^{n_j−1} J_{m_i}(0)^{m_i−1−k} C_ij J_{n_j}(0)^k = 0, (i, j) ∈ M,

which by Theorem 2.3 implies that J_{m_i}(0)[X^M_p]_ij − [X^M_p]_ij J_{n_j}(0) = C_ij, (i, j) ∈ M, i.e., AX^M_p − X^M_p B = C^M. In the analogous way, AX^N_p − X^N_p B = C^N can be proven (there, Theorem 2.5 is used). Therefore, we have

AX_p − X_p B = AX^M_p − X^M_p B + AX^N_p − X^N_p B = C^M + C^N = C,

so the equation is consistent provided that (3.17) holds. The homogeneous equation AX − XB = 0 is equivalent to the set of equations of the form J_{m_i}(0)X_ij − X_ij J_{n_j}(0) = 0, i = 1, ..., p, j = 1, ..., q, so by Corollary 2.6, we have the expressions for the appropriate submatrices of X_h.

Remark 3.6. It may happen that some expressions for the consistency condition (3.17) include negative powers of J_k(0), but all of them will be multiplied by some zero terms, so there is no actual problem.

The next theorem immediately follows from Proposition 1.3, but we restate it here because of the results from the next section.

Theorem 3.7. Suppose that A(λ) = J_{m_1}(λ) ⊕ ⋯ ⊕ J_{m_p}(λ) ∈ C^{m×m} and B(µ) = J_{n_1}(µ) ⊕ ⋯ ⊕ J_{n_q}(µ) ∈ C^{n×n}, and a = max{m_1, ..., m_p}, b = max{n_1, ..., n_q}. The equation A(λ)X − XB(µ) = C, λ ≠ µ, is consistent and its unique solution is

X = Σ_{k=0}^{b−1} A(λ − µ)^{−(k+1)} C B(0)^k = −Σ_{k=0}^{a−1} A(0)^k C B(µ − λ)^{−(k+1)}.

4. The general case when σ(A) = σ(B). Let us consider the general case AX − XB = C when σ(A) = σ(B) = {λ_1, ..., λ_s}, C = [C_ij]_{s×s}, and suppose that the matrices are in their Jordan forms:

A = diag{J(λ_1; p_{11}, p_{12}, ..., p_{1,k_1}), ..., J(λ_s; p_{s1}, p_{s2}, ..., p_{s,k_s})},
B = diag{J(λ_1; q_{11}, q_{12}, ..., q_{1,l_1}), ..., J(λ_s; q_{s1}, q_{s2}, ..., q_{s,l_s})},

where p_{i1} ≥ ⋯ ≥ p_{i,k_i} > 0, q_{j1} ≥ ⋯ ≥ q_{j,l_j} > 0, i, j = 1, ..., s. It is not hard to see that the equations on which solvability depends are precisely those of the form

J(λ_i; p_{i1}, p_{i2}, ..., p_{i,k_i})X_ii − X_ii J(λ_i; q_{i1}, q_{i2}, ..., q_{i,l_i}) = C_ii, i = 1, ..., s,

and, after a translation by λ_i, they reduce to the form

J(0; p_{i1}, p_{i2}, ..., p_{i,k_i})X_ii − X_ii J(0; q_{i1}, q_{i2}, ..., q_{i,l_i}) = C_ii, i = 1, ..., s,

for which Theorem 3.5 is applicable.

The results from the previous two sections can be summed up in the following theorem.
We remark the notation C_ij = [C^(ij)_{uv}]_{u=1,...,k_i, v=1,...,l_j}, with C^(ij)_{uv} ∈ C^{p_{iu}×q_{jv}}.

Theorem 4.1. Suppose that

A = diag{J(λ_1; p_{11}, p_{12}, ..., p_{1,k_1}), ..., J(λ_s; p_{s1}, p_{s2}, ..., p_{s,k_s})},
B = diag{J(λ_1; q_{11}, q_{12}, ..., q_{1,l_1}), ..., J(λ_s; q_{s1}, q_{s2}, ..., q_{s,l_s})},

where p_{i1} ≥ ⋯ ≥ p_{i,k_i} > 0 and q_{j1} ≥ ⋯ ≥ q_{j,l_j} > 0, i, j = 1, ..., s. Let us denote: d_i = min{p_{i1}, q_{i1}}, C^(ii)_{M_i} = Π_{M_i}(C^(ii)), C^(ii)_{N_i} = Π_{N_i}(C^(ii)), where

M_i = {(u, v) : p_{iu} ≥ q_{iv}},  N_i = {(u, v) : p_{iu} < q_{iv}} ⊂ N_{k_i} × N_{l_i}, i = 1, ..., s.

The Sylvester equation AX − XB = C is consistent if and only if

Σ_{k=0}^{d_i−1} ([A_i(0)]_k C^(ii)_{M_i} B_i(0)^k + A_i(0)^k C^(ii)_{N_i} [B_i(0)]_k) = 0, i = 1, ..., s,

where we used the notation

A_i(0) = diag[J_{p_{i1}}(0), ..., J_{p_{i,k_i}}(0)], B_i(0) = diag[J_{q_{i1}}(0), ..., J_{q_{i,l_i}}(0)], i = 1, ..., s,

and [A_i(0)]_k, [B_i(0)]_k are formed from A_i(0), B_i(0) as A_k is formed from A in Lemma 3.1. In that case, the particular solution X_p = [X_p^(ij)]_{s×s} is given by:

X_p^(ii) = Σ_{k=0}^{d_i−1} (A_i(0)^T)^{k+1} C^(ii)_{M_i} B_i(0)^k − Σ_{k=0}^{d_i−1} A_i(0)^k C^(ii)_{N_i} (B_i(0)^T)^{k+1}, i = 1, ..., s,

X_p^(ij) = Σ_{k=0}^{q_{j1}−1} A_i(λ_i − λ_j)^{−(k+1)} C^(ij) B_j(0)^k = −Σ_{k=0}^{p_{i1}−1} A_i(0)^k C^(ij) B_j(λ_j − λ_i)^{−(k+1)} ∈ C^{m_i×n_j}, i ≠ j;

the homogeneous solution X_h = diag[X_h^(11), ..., X_h^(ss)], X_h^(ii) = [X^(ii)_{h,uv}], i = 1, ..., s, is given by

X^(ii)_{h,uv} = [p_{q_{iv}−1}(J_{q_{iv}}(0)); 0_{(p_{iu}−q_{iv})×q_{iv}}] for (u, v) ∈ M_i, and X^(ii)_{h,uv} = [0_{p_{iu}×(q_{iv}−p_{iu})} | p_{p_{iu}−1}(J_{p_{iu}}(0))] for (u, v) ∈ N_i.

Remark 4.2. The consistency condition from the previous theorem can be written in a more condensed form as

Σ_{k=0}^{d−1} (A(0)_k C^M B(0)^k + A(0)^k C^N B(0)_k) = 0,

where d = min{max{p_{i1} : i = 1, ..., s}, max{q_{j1} : j = 1, ..., s}}, M = M_1 ⊕ ⋯ ⊕ M_s, N = N_1 ⊕ ⋯ ⊕ N_s, and we used the notation A(0) = diag[A_1(0), ..., A_s(0)], B(0) = diag[B_1(0), ..., B_s(0)]. Also, the diagonal blocks in X_p can be given by the single explicit formula

X_p^diag = Σ_{k=0}^{d−1} ((A(0)^T)^{k+1} C^M B(0)^k − A(0)^k C^N (B(0)^T)^{k+1}).

The previous theorem has several important corollaries which deal with the cases when the matrices A and B are non-derogatory or diagonalizable.

Corollary 4.3. Suppose that the matrices A and B are non-derogatory, i.e.,

A = J_{p_1}(λ_1) ⊕ ⋯ ⊕ J_{p_s}(λ_s), B = J_{q_1}(λ_1) ⊕ ⋯ ⊕ J_{q_s}(λ_s),

and d_i = min{p_i, q_i}, C^(ii)_{M_i} = C^(ii) if p_i ≥ q_i, C^(ii)_{N_i} = C^(ii) if p_i < q_i, i = 1, ..., s. The equation AX − XB = C is consistent if and only if

Σ_{k=0}^{d_i−1} J_{p_i}(0)^{p_i−1−k} C^(ii) J_{q_i}(0)^k = 0 if p_i ≥ q_i,  Σ_{k=0}^{d_i−1} J_{p_i}(0)^k C^(ii) J_{q_i}(0)^{q_i−1−k} = 0 if p_i < q_i,  i = 1, ..., s,

and its particular solution X_p = [X_p^(ij)]_{s×s} is given by:

X_p^(ii) = Σ_{k=0}^{d_i−1} (J_{p_i}(0)^T)^{k+1} C^(ii) J_{q_i}(0)^k if p_i ≥ q_i,  X_p^(ii) = −Σ_{k=0}^{d_i−1} J_{p_i}(0)^k C^(ii) (J_{q_i}(0)^T)^{k+1} if p_i < q_i;

X_p^(ij) = Σ_{k=0}^{q_j−1} J_{p_i}(λ_i − λ_j)^{−(k+1)} C^(ij) J_{q_j}(0)^k = −Σ_{k=0}^{p_i−1} J_{p_i}(0)^k C^(ij) J_{q_j}(λ_j − λ_i)^{−(k+1)} ∈ C^{p_i×q_j}, i ≠ j;

the homogeneous solution X_h = diag[X_h^(11), ..., X_h^(ss)], i = 1, ..., s, is given by

X_h^(ii) = [p_{q_i−1}(J_{q_i}(0)); 0_{(p_i−q_i)×q_i}] if p_i ≥ q_i,  X_h^(ii) = [0_{p_i×(q_i−p_i)} | p_{p_i−1}(J_{p_i}(0))] if p_i < q_i.

Corollary 4.4. Suppose that A and B are diagonalizable matrices, i.e.,

A = λ_1 I_{k_1} ⊕ ⋯ ⊕ λ_s I_{k_s}, B = λ_1 I_{l_1} ⊕ ⋯ ⊕ λ_s I_{l_s}.

Then the Sylvester equation AX − XB = C is consistent if and only if C^(ii) = 0, i = 1, ..., s, its homogeneous solution X_h = X_h^(11) ⊕ ⋯ ⊕ X_h^(ss) is an arbitrary block-diagonal matrix, and the particular solution is X_p = [X_p^(ij)]_{s×s} with X_p^(ii) = 0, i = 1, ..., s, and X_p^(ij) = (λ_i − λ_j)^{−1} C^(ij), i ≠ j.

The fact that AX − XA ≠ I_n for A, X ∈ C^{n×n} is a well-known result, usually proven by the trace of a matrix:

n = tr(I_n) = tr(AX − XA) = tr(AX) − tr(XA) = tr(AX) − tr(AX) = 0,

a contradiction. This result appears as a corollary of Theorem 4.1.

Theorem 4.5. The matrix equation AX − XA = I is inconsistent.

Proof. According to Theorem 4.1, if B = A then d_i = p_{i1}, C^(ii)_{M_i} = I, i = 1, ..., s, and the equation AX − XA = I is consistent if and only if

Σ_{k=0}^{p_{i1}−1} diag[J_{p_{i1}}(0)^{p_{i1}−1−k}, ..., J_{p_{i,k_i}}(0)^{p_{i,k_i}−1−k}] diag[J_{p_{i1}}(0)^k, ..., J_{p_{i,k_i}}(0)^k] = 0, i = 1, ..., s.

But each nonvanishing summand on the left equals diag[J_{p_{i1}}(0)^{p_{i1}−1}, ..., J_{p_{i,k_i}}(0)^{p_{i,k_i}−1}] blockwise, so the sum is diag[p_{i1}J_{p_{i1}}(0)^{p_{i1}−1}, ..., p_{i,k_i}J_{p_{i,k_i}}(0)^{p_{i,k_i}−1}] ≠ 0, which is not true. Therefore, the equation AX − XA = I is inconsistent.

Recall that a similar result holds in any unital Banach algebra [16, p. 351].
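The trace obstruction behind Theorem 4.5 is easy to witness numerically; the following sketch (our own, using the standard Kronecker/vec formulation with column-stacking vec) shows that even the least-squares "best attempt" at AX − XA = I_n misses.

```python
import numpy as np

# Theorem 4.5: AX - XA = I_n is never consistent, since
# tr(AX - XA) = tr(AX) - tr(XA) = 0 while tr(I_n) = n.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))

# Try anyway: least squares on (I (x) A - A^T (x) I) vec(X) = vec(I_n),
# with column-stacking vec (order='F').
K = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
v, *_ = np.linalg.lstsq(K, np.eye(n).reshape(-1, order='F'), rcond=None)
X = v.reshape(n, n, order='F')

assert abs(np.trace(A @ X - X @ A)) < 1e-8        # the trace obstruction
assert not np.allclose(A @ X - X @ A, np.eye(n))  # I_n is unreachable
```

In fact vec(I_n) is orthogonal to the whole range of the commutator map, which is the vectorized form of the trace argument above.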

Example 4.6. Let $A = J_2(1) \oplus J_1(1) \oplus J_2(0)$ and $B = J_2(1) \oplus J_2(1) \oplus J_3(0) \oplus J_1(0)$ (so $s = 2$, $\lambda_1 = 1$, $\lambda_2 = 0$, $p_{11} = 2$, $p_{12} = 1$, $p_{21} = 2$, $q_{11} = 2$, $q_{12} = 2$, $q_{21} = 3$, $q_{22} = 1$). We have:
$$A_1(0) = J_2(0) \oplus J_1(0), \quad A_2(0) = J_2(0), \quad B_1(0) = J_2(0) \oplus J_2(0), \quad B_2(0) = J_3(0) \oplus J_1(0).$$
Also, $d_1 = \min\{p_{11}, q_{11}\} = 2$ and $d_2 = \min\{p_{21}, q_{21}\} = 2$. We decompose the matrices $C$ and $X$ in accordance with the forms of $A$ and $B$:
$$C = \begin{bmatrix} C^{(11)} & C^{(12)} \\ C^{(21)} & C^{(22)} \end{bmatrix}, \qquad X = \begin{bmatrix} X^{(11)} & X^{(12)} \\ X^{(21)} & X^{(22)} \end{bmatrix},$$
and then further decompose $C^{(11)}$ and $C^{(22)}$ in accordance with the mask matrices $M_1 = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ (because $p_{11} \geq q_{11}$ and $p_{11} \geq q_{12}$) and $M_2 = \begin{bmatrix} 0 & 1 \end{bmatrix}$ (because $p_{21} \geq q_{22}$):
$$C^{(11)} = \begin{bmatrix} C^{(11)}_{11} & C^{(11)}_{12} \\ C^{(11)}_{21} & C^{(11)}_{22} \end{bmatrix} = \begin{bmatrix} C^{(11)}_{11} & C^{(11)}_{12} \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ C^{(11)}_{21} & C^{(11)}_{22} \end{bmatrix} = C^{(11)}_{M_1} + C^{(11)}_{N_1},$$
$$C^{(22)} = \begin{bmatrix} C^{(22)}_{11} & C^{(22)}_{12} \end{bmatrix} = \begin{bmatrix} 0 & C^{(22)}_{12} \end{bmatrix} + \begin{bmatrix} C^{(22)}_{11} & 0 \end{bmatrix} = C^{(22)}_{M_2} + C^{(22)}_{N_2};$$
the matrix $X$ is decomposed similarly. By Theorem 4.1, the consistency conditions are (we simply write $J_k$ instead of $J_k(0)$ and omit zero block-matrices):
$$\sum_{k=0}^{1} \overline{A_1(0)^k}\, C^{(11)}_{M_1} B_1(0)^k = 0, \qquad \sum_{k=0}^{1} A_1(0)^k C^{(11)}_{N_1}\, \overline{B_1(0)^k} = 0,$$
$$\sum_{k=0}^{1} \overline{A_2(0)^k}\, C^{(22)}_{M_2} B_2(0)^k = 0, \qquad \sum_{k=0}^{1} A_2(0)^k C^{(22)}_{N_2}\, \overline{B_2(0)^k} = 0,$$
i.e.,
$$J_2 C^{(11)}_{11} + C^{(11)}_{11} J_2 = 0, \quad J_2 C^{(11)}_{12} + C^{(11)}_{12} J_2 = 0, \quad C^{(11)}_{21} J_2 = 0, \quad C^{(11)}_{22} J_2 = 0,$$
$$J_2 C^{(22)}_{12} = 0, \quad C^{(22)}_{11} J_3^2 + J_2 C^{(22)}_{11} J_3 = 0.$$
If we put $C = [c_{ij}]$, $X = [x_{ij}] \in \mathbb{C}^{5\times 8}$, the consistency condition says that all matrices $C$ for which the equation is solvable are of the form
$$C = \begin{bmatrix} C^{(11)} & C^{(12)} \\ C^{(21)} & C^{(22)} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} & c_{17} & c_{18} \\ 0 & -c_{11} & 0 & -c_{13} & c_{25} & c_{26} & c_{27} & c_{28} \\ 0 & c_{32} & 0 & c_{34} & c_{35} & c_{36} & c_{37} & c_{38} \\ c_{41} & c_{42} & c_{43} & c_{44} & c_{45} & c_{46} & c_{47} & c_{48} \\ c_{51} & c_{52} & c_{53} & c_{54} & 0 & -c_{45} & c_{57} & 0 \end{bmatrix}.$$
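This consistency pattern can be cross-checked numerically through the vectorized form $(I_n \otimes A - B^T \otimes I_m)\,\mathrm{vec}(X) = \mathrm{vec}(C)$; the following sketch (not part of the paper; Python/NumPy assumed, with the signs as reconstructed here) tests solvability by a least-squares residual:

```python
import numpy as np

def J(lam, k):
    """Jordan block J_k(lam)."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

def blkdiag(*Ms):
    """Block-diagonal direct sum of square matrices."""
    out = np.zeros((sum(M.shape[0] for M in Ms), sum(M.shape[1] for M in Ms)))
    r = c = 0
    for M in Ms:
        out[r:r + M.shape[0], c:c + M.shape[1]] = M
        r += M.shape[0]
        c += M.shape[1]
    return out

# A and B of Example 4.6
A = blkdiag(J(1, 2), J(1, 1), J(0, 2))           # 5 x 5
B = blkdiag(J(1, 2), J(1, 2), J(0, 3), J(0, 1))  # 8 x 8

def consistent(C, tol=1e-9):
    # AX - XB = C  <=>  (I_8 (x) A - B^T (x) I_5) vec(X) = vec(C), column-major vec
    K = np.kron(np.eye(8), A) - np.kron(B.T, np.eye(5))
    c = C.flatten(order="F")
    x = np.linalg.lstsq(K, c, rcond=None)[0]
    return np.linalg.norm(K @ x - c) < tol

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 8))
# impose the displayed pattern on C
C[1, :4] = [0.0, -C[0, 0], 0.0, -C[0, 2]]
C[2, 0] = C[2, 2] = 0.0
C[4, 4] = C[4, 7] = 0.0
C[4, 5] = -C[3, 4]
assert consistent(C)

C[1, 1] += 1.0          # break the constraint c_22 = -c_11
assert not consistent(C)
```

Counting the nine imposed constraints also recovers $\dim \mathcal{R}(T) = 40 - 9 = 31$.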

By Theorem 4.1, the particular solution $X_p = [X_p^{(ij)}]$ is:
$$X_p^{(11)} = \sum_{k=0}^{1} (A_1(0)^T)^{k+1} C^{(11)}_{M_1} B_1(0)^k - \sum_{k=0}^{1} A_1(0)^k C^{(11)}_{N_1} (B_1(0)^T)^{k+1} = \begin{bmatrix} J_2(0)^T C^{(11)}_{11} & J_2(0)^T C^{(11)}_{12} \\ -C^{(11)}_{21} J_2(0)^T & -C^{(11)}_{22} J_2(0)^T \end{bmatrix},$$
$$X_p^{(22)} = \sum_{k=0}^{1} (A_2(0)^T)^{k+1} C^{(22)}_{M_2} B_2(0)^k - \sum_{k=0}^{1} A_2(0)^k C^{(22)}_{N_2} (B_2(0)^T)^{k+1} = \begin{bmatrix} -C^{(22)}_{11} J_3(0)^T - J_2(0) C^{(22)}_{11} (J_3(0)^T)^2 & J_2(0)^T C^{(22)}_{12} \end{bmatrix},$$
$$X_p^{(12)} = \sum_{k=0}^{2} A_1(1)^{-(k+1)} C^{(12)} B_2(0)^k = \begin{bmatrix} J_2(1)^{-1} C^{(12)}_{11} + J_2(1)^{-2} C^{(12)}_{11} J_3(0) + J_2(1)^{-3} C^{(12)}_{11} J_3(0)^2 & J_2(1)^{-1} C^{(12)}_{12} \\ C^{(12)}_{21} \big(I + J_3(0) + J_3(0)^2\big) & C^{(12)}_{22} \end{bmatrix},$$
$$X_p^{(21)} = -\sum_{k=0}^{1} A_2(0)^k C^{(21)} B_1(1)^{-(k+1)} = \begin{bmatrix} -C^{(21)}_{11} J_2(1)^{-1} - J_2(0) C^{(21)}_{11} J_2(1)^{-2} & -C^{(21)}_{12} J_2(1)^{-1} - J_2(0) C^{(21)}_{12} J_2(1)^{-2} \end{bmatrix}.$$
Therefore,
$$X_p^{(11)} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ c_{11} & c_{12} & c_{13} & c_{14} \\ -c_{32} & 0 & -c_{34} & 0 \end{bmatrix}, \qquad X_p^{(22)} = \begin{bmatrix} -c_{46}-c_{57} & -c_{47} & 0 & 0 \\ c_{45} & -c_{57} & 0 & c_{48} \end{bmatrix},$$
$$X_p^{(12)} = \begin{bmatrix} c_{15}-c_{25} & c_{15}+c_{16}-2c_{25}-c_{26} & c_{15}+c_{16}+c_{17}-3c_{25}-2c_{26}-c_{27} & c_{18}-c_{28} \\ c_{25} & c_{25}+c_{26} & c_{25}+c_{26}+c_{27} & c_{28} \\ c_{35} & c_{35}+c_{36} & c_{35}+c_{36}+c_{37} & c_{38} \end{bmatrix},$$
$$X_p^{(21)} = \begin{bmatrix} -c_{41}-c_{51} & c_{41}-c_{42}+2c_{51}-c_{52} & -c_{43}-c_{53} & c_{43}-c_{44}+2c_{53}-c_{54} \\ -c_{51} & c_{51}-c_{52} & -c_{53} & c_{53}-c_{54} \end{bmatrix}.$$
The homogeneous solution is given by $X_h = \mathrm{diag}(X_h^{(11)}, X_h^{(22)})$, with the blocks prescribed by Theorem 4.1, or, in extended form:
$$X_h^{(11)} = \begin{bmatrix} x_{11} & x_{12} & x_{13} & x_{14} \\ 0 & x_{11} & 0 & x_{13} \\ 0 & x_{32} & 0 & x_{34} \end{bmatrix}, \qquad X_h^{(22)} = \begin{bmatrix} 0 & x_{46} & x_{47} & x_{48} \\ 0 & 0 & x_{46} & 0 \end{bmatrix},$$

for arbitrary complex $x_{11}$, $x_{12}$, $x_{13}$, $x_{14}$, $x_{32}$, $x_{34}$, $x_{46}$, $x_{47}$ and $x_{48}$. Therefore, we have found all solutions of the equation in the case when it is solvable.

5. On the dimension of the space of solutions. Let $A \in \mathbb{C}^{m\times m}$ and $B \in \mathbb{C}^{n\times n}$ be given matrices. For the linear operator $T(X) = AX - XB : \mathbb{C}^{m\times n} \to \mathbb{C}^{m\times n}$ we consider the following subspaces:
$$\mathcal{R}(T) = \{C : AX - XB = C \text{ for some } X\}, \qquad \mathcal{N}(T) = \{X : AX - XB = 0\}.$$
The subspace $\mathcal{R}(T)$ is actually the set of all $C$ such that the Sylvester equation is consistent, while $\mathcal{N}(T)$ consists precisely of all solutions of the homogeneous Sylvester equation. The general solution can be seen as the set of all homogeneous solutions translated by the particular solution, so the dimension of the space of solutions is the same as the dimension of the space of all solutions to the homogeneous equation. By the basic theorem for vector space homomorphisms, we have:
$$\dim \mathcal{R}(T) + \dim \mathcal{N}(T) = \dim(\mathbb{C}^{m\times n}) = mn.$$
This fact helps us a lot in finding the dimension of $\mathcal{R}(T)$ when necessary, because it is significantly easier to deal with $\mathcal{N}(T)$, as follows. If $A = J_m(0)$, $B = J_n(0)$, then by Corollary 2.6, we have:
$$\mathcal{N}(T) = \begin{cases} \left\{ \begin{bmatrix} p_{n-1}(J_n(0)) \\ 0_{(m-n)\times n} \end{bmatrix} \right\}, & m \geq n, \\[2mm] \left\{ \begin{bmatrix} 0_{m\times(n-m)} & p_{m-1}(J_m(0)) \end{bmatrix} \right\}, & m < n, \end{cases}$$
and therefore, $\dim \mathcal{N}(T) = \min\{m,n\}$ (so $\dim \mathcal{R}(T) = mn - \min\{m,n\}$) and its basis is
$$\mathcal{B}_{\mathcal{N}(T)} = \left\{ \begin{bmatrix} I_n \\ 0 \end{bmatrix}, \begin{bmatrix} J_n(0) \\ 0 \end{bmatrix}, \ldots, \begin{bmatrix} J_n(0)^{n-1} \\ 0 \end{bmatrix} \right\}, \ m \geq n, \qquad \mathcal{B}_{\mathcal{N}(T)} = \left\{ \begin{bmatrix} 0 & I_m \end{bmatrix}, \begin{bmatrix} 0 & J_m(0) \end{bmatrix}, \ldots, \begin{bmatrix} 0 & J_m(0)^{m-1} \end{bmatrix} \right\}, \ m < n.$$
For the case $A = J_{m_1}(0) \oplus \cdots \oplus J_{m_p}(0)$ and $B = J_{n_1}(0) \oplus \cdots \oplus J_{n_q}(0)$, by Theorem 3.5, we have $\mathcal{N}(T) = \{X = [X_{ij}]_{p\times q} : J_{m_i}(0)X_{ij} = X_{ij}J_{n_j}(0), \ i = \overline{1,p}, \ j = \overline{1,q}\}$, so
$$\dim \mathcal{N}(T) = \sum_{(i,j)\in M} n_j + \sum_{(i,j)\in N} m_i.$$
In the most general case, described by Theorem 4.1, we have
$$\dim \mathcal{N}(T) = \sum_{k=1}^{s} \Big( \sum_{(i,j)\in M_k} q_{kj} + \sum_{(i,j)\in N_k} p_{ki} \Big).$$
For the non-derogatory case, Corollary 4.3 gives
$$\dim \mathcal{N}(T) = \sum_{i=1}^{s} \min\{p_i, q_i\}.$$
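The identity $\dim \mathcal{N}(T) = \min\{m,n\}$ for a pair of nilpotent Jordan blocks can be checked through the same vectorization; this Python/NumPy sketch (an illustration, not the paper's code) computes the nullity of the Kronecker matrix of $T$:

```python
import numpy as np

def jordan0(k):
    """Nilpotent Jordan block J_k(0)."""
    return np.diag(np.ones(k - 1), 1) if k > 1 else np.zeros((1, 1))

m, n = 4, 3
A, B = jordan0(m), jordan0(n)

# vec(AX - XB) = (I_n (x) A - B^T (x) I_m) vec(X), column-major vec
K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))

nullity = K.shape[1] - np.linalg.matrix_rank(K)
assert nullity == min(m, n)        # dim N(T) = min{m, n}
dimR = m * n - nullity             # dim R(T) = mn - min{m, n}
```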

In the diagonalizable case, given in Corollary 4.4, we have
$$\dim \mathcal{N}(T) = \sum_{i=1}^{s} k_i l_i.$$
If we test this on Example 4.6, then we have $\dim \mathcal{N}(T) = (2 + 2 + 1 + 1) + (1 + 2) = 9$, which coincides with the number of independent parameters in the general solution.

6. The Schur decomposition approach. Finding the Jordan form of a given matrix having some multiple eigenvalues can be numerically unstable, so in this section, which is not closely related to the previous considerations, we present some useful recursive methods for solving the Sylvester matrix equation $AX - XB = C$ in the case when $A$ and $B$ have common eigenvalue(s). We present the consistency conditions and the algorithm for finding the solutions, based on the Schur matrix decomposition.

For numerically solving Sylvester equations, the most used methods are Bartels-Stewart [1] and Golub-Nash-Van Loan [8], and their various improvements. The Bartels-Stewart method uses the QR algorithm for the Schur decomposition of the matrices $A$ and $B$, while the Golub-Nash-Van Loan method uses the Hessenberg decomposition. We emphasize that both methods assume disjointness of the spectra of $A$ and $B$. The main idea of the Bartels-Stewart algorithm is to apply the Schur decomposition to transform the Sylvester equation into a triangular linear system which can be solved efficiently by forward or backward substitutions. It is known that any $A \in \mathbb{C}^{n\times n}$ can be expressed as $A = QUQ^{-1}$, where $Q$ is a unitary matrix (i.e., $Q^{-1} = Q^*$), and $U$ is an upper triangular matrix, which is called a Schur form of $A$. Since $U$ is similar to $A$, it has the same multiset of eigenvalues, and since it is triangular, those eigenvalues are the diagonal entries of $U$. If $A$ is real and $\sigma(A) \subset \mathbb{R}$, then $U$ can be chosen to be real and $Q$ orthogonal. For details on this topic, see, e.g., [10, p. 79].
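For the disjoint-spectra setting that Bartels-Stewart assumes, SciPy exposes both building blocks; a minimal sketch (assuming `scipy` is available; note that `solve_sylvester` solves $AX + XB = Q$, so $-B$ is passed to solve $AX - XB = C$):

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3)) + 10.0 * np.eye(3)  # shift pushes sigma(B) away from sigma(A)
C = rng.standard_normal((4, 3))

# Schur form: A = Q U Q*, Q unitary, U upper triangular with sigma(A) on its diagonal
U, Q = schur(A, output="complex")
assert np.allclose(Q @ U @ Q.conj().T, A)

# SciPy's solve_sylvester solves a x + x b = q, so pass -B for AX - XB = C
X = solve_sylvester(A, -B, C)
assert np.allclose(A @ X - X @ B, C)
```

The recursion described next is needed precisely when this disjointness assumption fails.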
By using the idea of the constructive proof of the Schur decomposition, suppose $\lambda \in \sigma(A) \cap \sigma(B)$ and that there exist unitary matrices $S$ and $T$ such that ($a$ and $b$ are the multiplicities of $\lambda$ in $\sigma(A)$ and $\sigma(B)$, respectively; $U_\lambda$ and $V_\lambda$ are the appropriate eigenspaces):
$$S^*AS = \begin{bmatrix} \lambda I_a & A_{12} \\ 0 & A_{22} \end{bmatrix} : U_\lambda \oplus U_\lambda^\perp \to U_\lambda \oplus U_\lambda^\perp, \qquad T^*BT = \begin{bmatrix} \lambda I_b & B_{12} \\ 0 & B_{22} \end{bmatrix} : V_\lambda \oplus V_\lambda^\perp \to V_\lambda \oplus V_\lambda^\perp.$$
Note that $\lambda \notin \sigma(A_{22}) \cup \sigma(B_{22})$. Now the Sylvester equation becomes:
$$S \begin{bmatrix} \lambda I_a & A_{12} \\ 0 & A_{22} \end{bmatrix} S^* X - X\, T \begin{bmatrix} \lambda I_b & B_{12} \\ 0 & B_{22} \end{bmatrix} T^* = C,$$
$$\begin{bmatrix} \lambda I_a & A_{12} \\ 0 & A_{22} \end{bmatrix} S^* X T - S^* X T \begin{bmatrix} \lambda I_b & B_{12} \\ 0 & B_{22} \end{bmatrix} = S^* C T,$$
so the equation, with $S^*XT = Y = [Y_{ij}]$ and $S^*CT = D = [D_{ij}]$, becomes
$$\begin{bmatrix} \lambda I_a & A_{12} \\ 0 & A_{22} \end{bmatrix} \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix} - \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix} \begin{bmatrix} \lambda I_b & B_{12} \\ 0 & B_{22} \end{bmatrix} = \begin{bmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{bmatrix}$$

which is equivalent to the following system:
$$(6.19)\qquad A_{12}Y_{21} = D_{11},$$
$$(6.20)\qquad Y_{12}(\lambda I - B_{22}) - Y_{11}B_{12} = D_{12} - A_{12}Y_{22},$$
$$(6.21)\qquad (A_{22} - \lambda I)Y_{21} = D_{21},$$
$$(6.22)\qquad A_{22}Y_{22} - Y_{22}B_{22} = D_{22} + Y_{21}B_{12}.$$
From (6.19) and (6.21) we conclude the consistency condition:
$$A_{12}(A_{22} - \lambda I)^{-1} D_{21} = D_{11}.$$
Also, from (6.21) we have
$$Y_{21} = (A_{22} - \lambda I)^{-1} D_{21}.$$
Note that $A_{22}$ and $B_{22}$ (and consequently $A_{22} - \lambda I$ and $\lambda I - B_{22}$) are upper triangular matrices, so some known numerical method can be applied in computing their inverses.

Case 1: If $\sigma(A) \cap \sigma(B) = \{\lambda\}$, i.e., if $\lambda$ is the only common eigenvalue (and therefore, $\sigma(A_{22}) \cap \sigma(B_{22}) = \emptyset$), then because of (6.22), by Proposition 1.3, we have the unique $Y_{22}$ (which can be computed by using some already known numerical method for solving the Sylvester equation), while from (6.20) we obtain $Y_{12}$ via the arbitrary block $Y_{11}$:
$$Y_{12} = (D_{12} - A_{12}Y_{22} + Y_{11}B_{12})(\lambda I - B_{22})^{-1}.$$
Therefore, we have found the family of solutions.

Case 2: If there is some other common eigenvalue $\mu$ of $A$ and $B$ (which means $\mu \in \sigma(A_{22}) \cap \sigma(B_{22})$), the described method is applied to the equation (6.22), i.e.,
$$A_{22}Y_{22} - Y_{22}B_{22} = D_{22} + (A_{22} - \lambda I)^{-1}D_{21}B_{12}.$$
It remains to do backward substitution, and the equation is fully solved. Because any complex matrix from $\mathbb{C}^{n\times n}$ has at most $n$ different eigenvalues, the algorithm terminates after at most $n$ passes.

Acknowledgments. The author thanks his colleague Miloš Cvetković for a valuable discussion on some topics presented in the paper. The author would also like to express his gratitude to the two anonymous referees and the editor, Prof. Froilán M. Dopico, for their insightful comments and suggestions which significantly improved the paper.

REFERENCES

[1] R.H. Bartels and G.W. Stewart. A solution of the equation AX + XB = C. Comm. ACM, 15:820-826, 1972.
[2] R. Bhatia and P. Rosenthal. How and why to solve the operator equation AX - XB = Y. Bull. London Math. Soc., 29:1-21, 1997.
[3] Y. Chen and H. Xiao. The explicit solution of the matrix equation AX - XB = C. Appl. Math. Mech. (English Ed.), 16.
[4] B.N. Datta and K. Datta.
The matrix equation XA = A^T X and an associated algorithm for solving the inertia and stability problems. Linear Algebra Appl., 97:103-119, 1987.

[5] F. De Terán, F.M. Dopico, N. Guillery, D. Montealegre, and N. Reyes. The solution of the equation AX + X*B = 0. Linear Algebra Appl., 438:2817-2860, 2013.
[6] M.P. Drazin. On a result of J.J. Sylvester. Linear Algebra Appl., 505, 2016.
[7] F.R. Gantmacher. The Theory of Matrices, Vol. 1. Chelsea Publishing Co., New York, 1959.
[8] G.H. Golub, S. Nash, and C. Van Loan. A Hessenberg-Schur method for the problem AX + XB = C. IEEE Trans. Automat. Control, 24:909-913, 1979.
[9] E. Heinz. Beiträge zur Störungstheorie der Spektralzerlegung. Math. Ann., 123:415-438, 1951.
[10] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[11] Q. Hu and D. Cheng. The polynomial solution to the Sylvester matrix equation. Appl. Math. Lett., 19, 2006.
[12] Z.-Y. Li and B. Zhou. Spectral decomposition based solutions to the matrix equation AX - XB = C. IET Control Theory Appl.
[13] E.-C. Ma. A finite series solution of the matrix equation AX - XB = C. SIAM J. Appl. Math., 14:490-495, 1966.
[14] M. Rosenblum. On the operator equation BX - XA = Q. Duke Math. J., 23:263-269, 1956.
[15] W.E. Roth. The equations AX - YB = C and AX - XB = C in matrices. Proc. Amer. Math. Soc., 3:392-396, 1952.
[16] W. Rudin. Functional Analysis, second edition. McGraw-Hill, Inc., New York, 1991.
[17] J.J. Sylvester. Sur l'équation en matrices px = xq. C. R. Acad. Sci. Paris, 99:67-71, 115-116, 1884.
[18] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu. Closed-form solutions to Sylvester-conjugate matrix equations. Comput. Math. Appl., 60:95-111, 2010.


More information

On the closures of orbits of fourth order matrix pencils

On the closures of orbits of fourth order matrix pencils On the closures of orbits of fourth order matrix pencils Dmitri D. Pervouchine Abstract In this work we state a simple criterion for nilpotentness of a square n n matrix pencil with respect to the action

More information

Numerical Methods for Solving Large Scale Eigenvalue Problems

Numerical Methods for Solving Large Scale Eigenvalue Problems Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems

More information

The Residual Spectrum and the Continuous Spectrum of Upper Triangular Operator Matrices

The Residual Spectrum and the Continuous Spectrum of Upper Triangular Operator Matrices Filomat 28:1 (2014, 65 71 DOI 10.2298/FIL1401065H Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat The Residual Spectrum and the

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1... Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

On the reduction of matrix polynomials to Hessenberg form

On the reduction of matrix polynomials to Hessenberg form Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

a Λ q 1. Introduction

a Λ q 1. Introduction International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

More information

Generalization of Gracia's Results

Generalization of Gracia's Results Electronic Journal of Linear Algebra Volume 30 Volume 30 (015) Article 16 015 Generalization of Gracia's Results Jun Liao Hubei Institute, jliao@hubueducn Heguo Liu Hubei University, ghliu@hubueducn Yulei

More information

Operators with Compatible Ranges

Operators with Compatible Ranges Filomat : (7), 579 585 https://doiorg/98/fil7579d Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacrs/filomat Operators with Compatible Ranges

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

The spectrum of the edge corona of two graphs

The spectrum of the edge corona of two graphs Electronic Journal of Linear Algebra Volume Volume (1) Article 4 1 The spectrum of the edge corona of two graphs Yaoping Hou yphou@hunnu.edu.cn Wai-Chee Shiu Follow this and additional works at: http://repository.uwyo.edu/ela

More information

Conjugacy classes of torsion in GL_n(Z)

Conjugacy classes of torsion in GL_n(Z) Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 32 2015 Conjugacy classes of torsion in GL_n(Z) Qingjie Yang Renmin University of China yangqj@ruceducn Follow this and additional

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Drazin Invertibility of Product and Difference of Idempotents in a Ring

Drazin Invertibility of Product and Difference of Idempotents in a Ring Filomat 28:6 (2014), 1133 1137 DOI 10.2298/FIL1406133C Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Drazin Invertibility of

More information

Chapter 3. Matrices. 3.1 Matrices

Chapter 3. Matrices. 3.1 Matrices 40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Hessenberg Pairs of Linear Transformations

Hessenberg Pairs of Linear Transformations Hessenberg Pairs of Linear Transformations Ali Godjali November 21, 2008 arxiv:0812.0019v1 [math.ra] 28 Nov 2008 Abstract Let K denote a field and V denote a nonzero finite-dimensional vector space over

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Note on the Jordan form of an irreducible eventually nonnegative matrix

Note on the Jordan form of an irreducible eventually nonnegative matrix Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 19 2015 Note on the Jordan form of an irreducible eventually nonnegative matrix Leslie Hogben Iowa State University, hogben@aimath.org

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information