Gershgorin type sets for eigenvalues of matrix polynomials
Electronic Journal of Linear Algebra, Volume 34 (2018), Article. Gershgorin type sets for eigenvalues of matrix polynomials. Christina Michailidou, National Technical University of Athens; Panayiotis Psarrakos, National Technical University of Athens.

Recommended Citation: Michailidou, Christina and Psarrakos, Panayiotis. (2018), "Gershgorin type sets for eigenvalues of matrix polynomials", Electronic Journal of Linear Algebra, Volume 34.

This Article is brought to you for free and open access by Wyoming Scholars Repository. It has been accepted for inclusion in Electronic Journal of Linear Algebra by an authorized editor of Wyoming Scholars Repository.
GERSHGORIN TYPE SETS FOR EIGENVALUES OF MATRIX POLYNOMIALS

CHRISTINA MICHAILIDOU AND PANAYIOTIS PSARRAKOS

Abstract. New localization results for polynomial eigenvalue problems are obtained by extending the notions of the Gershgorin set, the generalized Gershgorin set, the Brauer set and the Dashnic-Zusmanovich set to the case of matrix polynomials.

Key words. Eigenvalue, Matrix polynomial, Gershgorin set, Generalized Gershgorin set, Brauer set, Dashnic-Zusmanovich set.

AMS subject classifications. 15A18, 15A.

1. Introduction. For an arbitrary square complex matrix $A \in \mathbb{C}^{n\times n}$ with spectrum $\sigma(A)$, the celebrated Gershgorin (Geršgorin) circle theorem [8, 1] yields $n$ easily computable disks in the complex plane, centered at the diagonal entries of the matrix, whose union contains $\sigma(A)$. The simplicity and the applications of the Gershgorin circle theorem have inspired further research in this area, resulting in hundreds of papers on Gershgorin disks and related sets, such as the generalized Gershgorin set (also known as the A-Ostrowski set), the Brauer set, the Dashnic-Zusmanovich set and others (see [3, 5, 6, 18, 1] and the references therein), which are widely used in the analysis of ordinary eigenvalue problems.

In this paper, we consider $n \times n$ matrix polynomials of the form

(1.1)  $P(\lambda) = A_m\lambda^m + A_{m-1}\lambda^{m-1} + \cdots + A_1\lambda + A_0$,

where $\lambda$ is a complex variable, $A_0, A_1, \ldots, A_m \in \mathbb{C}^{n\times n}$ with $A_m \ne 0$, and the determinant $\det P(\lambda)$ is not identically zero. The study of matrix polynomials, especially with regard to their spectral analysis and the solution of higher order linear differential (or difference) systems with constant coefficients, has a long history and important applications; see [9, 14, 15, 16, 17] and the references therein. A scalar $\mu \in \mathbb{C}$ is called an eigenvalue of $P(\lambda)$ if the system $P(\mu)x = 0$ has a nonzero solution $x \in \mathbb{C}^n$. This solution $x$ is known as an eigenvector of $P(\lambda)$ corresponding to the eigenvalue $\mu$.
The set of all finite eigenvalues of $P(\lambda)$,

$\sigma(P) = \{\mu \in \mathbb{C} : \det P(\mu) = 0\} = \{\mu \in \mathbb{C} : 0 \in \sigma(P(\mu))\}$,

is the finite spectrum of $P(\lambda)$. The algebraic multiplicity of an eigenvalue $\mu \in \sigma(P)$ is the multiplicity of $\mu$ as a root of the (scalar) polynomial $\det P(\lambda)$, and it is always greater than or equal to the geometric multiplicity of $\mu$, that is, the dimension of the null space of the matrix $P(\mu)$. Furthermore, it is said that $\mu = \infty$ is an eigenvalue of $P(\lambda)$ exactly when $0$ is an eigenvalue of the reverse matrix polynomial

$\hat{P}(\lambda) = \lambda^m P(1/\lambda) = A_0\lambda^m + A_1\lambda^{m-1} + \cdots + A_{m-1}\lambda + A_m$.

Received by the editors on March 17, 2018. Accepted for publication on November 2018. Handling Editor: Michael Tsatsomeros. Corresponding Author: Panayiotis Psarrakos. Department of Mathematics, School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece (xristina.michailidou@gmail.com, ppsarr@math.ntua.gr). The research of the first author was supported by the Research Committee of the National Technical University of Athens (ELKE PhD Scholarships).
In this case, the algebraic multiplicity and the geometric multiplicity of the eigenvalue $\mu = \infty$ of $P(\lambda)$ are defined as the algebraic multiplicity and the geometric multiplicity of the eigenvalue $0$ of $\hat{P}(\lambda)$, respectively.

In the next sections, we extend the notions of the Gershgorin set (Section 2), the generalized Gershgorin set (Section 3), the Brauer set (Section 4) and the Dashnic-Zusmanovich set (Section 5) to the case of matrix polynomials; see also [1, 1, 13]. In particular, in each section, we define an eigenvalue inclusion set, present basic topological and geometrical properties of this set, and give illustrative examples to verify our results. Our approach is direct, does not involve any kind of linearization, and is motivated by the following simple observation: Suppose that $F$ is a set-valued function of $n \times n$ matrices which associates each matrix $A \in \mathbb{C}^{n\times n}$ to a region $F(A) \subseteq \mathbb{C}$ that contains the spectrum $\sigma(A)$. If, for an $n \times n$ matrix polynomial $P(\lambda)$ as in (1.1), we define the set $F(P) = \{\mu \in \mathbb{C} : 0 \in F(P(\mu))\}$, then

$\sigma(P) = \{\mu \in \mathbb{C} : 0 \in \sigma(P(\mu))\} \subseteq \{\mu \in \mathbb{C} : 0 \in F(P(\mu))\} = F(P)$.

2. The Gershgorin set.

2.1. Definitions. Consider a square complex matrix $A \in \mathbb{C}^{n\times n}$, define the set $N = \{1, 2, \ldots, n\}$, and let $(A)_{i,j}$ denote the $(i,j)$-th entry of $A$, $i, j \in N$. For any $i \in N$, define also the nonnegative quantity $r_i(A) = \sum_{j \in N\setminus\{i\}} |(A)_{i,j}|$ and the $i$-th row Gershgorin disk

$G_i(A) = \{\mu \in \mathbb{C} : |\mu - (A)_{i,i}| \le r_i(A)\}$.

The Gershgorin circle theorem [8, 1] states that the spectrum $\sigma(A)$ lies in the union $G(A) = \bigcup_{i \in N} G_i(A)$, which is known as the Gershgorin set of $A$.

Consider now an $n \times n$ matrix polynomial $P(\lambda)$ as in (1.1), and define the nonnegative functions $r_i(P(\lambda)) = \sum_{j \in N\setminus\{i\}} |(P(\lambda))_{i,j}|$, $i \in N$, the $i$-th row Gershgorin sets of $P(\lambda)$

$G_i(P) = \{\mu \in \mathbb{C} : 0 \in G_i(P(\mu))\} = \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| \le r_i(P(\mu))\}$, $i \in N$,

and the Gershgorin set of $P(\lambda)$

$G(P) = \{\mu \in \mathbb{C} : 0 \in G(P(\mu))\} = \bigcup_{i \in N} G_i(P)$.

It is easy to see that
$G_i(\hat{P})\setminus\{0\} = \{\mu \in \mathbb{C}\setminus\{0\} : 0 \in G_i(P(\mu^{-1}))\} = \{\mu \in \mathbb{C}\setminus\{0\} : \mu^{-1} \in G_i(P)\}$, $i \in N$,

and

$G(\hat{P})\setminus\{0\} = \{\mu \in \mathbb{C}\setminus\{0\} : 0 \in G(P(\mu^{-1}))\} = \{\mu \in \mathbb{C}\setminus\{0\} : \mu^{-1} \in G(P)\}$.

As in the case of the eigenvalues of $P(\lambda)$, we say that $\mu = \infty$ lies in $G_i(P)$ (resp., in $G(P)$) exactly when $0$ lies in $G_i(\hat{P})$ (resp., in $G(\hat{P})$). The Gershgorin circle theorem [8, 1] is directly generalized to the case of matrix polynomials.

Theorem 2.1. All the (finite and infinite) eigenvalues of the matrix polynomial $P(\lambda)$ lie in the Gershgorin set $G(P)$.

Proof. For any finite eigenvalue $\mu \in \sigma(P)$, it is apparent that $0 \in \sigma(P(\mu)) \subseteq G(P(\mu))$, and hence, $\mu$ lies in $G(P)$. Moreover, if $\mu = \infty$ is an eigenvalue of $P(\lambda)$, then $0 \in \sigma(\hat{P}(0)) \subseteq G(\hat{P}(0))$, and consequently, $\mu = \infty$ lies in $G(P)$.
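The membership test behind the definitions above is directly computable. The following Python sketch is our own illustration (the helper names `poly_eval` and `in_gershgorin_set` are not from the paper): it evaluates $P(\mu)$ and checks whether $0$ lies in some row Gershgorin disk of $P(\mu)$.

```python
import numpy as np

def poly_eval(coeffs, mu):
    """Evaluate P(mu) = A_0 + A_1*mu + ... + A_m*mu**m from the
    coefficient list coeffs = [A_0, A_1, ..., A_m]."""
    n = coeffs[0].shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k, A in enumerate(coeffs):
        P = P + A * mu**k
    return P

def in_gershgorin_set(coeffs, mu):
    """mu lies in G(P) iff 0 lies in some row Gershgorin disk of P(mu),
    i.e., |(P(mu))_{ii}| <= r_i(P(mu)) for some row i."""
    P = poly_eval(coeffs, mu)
    n = P.shape[0]
    return any(
        abs(P[i, i]) <= sum(abs(P[i, j]) for j in range(n) if j != i)
        for i in range(n)
    )
```

Since every finite eigenvalue $\mu$ makes $P(\mu)$ singular, `in_gershgorin_set` returns `True` at every finite eigenvalue, which is exactly the finite-eigenvalue part of the theorem.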
By the original proof of the Gershgorin circle theorem [8] (see also [1, Theorem 1.1]), it follows that if $\lambda_0$ is an eigenvalue of a matrix $A \in \mathbb{C}^{n\times n}$ and $x = [\,x_1\ x_2\ \cdots\ x_n\,]^T$ is an eigenvector of $A$ corresponding to $\lambda_0$, with $0 < |x_k| = \max\{|x_1|, |x_2|, \ldots, |x_n|\}$ for some $k \in N$, then $\lambda_0$ lies in the $k$-th row Gershgorin disk $G_k(A)$. As a consequence, if $\mu_0$ is an eigenvalue of a matrix polynomial $P(\lambda)$ and $x = [\,x_1\ x_2\ \cdots\ x_n\,]^T$ is an eigenvector of $P(\lambda)$ corresponding to $\mu_0$, with $0 < |x_k| = \max\{|x_1|, |x_2|, \ldots, |x_n|\}$ for some $k \in N$, then $0$ is an eigenvalue of the matrix $P(\mu_0)$ with $x$ as an associated eigenvector, and thus, $0 \in G_k(P(\mu_0))$, or equivalently, $\mu_0 \in G_k(P)$.

2.2. Basic properties. For an $n \times n$ matrix polynomial $P(\lambda)$ as in (1.1), it is easy to verify the following properties.

Proposition 2.2. Let $i \in N$.
(i) $G_i(P)$ is a closed subset of $\mathbb{C}$.
(ii) For any scalar $b \in \mathbb{C}\setminus\{0\}$, consider the matrix polynomials $Q_1(\lambda) = P(b\lambda)$, $Q_2(\lambda) = bP(\lambda)$ and $Q_3(\lambda) = P(\lambda + b)$. Then $G_i(Q_1) = b^{-1}G_i(P)$, $G_i(Q_2) = G_i(P)$ and $G_i(Q_3) = G_i(P) - b$.
(iii) If the $(i,i)$-th entry of $P(\lambda)$, $(P(\lambda))_{i,i}$, is (identically) zero, then $G_i(P) = \mathbb{C}$.
(iv) If all the coefficient matrices $A_0, A_1, \ldots, A_m$ have their $i$-th rows real, then $G_i(P)$ is symmetric with respect to the real axis.

Proof. (i) Consider a scalar $\mu \notin G_i(P)$. Then $|(P(\mu))_{i,i}| > r_i(P(\mu))$, and continuity implies that for any scalar $\hat{\mu}$ sufficiently close to $\mu$, $|(P(\hat{\mu}))_{i,i}| > r_i(P(\hat{\mu}))$, i.e., $\hat{\mu} \notin G_i(P)$. Thus, the set $\mathbb{C}\setminus G_i(P)$ is open.

(ii) Recall that $\mu \in G_i(P)$ if and only if $0 \in G_i(P(\mu))$, and observe that

$G_i(Q_1) = \{\mu \in \mathbb{C} : 0 \in G_i(P(b\mu))\} = \{\mu b^{-1} \in \mathbb{C} : 0 \in G_i(P(\mu))\}$,

$G_i(Q_2) = \{\mu \in \mathbb{C} : 0 \in G_i(bP(\mu))\} = \{\mu \in \mathbb{C} : 0 \in G_i(P(\mu))\}$

and

$G_i(Q_3) = \{\mu \in \mathbb{C} : 0 \in G_i(P(\mu + b))\} = \{\mu - b \in \mathbb{C} : 0 \in G_i(P(\mu))\}$.

(iii) If the $(i,i)$-th entry of $P(\lambda)$ is identically zero, then for any $\mu \in \mathbb{C}$, the origin is the center of the Gershgorin disk $G_i(P(\mu))$. Hence, the inequality $|(P(\mu))_{i,i}| \le r_i(P(\mu))$ is satisfied trivially for all $\mu \in \mathbb{C}$.
(iv) Suppose that all the coefficient matrices $A_0, A_1, \ldots, A_m$ have their $i$-th rows real. If $\mu \in G_i(P)$, then

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu^k\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu^k\right|$,

or equivalently, taking complex conjugates (the entries $(A_k)_{i,j}$ are real),

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\bar{\mu}^k\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\bar{\mu}^k\right|$.

This means that $\bar{\mu} \in G_i(P)$.
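Property (ii) of Proposition 2.2 can be checked pointwise: scaling the argument by $b$ scales the coefficient $A_k$ by $b^k$, and membership of $\mu$ in $G_i(Q_1)$ must coincide with membership of $b\mu$ in $G_i(P)$. A small sketch of this check (our own helpers, not code from the paper):

```python
import numpy as np

def in_Gi(coeffs, i, mu):
    """mu lies in the i-th row Gershgorin set of P:
    |(P(mu))_{ii}| <= r_i(P(mu))."""
    P = sum(A * mu**k for k, A in enumerate(coeffs))
    n = P.shape[0]
    return abs(P[i, i]) <= sum(abs(P[i, j]) for j in range(n) if j != i)

def scale_argument(coeffs, b):
    """Coefficients of Q1(lambda) = P(b*lambda): A_k is replaced by A_k*b**k."""
    return [A * b**k for k, A in enumerate(coeffs)]
```

By Proposition 2.2 (ii), $G_i(Q_1) = b^{-1}G_i(P)$, so `in_Gi(scale_argument(coeffs, b), i, mu)` should agree with `in_Gi(coeffs, i, b*mu)` at every test point.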
Proposition 2.3. For any $i \in N$, the set $\{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| < r_i(P(\mu))\}$ lies in the interior of $G_i(P)$; as a consequence,

$\partial G_i(P) \subseteq \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| = r_i(P(\mu))\} = \{\mu \in \mathbb{C} : 0 \in \partial G_i(P(\mu))\}$.

Proof. Consider a scalar $\mu \in \mathbb{C}$ such that $|(P(\mu))_{i,i}| < r_i(P(\mu))$. By continuity, there is a real $\varepsilon > 0$ such that for every $\hat{\mu} \in \mathbb{C}$ with $|\mu - \hat{\mu}| \le \varepsilon$, $|(P(\hat{\mu}))_{i,i}| \le r_i(P(\hat{\mu}))$. Thus, $\mu$ is an interior point of $G_i(P)$.

By applying the proof technique of Theorem 3.1 in [1], we obtain a necessary condition for the isolated points.

Theorem 2.4. If a scalar $\mu_0 \in \mathbb{C}$ is an isolated point of $G_i(P)$, then $\mu_0$ is a common root of all polynomials $(P(\lambda))_{i,j}$, $j \in N$; as a consequence, $\mu_0$ is an eigenvalue of $P(\lambda)$.

Proof. For the sake of contradiction, assume that $(P(\mu_0))_{i,i} \ne 0$. Since $\mu_0$ is an isolated point of $G_i(P)$, it belongs to the boundary $\partial G_i(P)$, and there exists an $\varepsilon > 0$ such that the disk $D(\mu_0, \varepsilon) = \{\lambda \in \mathbb{C} : |\lambda - \mu_0| \le \varepsilon\}$ contains no other point of $G_i(P)$. The $i$-th row Gershgorin set can be written as

$G_i(P) = \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| \le r_i(P(\mu))\} = \left\{\mu \in \mathbb{C} : \frac{r_i(P(\mu))}{|(P(\mu))_{i,i}|} \ge 1\right\} = \left\{\mu \in \mathbb{C} : \log\frac{r_i(P(\mu))}{|(P(\mu))_{i,i}|} \ge 0\right\}$.

Define the function

(2.2)  $\varphi(\lambda) = \log\frac{r_i(P(\lambda))}{|(P(\lambda))_{i,i}|} = \log\left(\left|\frac{(P(\lambda))_{i,1}}{(P(\lambda))_{i,i}}\right| + \cdots + \left|\frac{(P(\lambda))_{i,i-1}}{(P(\lambda))_{i,i}}\right| + \left|\frac{(P(\lambda))_{i,i+1}}{(P(\lambda))_{i,i}}\right| + \cdots + \left|\frac{(P(\lambda))_{i,n}}{(P(\lambda))_{i,i}}\right|\right)$, $\lambda \in D(\mu_0, \varepsilon)$,

and observe that it is subharmonic and satisfies the maximum principle [, 4]. By definition, $\varphi(\lambda)$ is equal to zero on the boundary of $G_i(P)$, nonnegative in the interior of $G_i(P)$ and negative elsewhere; see Proposition 2.3. Since $\mu_0 \in \partial G_i(P)$, it follows

$|(P(\mu_0))_{i,i}| = r_i(P(\mu_0)) = \sum_{j \in N\setminus\{i\}} |(P(\mu_0))_{i,j}|$.

So, the function $\varphi(\lambda)$ is equal to zero at the center $\mu_0$ of $D(\mu_0, \varepsilon)$ and negative in the rest of the disk. Since $\varphi(\lambda)$ satisfies the maximum principle, it should take its maximum value on the boundary of the disk; this is a contradiction.
As a consequence,

$0 = |(P(\mu_0))_{i,i}| = \sum_{j \in N\setminus\{i\}} |(P(\mu_0))_{i,j}|$,

and $\mu_0$ is a common root of all polynomials $(P(\lambda))_{i,j}$, $j \in N$.

Let $\Omega$ be a closed subset of $\mathbb{C}$, and let $\mu \in \Omega$. The local dimension of the point $\mu$ in $\Omega$ is defined as the limit $\lim_{h \to 0^+} \dim\{\Omega \cap D(\mu, h)\}$ ($h \in \mathbb{R}$, $h > 0$), where $\dim\{\cdot\}$ denotes the topological dimension [11]. Any isolated point in $\Omega$ has local dimension $0$. Any non-isolated point in $\Omega$ has local dimension $2$ if it belongs to the closure of the interior of $\Omega$, and $1$ otherwise.
Theorem 2.5. Any point of the $i$-th row Gershgorin set $G_i(P)$ has local dimension either $2$ or $0$ (isolated point); in other words, the Gershgorin set of $P(\lambda)$ cannot have parts with (nontrivial) curves.

Proof. For the sake of contradiction, assume that the local dimension of a point $\mu_0 \in G_i(P)$ is equal to $1$. Then $\mu_0 \in \partial G_i(P)$ and there is an $\varepsilon > 0$ such that $G_i(P) \cap \{\lambda \in \mathbb{C} : |\lambda - \mu_0| \le \varepsilon\} \subseteq \partial G_i(P)$. Thus, the function $\varphi(\lambda)$ in (2.2), defined in the disk $|\lambda - \mu_0| \le \varepsilon$, takes its maximum value in (infinitely many) interior points of the disk. Since $\varphi(\lambda)$ is subharmonic and satisfies the maximum principle, this is a contradiction. Hence, the local dimension of $\mu_0$ is either $2$ or $0$.

Next we obtain necessary and sufficient conditions for the row Gershgorin sets to be bounded. For any $i \in N$, we define the sets

(2.3)  $\beta_i = \{j \in N : (A_m)_{i,j} \ne 0\}$ and $\bar{\beta}_i = N\setminus\beta_i = \{j \in N : (A_m)_{i,j} = 0\}$.

It is worth mentioning that a scalar $\mu \in \mathbb{C}$ is a common root of all polynomials $(P(\lambda))_{i,1}, (P(\lambda))_{i,2}, \ldots, (P(\lambda))_{i,n}$ if and only if the $i$-th row of the matrix $P(\mu)$ is zero, or equivalently, if and only if $G_i(P(\mu)) = \{0\}$. By Theorem 2.4, if the origin is an isolated point of the row Gershgorin set $G_i(P)$, then the $i$-th row of the coefficient matrix $A_0$ is zero. As a consequence (applying this observation to $\hat{P}$), if $\beta_i \ne \emptyset$, then the origin is not an isolated point of $G_i(\hat{P})$, i.e., $G_i(P)$ is not the union of a bounded set and $\{\infty\}$.

Theorem 2.6. Suppose that for an $i \in N$, $\beta_i \ne \emptyset$.
(i) If $i \in \beta_i$, then the $i$-th row Gershgorin set $G_i(P)$ is unbounded if and only if $0 \in G_i(A_m)$.
(ii) If $i \in \bar{\beta}_i$, then the $i$-th row Gershgorin set $G_i(P)$ is unbounded and $0 \in G_i(A_m)$.

Proof. (i) Suppose that $i \in \beta_i$, i.e., $(A_m)_{i,i} \ne 0$. Let $G_i(P)$ be unbounded. Since $\beta_i \ne \emptyset$, the origin is not an isolated point of $G_i(\hat{P})$ and there is a sequence $\{\mu_l\}_{l \in \mathbb{N}}$ in $G_i(P)\setminus\{0\}$ such that $|\mu_l| \to +\infty$. Then, for every $l \in \mathbb{N}$,

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^k\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^k\right|$,

or

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^{k-m}\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^{k-m}\right|$.

As $l \to +\infty$, it follows

$|(A_m)_{i,i}| \le \sum_{j \in \beta_i\setminus\{i\}} |(A_m)_{i,j}|$,

and hence, $0 \in G_i(A_m)$.
For the converse, suppose that $0 \in G_i(A_m)$ (or equivalently, $|(A_m)_{i,i}| \le r_i(A_m)$). Then $0 \in G_i(\hat{P})$ and, by definition, $\infty \in G_i(P)$ (where by hypothesis, $0$ is a non-isolated point of $G_i(\hat{P})$, and hence, $\infty$ is a non-isolated point of $G_i(P)$).

(ii) Suppose that $i \in \bar{\beta}_i$, i.e., $(A_m)_{i,i} = 0$.
It is clear that $0 \in G_i(A_m)$, and

$G_i(P)\setminus\{0\} = \left\{\mu \in \mathbb{C}\setminus\{0\} : \left|\sum_{k=0}^{m-1} (A_k)_{i,i}\,\mu^k\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu^k\right|\right\} = \left\{\mu \in \mathbb{C}\setminus\{0\} : \left|\sum_{k=0}^{m-1} \frac{(A_k)_{i,i}\,\mu^k}{\mu^{m-1}}\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} \frac{(A_k)_{i,j}\,\mu^k}{\mu^{m-1}}\right|\right\}$,

where at least one of the coefficients $(A_m)_{i,j}$, $j \in N\setminus\{i\}$, is nonzero. As a consequence, for large enough $|\mu|$, $\mu$ lies in $G_i(P)$. Thus, there exists a real number $M > 0$ such that every scalar $\mu \in \mathbb{C}$ with $|\mu| \ge M$ lies in $G_i(P)$, i.e., $\{\mu \in \mathbb{C} : |\mu| \ge M\} \subseteq G_i(P)$.

Remark 2.7. Suppose that $0 \in G_i(A_m)$, i.e., $0 \in G_i(\hat{P})$. If the origin is a non-isolated point of $G_i(\hat{P})$, then there exists a sequence $\{\mu_l\}_{l \in \mathbb{N}} \subset G_i(\hat{P})\setminus\{0\}$ that converges to $0$. This means that the sequence $\{\mu_l^{-1}\}_{l \in \mathbb{N}} \subset G_i(P)$ is unbounded. Hence, the $i$-th row Gershgorin set $G_i(P)$ is also unbounded and does not have the infinity as an isolated point. On the other hand, if the origin is an isolated point of $G_i(\hat{P})$ (which yields $\beta_i = \emptyset$), then $G_i(P)\setminus\{0\} = \{\mu \in \mathbb{C}\setminus\{0\} : \mu^{-1} \in G_i(\hat{P})\setminus\{0\}\} \cup \{\infty\}$, where the set $\{\mu \in \mathbb{C}\setminus\{0\} : \mu^{-1} \in G_i(\hat{P})\setminus\{0\}\}$ is bounded.

Remark 2.8. Suppose that $i \in \beta_i$ and the origin is an interior point of $G_i(A_m)$ (or equivalently, $(A_m)_{i,i} \ne 0$ and $|(A_m)_{i,i}| - r_i(A_m) < 0$). Then, for any sequence $\{\mu_l\}_{l \in \mathbb{N}}$ in $\mathbb{C}\setminus\{0\}$ such that $|\mu_l| \to +\infty$, and for sufficiently large $l$,

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^{k-m}\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^{k-m}\right|$,

or

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^k\right| \le \sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^k\right|$,

i.e., $\mu_l \in G_i(P)$. As a consequence, $G_i(P)$ is unbounded, and there exists a real number $M > 0$ such that $\{\mu \in \mathbb{C} : |\mu| \ge M\} \subseteq G_i(P)$.

The next result, as well as Theorem 3.7 at the end of the next section, is a special case of Theorem 3.1 in the work of D. Bindel and A. Hood [1] (see also Theorem in [13]). For clarity, we give an elementary proof.

Theorem 2.9. Suppose that $A_m$ has no zero rows, and the Gershgorin set $G(P)$ is bounded. Then the number of the connected components of $G(P)$ is less than or equal to $nm$.
Moreover, if $G$ is a connected component of $G(P)$ constructed by $\xi$ connected components of the row Gershgorin sets, then the total number of roots of the polynomials $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$ in $G$ is at least $\xi$ and it is equal to the number of eigenvalues of $P(\lambda)$ in $G$, counting multiplicities.

Proof. Since $G(P)$ is bounded, Theorem 2.6 (i) implies that for every $i \in N$, $0 \notin G_i(A_m)$, $i \in \beta_i$ and the polynomial $(P(\lambda))_{i,i}$ is of degree $m$ (i.e., it has exactly $m$ roots, counting multiplicities). For all coefficient matrices of $P(\lambda)$, $A_0, A_1, \ldots, A_m$, consider the splitting

$A_j = D_j + F_j$, $j = 0, 1, \ldots, m$,
where $D_j$ is the diagonal part of $A_j$ (i.e., $D_j$ is a diagonal matrix and its diagonal coincides with the diagonal of $A_j$) for every $j = 0, 1, \ldots, m$. Define also the family of matrix polynomials

$P_t(\lambda) = (D_m + tF_m)\lambda^m + (D_{m-1} + tF_{m-1})\lambda^{m-1} + \cdots + (D_1 + tF_1)\lambda + (D_0 + tF_0) = D_m\lambda^m + D_{m-1}\lambda^{m-1} + \cdots + D_1\lambda + D_0 + t\left(F_m\lambda^m + F_{m-1}\lambda^{m-1} + \cdots + F_1\lambda + F_0\right)$

for $t \in [0, 1]$. We observe that

(2.4)  $G(P_{t_1}) \subseteq G(P_{t_2})$, $0 \le t_1 \le t_2 \le 1$.

Indeed, if $0 < t_1 \le t_2 \le 1$ and a scalar $\mu \in \mathbb{C}$ lies in $G_i(P_{t_1})$ for some $i \in N$, then

$|(P_{t_2}(\mu))_{i,i}| = |(P_{t_1}(\mu))_{i,i}| \le r_i(P_{t_1}(\mu)) = \frac{t_1}{t_2}\,r_i(P_{t_2}(\mu)) \le r_i(P_{t_2}(\mu))$,

and thus, $\mu \in G_i(P_{t_2})$. Moreover, it is apparent that if $0 = t_1 \le t_2 \le 1$ and $\mu \in G_i(P_0)$, then $\mu$ is a root of $(P_0(\lambda))_{i,i}$ and $|(P_{t_2}(\mu))_{i,i}| = |(P_0(\mu))_{i,i}| = 0 = r_i(P_0(\mu)) \le r_i(P_{t_2}(\mu))$.

By Proposition 2.2 (i), Theorem 2.6 and (2.4), it follows that the family $G(P_t)$, $t \in [0, 1]$, is a family of nondecreasing compact sets (with respect to $t \in [0, 1]$). Furthermore, $G(P_1) = G(P)$, and $G(P_0)$ coincides with the roots of the polynomials $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$ (which are eigenvalues of the diagonal matrix polynomial $D_m\lambda^m + D_{m-1}\lambda^{m-1} + \cdots + D_1\lambda + D_0$). Keeping in mind that all polynomials $\det P_t(\lambda)$, $t \in [0, 1]$, are of degree $nm$, the continuity of the eigenvalues of $P_t(\lambda)$ in $t \in [0, 1]$ implies that each root of the polynomials $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$ is connected to an eigenvalue of $P(\lambda)$ with a continuous curve in $G(P)$. Hence, any (compact) connected component $G$ of $G(P)$ constructed by $\xi$ connected components of the row Gershgorin sets contains at least $\xi$ roots of the polynomials $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$, and each root of $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$ in $G$ is connected to an eigenvalue of $P_t(\lambda)$, $t \in [0, 1]$, with a continuous curve in $G$.

2.3. Examples. The Gershgorin set is simple and gives better results than known bounds based on norms.
In the next example, we compare it to some bounds given in [1]; in particular, we draw and compare the Gershgorin set to the bounds given by Lemmas 2.2 and 2.3 of [1]. For a matrix polynomial $P(\lambda)$ as in (1.1), consider the $n \times n$ matrices $U_i = A_m^{-1}A_i$ ($i = 0, 1, \ldots, m-1$) and $L_i = A_0^{-1}A_i$ ($i = 1, 2, \ldots, m$). Then, by Lemma 2.2 of [1], every eigenvalue $\lambda$ of $P(\lambda)$ satisfies

(2.5)  $\left(1 + \sum_{j=1}^{m} \|L_j\|_p\right)^{-1} \le |\lambda| \le 1 + \sum_{j=0}^{m-1} \|U_j\|_p$, $1 \le p \le \infty$.

Define also the $n \times nm$ matrices $U = [\,U_0\ U_1\ \cdots\ U_{m-1}\,]$ and $L = [\,L_m\ L_{m-1}\ \cdots\ L_1\,]$. Then, by Lemma 2.3 of [1], we have the following bounds:

(2.6)  $\left(\max\left\{\|L_m\|_1,\ 1 + \max_{i=1,2,\ldots,m-1}\|L_i\|_1\right\}\right)^{-1} \le |\lambda| \le \max\left\{\|U_0\|_1,\ 1 + \max_{i=1,2,\ldots,m-1}\|U_i\|_1\right\}$,

(2.7)  $\left(\max\{\|L\|_\infty, 1\}\right)^{-1} \le |\lambda| \le \max\{\|U\|_\infty, 1\}$,

and

(2.8)  $\|I + LL^*\|_2^{-1/2} \le |\lambda| \le \|I + UU^*\|_2^{1/2}$.
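The bound (2.5) is straightforward to evaluate. The sketch below is our own illustration of it (the function name `eigenvalue_norm_bounds` is ours), assuming $A_m$ and $A_0$ are invertible so that the matrices $U_i$ and $L_i$ exist:

```python
import numpy as np

def eigenvalue_norm_bounds(coeffs, p=2):
    """Bounds in the spirit of (2.5): with U_i = A_m^{-1} A_i and
    L_i = A_0^{-1} A_i, every eigenvalue lambda of P satisfies
    (1 + sum_j ||L_j||_p)^(-1) <= |lambda| <= 1 + sum_j ||U_j||_p."""
    m = len(coeffs) - 1
    Am_inv = np.linalg.inv(coeffs[m])
    A0_inv = np.linalg.inv(coeffs[0])
    upper = 1 + sum(np.linalg.norm(Am_inv @ coeffs[j], p) for j in range(m))
    lower = 1 / (1 + sum(np.linalg.norm(A0_inv @ coeffs[j], p)
                         for j in range(1, m + 1)))
    return lower, upper
```

The lower bound is the upper bound applied to the reverse polynomial $\hat{P}(\lambda)$, whose eigenvalues are the reciprocals of those of $P(\lambda)$; this is why both bounds come out of the same formula.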
Figure 1: Comparing the Gershgorin set to the bounds of Lemmas 2.2 and 2.3 of [1]. (a) The set G(P) and the bounds of [1, Lemma 2.2] for norm-2. (b) The set G(P) and the bounds of [1, Lemma 2.3] for norm-2. (c) The set G(P) and the bounds of [1, Lemma 2.3] for norm-∞. (d) The set G(P). (e) The set G_1(P). (f) The set G_2(P).

Example 2.10. Consider the (real) matrix polynomial

P (λ) = [ 5λ 6 + λ λ ; λ λ λ ]

The upper bound given by (2.5) for norm-2 can be seen in Figure 1 (a) (dashed line), compared to the Gershgorin set $G(P)$. The lower bound of (2.5) is equal to 0.9 and, since it is relatively small, its illustration is omitted in the figure. The lower and upper bounds given by (2.6)-(2.8) are 0.181 and 3.65 for norm-1, 0.1111 and 3.5 for norm-2, and 0.1394 and 0.678 for norm-∞. The upper bounds for norm-2 and norm-∞, compared to the Gershgorin set $G(P)$, are illustrated in parts (b) and (c) of the figure, respectively. The sets $G(P)$, $G_1(P)$ (with exactly six connected components) and $G_2(P)$ (with exactly six connected components) are given (magnified) in parts (d), (e) and (f) of Figure 1, respectively. Here and elsewhere, the eigenvalues are marked with dots. Note that Proposition 2.2 (iv) (symmetry with respect to the real axis), Theorem 2.6 (boundedness conditions) and Theorem 2.9 (distribution of eigenvalues in the connected components) are clearly verified.
Figure 2: The Gershgorin set of Q(λ) = Iλ^m − A. (a) The set G(Q). (b) The sets G_1(Q) and G_2(Q). (c) The set G_3(Q).

For a matrix $A \in \mathbb{C}^{n\times n}$, consider the matrix polynomial $Q(\lambda) = I\lambda^m - A$. Then, for any $i \in N$,

$G_i(Q) = \{\mu \in \mathbb{C} : |\mu^m - (A)_{i,i}| \le r_i(A)\} = \{\mu \in \mathbb{C} : \mu^m \in G_i(A)\}$

is bounded and its shape depends on the location of the origin with respect to $G_i(A)$; in particular, we have three cases, illustrated in the next example.

Example 2.11. Consider the (complex) matrix polynomial

Q(λ) = Iλ^{10} − A = λ^{10}I − [ 3.5i 3i ; i ; .5 − i 7i 5 + 3i .5 + i ]

The Gershgorin set $G(Q) = G_1(Q) \cup G_2(Q) \cup G_3(Q)$, the union of the sets $G_1(Q)$ and $G_2(Q)$, and the set $G_3(Q)$ are illustrated in parts (a), (b) and (c) of Figure 2, respectively. Observe that $0 \notin G_1(A)$ (since $|3.5i| > |3i|$), and thus, $G_1(Q)$ consists of ten connected components located symmetrically around the origin. Concerning the second row, notice that $|(A)_{2,2}| = r_2(A)$, i.e., $0 \in \partial G_2(A)$, and hence, the set $G_2(Q)$ is connected and daisy shaped. Finally, in the third row, we have $|5 + 3i| < r_3(A)$ (i.e., the origin lies in the interior of $G_3(A)$), and $G_3(Q)$ is connected, with smooth boundary.

3. The generalized Gershgorin set. In this section, we extend the notion of the generalized Gershgorin set (or the A-Ostrowski set) of matrices to the case of matrix polynomials.

3.1. Definitions. For a square complex matrix $A \in \mathbb{C}^{n\times n}$, denote $c_i(A) = r_i(A^T)$, $i \in N$, and for any $a \in [0, 1]$, define the disks

$A_{i,a}(A) = \{\mu \in \mathbb{C} : |\mu - (A)_{i,i}| \le r_i(A)^a c_i(A)^{1-a}\}$, $i \in N$.

Then the spectrum $\sigma(A)$ of the matrix $A$ lies in the union $A_a(A) = \bigcup_{i \in N} A_{i,a}(A)$, and consequently, it lies in the intersection $A(A) = \bigcap_{a \in [0,1]} A_a(A)$ [18, 1], which is known as the generalized Gershgorin set (or the A-Ostrowski set) of $A$.

Similarly, for a matrix polynomial $P(\lambda)$ as in (1.1), we define

$A_{i,a}(P) = \{\mu \in \mathbb{C} : 0 \in A_{i,a}(P(\mu))\} = \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| \le r_i(P(\mu))^a c_i(P(\mu))^{1-a}\}$
for any $i \in N$ and $a \in [0, 1]$, and the generalized Gershgorin set of $P(\lambda)$

$A_a(P) = \{\mu \in \mathbb{C} : 0 \in A_a(P(\mu))\} = \bigcup_{i \in N} A_{i,a}(P)$, $a \in [0, 1]$,

$A(P) = \{\mu \in \mathbb{C} : 0 \in A(P(\mu))\} = \bigcap_{a \in [0,1]} A_a(P)$.

As before, we say that $\mu = \infty$ lies in $A_{i,a}(P)$ (resp., in $A_a(P)$, or in $A(P)$) exactly when $0$ lies in $A_{i,a}(\hat{P})$ (resp., in $A_a(\hat{P})$, or in $A(\hat{P})$).

Theorem 3.1. The generalized Gershgorin set of a matrix polynomial $P(\lambda)$ is a subset of the Gershgorin set of $P(\lambda)$ that contains all the (finite and infinite) eigenvalues of $P(\lambda)$.

Proof. By definition, a scalar $\mu \in \mathbb{C}$ lies in $A(P)$ (resp., in $\sigma(P)$, or in $G(P)$) if and only if $0$ lies in $A(P(\mu))$ (resp., in $\sigma(P(\mu))$, or in $G(P(\mu))$). Since $\sigma(P(\mu)) \subseteq A(P(\mu)) \subseteq G(P(\mu))$ and $\sigma(\hat{P}(\mu)) \subseteq A(\hat{P}(\mu)) \subseteq G(\hat{P}(\mu))$ [18, 1], the proof follows readily.

3.2. Basic properties. Let $P(\lambda)$ be an $n \times n$ matrix polynomial as in (1.1). It is easy to verify the following properties.

Proposition 3.2. Let $a \in [0, 1]$ and $i \in N$.
(i) $A_{i,a}(P)$ is a closed subset of $\mathbb{C}$.
(ii) For any scalar $b \in \mathbb{C}\setminus\{0\}$, consider the matrix polynomials $Q_1(\lambda) = P(b\lambda)$, $Q_2(\lambda) = bP(\lambda)$ and $Q_3(\lambda) = P(\lambda + b)$. Then $A_{i,a}(Q_1) = b^{-1}A_{i,a}(P)$, $A_{i,a}(Q_2) = A_{i,a}(P)$ and $A_{i,a}(Q_3) = A_{i,a}(P) - b$.
(iii) If the $(i,i)$-th entry of $P(\lambda)$, $(P(\lambda))_{i,i}$, is (identically) zero, then $A_{i,a}(P) = \mathbb{C}$.
(iv) If $a \in (0, 1)$, and all the coefficient matrices $A_0, A_1, \ldots, A_m$ have their $i$-th rows and $i$-th columns real, then $A_{i,a}(P)$ is symmetric with respect to the real axis (for $a = 1$ or $0$, see Proposition 2.2 (iv) applied to $P(\lambda)$ or its transpose, respectively).

Proof. The proof is similar to the proof of Proposition 2.2.

Proposition 2.3 for the row Gershgorin sets can be readily generalized for $A_{i,a}(P)$.

Proposition 3.3. For any $a \in [0, 1]$ and $i \in N$, the set $\{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| < r_i(P(\mu))^a c_i(P(\mu))^{1-a}\}$ lies in the interior of $A_{i,a}(P)$; as a consequence,

$\partial A_{i,a}(P) \subseteq \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| = r_i(P(\mu))^a c_i(P(\mu))^{1-a}\} = \{\mu \in \mathbb{C} : 0 \in \partial A_{i,a}(P(\mu))\}$.
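For a fixed $a$, membership in $A_a(P)$ is as cheap to test as membership in the Gershgorin set: only the deleted column sums $c_i$ are needed in addition to the row sums. A small sketch of the test (the function name `in_generalized_gershgorin` is ours, not the paper's):

```python
import numpy as np

def in_generalized_gershgorin(coeffs, mu, a):
    """mu lies in A_a(P) iff, for some i,
    |(P(mu))_{ii}| <= r_i(P(mu))**a * c_i(P(mu))**(1-a),
    where r_i and c_i are the deleted row and column sums of P(mu)."""
    P = sum(A * mu**k for k, A in enumerate(coeffs))
    n = P.shape[0]
    for i in range(n):
        r = sum(abs(P[i, j]) for j in range(n) if j != i)
        c = sum(abs(P[j, i]) for j in range(n) if j != i)
        if abs(P[i, i]) <= r**a * c**(1 - a):
            return True
    return False
```

By Theorem 3.1, this test returns `True` at every finite eigenvalue of $P(\lambda)$ for every $a \in [0, 1]$; intersecting over a grid of values of $a$ gives an estimate of $A(P)$.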
Theorems.4 and.5 can also be extended. Theorem 3.4. (For a = 1 or, see Theorem.4 applied to P (λ) or its transpose, respectively.) Let a (, 1) and i N. If µ is an isolated point of A i,a (P ), then µ is a common root of all polynomials (P (λ)) i,j, j N, or a common root of all polynomials (P (λ)) p,i, p N ; as a consequence, µ is an eigenvalue of P (λ). Proof. For the sake of contradiction, assume that (P (µ)) i,i. Since µ is an isolated point, it belongs to the boundary of A i,a (P ), and there exists an ε > such that the disk D(µ, ε) = λ C : λ µ ε}
contains no other point of $A_{i,a}(P)$. The set $A_{i,a}(P)$ can be written as

$A_{i,a}(P) = \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}| \le r_i(P(\mu))^a c_i(P(\mu))^{1-a}\} = \left\{\mu \in \mathbb{C} : \frac{r_i(P(\mu))^a c_i(P(\mu))^{1-a}}{|(P(\mu))_{i,i}|} \ge 1\right\} = \left\{\mu \in \mathbb{C} : a\log\frac{r_i(P(\mu))}{|(P(\mu))_{i,i}|} + (1-a)\log\frac{c_i(P(\mu))}{|(P(\mu))_{i,i}|} \ge 0\right\}$.

Consider the function

(3.9)  $\phi(\lambda) = a\log\frac{r_i(P(\lambda))}{|(P(\lambda))_{i,i}|} + (1-a)\log\frac{c_i(P(\lambda))}{|(P(\lambda))_{i,i}|}$, $\lambda \in D(\mu_0, \varepsilon)$,

and observe that it is subharmonic (as a sum of subharmonic functions) and satisfies the maximum principle [, 4]. By definition, $\phi(\lambda)$ is equal to zero on the boundary of $A_{i,a}(P)$, nonnegative in the interior of $A_{i,a}(P)$ and negative elsewhere; see Proposition 3.3. Furthermore, since $\mu_0 \in \partial A_{i,a}(P)$, it follows that the function $\phi(\lambda)$ is equal to zero at the center $\mu_0$ of $D(\mu_0, \varepsilon)$ and negative in the rest of the disk; this is a contradiction because $\phi(\lambda)$ satisfies the maximum principle. As a consequence,

$0 = |(P(\mu_0))_{i,i}| = \left(\sum_{j \in N\setminus\{i\}} |(P(\mu_0))_{i,j}|\right)^a \left(\sum_{p \in N\setminus\{i\}} |(P(\mu_0))_{p,i}|\right)^{1-a}$,

and the result follows readily.

Theorem 3.5. Let $a \in [0, 1]$ and $i \in N$. Any point of the $i$-th generalized Gershgorin set $A_{i,a}(P)$ has local dimension either $2$ or $0$ (isolated point); in other words, the generalized Gershgorin set of $P(\lambda)$ cannot have parts with (nontrivial) curves.

Proof. For $a = 1$ or $0$, see Theorem 2.5 applied to $P(\lambda)$ or its transpose, respectively. For $a \in (0, 1)$, the proof is similar to the proof of Theorem 2.5, using the (subharmonic) function $\phi(\lambda)$ in (3.9) instead of $\varphi(\lambda)$ in (2.2).

Recalling the definition of $\beta_i$ and $\bar{\beta}_i$ in (2.3), we define similarly the sets

(3.10)  $\gamma_j = \{p \in N : (A_m)_{p,j} \ne 0\}$ and $\bar{\gamma}_j = N\setminus\gamma_j = \{p \in N : (A_m)_{p,j} = 0\}$,

and we obtain (as in Theorem 2.6) necessary and sufficient conditions for the sets $A_{i,a}(P)$ to be bounded. Note that, by Theorem 3.4, if $\mu \in \mathbb{C}$ is an isolated point of $A_{i,a}(P)$ for some $a \in (0, 1)$ and $i \in N$, then $\mu$ is a common root of all polynomials $(P(\lambda))_{i,j}$, $j \in N$, or a common root of all polynomials $(P(\lambda))_{p,i}$, $p \in N$.
As a consequence, if the sets $\beta_i$ and $\gamma_i$ are nonempty, then the origin is not an isolated point of $A_{i,a}(\hat{P})$, i.e., $A_{i,a}(P)$ is not the union of a bounded set and $\{\infty\}$.

Theorem 3.6. (For $a = 1$ or $0$, see Theorem 2.6 applied to $P(\lambda)$ or its transpose, respectively.) Let $a \in (0, 1)$, and suppose that for an $i \in N$, the sets $\beta_i$ and $\gamma_i$ are nonempty.
(i) If $i \in \beta_i$ (thus, $i \in \gamma_i$), then $A_{i,a}(P)$ is unbounded if and only if $0 \in A_{i,a}(A_m)$.
(ii) If $i \in \bar{\beta}_i$ (thus, $i \in \bar{\gamma}_i$), then $A_{i,a}(P)$ is unbounded and $0 \in A_{i,a}(A_m)$.
Proof. (i) Suppose that $i \in \beta_i$, i.e., $(A_m)_{i,i} \ne 0$. Let $A_{i,a}(P)$ be unbounded. Since the sets $\beta_i$ and $\gamma_i$ are nonempty, the origin is not an isolated point of $A_{i,a}(\hat{P})$ and there is a sequence $\{\mu_l\}_{l \in \mathbb{N}}$ in $A_{i,a}(P)\setminus\{0\}$ such that $|\mu_l| \to +\infty$. Then, for every $l \in \mathbb{N}$,

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^k\right| \le \left(\sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^k\right|\right)^a \left(\sum_{p \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{p,i}\,\mu_l^k\right|\right)^{1-a}$,

or

$\left|\sum_{k=0}^{m} (A_k)_{i,i}\,\mu_l^{k-m}\right| \le \left(\sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu_l^{k-m}\right|\right)^a \left(\sum_{p \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{p,i}\,\mu_l^{k-m}\right|\right)^{1-a}$.

As $l \to +\infty$, it follows

$|(A_m)_{i,i}| \le \left(\sum_{j \in \beta_i\setminus\{i\}} |(A_m)_{i,j}|\right)^a \left(\sum_{p \in \gamma_i\setminus\{i\}} |(A_m)_{p,i}|\right)^{1-a}$,

and hence, $0 \in A_{i,a}(A_m)$. For the converse, suppose that $0 \in A_{i,a}(A_m)$ (or equivalently, $|(A_m)_{i,i}| \le r_i(A_m)^a c_i(A_m)^{1-a}$). Then $0 \in A_{i,a}(\hat{P})$ and, by definition, $\infty \in A_{i,a}(P)$.

(ii) Suppose that $i \in \bar{\beta}_i$, i.e., $(A_m)_{i,i} = 0$. It is clear that $0 \in A_{i,a}(A_m)$. Moreover,

$A_{i,a}(P)\setminus\{0\} = \left\{\mu \in \mathbb{C}\setminus\{0\} : \left|\sum_{k=0}^{m-1} (A_k)_{i,i}\,\mu^k\right| \le \left(\sum_{j \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{i,j}\,\mu^k\right|\right)^a \left(\sum_{p \in N\setminus\{i\}} \left|\sum_{k=0}^{m} (A_k)_{p,i}\,\mu^k\right|\right)^{1-a}\right\}$,

where, after dividing both sides by $|\mu|^{m-1}$, at least one of the coefficients $(A_m)_{i,j}$, $j \in N\setminus\{i\}$, and one of the coefficients $(A_m)_{p,i}$, $p \in N\setminus\{i\}$, are nonzero. As a consequence, for large enough $|\mu|$, $\mu$ lies in $A_{i,a}(P)$. Thus, there exists a real number $M > 0$ such that every scalar $\mu \in \mathbb{C}$ with $|\mu| \ge M$ lies in $A_{i,a}(P)$, i.e., $\{\mu \in \mathbb{C} : |\mu| \ge M\} \subseteq A_{i,a}(P)$.

As mentioned in the paragraph before Theorem 2.9, we conclude this section with a result which is a special case of Theorem 3.1 of [1].

Theorem 3.7. (For $a = 1$ or $0$, see Theorem 2.9 applied to $P(\lambda)$ or its transpose, respectively.) Let $a \in (0, 1)$, and suppose that for every $i \in N$, the sets $\beta_i$ and $\gamma_i$ are nonempty. If $A_a(P)$ is bounded, then the number of connected components of $A_a(P)$ is less than or equal to $nm$. Moreover, if $G$ is a connected component of $A_a(P)$ constructed by $\xi$ connected components of the sets $A_{1,a}(P), A_{2,a}(P), \ldots, A_{n,a}(P)$, then the total number of roots of the polynomials $(P(\lambda))_{1,1}, (P(\lambda))_{2,2}, \ldots, (P(\lambda))_{n,n}$ in $G$ is at least $\xi$ and it is equal to the number of eigenvalues of $P(\lambda)$ in $G$, counting multiplicities.
Figure 3: Comparing the Gershgorin set and the generalized Gershgorin set. (a) The Gershgorin set. (b) The generalized Gershgorin set.

Proof. Since $A_a(P)$ is bounded, Theorem 3.6 (i) implies that for every $i \in N$, $i \in \beta_i \cap \gamma_i$ and the polynomial $(P(\lambda))_{i,i}$ is of degree $m$ (i.e., it has exactly $m$ roots, counting multiplicities). Based on the continuity of the eigenvalues of a matrix polynomial with respect to its matrix coefficients and the boundedness of $A_a(P)$, one can complete the proof, following the arguments of the proof of Theorem 2.9.

3.3. Examples.

Example 3.8. Consider the matrix polynomial of Example 2.10. In Figure 3, one can see the improvement of the Gershgorin set due to the use of column sums in addition to row sums.¹ Moreover, Proposition 3.2 (iv) (symmetry with respect to the real axis), Theorem 3.6 (boundedness conditions) and Theorem 3.7 (distribution of eigenvalues in the connected components) are verified.

Example 3.9. Consider the matrix polynomial

P (λ) = [ 4.λ i 4λ ; λ 3 λ + 4. ; λ + i λ + λ 1 ]

In Figure 4, the Gershgorin and generalized Gershgorin sets of $P(\lambda)$ are illustrated. It is worth mentioning that $\mu_1$ and $\mu_2$ are isolated points of $A(P)$, and the third columns of the matrices $P(\mu_1)$ and $P(\mu_2)$ are zero; see Theorems 2.4 and 3.4.

¹ In Figures 3 (b) and 4 (b), the generalized Gershgorin set $A(P)$ is estimated by the intersection of the sets $A_a(P)$, $a = 0, 0.05, 0.1, \ldots, 0.95, 1$.
Figure 4: Comparing the Gershgorin set and the generalized Gershgorin set. (a) The Gershgorin set. (b) The generalized Gershgorin set.

4. The Brauer set. In this section, we introduce the Brauer set of matrix polynomials, similarly to the Gershgorin set and the generalized Gershgorin set.

4.1. Definitions. The Brauer set of a matrix $A \in \mathbb{C}^{n\times n}$ is defined as

$B(A) = \bigcup_{i \in N} \bigcup_{j=1}^{i-1} B_{i,j}(A)$,

where $B_{i,j}(A) = \{\mu \in \mathbb{C} : |\mu - (A)_{i,i}||\mu - (A)_{j,j}| \le r_i(A)r_j(A)\}$. The Brauer set consists of $\frac{n(n-1)}{2}$ Cassini ovals, lies in the Gershgorin set, and contains all the eigenvalues of the matrix [3, 1].

For a matrix polynomial $P(\lambda)$ as in (1.1) and $i, j \in N$ with $j < i$, we define the $(i,j)$-th Brauer set of $P(\lambda)$

$B_{i,j}(P) = \{\mu \in \mathbb{C} : 0 \in B_{i,j}(P(\mu))\} = \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}||(P(\mu))_{j,j}| \le r_i(P(\mu))r_j(P(\mu))\}$,

and the Brauer set of $P(\lambda)$

$B(P) = \{\mu \in \mathbb{C} : 0 \in B(P(\mu))\} = \bigcup_{i \in N} \bigcup_{j=1}^{i-1} B_{i,j}(P)$.

We also say that $\mu = \infty$ lies in $B_{i,j}(P)$ (resp., in $B(P)$) exactly when $0$ lies in $B_{i,j}(\hat{P})$ (resp., in $B(\hat{P})$). The Brauer theorem for matrices [3, 1] generalizes to the case of matrix polynomials, just as the Gershgorin circle theorem does.

Theorem 4.1. The Brauer set of a matrix polynomial $P(\lambda)$ is a subset of the Gershgorin set of $P(\lambda)$ that contains all the (finite and infinite) eigenvalues of $P(\lambda)$.

Proof. By definition, a scalar $\mu \in \mathbb{C}$ lies in $B(P)$ (resp., in $\sigma(P)$, or in $G(P)$) if and only if $0$ lies in $B(P(\mu))$ (resp., in $\sigma(P(\mu))$, or in $G(P(\mu))$). Since $\sigma(P(\mu)) \subseteq B(P(\mu)) \subseteq G(P(\mu))$ and $\sigma(\hat{P}(\mu)) \subseteq B(\hat{P}(\mu)) \subseteq G(\hat{P}(\mu))$ [3, 1], the proof follows readily.
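As with the previous sets, membership of a point $\mu$ in $B(P)$ reduces to checking the Cassini-oval inequality on the pairs of rows of $P(\mu)$. The following sketch is our own illustration of that test (the function name `in_brauer_set` is ours):

```python
import numpy as np

def in_brauer_set(coeffs, mu):
    """mu lies in B(P) iff, for some pair i > j,
    |(P(mu))_{ii}| * |(P(mu))_{jj}| <= r_i(P(mu)) * r_j(P(mu))."""
    P = sum(A * mu**k for k, A in enumerate(coeffs))
    n = P.shape[0]
    r = [sum(abs(P[i, j]) for j in range(n) if j != i) for i in range(n)]
    return any(
        abs(P[i, i]) * abs(P[j, j]) <= r[i] * r[j]
        for i in range(n) for j in range(i)
    )
```

Note that a root of a diagonal entry $(P(\lambda))_{i,i}$ always lies in $B(P)$, since the left-hand product vanishes for every pair containing row $i$.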
4.2. Basic properties. Let $P(\lambda)$ be an $n \times n$ matrix polynomial as in (1.1). It is easy to verify the following properties.

Proposition 4.2. Consider two integers $i, j \in N$ with $j < i$.
(i) $B_{i,j}(P)$ is a closed subset of $\mathbb{C}$.
(ii) For any scalar $b \in \mathbb{C}\setminus\{0\}$, consider the matrix polynomials $Q_1(\lambda) = P(b\lambda)$, $Q_2(\lambda) = bP(\lambda)$ and $Q_3(\lambda) = P(\lambda + b)$. Then $B_{i,j}(Q_1) = b^{-1}B_{i,j}(P)$, $B_{i,j}(Q_2) = B_{i,j}(P)$ and $B_{i,j}(Q_3) = B_{i,j}(P) - b$.
(iii) If the $(i,i)$-th or the $(j,j)$-th entry of $P(\lambda)$ is (identically) zero, then $B_{i,j}(P) = \mathbb{C}$.
(iv) If all the coefficient matrices $A_0, A_1, \ldots, A_m$ have their $i$-th and $j$-th rows real, then $B_{i,j}(P)$ is symmetric with respect to the real axis.

Proof. The proof is similar to the proof of Proposition 2.2.

Propositions 2.3 and 3.3 for the Gershgorin set and the generalized Gershgorin set, respectively, can be readily extended to the case of the Brauer set.

Proposition 4.3. For any $i, j \in N$ with $j < i$, the set $\{\mu \in \mathbb{C} : |(P(\mu))_{i,i}||(P(\mu))_{j,j}| < r_i(P(\mu))r_j(P(\mu))\}$ lies in the interior of $B_{i,j}(P)$; as a consequence,

$\partial B_{i,j}(P) \subseteq \{\mu \in \mathbb{C} : |(P(\mu))_{i,i}||(P(\mu))_{j,j}| = r_i(P(\mu))r_j(P(\mu))\} = \{\mu \in \mathbb{C} : 0 \in \partial B_{i,j}(P(\mu))\}$.

As in Theorem 2.4 for the Gershgorin set and Theorem 3.4 for the generalized Gershgorin set, we obtain a necessary condition for the isolated points of $B_{i,j}(P)$.

Theorem 4.4. If $\mu_0$ is an isolated point of the $(i,j)$-th Brauer set $B_{i,j}(P)$, $1 \le j < i \le n$, then (a) $\mu_0$ is a root of $(P(\lambda))_{i,i}$ or $(P(\lambda))_{j,j}$, and (b) $\mu_0$ is a common root of all polynomials $(P(\lambda))_{i,p}$, $p \in N\setminus\{i\}$, or a common root of all polynomials $(P(\lambda))_{j,p}$, $p \in N\setminus\{j\}$.

Proof. For the sake of contradiction, assume that $(P(\mu_0))_{i,i} \ne 0$ and $(P(\mu_0))_{j,j} \ne 0$. Since $\mu_0$ is an isolated point, it belongs to the boundary of the Brauer set, and there exists an $\varepsilon > 0$ such that the disk $D(\mu_0, \varepsilon) = \{\lambda \in \mathbb{C} : |\lambda - \mu_0| \le \varepsilon\}$ contains no other point of $B_{i,j}(P)$.
The Brauer set can be written as

B_{i,j}(P) = { µ ∈ C : |(P(µ))_{i,i}| |(P(µ))_{j,j}| ≤ r_i(P(µ)) r_j(P(µ)) }
= { µ ∈ C : r_i(P(µ)) r_j(P(µ)) / ( |(P(µ))_{i,i}| |(P(µ))_{j,j}| ) ≥ 1 }
= { µ ∈ C : log [ r_i(P(µ)) r_j(P(µ)) / ( |(P(µ))_{i,i}| |(P(µ))_{j,j}| ) ] ≥ 0 }
= { µ ∈ C : log( r_i(P(µ)) / |(P(µ))_{i,i}| ) + log( r_j(P(µ)) / |(P(µ))_{j,j}| ) ≥ 0 }.

Consider the function

(4.11) ϑ(λ) = log( r_i(P(λ)) / |(P(λ))_{i,i}| ) + log( r_j(P(λ)) / |(P(λ))_{j,j}| ), λ ∈ D(µ, ε),

and observe that it is subharmonic (as a sum of subharmonic functions) and satisfies the maximum principle [2, 4]. By definition, ϑ(λ) is equal to zero on the boundary of B_{i,j}(P), nonnegative in the interior of B_{i,j}(P)
and negative elsewhere. Since µ ∈ ∂B_{i,j}(P), it follows that

|(P(µ))_{i,i}| |(P(µ))_{j,j}| = r_i(P(µ)) r_j(P(µ)) = ( ∑_{p∈N\{i}} |(P(µ))_{i,p}| ) ( ∑_{p∈N\{j}} |(P(µ))_{j,p}| ).

Hence, the function ϑ(λ) is equal to zero at the center µ of D(µ, ε) and negative in the rest of the disk; this is a contradiction because ϑ(λ) satisfies the maximum principle. As a consequence,

0 = |(P(µ))_{i,i}| |(P(µ))_{j,j}| = ( ∑_{p∈N\{i}} |(P(µ))_{i,p}| ) ( ∑_{p∈N\{j}} |(P(µ))_{j,p}| ),

and the proof is complete.

Theorem 4.5. Any point of the (i, j)-th Brauer set B_{i,j}(P), 1 ≤ j < i ≤ n, has local dimension either 2 or 0 (isolated point); in other words, the Brauer set cannot have parts that are (nontrivial) curves.

Proof. The proof is similar to the proof of Theorem 2.5, using the (subharmonic) function ϑ(λ) in (4.11) instead of ϕ(λ) in (2.2).

Similarly to Theorem 2.6 for the Gershgorin set and Theorem 3.6 for the generalized Gershgorin set, and recalling the definition of the sets β_i in (2.3), we obtain necessary and sufficient conditions for B_{i,j}(P) to be bounded. By Theorem 4.4, if µ ∈ C is an isolated point of B_{i,j}(P) for some i, j ∈ N with j < i, then µ is a root of (P(λ))_{i,i} or (P(λ))_{j,j}, and a common root of the polynomials (P(λ))_{i,p}, p ∈ N\{i}, or of the polynomials (P(λ))_{j,p}, p ∈ N\{j}. As a consequence, if i ∈ β_i and j ∈ β_j, or β_i\{i} ≠ ∅ and β_j\{j} ≠ ∅, then the origin is not an isolated point of B_{i,j}(P̂), i.e., B_{i,j}(P) is not the union of a bounded set and {∞}.

Theorem 4.6. Suppose that for some integers i, j ∈ N with j < i, it holds that i ∈ β_i and j ∈ β_j, or β_i\{i} ≠ ∅ and β_j\{j} ≠ ∅.
(i) If i ∈ β_i and j ∈ β_j, then B_{i,j}(P) is unbounded if and only if 0 ∈ B_{i,j}(A_m).
(ii) If i ∉ β_i and j ∉ β_j, then B_{i,j}(P) is unbounded and 0 ∈ B_{i,j}(A_m).

Proof. (i) Suppose that i ∈ β_i, i.e., (A_m)_{i,i} ≠ 0, and j ∈ β_j, i.e., (A_m)_{j,j} ≠ 0. Let B_{i,j}(P) be unbounded. By hypothesis, the origin is not an isolated point of B_{i,j}(P̂), and there is a sequence {µ_l}_{l∈N} in B_{i,j}(P)\{0} such that |µ_l| → +∞.
Then, for every l ∈ N,

| ∑_{k=0}^{m} (A_k)_{i,i} µ_l^k | | ∑_{k=0}^{m} (A_k)_{j,j} µ_l^k | ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ_l^k | ) ( ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ_l^k | ),

or

| ∑_{k=0}^{m} (A_k)_{i,i} µ_l^{k−m} | | ∑_{k=0}^{m} (A_k)_{j,j} µ_l^{k−m} | ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ_l^{k−m} | ) ( ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ_l^{k−m} | ).

As l → +∞, it follows that

|(A_m)_{i,i}| |(A_m)_{j,j}| ≤ ( ∑_{p∈β_i\{i}} |(A_m)_{i,p}| ) ( ∑_{p∈β_j\{j}} |(A_m)_{j,p}| ),
and hence, 0 ∈ B_{i,j}(A_m). For the converse, suppose that 0 ∈ B_{i,j}(A_m) (or equivalently, |(A_m)_{i,i}| |(A_m)_{j,j}| ≤ r_i(A_m) r_j(A_m)). Then 0 ∈ B_{i,j}(P̂) and, by definition, ∞ ∈ B_{i,j}(P).

(ii) Suppose that i ∉ β_i and j ∉ β_j, i.e., (A_m)_{i,i} = (A_m)_{j,j} = 0. Then, it is clear that 0 ∈ B_{i,j}(A_m). Moreover,

B_{i,j}(P)\{0} = { µ ∈ C\{0} : | ∑_{k=0}^{m−1} (A_k)_{i,i} µ^k | | ∑_{k=0}^{m−1} (A_k)_{j,j} µ^k | ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ^k | ) ( ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ^k | ) }
= { µ ∈ C\{0} : | ∑_{k=0}^{m−1} (A_k)_{i,i} µ^{k−m} | | ∑_{k=0}^{m−1} (A_k)_{j,j} µ^{k−m} | ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ^{k−m} | ) ( ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ^{k−m} | ) },

where at least one of the coefficients (A_m)_{i,p} (p ∈ N\{i}) and one of the coefficients (A_m)_{j,p} (p ∈ N\{j}) are nonzero. As a consequence, for large enough |µ|, µ lies in B_{i,j}(P). Thus, there exists a real number M > 0 such that every scalar µ ∈ C with |µ| ≥ M lies in B_{i,j}(P), i.e., { µ ∈ C : |µ| ≥ M } ⊆ B_{i,j}(P).

Theorems 2.9 and 3.7 can be easily extended to the case of the Brauer set.

Theorem 4.7. Suppose that for every i, j ∈ N with j < i, i ∈ β_i and j ∈ β_j. If the Brauer set B(P) is bounded, then the number of connected components of B(P) is less than or equal to nm. Moreover, if G is a connected component of B(P) constructed by ξ connected components of (i, j)-th Brauer sets, then the total number of roots of the polynomials (P(λ))_{1,1}, (P(λ))_{2,2}, ..., (P(λ))_{n,n} in G is at least ξ and is equal to the number of eigenvalues of P(λ) in G, counting multiplicities.

Proof. By hypothesis, for every i, j ∈ N with j < i, the polynomial (P(λ))_{i,i} (P(λ))_{j,j} is of degree 2m (i.e., it has exactly 2m roots, counting multiplicities). Based on the continuity of the eigenvalues of a matrix polynomial with respect to its matrix coefficients and the boundedness of B(P), one can complete the proof, following the arguments of the proof of Theorem 2.9.

4.3. Examples. In the case of a 2×2 matrix polynomial P(λ), the relation det P(λ) = 0 apparently yields |(P(λ))_{1,1}| |(P(λ))_{2,2}| = |(P(λ))_{1,2}| |(P(λ))_{2,1}|, and consequently, the eigenvalues appear on the boundary of the Brauer set.

Example 4.8.
Consider the matrix polynomial P(λ) of Example 2.1. In Figure 5, the Gershgorin set G(P) is illustrated in part (a) and the Brauer set B(P) = B_{2,1}(P) is illustrated in part (b). It is clearly verified that the Brauer set contains the eigenvalues of P(λ) and lies in the Gershgorin set. As expected, the eigenvalues lie on the boundary of the Brauer set. Moreover, Proposition 4.2 (iv) (symmetry with respect to the real axis), Theorem 4.4 (isolated points 0.935 ± 0.58i and 0.98 ± 1.573i), Theorem 4.6 (boundedness conditions) and Theorem 4.7 (distribution of eigenvalues in the connected components) are also confirmed.
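The containments σ(P) ⊆ B(P) ⊆ G(P) observed in this example can also be checked numerically: the finite eigenvalues of P(λ) are obtainable from a companion linearization, and each of them can then be tested against the defining inequalities (shown here for the Gershgorin set; the Brauer test is analogous). A minimal sketch, using a hypothetical quadratic polynomial rather than the one of Example 2.1:

```python
import numpy as np

def polyeig(coeffs):
    """Finite eigenvalues of P(lambda) = sum_k coeffs[k] lambda^k via the
    block companion linearization; assumes the leading coefficient is invertible."""
    *lower, Am = coeffs
    n = Am.shape[0]
    m = len(coeffs) - 1
    Am_inv = np.linalg.inv(Am)
    C = np.zeros((m * n, m * n), dtype=complex)
    C[:-n, n:] = np.eye((m - 1) * n)          # super-diagonal identity blocks
    for k, Ak in enumerate(lower):            # last block row: -Am^{-1} A_k
        C[-n:, k * n:(k + 1) * n] = -Am_inv @ Ak
    return np.linalg.eigvals(C)

def in_gershgorin_set(coeffs, mu):
    """mu in G(P)  <=>  0 in G(P(mu)): some |(P(mu))_{i,i}| <= r_i(P(mu))."""
    P = sum(Ak * mu**k for k, Ak in enumerate(coeffs))
    r = np.sum(np.abs(P), axis=1) - np.abs(np.diag(P))
    return bool(np.any(np.abs(np.diag(P)) <= r))

coeffs = [np.array([[2.0, 1.0], [1.0, 3.0]]),   # A0 (hypothetical data)
          np.array([[0.0, 1.0], [1.0, 0.0]]),   # A1
          np.eye(2)]                            # A2
eigs = polyeig(coeffs)
assert all(in_gershgorin_set(coeffs, mu) for mu in eigs)
```

Since P(µ) is singular at every eigenvalue µ, 0 is an eigenvalue of P(µ) and the Gershgorin test must succeed, which is exactly what the final assertion verifies.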
Figure 5: Comparing the Gershgorin set (a) and the Brauer set (b).

Figure 6: Comparing the Gershgorin set (a) and the Brauer set (b).

Example 4.9. Consider the 3×3 matrix polynomial P(λ) of Example 3.9. In Figure 6, the Gershgorin set G(P) is illustrated in part (a) and the Brauer set B(P) is illustrated in part (b). It is once again verified that the Brauer set contains the eigenvalues of P(λ) and lies in the Gershgorin set.
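The boundedness criteria of Theorem 4.6 depend only on the leading coefficient A_m, so they are cheap to check before any plotting. A small sketch of the case (i) test, i.e., whether 0 ∈ B_{i,j}(A_m) for pairs with nonzero diagonal leading entries (the matrix below is hypothetical illustrative data):

```python
import numpy as np

def unbounded_brauer_pairs(Am, tol=0.0):
    """Pairs (i, j), j < i, for which Theorem 4.6(i) predicts that the
    (i, j)-th Brauer set of P is unbounded, assuming (Am)_{i,i} != 0 and
    (Am)_{j,j} != 0: this happens iff 0 lies in B_{i,j}(Am)."""
    n = Am.shape[0]
    r = np.sum(np.abs(Am), axis=1) - np.abs(np.diag(Am))  # deleted row sums
    d = np.abs(np.diag(Am))
    pairs = []
    for i in range(n):
        for j in range(i):
            if d[i] > tol and d[j] > tol and d[i] * d[j] <= r[i] * r[j]:
                pairs.append((i, j))
    return pairs

# A leading coefficient with a dominant diagonal: no pair qualifies,
# so every B_{i,j}(P) is bounded (hypothetical data).
Am = np.array([[4.0, 1.0, 0.0],
               [1.0, 5.0, 1.0],
               [0.0, 1.0, 4.0]])
print(unbounded_brauer_pairs(Am))   # []
```

An empty list means all the (i, j)-th Brauer sets, and hence B(P) itself, are bounded for any polynomial with this leading coefficient.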
5. The Dashnic-Zusmanovich set. In this section, we introduce the Dashnic-Zusmanovich set of matrix polynomials.

5.1. Definitions. The Dashnic-Zusmanovich set of a matrix A ∈ C^{n×n} is defined as

D(A) = ∩_{i∈N} ∪_{j∈N\{i}} D_{i,j}(A), where D_{i,j}(A) = { µ ∈ C : |µ − (A)_{i,i}| ( |µ − (A)_{j,j}| − r_j(A) + |(A)_{j,i}| ) ≤ r_i(A) |(A)_{j,i}| }

for distinct integers i, j ∈ N. The Dashnic-Zusmanovich set D(A) is determined by n(n−1) oval sets, lies in the Brauer set B(A), and contains all the eigenvalues of the matrix A [6, 21]. For a matrix polynomial P(λ) as in (1.1), we define

D_{i,j}(P) = { µ ∈ C : 0 ∈ D_{i,j}(P(µ)) } = { µ ∈ C : |(P(µ))_{i,i}| ( |(P(µ))_{j,j}| − r_j(P(µ)) + |(P(µ))_{j,i}| ) ≤ r_i(P(µ)) |(P(µ))_{j,i}| }

for distinct integers i, j ∈ N, the union

D_i(P) = ∪_{j∈N\{i}} D_{i,j}(P), i ∈ N,

and the Dashnic-Zusmanovich set of P(λ),

D(P) = { µ ∈ C : 0 ∈ D(P(µ)) } = ∩_{i∈N} ∪_{j∈N\{i}} D_{i,j}(P) = ∩_{i∈N} D_i(P).

We also say that µ = ∞ lies in D_{i,j}(P) (resp., in D_i(P), or in D(P)) exactly when 0 lies in D_{i,j}(P̂) (resp., in D_i(P̂), or in D(P̂)).

Theorem 5.1. The Dashnic-Zusmanovich set of a matrix polynomial P(λ) is a subset of the Brauer set of P(λ) that contains all the (finite and infinite) eigenvalues of P(λ).

Proof. By definition, a scalar µ ∈ C lies in D(P) (resp., in σ(P), or in B(P)) if and only if 0 lies in D(P(µ)) (resp., in σ(P(µ)), or in B(P(µ))). Since σ(P(µ)) ⊆ D(P(µ)) ⊆ B(P(µ)) and σ(P̂(µ)) ⊆ D(P̂(µ)) ⊆ B(P̂(µ)) [6, 21], the proof follows readily.

5.2. Basic properties. Let P(λ) be an n×n matrix polynomial as in (1.1). It is easy to verify the following properties.

Proposition 5.2. Consider two distinct integers i, j ∈ N.
(i) D_{i,j}(P) and D_i(P) are closed subsets of C.
(ii) For any scalar b ∈ C\{0}, consider the matrix polynomials Q_1(λ) = P(bλ), Q_2(λ) = bP(λ) and Q_3(λ) = P(λ + b). Then D_{i,j}(Q_1) = b^{−1} D_{i,j}(P), D_{i,j}(Q_2) = D_{i,j}(P) and D_{i,j}(Q_3) = D_{i,j}(P) − b.
(iii) If the (i, i)-th or the (j, j)-th entry of P(λ) is (identically) zero, then D_{i,j}(P) = D_i(P) = C.
(iv) If all the coefficient matrices A_0, A_1, ..., A_m have their i-th and j-th rows real, then D_{i,j}(P) is symmetric with respect to the real axis.

Proof. The proof is similar to the proof of Proposition 2.2.

Similarly to Theorems 2.6, 3.6 and 4.6, we obtain necessary and sufficient conditions for the Dashnic-Zusmanovich set to be bounded.

Theorem 5.3. Suppose that for some distinct i, j ∈ N, the sets β_i and β_j are nonempty, and the origin is not an isolated point of D_{i,j}(P̂), i.e., D_{i,j}(P) is not the union of a bounded set and {∞}.
(i) If i ∈ β_i and j ∈ β_j, then D_{i,j}(P) is unbounded if and only if 0 ∈ D_{i,j}(A_m).
(ii) If i ∉ β_i, j ∉ β_j and i ∈ β_j, then D_{i,j}(P) is unbounded and 0 ∈ D_{i,j}(A_m).

Proof. (i) Suppose that i ∈ β_i, i.e., (A_m)_{i,i} ≠ 0, and j ∈ β_j, i.e., (A_m)_{j,j} ≠ 0. Let D_{i,j}(P) be unbounded. Since the origin is not an isolated point of D_{i,j}(P̂), there is a sequence {µ_l}_{l∈N} in D_{i,j}(P)\{0} such that |µ_l| → +∞. Then, for every positive integer l,

| ∑_{k=0}^{m} (A_k)_{i,i} µ_l^k | ( | ∑_{k=0}^{m} (A_k)_{j,j} µ_l^k | − ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ_l^k | + | ∑_{k=0}^{m} (A_k)_{j,i} µ_l^k | ) ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ_l^k | ) | ∑_{k=0}^{m} (A_k)_{j,i} µ_l^k |,

or the same inequality with each µ_l^k replaced by µ_l^{k−m}. As l → +∞, it follows that

|(A_m)_{i,i}| ( |(A_m)_{j,j}| − ∑_{p∈β_j\{j}} |(A_m)_{j,p}| + |(A_m)_{j,i}| ) ≤ ( ∑_{p∈β_i\{i}} |(A_m)_{i,p}| ) |(A_m)_{j,i}|,

and thus, 0 ∈ D_{i,j}(A_m).

For the converse, suppose that 0 ∈ D_{i,j}(A_m) (or equivalently, |(A_m)_{i,i}| ( |(A_m)_{j,j}| − r_j(A_m) + |(A_m)_{j,i}| ) ≤ r_i(A_m) |(A_m)_{j,i}| ). Then 0 ∈ D_{i,j}(P̂) and, by definition, ∞ ∈ D_{i,j}(P).

(ii) Suppose that i ∉ β_i, j ∉ β_j and i ∈ β_j, i.e., (A_m)_{i,i} = 0, (A_m)_{j,j} = 0 and (A_m)_{j,i} ≠ 0. Then, it is clear that 0 ∈ D_{i,j}(A_m), and

D_{i,j}(P)\{0} = { µ ∈ C\{0} : | ∑_{k=0}^{m−1} (A_k)_{i,i} µ^k | ( | ∑_{k=0}^{m−1} (A_k)_{j,j} µ^k | − ∑_{p∈N\{j}} | ∑_{k=0}^{m} (A_k)_{j,p} µ^k | + | ∑_{k=0}^{m} (A_k)_{j,i} µ^k | ) ≤ ( ∑_{p∈N\{i}} | ∑_{k=0}^{m} (A_k)_{i,p} µ^k | ) | ∑_{k=0}^{m} (A_k)_{j,i} µ^k | }
= { µ ∈ C\{0} : the same inequality holds with each µ^k replaced by µ^{k−m} },
where (A_m)_{j,i} ≠ 0 and at least one of the coefficients (A_m)_{i,p}, p ∈ N\{i}, is nonzero. As a consequence, for large enough |µ|, µ lies in D_{i,j}(P). Thus, there exists a real number M > 0 such that every scalar µ ∈ C with |µ| ≥ M lies in D_{i,j}(P), i.e., { µ ∈ C : |µ| ≥ M } ⊆ D_{i,j}(P).

Figure 7: Comparing the Gershgorin set (a), the Brauer set (b) and the D.-Z. set (c).

5.3. Examples. Example 5.4. Consider the 2×2 quadratic matrix polynomial

P(λ) = [ 8iλ iλ + iλ + iλ + (1 + i)
3iλ + 5iλ + (1 + i) 6iλ + 3iλ + 4i ].

In Figure 7, the Gershgorin set, the Brauer set and the Dashnic-Zusmanovich (D.-Z.) set of P(λ) are drawn. Notice that the Brauer set and the Dashnic-Zusmanovich set are the same (because the matrix polynomial has only two rows) and lie in the Gershgorin set.

In the following two examples, we consider two 3×3 quadratic matrix polynomials, with bounded (Example 5.5) and unbounded (Example 5.6) eigenvalue inclusion sets.

Example 5.5. Consider the matrix polynomial

P(λ) = [ 8iλ iλ + iλ + iλ + (1 + i) ( 1 + i)λ + λ +
3iλ + 5iλ + (1 + i) 8iλ + 3iλ + 4i ( i)λ 4λ 5i
(.8 i)λ + iλ + (1 i) .6iλ iλ (6 i)λ + i ].

The Gershgorin set, the Brauer set and the Dashnic-Zusmanovich set are illustrated in Figure 8. It is clear that these three sets are bounded, confirming Theorems 2.6, 4.6 and 5.3, and that D(P) ⊆ B(P) ⊆ G(P).

Example 5.6. Consider the matrix polynomial

P(λ) = [ 8iλ iλ + iλ + iλ + (1 + i) ( 1 + i)λ + λ +
3iλ + 5iλ + (1 + i) 6iλ + 3iλ + 4i ( i)λ 4λ 5i
(.8 i)λ + iλ + (1 i) 6iλ iλ (6 i)λ + i ].
Figure 8: Comparing the Gershgorin set (a), the Brauer set (b) and the D.-Z. set (c).

Figure 9: Comparing the Gershgorin set (a), the Brauer set (b) and the D.-Z. set (c).

The Gershgorin set, the Brauer set and the Dashnic-Zusmanovich set are drawn in Figure 9, and they are unbounded, verifying Theorems 2.6, 4.6 and 5.3. Notice once again that D(P) ⊆ B(P) ⊆ G(P).

Acknowledgments. The authors wish to thank the anonymous referees for valuable comments and suggestions.

REFERENCES

[1] D. Bindel and A. Hood. Localization theorems for nonlinear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 34(4):1728-1749, 2013.
[2] S. Boyd and C.A. Desoer. Subharmonic functions and performance bounds on linear time-invariant feedback systems. IMA Journal of Mathematical Control and Information, 2:153-170, 1985.
[3] A. Brauer. Limits for the characteristic roots of a matrix. II. Duke Mathematical Journal, 14:21-26, 1947.
[4] J.B. Conway. Functions of One Complex Variable. Springer-Verlag, New York, 1978.
[5] L. Cvetkovic, V. Kostic, and R.S. Varga. A new Geršgorin-type eigenvalue inclusion set. Electronic Transactions on Numerical Analysis, 18:73-80, 2004.
[6] L.S. Dashnic and M.S. Zusmanovich. K voprosu o lokalizacii harakteristicheskih chisel matricy. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 10(6):1321-1327, 1970.
[7] T.W. Gamelin. Complex Analysis. Springer-Verlag, New York, 2003.
[8] S. Gershgorin. Über die Abgrenzung der Eigenwerte einer Matrix. Izvestiya Akademii Nauk SSSR, Seriya Fizicheskaya-Matematicheskaya, 6:749-754, 1931.
[9] I. Gohberg, P. Lancaster, and L. Rodman. Matrix Polynomials. Society for Industrial and Applied Mathematics, Philadelphia, 2009.
[10] N.J. Higham and F. Tisseur. Bounds for eigenvalues of matrix polynomials. Linear Algebra and its Applications, 358:5-22, 2003.
[11] W. Hurewicz and H. Wallman. Dimension Theory (revised edition). Princeton Mathematical Series, Vol. 4, Princeton University Press, Princeton, 1948.
[12] V. Kostic, L.J. Cvetkovic, and R.S. Varga. Geršgorin-type localizations of generalized eigenvalues. Numerical Linear Algebra with Applications, 16(11/12):883-898, 2009.
[13] V. Kostić and D. Gardašević. On the Geršgorin-type localizations for nonlinear eigenvalue problems. Applied Mathematics and Computation, 337, 2018.
[14] P. Lancaster. Lambda-Matrices and Vibrating Systems. Dover Publications, 2002.
[15] P. Lancaster and M. Tismenetsky. The Theory of Matrices, second edition. Academic Press, Orlando, 1985.
[16] A.S. Markus. Introduction to the Spectral Theory of Polynomial Operator Pencils. Translations of Mathematical Monographs, Vol. 71, American Mathematical Society, Providence, 1988.
[17] M.F. Neuts. Matrix-Geometric Solutions in Stochastic Models. Springer, Berlin.
[18] A.M. Ostrowski.
Über das Nichtverschwinden einer Klasse von Determinanten und die Lokalisierung der charakteristischen Wurzeln von Matrizen. Compositio Mathematica, 9:209-226, 1951.
[19] O. Taussky. A recurring theorem on determinants. American Mathematical Monthly, 56:672-676, 1949.
[20] O. Taussky. Bounds for characteristic roots of matrices. Duke Mathematical Journal, 15:1043-1044, 1948.
[21] R.S. Varga. Geršgorin and His Circles. Springer Series in Computational Mathematics, Vol. 36, Springer-Verlag, Berlin, 2004.
More informationSparse spectrally arbitrary patterns
Electronic Journal of Linear Algebra Volume 28 Volume 28: Special volume for Proceedings of Graph Theory, Matrix Theory and Interactions Conference Article 8 2015 Sparse spectrally arbitrary patterns Brydon
More informationModeling and Stability Analysis of a Communication Network System
Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem
More informationStudy Guide for Linear Algebra Exam 2
Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real
More informationOn an Inverse Problem for a Quadratic Eigenvalue Problem
International Journal of Difference Equations ISSN 0973-6069, Volume 12, Number 1, pp. 13 26 (2017) http://campus.mst.edu/ijde On an Inverse Problem for a Quadratic Eigenvalue Problem Ebru Ergun and Adil
More information(II.B) Basis and dimension
(II.B) Basis and dimension How would you explain that a plane has two dimensions? Well, you can go in two independent directions, and no more. To make this idea precise, we formulate the DEFINITION 1.
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationFUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets
FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be
More informationSUMS OF ENTIRE FUNCTIONS HAVING ONLY REAL ZEROS
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 SUMS OF ENTIRE FUNCTIONS HAVING ONLY REAL ZEROS STEVEN R. ADAMS AND DAVID A. CARDON (Communicated
More informationA PRIMER ON SESQUILINEAR FORMS
A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form
More information22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices
m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix
More informationMath 3191 Applied Linear Algebra
Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful
More informationDetailed Proof of The PerronFrobenius Theorem
Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand
More informationBOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION
K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex
More information1 Lyapunov theory of stability
M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability
More information290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f
Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica
More informationThe spectrum of the edge corona of two graphs
Electronic Journal of Linear Algebra Volume Volume (1) Article 4 1 The spectrum of the edge corona of two graphs Yaoping Hou yphou@hunnu.edu.cn Wai-Chee Shiu Follow this and additional works at: http://repository.uwyo.edu/ela
More informationShort proofs of theorems of Mirsky and Horn on diagonals and eigenvalues of matrices
Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 35 2009 Short proofs of theorems of Mirsky and Horn on diagonals and eigenvalues of matrices Eric A. Carlen carlen@math.rutgers.edu
More informationUNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if
UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is
More informationThe distance from a matrix polynomial to matrix polynomials with a prescribed multiple eigenvalue 1
The distance from a matrix polynomial to matrix polynomials with a prescribed multiple eigenvalue Nikolaos Papathanasiou and Panayiotis Psarrakos Department of Mathematics, National Technical University
More informationLecture 11: Eigenvalues and Eigenvectors
Lecture : Eigenvalues and Eigenvectors De nition.. Let A be a square matrix (or linear transformation). A number λ is called an eigenvalue of A if there exists a non-zero vector u such that A u λ u. ()
More informationGeometric Mapping Properties of Semipositive Matrices
Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices
More informationWavelets and Linear Algebra
Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza
More informationA note on estimates for the spectral radius of a nonnegative matrix
Electronic Journal of Linear Algebra Volume 13 Volume 13 (2005) Article 22 2005 A note on estimates for the spectral radius of a nonnegative matrix Shi-Ming Yang Ting-Zhu Huang tingzhuhuang@126com Follow
More informationWeek 3: Faces of convex sets
Week 3: Faces of convex sets Conic Optimisation MATH515 Semester 018 Vera Roshchina School of Mathematics and Statistics, UNSW August 9, 018 Contents 1. Faces of convex sets 1. Minkowski theorem 3 3. Minimal
More informationA note on polar decomposition based Geršgorin-type sets
A note on polar decomposition based Geršgorin-type sets Laura Smithies smithies@math.ent.edu November 21, 2007 Dedicated to Richard S. Varga, on his 80-th birthday. Abstract: Let B C n n denote a finite-dimensional
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationNumerical Analysis: Solving Systems of Linear Equations
Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office
More informationChapter 4. Measure Theory. 1. Measure Spaces
Chapter 4. Measure Theory 1. Measure Spaces Let X be a nonempty set. A collection S of subsets of X is said to be an algebra on X if S has the following properties: 1. X S; 2. if A S, then A c S; 3. if
More informationSOME PROPERTIES OF N-SUPERCYCLIC OPERATORS
SOME PROPERTIES OF N-SUPERCYCLIC OPERATORS P S. BOURDON, N. S. FELDMAN, AND J. H. SHAPIRO Abstract. Let T be a continuous linear operator on a Hausdorff topological vector space X over the field C. We
More informationNote on the Jordan form of an irreducible eventually nonnegative matrix
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 19 2015 Note on the Jordan form of an irreducible eventually nonnegative matrix Leslie Hogben Iowa State University, hogben@aimath.org
More informationINVESTIGATING THE NUMERICAL RANGE AND Q-NUMERICAL RANGE OF NON SQUARE MATRICES. Aikaterini Aretaki, John Maroulas
Opuscula Mathematica Vol. 31 No. 3 2011 http://dx.doi.org/10.7494/opmath.2011.31.3.303 INVESTIGATING THE NUMERICAL RANGE AND Q-NUMERICAL RANGE OF NON SQUARE MATRICES Aikaterini Aretaki, John Maroulas Abstract.
More informationReachability indices of positive linear systems
Electronic Journal of Linear Algebra Volume 11 Article 9 2004 Reachability indices of positive linear systems Rafael Bru rbru@matupves Carmen Coll Sergio Romero Elena Sanchez Follow this and additional
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent
More information