MATRIX ROOTS OF NONNEGATIVE AND EVENTUALLY NONNEGATIVE MATRICES PIETRO PAPARELLA

Size: px
Start display at page:

Download "MATRIX ROOTS OF NONNEGATIVE AND EVENTUALLY NONNEGATIVE MATRICES PIETRO PAPARELLA"

Transcription

1 MATRIX ROOTS OF NONNEGATIVE AND EVENTUALLY NONNEGATIVE MATRICES By PIETRO PAPARELLA A dissertation submitted in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY WASHINGTON STATE UNIVERSITY Department of Mathematics JULY 2013 Copyright by PIETRO PAPARELLA, 2013 All Rights Reserved

2 Copyright by PIETRO PAPARELLA, 2013 All Rights Reserved

3 ii To the Faculty of Washington State University: The members of the Committee appointed to examine the dissertation of PIETRO PAPARELLA find it satisfactory and recommend that it be accepted. Michael J. Tsatsomeros, Ph.D., Chair Judith J. McDonald, Ph.D., Chair Bala Krishnamoorthy, Ph.D.

4 iii MATRIX ROOTS OF NONNEGATIVE AND EVENTUALLY NONNEGATIVE MATRICES Abstract by Pietro Paparella, Ph.D. Washington State University July 2013 Chair: Michael J. Tsatsomeros and Judith J. McDonald Eventually nonnegative matrices are real matrices whose powers become and remain nonnegative. As such, eventually nonnegative matrices are, a fortiori, matrix roots of nonnegative matrices, which motivates us to study the matrix roots of nonnegative matrices. Using classical matrix function theory and Perron-Frobenius theory, we characterize, classify, and describe in terms of the complex and real Jordan canonical form the pth-roots of nonnegative and eventually nonnegative matrices.

5 iv Acknowledgements This dissertation would not have been possible without the unwavering encouragement, patience, and love of my wife Allison L. Paparella. I wish to thank my children Quintin L. Paparella and Alessia R. Paparella for providing me with inspiration to help me finish this endeavor. I also would like to acknowledge my mother Maria Aquino for always supporting me and my siblings and for her tremendous efforts and sacrifice in raising three children on her own. I owe a deep debt of gratitude to my advisors Michael J. Tsatsomeros and Judtih J. McDonald. Their vision, guidance, wisdom, and expertise were paramount in the completion of this project. I wish to extend a debt of gratitude to Bala Krishnamoorthy, who went above and beyond the duties and responsibilities of a committee member: thank you for responding to my pesky s, the sage advice, and the encouraging words. I would also like to thank David S. Watkins for serving on my committee and K. A. Ari Ariyawansa for generously providing me a research assistantship from , so that I was able to return to Washington State to complete my Ph.D in mathematics. I am also grateful for wonderful friends and family for their encouraging words and support: Travis W. Arnold; Aaron L. Carr-Callen; Kirk J. Griffith; Jasmin L. and Jeffrey A. Harshman; Jameus C. Hutchens; Francesco Paparella; Stefano Paparella; Christine S. and V. Joshua Phillips; Lena Y. and V. Jefferson Phillips; Daniela Paparella and Joseph W. Thornton; and Frank and Chiara D. Trejos. Finally, I would like to acknowledge several graduate students from my time at Washington State and Louisiana State for the wonderful memories, collaboration, and continued

6 v inspiration: Baha M. Alzalg, Jeremy J. Becnel, Timothy D. Breitzman, Mihály Kovács, Suat Namli, and Rokas Varaneckas.

7 To my wife Allison, my children Quintin and Alessia, and my mother Maria. vi

8 If I have seen further it is by standing on ye sholders of Giants Sir Isaac Newton. vii

9 viii Contents Abstract iii Acknowledgements iv Dedication vi Inspiration vii 1 Introduction Background and Motivation Notation and Definitions Preliminaries Combinatorial Structure of a Matrix Upper-triangular Toeplitz Matrices Theoretical Background Perron-Frobenius Theory of Nonnegative Matrices Theory of Functions of Matrices Matrix Roots of Nonnegative Matrices Matrices > λ > 0 > λ λ =

10 ix 3.2 Primitive Matrices Imprimitive Irreducible Matrices Complete Residue Systems Jordan Chains of h-cyclic matrices Main Results Reducible Matrices Preliminary Results Bibliography 73

11 1 Chapter 1 Introduction 1.1 Background and Motivation A real matrix A is said to be eventually nonnegative (respectively, positive) if there exists a nonnegative integer p such that A k is entrywise nonnegative (respectively, positive) for all k p. If p is the smallest such integer, then p is called the power index of A and is denoted by p(a). Eventual nonnegativity was introduced and studied in [5] and was the central subject of study in various publications (see, e.g., [3, 4, 10, 14, 19, 20, 21, 24, 25]). It is well-known that the notions of eventual positivity and nonnegativity are associated with properties of the eigenspace corresponding to the spectral radius. A real matrix A has the Perron-Frobenius property if its spectral radius is a positive eigenvalue corresponding to an entrywise nonnegative eigenvector. The strong Perron-Frobenius property further requires that the spectral radius is simple; it dominates in modulus every other eigenvalue of A; and it has an entrywise positive eigenvector. Several challenges regarding the theory and applications of eventually nonnegative matrices remain unresolved. For example, eventual positivity of A is equivalent to A and A T having the strong Perron-Frobenius property, however, the Perron-Frobenius property for A and A T is a necessary but not sufficient condition for eventual nonnegativity of A. The motivation of this thesis is the following observation: an eventually nonnegative

12 2 (respectively, positive) matrix with power index p = p(a) is, a fortiori, a matrix pth-root of the nonnegative matrix A p (i.e., it is a solution to the matrix equation X p A p = 0). As a consequence, in order to gain insight into eventual nonnegativity, it is natural to examine the roots of matrices that possess the (strong) Perron-Frobenius property. 1.2 Notation and Definitions The following notational conventions are adopted throughout the thesis; when convenient, additional notation and definitions are introduced and adopted throughout. (1) Denote by C the set of complex numbers, R the set of real numbers, Z the set of integers, and N the set of natural numbers. Denote by i the imaginary unit, i.e., i := 1. (2) When convenient, an indexed set of the form {x i, x i+1,..., x i+j } is abbreviated to {x k } i+j k=i. (3) Denote by M n (C) (M n (R)) the algebra of complex (respectively, real) n n matrices. (4) Given A M n (C), [a ij ] = [a ij ] i,j=1 n denotes the entries of A, where a ij C is the (i, j)-entry of A. When convenient, the (i, j)-entry of A may also be denoted by [A] ij. (5) Denote by C n (R n ) the n-dimensional complex (respectively, real) vector space. Given x C n (x R n ), [x] i denotes the ith-component of x. (6) Denote by I n the n n identity matrix. Whenever the context is clear, the subscript may be suppressed.

13 3 (7) For λ C, J n (λ) M n (C) denotes the n n Jordan block, i.e, λ, i = j; [J n (λ)] ij := 1, j = i + 1; and 0, otherwise. (8) Given A M n (C): (i) the spectrum of A, denoted by σ (A), is the multiset σ (A) := {λ C : det(λi A) = 0}, i.e., σ (A) contains the eigenvalues of A; (ii) the spectral radius of A, denoted by ρ = ρ (A), is the quantity ρ (A) := max λ i σ(a) { λ i }; and (iii) the peripheral spectrum, denoted by π (A), is the (multi-)set given by {λ σ (A) : λ = ρ}. (9) The direct sum of the matrices A 1, A 2,..., A k, where A i M ni (C), denoted by A 1 A 2 A k, k i=1 A n i, or diag (A 1, A 2,..., A n ), is the matrix where n = k i=1 n i. A 1 A 2... A k M n (C)

14 4 (10) For A M n (C), let J = Z 1 AZ = t i=1 J n i (λ i ) = t i=1 J n i, where n i = n, denote a Jordan canonical form. (11) The order of the largest Jordan block of A corresponding to the eigenvalue λ is called the index of λ i and is denoted by m = m i = index λi A. (12) For c = (c 1, c 2,..., c n ), where c i C, i, the circulant matrix (or simply circulant) of c, denoted by circ (c) is the n n matrix given by c 1 c 2 c n c n c 1 c n 1. M n (C) c 2 c 3 c 1 (13) For A, B M n (C), the hadamard product of A and B, denoted by A B, is the n n matrix whose (i, j)-entry is given by [A B] ij := a ij b ij. (14) For A M n (C) and X M n (C), X is said to be a matrix pth-root, pth-root, matrix root, or simply root of A if X satisfies the matrix equation X p A = 0. (15) For k N, let R(k) denote the canonical complete residue system (mod k), i.e., R(k) = {0, 1,..., k 1}. 1.3 Preliminaries Because the combinatorial stucture of a matrix (i.e., the location of its zero-nonzero entries) will be central to the analysis of the matrix roots of nonnegative, irreducible, imprimitive

15 5 matrices, we devote Subsection to notation, concepts, and background concerning the combinatorial structure of a matrix. In addition, in Subsection 1.3.2, we establish ancillary results pertaining to upper-triangular Toeplitz matrices Combinatorial Structure of a Matrix For notation and definitions concerning the combinatorial stucture of a matrix, i.e., the location of the zero-nonzero entries of a matrix, we follow [2] and [10]. For further results concerning combinatorial matrix theory, see [2] and references therein. A directed graph (or simply digraph) Γ = (V, E) consists of a finite, nonempty set V of vertices, together with a set E V V of arcs. Digraphs may contain both arcs (u, v) and (v, u) as well as loops, i.e., arcs of the form (v, v). In this thesis, we do not consider directed general graphs (or simply general digraphs), i.e., digraphs that contain multiple copies of the same arc. For A M n (C), the directed graph (or simply digraph) of A, denoted by Γ = Γ (A), has vertex set V = {1,..., n} and arc set E = {(i, j) V V : a ij 0}. If R, C {1,..., n}, then A[R C] denotes the submatrix of A whose rows and columns are indexed by R and C, respectively. If R = C, then A[R R] is abbreviated as A[R]. For a digraph Γ = (V, E) and W V, the induced subdigraph Γ[W ] is the digraph with vertex set W and arc set {(u, v) E : u, v W }. For a square matrix A, Γ(A[W ]) is identified with Γ (A) [W ] by a slight abuse of notation. A digraph Γ is strongly connected if for any two distinct vertices u and v of Γ, there is a walk in Γ from u to v (as a consequence, there must also be a walk from v to u;

16 6 thus, both conditions could be employed as the definition). Following [2], we consider every vertex of V as strongly connected to itself. For a strongly connected digraph Γ, the index of imprimitivity is the greatest common divisor of the the lengths of the closed walks in Γ. A strong digraph is primitive if its index of imprimitivity is one, otherwise it is imprimitive. The strong components of Γ are the maximal strongly connected subdigraphs of Γ. For n 2, a matrix A M n (C), is reducible if there exists a permutation matrix P such that P T AP = A 11 A 12 0 A 22 where A 11 and A 22 are nonempty square matrices and 0 is a (possibly rectangular) block consisting entirely of zero entries. If A is not reducible, then A is called irreducible. The connection between reducibility and the digraph of A is as follows: A is irreducible if and only if Γ (A) is strongly connected 1 (see, e.g., [2, Theorem 3.2.1] or [11, Theorem ]). If A M n (C) is reducible, then there exists a permutation matrix P such that A 11 A 12 A 1k A P T AP = A 2k.... A kk (1.3.1) where each A ii is irreducible (see, e.g., [2, Theorem 3.2.4] or [11, Exercise 8, 8.3]. The matrix on the right-hand side of (1.3.1) is called the Frobenius (or irreducible) normal form of A. For h 2, a digraph Γ = (V, E) is cyclically h-partite if there exists an ordered partition Π = (π 1,..., π h ) of V into h nonempty subsets such that for each arc (i, j) E, there 1 Following [2], vertices are strongly connected to themselves so we take this result to hold for all n N and not just n N, n 2.

17 7 exists l {1,..., h} with i π l and j π l+1 (where, for convenience, we take V h+1 := V 1 ). For h 2, a strong digraph Γ is cyclically h-partite if and only if h divides the index of imprimitivity (see, e.g., [2, p. 70]). For h 2, a matrix A M n (C) is called h-cyclic if Γ (A) is cyclically h-partite. If Γ (A) is cyclically h-partite with ordered partition Π, then A is said to be h-cyclic with partition Π or that Π describes the h-cyclic structure of A. The ordered partition Π = (π 1,..., π h ) is consecutive if π 1 = {1,..., i 1 }, π 2 = {i 1 + 1,..., i 2 },..., π h = {i h 1 + 1,..., n}. If A is h-cyclic with consecutive ordered partition Π, then A has the block form 0 A A A (h 1)h A h (1.3.2) where A i,i+1 = A[π i π i+1 ] ([2, p. 71]). For any h-cyclic matrix A, there exists a permutation matrix P such that P AP T is h-cyclic with consecutive ordered partition. The cyclic index of A is the largest h for which A is h-cyclic. An irreducible nonnegative matrix A is primitive if Γ (A) is primitive, and the index of imprimitivity of A is the index of imprimitivity of Γ (A). If A is irreducible and imprimitive with index of imprimitivity h 2, then h is the cyclic index of A, Γ (A) is cyclically h-partite with ordered partition Π = (π 1,..., π h ), and the sets π i are uniquely determined (up to cyclic permutation of the π i ) (see, for example, [2, p. 70]). Furthermore, Γ ( A h) is the disjoint union of h primitive digraphs on the sets of vertices π i, i = 1,..., h (see, e.g., [2, 3.4]). Following [10], for an ordered partition Π = (π 1,..., π h ) of {1,..., n} into h nonnempty subsets, the cyclic characteristic matrix, denoted by χ Π, is the n n matrix whose (i, j)-entry

18 8 is 1 if there exists l {1,..., h} such that i π l and j π l+1 and 0 otherwise. For an ordered partition Π = (π 1,..., π h ) of {1,..., n} into h nonnempty subsets, note that (1) χ Π is h-cyclic and Γ (χ Π ) contains every arc (i, j) for i π l and j π l+1 ; and (2) A M n (C) is h-cyclic with ordered partition Π if and only if Γ (A) Γ (χ Π ) Upper-triangular Toeplitz Matrices In this subsection we establish the following lemmas concerning upper-triangular Toeplitz matrices. Lemma For n 2, let λ, c, y 1,..., y n 1 C, and y n 1 0. If y := (y 1, y 2,..., y n 1 ) T C n 1, ȳ := (c, y 1,..., y n 2 ) T C n 1, Ĵ := J n 1(λ) y M n (C), z T λ and ˆQ := I n 1 z T ȳ y n 1 M n (C) then ˆQJ ˆQ 1 = J n (λ). Proof. Note that ˆQ 1 = I n 1 ȳ y n 1 z T 1 y n 1 ˆQĴ ˆQ 1 = I n 1 z T, hence ȳ y n 1 J n 1(λ) z T y I n 1 λ ȳ y n 1 z T 1 y n 1

19 9 J n 1 (λ) = z T y + λȳ λy n 1 I n 1 ȳ y n 1 z T 1 y n 1 1 J n 1 (λ) y n 1 (y + λȳ J n 1 (λ)ȳ) =. z T λ If N n = [n ij ] C n n is the nilpotent matrix defined by 1, j = i + 1 n ij = 0, otherwise then it is well known that J n (λ) = λi n + N n. Furthermore, N n 1 ȳ = (y 1, y 2,..., y n 2, 0) T and y + λȳ J n 1 (λ)ȳ y n 1 = y + λȳ (λi n 1 + N n 1 )ȳ y n 1 = y + λȳ λȳ + N n 1ȳ y n 1 = (y 1, y 2,..., y n 2, y n 1 ) T (y 1, y 2,..., y n 2, 0) T y n 1 = (0, 0,..., 0, 1) T, which completes the proof.

20 10 Lemma For n 2, consider the upper-triangular Toeplitz matrix x 1... x k... x n F := x 1... x k M n (C).... where x 1,..., x n C and x i 0 for all i = 2,..., n. Then there exists an invertible matrix Q such that QF Q 1 = J n (x 1 ). Proof. Proceed by induction on n: for the base-case, i.e., n = 2, note that 1 0 x 1 x = x 1 x = x x 2 0 x 1 0 x 1 x 2 0 x 1 so that the base-case holds. 0 1 x 2 x x 2 Assume the assertion holds when n = k 1. Let F be the submatrix of F obtained from deleting the n th -row and column of F, i.e., x 1... x n 1 F =.... M n 1(C), x 1 and let x = (x n, x n 1,..., x 2 ) T C n 1 F. Then F = x and, by the inductive hypoth- z T x 1 esis, there exists an invertible matrix Q such that Q F Q 1 = J n 1 (x 1 ). Note that the matrix Q z Q := is invertible, Q 1 Q 1 z =, and z T 1 z T 1 QF Q 1 Q z F x Q 1 z = z T 1 z T x 1 z T 1,

21 11 Q F Qx Q 1 z = z T x 1 z T 1 1 Q F Q Qx = z T x 1 = J n 1(x 1 ) Qx. (1.3.3) z T x 1 If y := Qx C n 1, ȳ := (0, y 1,..., y n 2 ) T C n 1, then, following Lemma 1.3.1, a similaritytransform by the matrix ˆQ := I n 1 ȳ brings the matrix in (1.3.3) into the desired z T y n 1 form. Thus, the desired similarity matrix transformation is given by Q = ˆQ Q. In Table 1, we present the transformation Q and its inverse for several dimensions. Table 1: The transformation matrix Q and Q 1. n F Q Q 1 2 x 1 x x 1 0 x 2 0 x 2 3 x 1 x 2 x x 1 x 2 0 x 2 x 3 0 x 2 x x x x x 1 x 2 x 3 x x 1 x 2 x 3 0 x 2 x 3 x 4 0 x 2 x 3 x x 1 x x 2 2 2x 2 x x x x x 3 2 2x 2 3 x 2x 4 x 5 2 2x 3 x 4 2

22 12 Chapter 2 Theoretical Background 2.1 Perron-Frobenius Theory of Nonnegative Matrices We review some well-known results, concepts, and definitions from the theory of nonnegative matrices. For further results, see [11, Chapter 8] or [1] and references therein. Recall that a matrix A = [a ij ] M n (R) is said to be (entrywise) nonnegative (respectively, positive), denoted A 0 (respectively, A > 0), if a ij 0 (respectively, a ij > 0) for all 1 i, j n. Recall that a matrix A M n (R) is eventually positive (nonnegative) if there exists a nonnegative integer p such that A k is entrywise positive (nonnegative) for all k p. If p is the smallest such integer, then p is called the power index of A and is denoted by p(a). We recall the Perron-Frobenius theorem for positive matrices (see [11, Theorem ]). Theorem If A M n (R) is positive, then (a) ρ > 0; (b) ρ σ (A); (c) there exists a positive vector x such that Ax = ρx; (d) ρ is an algebraically (and hence geometrically) simple eigenvalue of A; and (e) λ < ρ for every λ σ (A) such that λ ρ.

23 13 There are nonnegative matrices with zero-entries that satisfy Theorem Recall that a nonnegative matrix A M n (R) is said to be primitive if it is irreducible and has only one eigenvalue of maximum modulus. The conclusions to Theorem apply to primitive matrices (see [11, Theorem 8.5.1]), and the following theorem is a useful characterization of primitivity (see [11, Theorem 8.5.2]). Theorem If A M n (R) is nonnegative, then A is primitive if and only if A k > 0 for some k 1. Perron-Frobenius type results also hold when A is an irreducible imprimitive matrix (see [11, 8.4]). Theorem Let A M n (R) and suppose that A is irreducible, nonnegative, and h is the cyclic index of A. Then (a) ρ > 0; (b) ρ σ (A); (c) there exists a positive vector x such that Ax = ρx; (d) ρ is an algebraically (and hence geometrically) simple eigenvalue of A; and (e) π (A) = {ρ exp (i2πk/h) : k R(h)}. The following result is the widest-reaching forrm of the Perron-Frobenius Theorem. Theorem If A M n (R) and A 0, then ρ (A) σ (A) and there exists a nonnegative vector x 0, x 0, such that Ax = ρ (A) x.

24 14 One can verify that the matrix possesses properties (a) through (e) of Theorem 2.1.1, is irreducible, but obviously contains a negative entry. This motivates the following concept. Definition A matrix A M n (R) is said to possess the strong Perron-Frobenius property if A possesses properties (a) through (e) of Theorem The following theorem characterizes the strong Perron-Frobenius property (see [6, Lemma 2.1], [14, Theorem 1], or [20, Theorem 2.2]). Theorem A real matrix A is eventually positive if and only if A and A T possess the strong Perron-Frobenius property. 2.2 Theory of Functions of Matrices We review some basic notions and results from the theory of matrix functions (for further results on the theory of matrix functions, see [8], [12, Chapter 9], or [15, Chapter 6]). Definition Let f : C C be a function and f (k) denote the kth derivative of f. The function f is said to be defined on the spectrum of A if the values f (k) (λ i ), k = 0,..., m i 1, i = 1,..., s, called the values of the function f on the spectrum of A, exist. Definition (Matrix function via Jordan canonical form). If f is defined on the spectrum of A M n (C), then ( k ) f(a) := Zf(J)Z 1 = Z f(j ni ) Z 1, i=1

25 15 where f(λ i ) f (λ i )... f(λ f(j ni ) := i ) f (λ i ) f(λ i ) f (n i 1) (λ i ) (n i 1)!. (2.2.1) The following equivalent definition (see [8, Theorem 1.12]) serves to illuminate the notion of a primary root, and will be used to examine the 2 2 case in Section 3.1. Definition (Matrix function via Hermite interpolation). Let f be defined on the spectrum of A M n (C) and let ϕ be the minimal polynomial of A. Then f(a) := p(a), where p is the polynomial of degree less than s n i = deg ϕ i=1 that satisfies the interpolation conditions p (j) (λ i ) = f (j) (λ i ), j = 0,..., n i 1, i = 1,..., s (2.2.2) There is a unique such p and it is known as the Hermite interpolating polynomial. Remark The Hermite interpolating polynomial p is given explicitly by the Lagrange- Hermite formula p(t) = s i=1 [( ni 1 j=0 1 j! φ(j) i (λ i )(t λ i ) j ) j i (t λ j ) n j ], (2.2.3) where φ i (t) = f(t)/ j i (t λ j) n j. For a matrix with distinct eigenvalues, i.e., n i = 1, s = n, (2.2.3) reduces to p(t) = n i=1 f(λ i )l i (t), l i (t) = j=1 j i ( t λj λ i λ j ). (2.2.4)

26 16 Formulae (2.2.3) and (2.2.4) will be used to classify the primary matrix roots 2 2 matrices in Section 3.1. Theorem Let f be defined on the spectrum of a nonsingular matrix A M n (C). If J = t i=1 J n i (λ i ) = Z 1 AZ is a Jordan canonical form of A, then is a Jordan canonical form of f(a). Proof. Because f(a) = Z (f(j)) Z 1 J f := t J ni [f(λ i )] i=1 = Z ( k ) i=1 f(j n i ) Z 1, note that, for each i = 1,..., t, f(j ni ) is an upper-triangular, Toeplitz matrix. Following Lemma 1.3.1, for each i = 1,..., k, there exists an invertible matrix Q i M ni (C) such that f(j ni ) = Q i J ni (f(λ i ))Q 1 i, and hence f(a) = Z = Z = and the result is established. ( t f(j ni )Z 1 i=1 [ t Z i=1 t Q i J ni [f(λ i )]Q 1 i t Q i) i=1 i=1 ( t ) = Z J ni [f(λ i )] i=1 ] J ni [f(λ i )] Z 1 Z 1 ( t i=1 Q 1 i Z 1 ) Definition For z = r exp (iθ) C, where r > 0, and an integer p > 1, let z 1/p := r 1/p exp (iθ/p), and, for j {0, 1,..., p 1}, define f j (z) = z 1/p exp (i2πj/p) = r 1/p exp (i [θ + 2πj] /p), (2.2.5)

27 17 i.e., f j is the (j + 1)st-branch of the pth-root function. Note that f (k) j [ (z) = 1 k 1 ] ( i[2πj + θ(1 kp)] (1 ip) r (1 kp)/p exp p k p i=0 ), (2.2.6) where k is a nonnegative integer and the product k 1 i=0 (1 ip) is empty when k = 0. Next we present several technical, but easily establishable lemmas on the branches of the pth-root function. Lemma If z = r exp (iθ), z = r exp (iθ ) C, and r < r, then. Proof. Note that f j (z) = r 1/p exp (i [θ + 2πj] /p) = r 1/p exp (i [θ + 2πj] /p) = r 1/p < (r ) 1/p = (r ) 1/p exp (i [θ + 2πj ] /p) = (r ) 1/p exp (i [θ + 2πj ] /p) = f j (z ). Lemma For z C, I(z) 0, j, j R(p), and f (k) j f (k) j ( z) if and only if j + j 0 (modp). as in (2.2.6), we have f (k) j (z) = Proof. Note that f (k) j (z) = f (k) ( z) j

28 18 exp (i[2πj + θ(1 kp)]/p) = exp (i[ 2πj + θ(1 kp)]/p) exp (i[2πj + θ(1 kp)]/p) = exp (i[ 2πj + θ(1 kp)]/p) exp (i2πlp/p) exp (i[2πj + θ(1 kp)]/p) = exp (i[2π(pl j ) + θ(1 kp)]/p) i[2πj + θ(1 kp)]/p = i[2π(pl j ) + θ(1 kp)]/p j + j = lp j + j 0 (modp), where l Z. Finally, note that j + j 0 (modp) j = j = 0 or j = p j. Lemma Let z = r exp (iπ), r > 0. For j, j {0, 1,..., p 1} and f (k) j as in (2.2.6), we have f (k) j (z) = f (k) j (z) if and only if j + j = p 1. Proof. Note that f (k) j (z) = f (k) (z) exp (i[2πj + π(1 kp)]/p) = exp (i[ 2πj π(1 kp)]/p) j exp (i[2πj + π(1 kp)]/p) = exp (i[ 2πj π(1 kp)]/p) exp (i2π(p kp)) exp (i[2πj + π(1 kp)]/p) = exp (i[ 2πj π(1 kp) + 2π(p kp)]/p) i[2πj + π(1 kp)]/p = i[ 2πj π(1 kp) + 2π(p kp)]/p 2πj + π(1 kp) = 2πj π(1 kp) + 2π(p kp) 2πj + 2π(1 kp) = 2πj + 2π(p kp) j + j = p 1. The following theorem classifies all pth-roots of a general nonsingular matrix [23, Theorems 2.1 and 2.2].

29 19 Theorem (Classification of pth-roots of nonsingular matrices). If A M n (C) is nonsingular, then A has precisely p s pth-roots that are expressible as polynomials in A, given by ( t ) X j = Z f ji (J ni ) Z 1, (2.2.7) i=1 where j = (j 1,..., j t ), j i {0, 1,..., p 1}, and j i = j k whenever λ i = λ k. If s < t, then A has additional pth-roots that form parameterized families ( t ) X j (U) = ZU f ji (J ni ) U 1 Z 1, (2.2.8) i=1 where U is an arbitrary nonsingular matrix that commutes with J and, for each j, there exist i and k, depending on j, such that λ i = λ k, while j i j k. In the theory of matrix functions, the roots given by (2.2.7) are called primary functions (or roots) of A and are expressible as polynomials in A, and the roots given by (2.2.8), which exist only if A is derogatory (i.e., if some eigenvalue appears in more than one Jordan block), are called the nonprimary functions or roots of A and are not expressible as polynomials in A [8, Chapter 1]. The next result provides a necessary and sufficient condition for the existence of a root for a general matrix (see [22]). Theorem (Existence of pth-root). A matrix A M n (C) has a pth-root if and only if the ascent sequence of integers d 1, d 2,... defined by d i = dim (null (A i )) dim (null (A i 1 )) has the property that for every integer ν 0 no more than one element of the sequence lies strictly between pν and p(ν + 1).

30 20 Before we state results concerning the matrix roots of a real matrix, we state some wellknown results concerning the real Jordan canonical form for real matrices (see [11, Section 3.4], [15, Section 6.7]). Theorem (Real Jordan canonical form). If A M n (R) has r real eigenvalues (including multiplicities) and c complex conjugate pairs of eigenvalues (including multiplicities), then there exists a real, invertible matrix R M n (R) such that r R 1 k=1 AR = J R = J n k (λ k ), r+c k=r+1 C n k (λ k ) where: 1. C(λ) I 2 C(λ) C k (λ) :=.... M 2k (R); (2.2.9).. I2 C(λ) 2. C(λ) := R(λ) I(λ) I(λ) M 2 (R); (2.2.10) R(λ) 3. λ 1,..., λ r are the real eigenvalues (including multiplicities) of A; and 4. λ r+1, λ r+1,..., λ r+c, λ r+c are the complex eigenvalues (including multiplicities) of A.

31 21 Lemma Let λ C and suppose C k (λ) and C(λ) are defined as in (2.2.9) and ( k ) (2.2.10), respectively. If S k := i=1 S M 2k (R), where S := i i, then 1 1 S 1 k C k(λ)s k = D k (λ) := where D(λ) := λ 0 0 λ. D(λ) I 2 D(λ).... M 2k (C), (2.2.11).. I 2 D(λ) Proof. Proceed by induction on k, the number of 2 2-blocks: when k = 1 one readily obtains S 1 C(λ)S = 1 i 1 R(λ) 2 i 1 I(λ) I(λ) i i = D(λ). R(λ) 1 1 Now assume the assertion holds for all matrices of the form (2.2.9) of dimension 2(k 1). Note that the matrices in the product S 1 k C k(λ)s k can be partitioned as S 1 k 1 Z C k 1(λ) Y S k 1 Z, Z T S 1 Z T C(λ) Z S. where Z M 2(k 1),2 (R) is a rectangular zero matrix, Y = M 2(k 1),2 (R), and Z 2 Z 2 is the 2 2 zero matrix. With the above partition in mind, and following the induction Z 2 I 2

32 22 hypothesis, we obtain S 1 C k (λ)s = D k 1(λ) Z T S 1 k 1 Y S, S 1 C(λ)S but note that S 1 k 1 Y S = Y and S 1 C(λ)S = D(λ). Lemma Let λ C and suppose D k (λ) and D(λ) are defined as in Lemma If P k is the permutation matrix given by ] P k = [e 1 e 3... e 2k 1 e 2 e 4... e 2k M 2k (R), (2.2.12) where e i denotes the canonical basis vector in R n of appropriate dimension, then P T k D k (λ)p k = J k (λ) J k ( λ). Proof. Proceed by induction on k: the base-case when k = 1 is trivial, so we assume the assertion holds for matrices of the form (2.2.11) of dimension 2(k 1). If z n denotes the n 1 zero vector, then λ e T 2 0 D(λ) = z 2(k 1) D 2(k 1) ( λ) e 2(k 1), 0 z2(k 1) T λ and if ˆP is the permutation matrix defined by 1 z2(k 1) T 0 ˆP := z 2(k 1) P 2(k 1) z 2(k 1) M 2k(R), 0 z2(k 1) T 1

33 23 then, following the induction-hypothesis, λ zk 1 T e T 1 0 ˆP T D(λ) ˆP z = k 1 J k 1 ( λ) Z k 1 z k 1. (2.2.13) z k 1 Z k 1 J k 1 (λ) e k 1 0 zk 1 T zk 1 T λ A permutation-similarity by the matrix P defined by 1 I P := k 1 M 2k (R) I k 1 1 brings the matrix in the right-hand side of (2.2.13) to the desired form. The proof is completed by noting that 1 ] ˆP P = [e 1 e 2... e 2(k 1) e 3... e 2k 1 e 2k I k 1 I k 1 = P k, 1 since right-hand multiplication by P permutes columns 2 through k with columns k + 1 through 2k 1. Corollary Let λ C, λ 0, and let f be a function defined on the spectrum of J k (λ) J k ( λ). For j a nonnegative integer, let f (j) λ denote f (j) (λ). If C k (λ) and C(λ) are

34 24 defined as in (2.2.9) and (2.2.10), respectively, then ( ) C(f λ ) C(f λ )... C f (k 1) λ (k 1)! f(c k (λ)) = C(f λ )..... M 2k (R)... C(f λ ) C(f λ ) if and only if f (j) λ = f (j) λ. Proof. Following Lemmas Lemma and Lemma , P T k S 1 k C k(λ)s k P k = J k (λ) J k ( λ). (2.2.14) Since f(a) = f(x 1 AX) ([8, Theorem 1.13(c)]), f(a B) = f(a) f(b) [8, Theorem 1.13(g)], and f (j) λ = f (j) λ for all j, applying f to (2.2.14) yields Pk T S 1 k f(c k(λ))s k P k = f(j k (λ)) f(j k ( λ) ) = f(jk (λ)) f(j k (λ)). Hence, and ] k f(c k(λ))s k = P k [f(j k (λ)) f(j k (λ)) Pk T ( ) D(f λ ) D(f λ )... D f (k 1) λ (k 1)!. = D(f λ ) D(f λ ) D(f λ ) S 1 ( ) D(f λ ) D(f λ )... D f (k 1) λ (k 1)! f(c k (λ)) = S k D(f λ ) D(f λ ) D(f λ ) S 1 k

35 25 ( ) C(f λ ) C(f λ )... C f (k 1) λ (k 1)!. = C(f λ ) C(f λ ) C(f λ ) The converse follows by noting that, for µ, ν C, the product S µ 0 S 1 = i i µ 0 1 i ν ν i 1 = 1 iµ iν i 1 2 µ ν i 1 = 1 µ + ν i(ν µ) 2 i(µ ν) µ + ν is real if and only if ν = µ. Remark In general, our analysis reveals that f(c λ ) f f (C λ )... (k 1) (C λ ) (k 1)! f(c f(c k (λ)) = λ )..... M 2k (C),.. f (C λ ) f(c λ ) which bears a striking resemblance to (2.2.1). Corollary If λ C, I(λ) 0 and F j (C k (λ)) := S k P k f j 1 (J k (λ)) 0 Pk 0 f j2 (J k ( λ) T S 1 k M 2k (C), (2.2.15) )

36 26 where j = (j 1, j 2 ) and j 1, j 2 {0, 1,..., p 1}, then ( ) C(f j1 (λ)) C(f f (k 1) j (λ) j 1 (λ))... C 1 (k 1)!. F j (C k (λ)) = C(f j1 (λ))... M 2k (R)... C(f j1 (λ)) C(f j1 (λ)) if and only if j 1 = j 2 = 0 or j 1 = j 2 p. Proof. Follows from Lemma and Corollary Corollary If λ = r exp (iπ) C, where r > 0, and F j is defined as in (2.2.15), then ( ) C(f j1 (λ)) C(f f (k 1) j (λ) j 1 (λ))... C 1 (k 1)!. F j (C k (λ)) = C(f j1 (λ)).... M 2k (R).. C(f j1 (λ)) C(f j1 (λ)) if and only if j 1 + j 2 = p 1. Proof. Follows from Lemma and Corollary The next theorem provides a necessary and sufficient condition for the existence of a real pth-root of a real A (see [9, Theorem 2.3]) and our proof utilizes the real Jordan canonical form. Theorem (Existence of real pth-root). A matrix A M n (R) has a real pth-root if and only if it satisfies the ascent sequence condition specified in Theorem and, if p is even, A has an even number of Jordan blocks of each size for every negative eigenvalue.

37 27 Proof. Case 1: p is even. Following Theorem , there exists a real, invertible matrix R such that A = R J 0 J + J C R 1 where J 0 collects the singular Jordan blocks; J + collects the Jordan blocks with positive real eigenvalues; J collects Jordan blocks with negative real eigenvalues; and C collects blocks of the form (2.2.9) corresponding to the complex conjugate pairs of eigenvalues of A. By hypothesis, if J k (λ) is a submatrix of J, it must appear an even number of times; for every such pair of blocks, it follows that P k [J k (λ) J k (λ)]p T k = C k (λ), where P k is defined as in (2.2.12). Thus, there exists a permutation matrix P such that J 0 J 0 J A = RP T P + P T P R 1 = R J p R 1, J C C where R = RP T, and C collects all the blocks of the form (2.2.9). Since the ascent sequence condition holds for A it must also hold for J 0, so J 0 has a pth-root, say W 0, and W 0 can be taken real in view of the construction given in [22, Section 3]; clearly, there exists a real matrix W + such that W p + = J + and, following Corollaries and , there exists a real matrix W c such that Wc p = C. Hence, the matrix X = R[W 0 W + W c ] R 1 is a real pth-root of A.

38 28 Conversely, if A satisfies the ascent sequence condition and has an odd number of Jordan blocks corresponding to a negative eigenvalue, then the process just described can not produce a real matrix pth-root, as one of the Jordan blocks can not be paired, so that the root of such a block is necessarily complex. Case 2: p is odd. Follows similarly to the first case since real roots can be taken for J 0, J +, J, and C. We now present an analog of Theorem for real matrices. Theorem (Classification of pth-roots of nonsingular real matrices). Let F k be defined as in (2.2.15). If A M n (R) is nonsingular, then A has precisely p s primary pth-roots, given by r k=1 X j = R f j k (J nk (λ k )) 0 R 1, (2.2.16) 0 r+c k=r+1 F j k (C nk (λ k )) where j = (j 1,..., j r, j r+1,..., j r+c ), j k = (j k1, j k2 ) for k = r + 1,..., r + c, and j i = j k whenever λ i = λ k. If s < t, then A has additional nonprimary pth-roots that form parameterized families given by r k=1 X j (U) = RU f j k (J nk (λ k )) 0 U 1 R 1, (2.2.17) 0 r+c k=r+1 F j k (C nk (λ k )) where U is an arbitrary nonsingular matrix that commutes with J R, and for each j there exist i and k, depending on j, such that λ i = λ k while j i j k. Proof. Following Theorem , there exists a real, invertible matrix R such that r R 1 k=1 AR = J R = J n k (λ k ) ; r+c k=r+1 C n k (λ k )

39 29 if T = I, then r+c k=r+1 S n k P nk r T 1 k=1 J R T = J = J n k (λ k ) r+c k=r+1 [ )], Jnk (λ k ) J nk ( λk following Lemmas and Following Theorem , J R has p s primary roots given by r k=1 T f j k (J nk (λ k )) [ ] T 1 r+c k=r+1 f jk1 (J nk (µ k )) f jk2 (J nk ( µ k ) r k=1 = f j k (J nk (λ k )) 0, 0 r+c k=r+1 F j k (C nk (µ k )) where j k = (j k1, j k2 ) for k = r + 1,..., r + c, which establishes (2.2.16). If A is derogatory, then J R has additional roots of the form r k=1 T W f j k (J nk (λ k )) [ ] W 1 T 1, r+c k=r+1 f jk1 (J nk (µ k )) f jk2 (J nk ( µ k ) where W is arbitray nonsingular matrix that commutes with J. Note that r k=1 T W f j k (J nk (λ k )) [ ] W 1 T 1 r+c k=r+1 f jk1 (J nk (µ k )) f jk2 (J nk ( µ k ) r k=1 = U f j k (J nk (λ k )) U 1, r+c k=r+1 F j k (C nk (λ k )) where U = T W T 1. Following [15, Theorem 1, 12.4], U is an arbitary, nonsingular matrix that commutes with J R, which establishes (2.2.17).

40 30 The next theorem identifies the number of real primary pth-roots of a real matrix (c.f. [9, Theorem 2.4]) and our proof utilizes the real Jordan canonical form. Corollary Let the nonsingular real matrix A have r 1 distinct positive real eigenvalues, r 2 distinct negative real eigenvalues, and c distinct complex-conjugate pairs of eigenvalues. If p is even, there are (a) 2 r 1 p c real primary pth-roots when r 2 = 0; and (b) no real primary pth-roots when r 2 > 0. If p is odd, there are p c real primary pth-roots. Proof. Following Theorem , there exists a real, invertible matrix R such that r R 1 k=1 AR = J R = J n k (λ k ). r+c k=r+1 C n k (λ k ) Case 1: p is even. If r 2 > 0, then A does not possess a real primary root, since A must have an even number of Jordan blocks of each size for every negative eigenvalue and the same branch of the pth-root function must be selected for every Jordan block containing the same negative eigenvalue. If r 2 = 0, then, following Corollary , for every complex-conjugate pair of eigenvalues, there are p choices such that F jk (C nk (λ k )) is real. For every real eigenvalue, there are two choices such that f jk (J nk (λ k )) is real, yielding 2 r 1 p c real primary roots. Case 2: p is odd. The matrix X j is real provided that the principal-branch of the pth-root function is chosen for every real eigenvalue. Similar to the first case, there are p choices such that F jk (C nk (λ k )) is real, yielding p c real primary roots. The following theorem extends Theorem to include singular matrices (see [9, Theorem 2.6]). Theorem (Classification of pth-roots). Let A M n (C) have the Jordan canonical form Z 1 AZ = J = J 0 J 1, where J 0 collects together all the Jordan blocks corresponding to

41 31 the eigenvalue zero and J 1 contains the remaining Jordan blocks. If A possesses a pth-root, then all pth-roots of A are given by A = Z (X 0 X 1 ) Z 1, where X 1 is any pth-root of J 1, characterized by Theorem , and X 0 is any pth-root of J 0.

42 32 Chapter 3 Matrix Roots of Nonnegative Matrices Matrices We analyze and categorize the primary matrix roots of a 2 2 nonnegative matrix via the interpolating-polynomial definition of a matrix function (see Definition 2.2.3, (2.2.3), and (2.2.4)): let A M 2 (R) and suppose A 0. Following Theorem and the fact that the spectrum of a real matrix must be self-conjugate (see, e.g., [11, Appendix C]), without loss of generality, we may assume that σ (A) = {1, λ}, where 1 λ 0 and λ R. Note that the Jordan form of A must be or 1 0 (3.1.1) 0 λ 1 1. (3.1.2) 0 1 As in (2.2.5), let f j denote the (j + 1)st-branch of the pth-root function. For 2 2 matrices having the Jordan form (3.1.1), with the exception when λ = 1, following (2.2.4), the required Lagrange-Hermite interpolating polynomial is given by p(t) = f j1 (1) t λ ( ) t 1 1 λ + f j j2 (λ) = ωj 1 p f j2 (λ) t + f j 2 (λ) λω j 1 p. λ 1 1 λ 1 λ Thus, if j = (j 1, j 2 ), then A has p 2 primary pth-roots given by A 1/p j = ωj 1 p f j2 (λ) 1 λ A + f j 2 (λ) λω j 1 p I 2. (3.1.3) 1 λ

43 > λ 0 Let A 1/p j be the matrix given in (3.1.3) and consider the two cases depending on the parity of p: (1) p is even. The root A 1/p j is real if and only if j 1, j 2 {0, p/2}. If j 1 = j 2 = 0, then A 1/p j = 1 p λ 1 λ A + p λ λ 1 λ I 2 0, because (1 p λ)/(1 λ) > 0 and ( p λ λ)/(1 λ) > 0. If j 1 = j 2 = p/2, then A 1/p j = If j 1 = p/2 and j 2 = 0, then A 1/p j p λ 1 1 λ A + λ p λ 1 λ I 2 0. can not be nonnegative since the off-diagonal entries of A are negative and not affected by a positive multiple of the identity. If j 1 = 0 and j 2 = p/2, then A 1/p j = 1 p λ 1 λ A + λ p λ 1 λ I 2, and A 1/p j 0 if and only if (1 p λ)a ii (λ p λ) for i = 1, 2. (2) p is odd. The root A 1/p j is real if and only if j 1 = j 2 = 0 and, if j 1 = j 2 = 0, then A 1/p j = 1 p λ 1 λ A + p λ λ 1 λ I 2 0, because (1 p λ)/(1 λ) > 0 and ( p λ λ)/(1 λ) > 0. If λ = 0, then A is a rank-one matrix and A 1/p j = ω j 1 p A, j 1 R(p)

44 > 0 > λ Let A 1/p j be the matrix given in (3.1.3) and consider the two cases depending on the parity of p: (1) p is even. If λ < 0 and p is even, then A 1/p j M 2 (R). (2) p is odd. Same as (2) when 1 > λ > λ = 1 If the Jordan form of A is (3.1.1), then (trivially) A = I 2 and { } ω j 1 p 1 p I 2 are the pth-roots j 1 =0 of A. If the Jordan form of A is (3.1.2), then (2.2.3) reduces to p(t) = f j1 (1) + f j 1 (1)(t 1) = ω j 1 p + ωj 1 p p (t 1) so that j = f j1 (1)I 2 + f j 1 (1)(A I 2 ) = ωj 1 p p A + A 1/p [ ω j 1 p ωj 1 p p ] I 2, which is real if and only if j 1 = 0 (or j 1 = p/2, when p is even). When j 1 = 0, note that A 1/p j = 1 [ p A ] I 2 = A + (p 1)I 2. p p As a special case, note that if A = 1 1, then J p := p N. p 1 is a pth-root for all

45 Primitive Matrices We present our main results for primitive matrices. Theorem Let the nonsingular primitive matrix A have r 1 distinct positive real eigenvalues, r 2 distinct negative real eigenvalues, and c distinct complex-conjugate pairs of eigenvalues. If p is even, there are (a) 2 r1 1 p c eventually positive primary pth-roots when r 2 = 0; and (b) no eventually positive primary pth-roots if r 2 > 0. If p is odd, there are p c eventually positive primary pth-roots. Proof. If r 2 > 0, then, following Theorem , the matrix A does not have a real primary pth-root, hence, a fortiori, it can not have an eventually positive primary pth-root. If r 2 = 0, then, following Theorems and 2.1.1, there exists a real, invertible matrix R such that ] ρ A = [x R r k=2 J n k (λ k ) yt, (3.2.1) (R 1 ) r+c k=r+1 C n k (λ k ) ] where R = [x R M n (R), R 1 = y M n (R), x > 0 is the right Perron-vector, (R 1 ) and y > 0 is the left Perron-vector. Following Lemma 2.2.7, f j (z) < f j (z ) for all j, j {0, 1,..., p 1}, so that any primary pth-root X of the form p ρ X j = R r k=2 f j k (J nk (λ k )) r+c k=r+1 F j k (C nk (λ k )) R 1

46 36 inherits the strong Perron-Frobenius property from A. Since f(a) T = f(a T ) ([8, Theorem 1.13(b)]), a similar argument demonstrates that X T inherits the strong Perron-Frobenius property from A T. Case 1: p is even. For all k = 2,..., r, there are two possible choices such that f jk (J nk (λ k )) is real; following Corollary , for all k = r + 1,..., r + c, there are p choices such that F jk (C nk (λ k )) is real. Thus, there are 2 r1 1 p c possible ways to select X and X T to be real. Case 2: p is odd. For all k = 2,..., r, the principal-branch of the pth-root function must be selected so that f jk (J nk (λ k )) is real and, similar to the previous case, there are p choices such that F jk (C nk (λ k )) is real. Hence, there are p c ways to select X and X T real. Regardless of the parity of p, following Theorem 2.1.6, the matrices X and X T are eventually positive. Example We demonstrate Theorem via an example. Consider the matrix A =

47 37 Note that A = RJ R R 1, where J R = , R = , and R 1 = Because σ (A) = {10, 1 + i, 1 + i, 1 i, 1 i}, following Theorem 3.2.1, A has eight primary matrix square-roots, of which the matrices X j = R R 1

48 = , where j = (0, (0, 0)), and X j = R = , where j = (0, (1, 1)), are eventually positive square-roots of A. R 1 The following question arises from Example 3.2.2: are X j and X j the only eventually positive square-roots of A? This is answered in the following result, which yields an explicit description of the eventually positive primary roots of a nonsingular primitive matrix A. Theorem Let A be a nonsingular, primitive matrix and X j be any primary pth-root

49 39 of A of the form f j1 (ρ) X j = R r k=2 f j k (J nk (λ k )) R 1, r+c k=r+1 F j k (C nk (λ k )) where j = (j 1,..., j r, j r+1,..., j r+c ) and j k = (j k1, j k2 ), for k = r + 1,..., r + c. If p is odd, then X j is eventually positive if and only if 1. j 1 = 0; 2. j k = 0 for all k = 2,..., r; and 3. j k = (0, 0) or j k = (j k1, p j k1 ) for all k = r + 1,..., r + c. If p is even, then X j is eventually positive if and only if 1. j 1 = 0; 2. j k = 0 or j k = p/2 for all k = 2,..., r; and 3. j k = (0, 0) or j k = (j k1, p j k1 ) for all k = r + 1,..., r + c. Proof. We demonstrate necessity as sufficiency is shown in the proof of Theorem To this end, we demonstrate the contrapositive. Case 1: p is odd. If the principal branch of the pth-root function is not selected for the Perron eigenvalue, then X j can not possess the strong Perron-Frobenius property; if j k 0 for some k {2,..., r}, then f jk (J nk (λ k )) is not real so that X j can not be real; similarly, X j is not real if j k (0, 0) or j k (j k1, p j k1 ) for some k {r + 1,..., r + c}. Case 2: p is even. Result is similar to the first case, but we note that, without loss of generality, we may assume that A does not have any negative eigenvalues (else it can not possess a primary root).

50 40 Theorem Let A be a primitive, nonsingular, derogatory matrix that possesses a real root, and let X j (U) be any nonprimary root of A of the form f j1 (ρ) X j (U) = RU r k=2 f j k (J nk (λ k )) U 1 R 1, r+c k=r+1 F j k (C nk (λ k )) where j = (j 1,..., j r, j r+1,..., j r+c ) and j k = (j k1, j k2 ), for k = r + 1,..., r + c, subject to the constraint that for each j, there exist i and k, depending on j, such that λ i = λ k, while j i j k. If p is even, then X j (U) is eventually positive if and only if (1) j 1 = 0; (2) j k = 0 or j k = p/2, for all k = 2,..., r; (3) j k = (0, 0) or j k = (j k1, p j k1 ) for all k = r + 1,..., r + c; (4) U is a real nonsingular matrix; and (5) Jordan blocks containing negative eigenvalues are transformed, via a permutation matrix, to blocks of the form (2.2.9) (see proof of Theorem ) and branches for these blocks are selected in accordance with Corollary If p is odd, then X j (U) is eventually positive if and only if (1) j 1 = 0; (2) j k = 0 for all k = 2,..., r; (3) j k = (0, 0) or j k = (j k1, p j k1 ) for all k = r + 1,..., r + c; and

51 41 (4) U is a real nonsingular matrix. Example We demonstrate Theorem via an example. Consider the matrix A = Note that A = RJ R R 1, where J R =

52 42 R = , and R 1 = If j = (0, (0, 0), (1, 1)), then, following Theorem 3.2.4, any matrix of the form X j (U) = RU 20 F (0,0) (C 2 (1 + i)) F (1,1) (C 2 (1 + i)) R 1 U 1,

53 43 where U = u u 2 u u 4 u u 3 u u 5 u u 2 u u 4 u u 3 u u 5 u 4 0 u 6 u u 8 u u 7 u u 9 u u 6 u u 8 u 9, u 7 u u 9 u 8 u 1 0, and u i R, for i = 2,..., 9, is an eventually positive square-root of A. Next, we present an analog of Theorem for real matrices. Theorem Let the primitive matrix A M n (R) have the real Jordan canonical form R 1 AR = J R = J 0 J 1, where J 0 collects all the singular Jordan blocks and J 1 collects the remaining blocks. If A possesses a real root, then all eventually positive pth-roots of A are given by A = R (X 0 X 1 ) R 1, where X 1 is any pth-root of J 1, characterized by Theorem or Theorem 3.2.4, and X 0 is a real pth-root of J 0. Remark It should be clear that Theorems 3.2.1, 3.2.3, 3.2.4, and remain true if the assumption of primitivity is replaced with eventually positivity. Recall that for A M n (C) with no eigenvalues on R, the principal pth-root, denoted by A 1/p, is the unique pth-root of A all of whose eigenvalues lie in the segment {z : π/p <

54 44 arg(z) < π/p} [8, Theorem 7.2]. In addition, recall that a nonnegative matrix A is said to be stochastic if n j=1 a ij = 1, for all i = 1,..., n. In [9], being motivated by discrete-time Markov-chain applications, two classes of stochastic matrices were identified that possess stochastic principal pth-roots for all p. As a consequence, and of particular interest, for these classes of matrices the twelfth-root of an annual transition matrix is itself a transition matrix. A (monthly) transition matrix that contains a negative entry or is complex is meaningless in the context of a model, however the following remark demonstrates that, under suitable conditions, a matrix-root of a primitive stochastic matrix will be eventually stochastic (i.e., eventually positive with row sums equal to one). Eventual stochasticity may be useful in the following manner: consider, for example, the application of discrete-time Markov chains in credit risk: let P = [p ij ] M n (R) be a primitive transition matrix where p ij [0, 1] is the probability that a firm with rating i transitions to rating j. Such matrices are derived via annual data, and the twelfth-root of such a matrix would correspond to a monthly transition matrix if the entries are nonnegative. However, in application, the twelfth-root would be used for forecasting purposes (e.g., to estimate the likelihood that a firm with credit-rating i, transitions to rating j, m months into the future). Thus, if R is the twelfth-root of P and R m 0, then r (m) ij is a candidate for the aforementioned probability. Moreover, it is well-known that n 2 2n + 2 is a sharp upper-bound for the primitivity index of a primitive matrix (see, e.g., [11, Corollary 8.5.9]) so that eventual stochasticity is feasible in application. Remark Theorems 3.2.1, 3.2.3, 3.2.4, and remain true if stochasticity is added as an assumption on the matrix A and the conclusion of eventually postivity is replaced with eventually stochasticity.

55 45 We conclude with the following remark. Remark If A is an eventually positive matrix matrix, then B := A q is eventually positive for all q N: thus, all the previous results hold for fractional powers of A. 3.3 Imprimitive Irreducible Matrices Consider the matrices A = 0 0 1, B = and C = One can verify that B 2 = C 2 = A, yet the matrix C is not eventually nonnegative despite possessing left and right positive eigenvectors. In this section, we investigate why it is guaranteed that B is an eventually nonnegative and C is not eventually nonnegative Complete Residue Systems Our discussion of complete residue systems begins with the following definition. Definition If R = {a 0, a 1,..., a h 1 } is a set of integers, then R is said to be a complete residue system (modh), denoted CRS (h), if the map f : R R(h) defined by a i q i := a i (modh) is injective or, equivalently, surjective. With the aforementioned definition, we readily glean the following equivalent properties if R is a CRS(h):

arxiv: v1 [math.ra] 16 Jul 2014

arxiv: v1 [math.ra] 16 Jul 2014 Matrix Roots of Imprimitive Irreducible Nonnegative Matrices Judith J. McDonald a, Pietro Paparella b, arxiv:1407.4487v1 [math.ra] 16 Jul 2014 a Department of Mathematics, Washington State University,

More information

Matrix functions that preserve the strong Perron- Frobenius property

Matrix functions that preserve the strong Perron- Frobenius property Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu

More information

Key words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius.

Key words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius. May 7, DETERMINING WHETHER A MATRIX IS STRONGLY EVENTUALLY NONNEGATIVE LESLIE HOGBEN 3 5 6 7 8 9 Abstract. A matrix A can be tested to determine whether it is eventually positive by examination of its

More information

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer

More information

Two applications of the theory of primary matrix functions

Two applications of the theory of primary matrix functions Linear Algebra and its Applications 361 (2003) 99 106 wwwelseviercom/locate/laa Two applications of the theory of primary matrix functions Roger A Horn, Gregory G Piepmeyer Department of Mathematics, University

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Primitive Digraphs with Smallest Large Exponent

Primitive Digraphs with Smallest Large Exponent Primitive Digraphs with Smallest Large Exponent by Shahla Nasserasr B.Sc., University of Tabriz, Iran 1999 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Detailed Proof of The PerronFrobenius Theorem

Detailed Proof of The PerronFrobenius Theorem Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

Section 1.7: Properties of the Leslie Matrix

Section 1.7: Properties of the Leslie Matrix Section 1.7: Properties of the Leslie Matrix Definition: A matrix A whose entries are nonnegative (positive) is called a nonnegative (positive) matrix, denoted as A 0 (A > 0). Definition: A square m m

More information

NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES

NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES MIKE BOYLE. Introduction By a nonnegative matrix we mean a matrix whose entries are nonnegative real numbers. By positive matrix we mean a matrix

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central

More information

Geometric Mapping Properties of Semipositive Matrices

Geometric Mapping Properties of Semipositive Matrices Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices

More information

Foundations of Matrix Analysis
