
CONVEXITY AND CONCAVITY OF THE PERRON ROOT AND VECTOR OF LESLIE MATRICES WITH APPLICATIONS TO A POPULATION MODEL

Stephen J. Kirkland
Institute for Mathematics and its Applications
University of Minnesota
Minneapolis, Minnesota

Michael Neumann
Department of Mathematics
University of Connecticut
Storrs, Connecticut 06269-3009

Research supported by NSF Grants Nos. DMS- and DMS- .

Abstract

This paper considers the Leslie model of population growth and analyzes its asymptotic growth rate and its asymptotically stable age distribution as functions of the fecundity and survival rates of each age group in the population. This analysis is performed by computing first and second order partial derivatives of the Perron root and vector of a Leslie matrix with respect to each relevant entry in the matrix, with emphasis on the second partial derivatives. We discuss the signs of these derivatives as well as the qualitative implications that our results have for the Leslie model. Where possible, quantitative interpretations of the results are also given. Throughout, the techniques employ ideas from the theory of group generalized inverses.

1. INTRODUCTION

The purpose of this paper is to investigate first and second order effects of changes in fecundity and survival rates of various age groups on the asymptotic rate of growth and the asymptotically stable age distribution vector of the Leslie population model. This population model can be represented by a nonnegative matrix whose Perron root and, appropriately normalized, Perron eigenvector then furnish the rate of growth and the stable age distribution of the model, respectively. The first and second order effects are obtained by computing the first and second order partial derivatives of the Perron root and Perron vector with respect to those matrix entries which represent the fecundity and survival rates of each age group. The existence of these derivatives is assured because in the problem's setting the Perron root is simple. It should be mentioned that first order effects of changes in the fecundity and survival rates upon the growth rates of the model have already been obtained by authors such as Demetrius [6], Goodman [12], and Lal and Anderson [15].

In the Leslie model one assumes that the population consists of n age groups. Let x_i(t) denote the number of individuals in the i-th age group at time t. Let F_i, i = 1,...,n, denote the fecundity of each individual in the i-th age group and let P_i, i = 1,...,n-1, denote the probability of survival of an individual from age i to age i+1. Assume that both the fecundity and survival rates are independent of the time t. Then, as can be readily ascertained, the age distribution at time t+1, t ≥ 0, can be described by the matrix-vector relation:

    x(t+1) = [ F_1  F_2  ...  F_{n-1}  F_n ]
             [ P_1   0   ...    0       0  ]
             [  0   P_2  ...    0       0  ] x(t) =: Ã x(t),                (1.1)
             [  .    .    .     .       .  ]
             [  0    0   ...  P_{n-1}   0  ]

where x(t) = (x_1(t), ..., x_n(t))^T for all t ≥ 0. For more background material on the Leslie population model see Pollard [18].

Papers in demography, cf. Demetrius [6], say that the population has reached a stationary age distribution if there exist a time t_0 and a constant λ > 0 such that

    x(t+1) = λ x(t),  for all t > t_0.                                      (1.2)

In this case λ is called the growth rate of the population. The term stationary age distribution comes from the fact that if (1.2) holds, then from time t_0 + 1 onwards the ratio between the various age groups in the population is maintained. We note further that if (1.2) holds, then

    x(t) = (1/λ) x(t+1) = (1/λ) Ã x(t) = ... = (1/λ^{t+1}) Ã^{t+1} x(0),  for all t > t_0.   (1.3)

Because a stationary age distribution is reached in finite time only when the initial age distribution vector x(0) is a linear combination of a Perron vector and a generalized null vector of Ã, we prefer to think of Demetrius' notions of growth rate of the population and stationary age distribution as the asymptotic growth rate and the asymptotically stable age distribution vector, respectively. If Ã is an irreducible matrix, then by the Perron-Frobenius theory (see Section 2 for preliminaries and Berman and Plemmons [2] for a comprehensive background), λ must be the Perron root of Ã and x(t) is a corresponding right Perron (eigen)vector, and the theory guarantees the existence and differentiability of both. Further, if Ã is primitive, namely, it is irreducible with a single eigenvalue of maximum modulus, then, e.g., the power method for computing dominant eigenvalues and corresponding normalized eigenvectors (see Stewart [19, p. 340]) shows that the right-hand side of (1.3) will always converge to a Perron vector of Ã, so that asymptotically (1.2) holds. We comment now that just as the entries of the nonnegative matrix Ã have a physical interpretation, we see that the Perron root and vector and their derivatives also have a physical meaning for the model.

One can relax the assumption of irreducibility. Even then the special structure of the Leslie matrix Ã implies that its Perron root remains simple, thus ensuring that the Perron root and vector are still differentiable. While the reducible case may lead to some interesting questions, we nevertheless restrict ourselves to the case of irreducible Leslie matrices (i.e., we will always assume that F_n > 0), since this captures the largest part of the mathematical content and difficulty.

It is natural to ask how the growth rate and the stable age distribution vectors are affected as we change the fecundity and survival probabilities at each age group. Demetrius [6] has investigated one of these questions by looking at the derivatives of the Perron root with respect to the fecundities and the survival rates, which he obtained via the characteristic equation for a Leslie matrix. His results can also be obtained from a more general theorem giving expressions for the partial derivatives of a simple eigenvalue of a matrix with respect to the matrix entries as follows: Let B = (b_{i,j}) be an n × n real or complex matrix and let λ be a simple eigenvalue of B. Let v and w be right and left eigenvectors of B corresponding to λ, normalized such that w^T v = 1. Then it is known, see for example Stewart [19, p. 305, Exer.], that

    ∂λ/∂b_{i,j} = w_i v_j,   1 ≤ i, j ≤ n,                                  (1.4)

where ∂λ/∂b_{i,j} is the derivative of λ with respect to the (i,j)-th entry at B. Consider the matrix Q = λI − B. Zero now is a simple eigenvalue of Q and therefore the group generalized inverse of Q, Q^#, exists and, as is known, QQ^# = Q^#Q = I − vw^T. Thus we see that the group inverse of Q can be used to express first order partial derivatives of λ with respect to the matrix entries at B. Further, as is shown in Deutsch and Neumann [7], the second order partial derivatives of λ with respect to the (i,j)-th entry can also be written in terms of Q^#. Specifically they showed that

    ∂²λ/∂b_{i,j}² = 2 w_i v_j (Q^#)_{j,i},   1 ≤ i, j ≤ n.                  (1.5)

Assume that B is an n × n nonnegative and irreducible matrix. Since any right and left Perron vectors of B are positive, (1.4) readily confirms the well known fact that the Perron value is a strictly increasing function in any of the matrix entries. In a series of papers Cohen [3], [4], and [5] established the fact that the Perron root is a convex function in the main diagonal of the matrix. His principal approach to proving this fact was probabilistic, relying on evolution equations due to Kac. In [7] a matrix theoretic proof of this fact is presented. Moreover, from the formula (1.5) which was found in [7] we see that for any pair (i,j), the convexity or concavity of the Perron root with respect to the (i,j)-th entry is determined by the sign of the (j,i)-th entry of Q^# = (λI − B)^#. (See Section 2 for a more precise discussion of the partial derivatives of eigenvalues of a matrix with respect to matrix entries.) Perturbation and convexity theory of the Perron root has been investigated by a number of authors. To list a few we mention Elsner [9], Friedland [11], Golub and Meyer [13], Haviv, Ritov, and Rothblum [14], and Meyer and Stewart [17].

Let us return to our Leslie matrix Ã and assume that it is irreducible. In this paper we explicitly compute expressions for the first and second order derivatives of the Perron root and an appropriately normalized Perron vector of a Leslie matrix with respect to its entries in the first row and on its subdiagonal. We shall then interpret what meaning our results have for the population model which the matrix represents. Most of our results show that younger age groups exert more influence on the behavior of the growth rate and the stable age distribution vector than older age groups do. Our experience is that it is much more difficult to analyze the effects of changes in survival rates on the population than it is to do so for changes in the fecundity rates.

The plan of this paper is as follows. In Section 2 we shall present further notation and preliminaries which are necessary for the work here. We shall obtain in this section an explicit formula for the group inverse of I − A, where A is the stochastic Leslie matrix associated with Ã. In Section 3 we shall derive formulas for the second order derivatives of the Perron value of a Leslie matrix with respect to its top row and subdiagonal. In doing so some results of Deutsch and Neumann in [8] on the first and second derivatives of the Perron vector will be extended. In Section 4 we shall do similarly for the partial derivatives of the Perron vector of Ã. In each of Sections 3 and 4 we shall explain what implications our results have for the population model, both qualitatively and, where possible, also quantitatively.
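To make the discussion above concrete, the construction (1.1), the convergence asserted below (1.3), and the derivative formula (1.4) can all be checked numerically. The following is a minimal sketch, assuming NumPy; the vital rates used are hypothetical and serve only as an illustration:

```python
import numpy as np

def leslie(F, P):
    """Leslie matrix (1.1): fecundities F on the top row,
    survival probabilities P on the subdiagonal."""
    n = len(F)
    A = np.zeros((n, n))
    A[0, :] = F
    A[np.arange(1, n), np.arange(n - 1)] = P
    return A

def perron(B):
    """Perron root with right and left Perron vectors, x normalized so
    that x[0] = 1 and y scaled so that y^T x = 1."""
    w, V = np.linalg.eig(B)
    k = int(np.argmax(w.real))
    x = V[:, k].real
    x = x / x[0]
    wl, U = np.linalg.eig(B.T)
    y = U[:, int(np.argmax(wl.real))].real
    y = y / (y @ x)
    return w[k].real, x, y

A = leslie([0.0, 1.2, 1.5, 0.4], [0.8, 0.7, 0.5])   # hypothetical vital rates
lam, x, y = perron(A)

# Iterating (1.1): the age distribution converges in direction to the
# Perron vector, i.e. to the asymptotically stable age distribution
v = np.array([100.0, 0.0, 0.0, 0.0])
for _ in range(60):
    v = A @ v

# Finite-difference check of (1.4): d(lambda)/d(b_ij) = w_i v_j
i, j, h = 0, 2, 1e-6
Ah = A.copy()
Ah[i, j] += h
lam_h, _, _ = perron(Ah)
```

Here the left eigenvector is obtained from the transpose, and the finite-difference quotient `(lam_h - lam) / h` should agree with `y[i] * x[j]` up to discretization error.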

2. NOTATION AND PRELIMINARIES

In this paper we shall use the following notation: R^k denotes the k-dimensional real space. R^{k,k} denotes the space of all k × k real matrices. e ∈ R^k denotes the k-dimensional vector whose entries are all 1's. e_i ∈ R^k denotes the i-th k-dimensional unit coordinate vector. E_{i,j} ∈ R^{k,k} is the matrix whose (i,j)-th entry is 1 and whose remaining entries are 0. The symbol ∼ will indicate algebraic expressions with the same sign.

Let u = (u_1, ..., u_n)^T ∈ R^n and v = (v_1, ..., v_n)^T ∈ R^n. We shall write u ≥ v if u_i ≥ v_i, i = 1,...,n. We shall say that u majorizes v, in notation u ⪰ v, if

    Σ_{j=1}^i u_j ≥ Σ_{j=1}^i v_j,  i = 1,...,n−1,  and  Σ_{j=1}^n u_j = Σ_{j=1}^n v_j.

We shall say that a function f : R^n → R^n is isotone if u ⪰ v ⟹ f(u) ⪰ f(v).

Let B ∈ R^{n,n} and consider the matrix equations

    BXB = B,   XBX = X,   and   BX = XB.

A matrix X ∈ R^{n,n} which satisfies all three equations, if it exists, is called the group inverse of B and is denoted by B^#. Moreover, if B^# exists and D is any nonsingular matrix, then

    (D^{-1} B D)^# = D^{-1} B^# D.                                          (2.1)
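For computations, the group inverse can be obtained from any full rank factorization B = CF via the formula B^# = C(FC)^{-2}F of Ben-Israel and Greville [1], the same device used in the proof of Lemma 2.1 below. A minimal sketch, assuming NumPy (the example matrix is hypothetical), which also exercises the three defining equations and the similarity rule (2.1):

```python
import numpy as np

def group_inverse(Q, tol=1e-10):
    """Group inverse Q# via a full rank factorization Q = C F (from the SVD),
    using Q# = C (F C)^{-2} F; valid when F C is nonsingular, i.e. when the
    elementary divisors of Q at the eigenvalue 0 are all linear."""
    U, s, Vt = np.linalg.svd(Q)
    r = int(np.sum(s > tol))
    C = U[:, :r] * s[:r]          # n x r, full column rank
    F = Vt[:r, :]                 # r x n, full row rank
    FC = F @ C
    return C @ np.linalg.solve(FC @ FC, F)

# Q = I - A for a (hypothetical) 3 x 3 irreducible stochastic Leslie matrix
A = np.array([[0.3, 0.5, 0.2],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Q = np.eye(3) - A
Qg = group_inverse(Q)
```

Since A is stochastic, Q is singular with 0 a simple eigenvalue, so the group inverse exists even though Q is not invertible.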

In particular, if D is a diagonal matrix whose diagonal entries are all positive, then all the entries of the matrices B^# and D^{-1} B^# D have identical signs in the same locations. It is known, see Ben-Israel and Greville [1], that a necessary and sufficient condition for B^# to exist is that the elementary divisors, if any, of B corresponding to the eigenvalue zero are all linear. Thus if B is an n × n nonnegative and irreducible matrix, so that its Perron root λ is simple, then 0 is a simple eigenvalue of Q = λI − B, showing that the group inverse of Q exists.

Suppose that B ∈ R^{n,n} has a simple eigenvalue, call it λ(B). It readily follows from considerations involving the minimal polynomial that there is an open ball in R^{n,n} about B such that every matrix in the ball has a simple eigenvalue. Thus for any E ∈ R^{n,n} and for sufficiently small t ∈ R, B + tE has a simple eigenvalue λ(B + tE) such that λ(B + tE) − λ(B) → 0 as t → 0. Wilkinson [20, pp. 66-67] shows that for sufficiently small t, λ(B + tE) can be expanded in a convergent power series about λ(B). Hence the derivatives of all orders of λ with respect to t exist at B. In particular, the partial derivative of λ with respect to the (i,j)-th entry at B is given by the limit

    ∂λ(B)/∂b_{i,j} := lim_{t→0} [λ(B + tE_{i,j}) − λ(B)]/t = w_i v_j,       (2.2)

by (1.4), where v = v(B) = (v_1, ..., v_n)^T and w = w(B) = (w_1, ..., w_n)^T are right and left eigenvectors of B corresponding to λ(B), normalized so that their inner product is 1. In a similar manner we define the higher order partial derivatives of λ at B with respect to the matrix entries. Wilkinson goes on to show that provided one of many standard normalizations (but not the infinity norm) is applied to the eigenvector corresponding to λ(B + tE) throughout the ball, then the entries of the corresponding eigenvector can be expanded in a convergent power series, and therefore they too are differentiable with respect to t at B. When it is absolutely clear from the context, we shall suppress the letter representing the matrix from the expressions for the partial derivatives, viz., we shall write ∂λ/∂b_{i,j} for ∂λ(B)/∂b_{i,j}, etc.

Our approach will be to first develop our results for stochastic Leslie matrices and then transform them to general Leslie matrices, from which we shall be able to draw our conclusions for the population model under consideration. To do so it will be helpful to have formulas connecting the partial derivatives with respect to the Perron root λ̃ of a general nonnegative and irreducible matrix B̃ and the stochastic and irreducible matrix B to which B̃ is transformed using the diagonal similarity

    B = (1/λ̃) D^{-1} B̃ D,                                                  (2.3)

where

    D = diag(x̃_1, x̃_2, ..., x̃_n)                                           (2.4)

and where x̃ = (x̃_1, ..., x̃_n)^T is a right eigenvector of B̃ corresponding to its Perron root λ̃. Throughout we shall normalize all right Perron vectors so that their first entry equals 1. Thus if ỹ is the left Perron vector of B̃ normalized so that x̃^T ỹ = 1, then it follows from (1.4), (1.5), (2.1), (2.3), and (2.4) that

    ∂²λ̃/∂b̃_{i,j}² = (2 ỹ_i (x̃_j)² / (λ̃ x̃_i)) Q^#_{j,i},                  (2.5)

where Q = I − B. Similar relations for the eigenvectors will also be useful; here a bar over a vector denotes deletion of its first entry (cf. Section 4), and D̄ = diag(x̃_2, ..., x̃_n). In particular, the following formulas can be obtained:

    ∂x̃̄/∂b̃_{i,j} = (x̃_j/(λ̃ x̃_i)) D̄ ∂x̄/∂b_{i,j},   1 ≤ i, j ≤ n,         (2.6)

and, for the second order derivatives with respect to the entries of the first row,

    ∂²x̃̄/∂b̃_{1,i}² = (x̃_i/λ̃)² D̄ ∂²x̄/∂b_{1,i}²,   1 ≤ i ≤ n.             (2.7)

For the specific case of the Leslie matrix Ã as given in (1.1), the right Perron vector is given by

    x̃ = (1, P_1/λ, P_1 P_2/λ², ..., P_1 P_2 ⋯ P_{n-1}/λ^{n-1})^T.           (2.8)
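The normalization (2.8) and the similarity (2.3)-(2.4) are easy to confirm numerically. A minimal sketch, assuming NumPy and hypothetical vital rates: given F and P it forms Ã, builds x̃ from (2.8), and checks that (1/λ̃) D^{-1} Ã D is stochastic:

```python
import numpy as np

F = np.array([0.0, 1.2, 1.5, 0.4])   # hypothetical fecundities (F_n > 0)
P = np.array([0.8, 0.7, 0.5])        # hypothetical survival rates
n = F.size

At = np.zeros((n, n))                # the general Leslie matrix (1.1)
At[0, :] = F
At[np.arange(1, n), np.arange(n - 1)] = P

lam = np.max(np.linalg.eigvals(At).real)   # Perron root (simple)

# Right Perron vector from (2.8), first entry normalized to 1
x = np.concatenate(([1.0], np.cumprod(P) / lam ** np.arange(1, n)))
D = np.diag(x)

A = np.linalg.solve(D, At @ D) / lam       # the stochastic Leslie matrix (2.9)
```

The transformed matrix should have unit row sums, nonnegative entries, and all ones on the subdiagonal, exactly the shape asserted in (2.9) below.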

With (2.3) and (2.4) in mind we find that Ã can be transformed into the stochastic and irreducible Leslie matrix

    A = [ a_1  a_2  ...  a_{n-1}  a_n ]
        [  1    0   ...     0      0  ]
        [  0    1   ...     0      0  ]                                      (2.9)
        [  .    .    .      .      .  ]
        [  0    0   ...     1      0  ]

where

    a_1 = F_1/λ ≥ 0   and   a_i = P_1 ⋯ P_{i-1} F_i / λ^i ≥ 0,  i = 2,...,n,   (2.10)

with a_n > 0.

LEMMA 2.1. Let A be an n × n irreducible stochastic Leslie matrix whose top row is given by (a_1, a_2, ..., a_n). Set

    Q = I − A = [ 1−a_1  −a_2  ...  −a_{n-1}  −a_n ]
                [  −1      1   ...     0        0  ]
                [   0     −1   ...     0        0  ]                         (2.11)
                [   .      .    .      .        .  ]
                [   0      0   ...    −1        1  ]

Then

    Q^# = [ (Q^#)_{1,1}  (Q^#)_{1,2} ]
          [ (Q^#)_{2,1}  (Q^#)_{2,2} ],                                      (2.12)

where

    (Q^#)_{1,1} = M^{-1} + (1/β) M^{-1} e r^T M^{-1} + (1/β) e r^T M^{-2}
                  + (1/β²)(r^T M^{-2} e) e r^T M^{-1},                       (2.13)

    (Q^#)_{1,2} = −(1/β) M^{-1} e − (1/β²)(r^T M^{-2} e) e,                  (2.14)

    (Q^#)_{2,1} = (1/β) r^T M^{-2} + (1/β²)(r^T M^{-2} e) r^T M^{-1},        (2.15)

and

    (Q^#)_{2,2} = −(1/β²)(r^T M^{-2} e),                                     (2.16)

and where e ∈ R^{n-1} is the vector of all 1's, r = (0, ..., 0, −1)^T ∈ R^{n-1},

    β = 1 − r^T M^{-1} e = (1/a_n) Σ_{i=0}^{n-1} (1 − s_i),                  (2.17)

and

    M^{-1} = (1/a_n) [ 1−s_0  s_{n-1}−s_1  s_{n-1}−s_2  ...  s_{n-1}−s_{n-2} ]
                     [ 1−s_0    1−s_1      s_{n-1}−s_2  ...  s_{n-1}−s_{n-2} ]
                     [ 1−s_0    1−s_1        1−s_2      ...  s_{n-1}−s_{n-2} ]   (2.18)
                     [   .        .            .         .          .        ]
                     [ 1−s_0    1−s_1        1−s_2      ...     1−s_{n-2}    ]

Here

    s_0 = 0   and   s_i = Σ_{j=1}^i a_j,   1 ≤ i ≤ n−1.                      (2.19)

Proof: Let M be the (n−1) × (n−1) leading principal submatrix of Q. It is not difficult to check, using the fact that the a_i's sum to 1, that Q admits the full rank factorization

    Q = [ M  ] ( I  −e ) =: CF.                                              (2.20)
        [ r^T]

According to Ben-Israel and Greville [1], Q^# = C(FC)^{-2}F. Now it can be verified that (FC)^{-1} = M^{-1} + (1/β) M^{-1} e r^T M^{-1}, where β and M^{-1} are as given in (2.17) and (2.18), respectively. That Q^# is given by (2.12), with the blocks as specified in (2.13)-(2.19), can now be ascertained from the above partitioning of C and F and the aforementioned formula for (FC)^{-1}.

We comment that the above lemma is a specialization, to the case of a singular M-matrix obtained from an n × n irreducible Leslie matrix, of a formula for the group inverse of a singular M-matrix obtained from a general n × n nonnegative and irreducible matrix found by Meyer [16, Theorem 5].

3. THE PERRON ROOT OF A LESLIE MATRIX AS A FUNCTION OF ITS TOP ROW AND SUBDIAGONAL

We begin by determining the second order behavior of the Perron root as a function of the top row.

THEOREM 3.1. Let A be an n × n irreducible stochastic Leslie matrix whose top row is given by (a_1, a_2, ..., a_n). Then the entries in the first column of the group inverse of Q = I − A are given by

    Q^#_{i,1} = (1/β) [ (n+1−i)/a_n − (1/(β a_n²)) Σ_{j=0}^{n-1} (1−s_j)(n−j) ],   i = 1,...,n,   (3.1)

where β and the s_j's are as in (2.17) and (2.19). In particular, the entries of the first column of Q^# are strictly decreasing from first to last, and |Q^#_{n,1}| ≥ Q^#_{1,1}. Moreover, there exists an integer k_0 < (n+1)/2 such that Q^#_{1,1}, ..., Q^#_{k_0,1} are all nonnegative while Q^#_{k_0+1,1}, ..., Q^#_{n,1} are all nonpositive.

Proof: For each 1 ≤ i ≤ n−1 we find from (2.13) that

    Q^#_{i,1} = e_i^T [ M^{-1} + (1/β) M^{-1} e r^T M^{-1} + (1/β) e r^T M^{-2} + (1/β²)(r^T M^{-2} e) e r^T M^{-1} ] e_1.

Also from (2.18) we have that

    r^T M^{-2} e = (r^T M^{-1})(M^{-1} e)
                 = (1/a_n) Σ_{j=0}^{n-2} (1−s_j)(n−2−j) − (1/a_n²) ( Σ_{j=0}^{n-2} (1−s_j) )².   (3.2)

It now follows, after several algebraic reductions, that (3.1) holds for i = 1,...,n−1; a similar calculation shows that (3.1) holds also in the case i = n.

From (3.1) it readily follows that the entries in the first column of Q^# are strictly decreasing from first to last and that the first entry is always positive (which is a general result of Meyer [16] for all diagonal entries of the group inverse of a singular and irreducible M-matrix, and is also an outcome of Cohen's results described in the introduction), while the last entry is always negative. Next, the comparison between Q^#_{1,1} and |Q^#_{n,1}| follows at once from (3.1). To complete the proof we need only show that if i ≥ (n+1)/2, then Q^#_{i,1} ≤ 0. From (3.1) we find that Q^#_{i,1} is nonpositive if and only if

    (n+1−i) Σ_{j=0}^{n-1} (1−s_j) ≤ Σ_{j=0}^{n-1} (1−s_j)(n−j).              (3.3)

But (3.3) holds if and only if

    Σ_{j=0}^{n-1-i} (1−s_{j+i})(j+1) ≤ Σ_{j=0}^{i-2} (1−s_{i-2-j})(j+1).     (3.4)

Since s_{j+i} ≥ s_{i-2-j} for each 0 ≤ j ≤ n−1−i, it now follows that if i ≥ (n+1)/2, then (3.4) always holds. Consequently, Q^#_{i,1} ≤ 0 whenever i ≥ (n+1)/2.

Several comments are in order. Theorem 3.1 shows that the sign change in the first column of Q^# always occurs somewhere before the ⌊(n+1)/2⌋-th position. We first furnish an example to show that the sign change can occur at any position after the first and prior to the (n+1)/2-th if n is odd, or prior to the ⌊(n+1)/2⌋-th if n is even. Fix 1 < k < (n+1)/2. Let A be given by (2.9) with

    a_j = { α      if j = k,
          { 1−α    if j = n,
          { 0      otherwise,

where 0 < α < 1 is yet to be specified. From (3.1) we find that

    Q^#_{k,1} ∼ −k(k−1)/2 + (1−α)(n−k)(n−k+1)/2

and, similarly,

    Q^#_{k+1,1} ∼ −k(k+1)/2 + (1−α)(n−k)(n−k−1)/2.

Now choose α so that

    1 − k(k+1)/((n−k)(n−k−1)) < α < 1 − k(k−1)/((n−k)(n−k+1)).

Then a straightforward exercise shows that Q^#_{k,1} > 0 > Q^#_{k+1,1}, so that such an α causes the sign change to occur at the desired admissible position.

Our next comment is motivated by the fact that if (a_1, ..., a_n) ⪰ (â_1, ..., â_n), then (s_1, ..., s_{n-1}) ≥ (ŝ_1, ..., ŝ_{n-1}). Consider the function

    f(a) = f(a_1, ..., a_n) = (Q^#_{1,1}, ..., Q^#_{n,1})^T,                 (3.5)

where the a_i's, 1 ≤ i ≤ n, are nonnegative and sum to 1. Is f an isotone function? That, in general, the answer to this question is in the negative can be illustrated by a pair of 3 × 3 stochastic Leslie matrices A and Â whose top rows are (1/2, 1/2 − α, α) and (0, 1 − α, α), respectively, with 0 < α < 1/2. Notice that the top row of A majorizes the top row of Â for each such α. However, as can be verified, f(1/2, 1/2 − α, α) does not majorize f(0, 1 − α, α). The following corollary shows that under certain conditions on A and Â, there is nevertheless an entrywise relationship between f(a) and f(â):

COROLLARY 3.2. Let a = (a_1, ..., a_n) and â = (â_1, ..., â_n) be the top rows of the irreducible stochastic Leslie matrices A and Â, respectively. Suppose that a ⪰ â. Then

    Q^#_{1,1} ≤ (Q̂^#)_{1,1}  ⟹  Q^#_{i,1} ≤ (Q̂^#)_{i,1},  1 ≤ i ≤ n,

so that also f(a) ≤ f(â), and

    Q^#_{n,1} ≥ (Q̂^#)_{n,1}  ⟹  Q^#_{i,1} ≥ (Q̂^#)_{i,1},  1 ≤ i ≤ n,

so that also f(a) ≥ f(â).

Proof: Since a ⪰ â, we have s_j ≥ ŝ_j for each j, and hence from (3.1) that

    Q^#_{i,1} − Q^#_{k,1} = (k−i)/Σ_{j=0}^{n-1}(1−s_j) ≥ (k−i)/Σ_{j=0}^{n-1}(1−ŝ_j) = (Q̂^#)_{i,1} − (Q̂^#)_{k,1}

for all 1 ≤ i ≤ k ≤ n. Thus

    Q^#_{i,1} − (Q̂^#)_{i,1} ≥ Q^#_{k,1} − (Q̂^#)_{k,1},   1 ≤ i ≤ k ≤ n,

from which the results follow.

Next we examine the behavior of the Perron root of a Leslie matrix as a function of its entries on the subdiagonal. From (1.5) and the preceding discussion we know that it suffices to determine the signs of the superdiagonal entries of Q^#. As we shall see, determining expressions to represent these is no more difficult than determining expressions for the entries of Q^# down its first column. But determining the signs of the entries on the superdiagonal, especially in the top first half, appears to be more difficult than determining the signs of Q^# down the first column.

THEOREM 3.3. Let A be an n × n irreducible stochastic Leslie matrix whose top row is given by (a_1, a_2, ..., a_n). Then, provided we interpret any summation whose lower limit exceeds its upper limit as zero, the entries in the superdiagonal positions of the group inverse of Q = I − A are given by

    Q^#_{k,k+1} = ((1−s_k)/(β a_n)) [ (n+1−k) − (1/(β a_n)) Σ_{j=0}^{n-1} (1−s_j)(n−j) ]
                  − (1/(β a_n)) Σ_{j=k}^{n-1} (1−s_j),   k = 1,...,n−1,      (3.6)

where β and the s_j's are given by (2.17) and (2.19), respectively. The positive entries on the superdiagonal of Q^#, if any, appear consecutively beginning at the (1,2)-th entry and form a nonincreasing sequence. The (k,k+1)-th entries with k ≥ (n−1)/2 are all nonpositive, and for k ≥ (n+1)/2 those entries form a nondecreasing sequence.

Proof: For each 1 ≤ k ≤ n−2, we find from (2.13) that

    Q^#_{k,k+1} = e_k^T [ M^{-1} + (1/β) M^{-1} e r^T M^{-1} + (1/β) e r^T M^{-2}
                  + (1/β²)(r^T M^{-2} e) e r^T M^{-1} ] e_{k+1}.             (3.7)

Also recall the expression for r^T M^{-2} e given in (3.2). Examining each term in the expansion of (3.7) and referring to (2.18), we find that

    e_k^T M^{-1} e_{k+1} = (s_{n-1} − s_k)/a_n,
    e_k^T M^{-1} e = (1/a_n) Σ_{j=0}^{n-2} (1−s_j) − (n−1−k),
    r^T M^{-1} e_{k+1} = −(1−s_k)/a_n,

and

    r^T M^{-2} e_{k+1} = (r^T M^{-1})(M^{-1} e_{k+1})
                       = −(1/a_n²) [ (1−s_k) Σ_{j=0}^{n-2}(1−s_j) − a_n Σ_{j=0}^{k-1}(1−s_j) ].

A number of algebraic reductions now yield (3.6). A similar calculation shows that (3.6) holds also when k = n−1.

To see that the positive entries in the superdiagonal positions of Q^#, if any, begin at the (1,2) entry, are consecutive, and form a nonincreasing sequence, we need only show that if Q^#_{k,k+1} ≥ 0, then Q^#_{k-1,k} ≥ Q^#_{k,k+1}. To this end note that if Q^#_{k,k+1} > 0, then, in particular, from (3.6) it follows that

    (n+1−k) − (1/(β a_n)) Σ_{j=0}^{n-1} (1−s_j)(n−j) > 0.                    (3.8)

But from (3.6) we find that

    Q^#_{k-1,k} − Q^#_{k,k+1} = (a_k/(β a_n)) [ (n+1−k) − (1/(β a_n)) Σ_{j=0}^{n-1} (1−s_j)(n−j) ].   (3.9)

The claim now follows from (3.8).

Now let k ≥ (n−1)/2. From (3.6), the quantity governing the sign of Q^#_{k,k+1} can be rearranged as

    Σ_{j=1}^{n-1-k} j (s_{k-j} − s_{k+j}) − Σ_{j=n-k}^{k} j (1 − s_{k-j}),

which is evidently nonpositive, since the s_j's are nondecreasing and k ≥ (n−1)/2 makes both ranges of summation admissible. Consequently Q^#_{k,k+1} ≤ 0.

Next consider the difference (3.9). Note that if Q^#_{k,k+1} is nonpositive and if

    (n+1−k) − (1/(β a_n)) Σ_{j=0}^{n-1} (1−s_j)(n−j) ≤ 0,                    (3.10)

then, necessarily, Q^#_{k-1,k} ≤ Q^#_{k,k+1} ≤ 0. But (3.10) holds if and only if

    Σ_{j=k}^{n-1} (1−s_j)(j+1−k) ≥ Σ_{j=0}^{k-2} (1−s_j)(k−1−j).             (3.11)

If k ≥ (n+1)/2, the terms in (3.11) can be paired off to yield an equivalent expression each of whose terms is nonnegative, so that (3.11) always holds. Since we have already proved that Q^#_{k,k+1} ≤ 0 for k ≥ (n−1)/2, the proof of the theorem is now complete.

Note that Theorem 3.3 implies that either the Q^#_{k,k+1}'s are all nonpositive, or there is an index k_0, necessarily less than (n−1)/2, such that Q^#_{k,k+1} ≥ 0 for 1 ≤ k ≤ k_0 and Q^#_{k,k+1} ≤ 0 for k_0+1 ≤ k ≤ n−1. While we are not able to show that there exist cases where k_0 attains the value ⌊(n−1)/2⌋, the following example shows that some Q^#_{k,k+1} can be positive when n is sufficiently large and when k is not too large compared with n. Specifically, consider the n × n stochastic Leslie matrix whose top row is given by

    a_j = { α      if j = k+1,
          { 1−α    if j = n,
          { 0      otherwise,                                                (3.12)

where 0 < α < 1 is yet to be specified. Choosing, for instance, the value α = 7/10 and substituting it into (3.6), one finds after some analysis that Q^#_{k,k+1} > 0 provided that k is small relative to n; in particular, for each fixed small k the superdiagonal entry Q^#_{k,k+1} becomes positive once n is sufficiently large, and the admissible range of k grows linearly with n.

Let us now revert to a general Leslie matrix Ã. From the discussion in Section 2, we see by (2.5) that the second derivative of the Perron root with respect to the entries in the top row at Ã is given by

    ∂²λ/∂F_i² = (2 (P_1 P_2 ⋯ P_{i-1})² / (λ^{2i-1} Σ_{j=0}^{n-1} (1−s_j))) Q^#_{i,1},   i = 1,...,n,   (3.18)

where Q = I − A, and A is as in (2.9) and (2.10). From this formula and the results of Theorem 3.1 it follows that if λ > max_{1≤i≤n} P_i, then

    ∂²λ/∂F_i² ≥ 0  ⟹  ∂²λ/∂F_i² ≥ ∂²λ/∂F_j²,   j > i.                       (3.19)

In particular, for λ as above,

    max_{1≤i≤n} ∂²λ/∂F_i² = ∂²λ/∂F_1².                                       (3.20)

Similarly, with respect to the subdiagonal entries we have that

    ∂²λ/∂P_k² = (2 λ (1−s_k) / (P_k² Σ_{j=0}^{n-1} (1−s_j))) Q^#_{k,k+1},   k = 1,...,n−1.   (3.21)

In particular, if P_j ≥ P_i and j > i, then

    ∂²λ/∂P_i² ≥ 0  ⟹  ∂²λ/∂P_i² ≥ ∂²λ/∂P_j².                                (3.22)

We wish now to discuss the implications of the results obtained in this section for the population model which the Leslie matrix represents. First the qualitative interpretations. Theorems 3.1 and 3.3 show that the asymptotic rate of increase can only be a convex function of the fecundity and survival rates of younger age groups, and that it is a concave function of these rates in older age groups. This suggests that only changes in the vital rates of younger age groups can yield a sharp change in the rate of increase of the asymptotic growth rate of the population. In attempting to compare the effects of changes in fecundity rates to changes in survival rates, we note that Theorem 3.1 always guarantees that for some younger age groups the asymptotic growth rate is a convex function of the fecundity. This is not necessarily the case for the effects of changes in the survival rates. Indeed, in our experience it is difficult to produce examples where the asymptotic growth rate is a convex function of the survival rate of even the first age group. In this sense, the example given after Theorem 3.3 is atypical: it requires a peculiar fecundity distribution and, as we observed, a large number of age groups in order for changes in survival rates to have a sharp effect on the asymptotic growth rate.

Next we consider the quantitative implications of our results for the population model. Recall that Demetrius' result [6, p. 134, eq. (8)] asserts that if λ > max_{1≤i≤n} P_i, then

    ∂λ/∂F_i > ∂λ/∂F_j,   j > i.                                              (3.23)

Our result in (3.19) reinforces (3.23) by showing that not only are the first partial derivatives of the asymptotic growth rate with respect to the fecundities ordered, but so are the second partial derivatives, at least for younger age groups. As for the effect of changes in the survival rates, here Demetrius' result [6, p. 134] is that

    P_j ≥ P_i and j > i  ⟹  ∂λ/∂P_i ≥ ∂λ/∂P_j.                               (3.24)

Once again our result in (3.22) reinforces (3.24). Earlier we noted that the condition ∂²λ/∂P_i² ≥ 0 which appears in (3.22) can only occur for younger age groups, if at all. However, when this is the case, the hypothesis P_j > P_i for j > i seems reasonable when i is small, because the survival rate for newborns is likely to be lower than that for slightly more mature individuals.
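The convexity-concavity pattern predicted by (3.18) and Theorem 3.1 can be probed directly with finite differences, with no group inverse computation at all. A minimal sketch, assuming NumPy and hypothetical vital rates: the second differences of λ in each fecundity should start positive for the youngest groups and switch, at most once and no later than position (n+1)/2, to nonpositive values:

```python
import numpy as np

def perron_root(B):
    """Perron root of an (essentially) nonnegative matrix."""
    return np.max(np.linalg.eigvals(B).real)

n = 8
A = np.zeros((n, n))
A[0, :] = 0.5                               # hypothetical fecundities
A[np.arange(1, n), np.arange(n - 1)] = 0.9  # hypothetical survival rates

h = 1e-4
lam0 = perron_root(A)
d2 = []                                     # second differences in F_1, ..., F_n
for i in range(n):
    Ap = A.copy(); Ap[0, i] += h
    Am = A.copy(); Am[0, i] -= h
    d2.append((perron_root(Ap) - 2.0 * lam0 + perron_root(Am)) / h ** 2)
```

By (3.18) the sign of each second difference is the sign of the corresponding first-column entry of Q^#, so the list `d2` should begin with positive values and end with negative ones, with a single sign change.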

4. THE PERRON VECTOR OF A LESLIE MATRIX AS A FUNCTION OF ITS TOP ROW AND SUBDIAGONAL

In this section we shall examine the derivatives of the Perron vector of a stochastic Leslie matrix with respect to the entries in the top row and subdiagonal of the matrix. We shall then infer implications concerning the asymptotically stable age distribution vector of the general Leslie population model.

Let Ã be an n × n general irreducible Leslie matrix whose Perron root is λ = λ(Ã). Throughout, its right Perron vector x = x(Ã) will be normalized so that its first entry is equal to 1. Denote by y = y(Ã) the left Perron vector of Ã normalized so that y^T x = 1. Since all the derivatives of the first entry of that Perron vector are zero, it will be convenient for us to work with the truncated form of x = (1, x_2, ..., x_n)^T, viz. x̄ = (x_2, ..., x_n)^T. As with this truncation, all vectors which we shall work with in this section will be in R^n, and a bar over them shall indicate their truncation to an (n−1)-dimensional vector by deleting their first entry. This notation is consistent with [8], and for Leslie matrices some results in that paper will also be generalized here. In the interest of convenience we shall derive a basic relation for the derivatives of x̄ which, in essence, is already contained in [8]. Put Q̃ = λI − Ã. On differentiating the matrix-vector relation Ãx = λx with respect to the (i,j)-th entry we obtain, on recalling (1.4), that

    E_{i,j} x + Ã ∂x/∂b̃_{i,j} = x_j y_i x + λ ∂x/∂b̃_{i,j},

or

    ∂x̄/∂b̃_{i,j} = x_j N_Q̃^{-1} (ē_i − y_i x̄),                              (4.1)

where N_Q̃ is the (n−1) × (n−1) trailing principal submatrix of Q̃. For the special case of a stochastic Leslie matrix we obtain the following:

LEMMA 4.1. Let A be an n × n irreducible stochastic Leslie matrix whose top row is given by (a_1, ..., a_n). Then

    ∂x̄/∂a_{1,i} = −(1/Σ_{j=0}^{n-1}(1−s_j)) (1, 2, ..., n−1)^T,   1 ≤ i ≤ n,   (4.2)

and, for k = 1,...,n−1, the l-th entry of ∂x̄/∂a_{k+1,k} is

    −l(1−s_k)/Σ_{j=0}^{n-1}(1−s_j),                             l = 1, ..., k−1,
    ( Σ_{j=0}^{n-1}(1−s_j) − l(1−s_k) ) / Σ_{j=0}^{n-1}(1−s_j),  l = k, ..., n−1,   (4.3)

where the s_i's are given in (2.19).

Proof: For the stochastic Leslie matrix A we have x(A) = (1, 1, ..., 1)^T, and for Q = I − A the trailing submatrix N_Q is lower bidiagonal with 1's on the diagonal and −1's on the subdiagonal, so that N_Q^{-1} is the lower triangular matrix of all 1's. Furthermore, if y = y(A) is the left Perron vector of A normalized as above, then

    y^T = (1/Σ_{j=0}^{n-1}(1−s_j)) ((1−s_0), (1−s_1), ..., (1−s_{n-1})).

The expression for ∂x̄/∂a_{1,i} now follows on substituting these values of x, y, and N_Q into (4.1) and on noting that ē_1 = 0. The expression for ∂x̄/∂a_{k+1,k} follows similarly.

Considering (4.2), we see that each entry of ∂x̄/∂a_{1,i} is negative for each i = 1,...,n. This fact is actually a consequence of a more general result of Elsner, Johnson, and Neumann [10]. In contrast to (4.2), (4.3) shows that the signs of the entries of ∂x̄/∂a_{k+1,k} are not necessarily uniform. Clearly the first k−1 entries are negative and the remaining ones form a decreasing sequence. The following argument shows that ∂x̄/∂a_{k+1,k} always has at least one positive entry. Note that

    Σ_{j=0}^{n-1}(1−s_j) − (k+1)(1−s_k) = Σ_{j=0}^{k-1}(s_k − s_j) + Σ_{j=k+1}^{n-1}(1−s_j) > 0,   1 ≤ k ≤ n−2.

Thus we see that the k-th and (k+1)-th entries of ∂x̄/∂a_{k+1,k} are always positive for 1 ≤ k ≤ n−2. A similar argument shows that the last entry of ∂x̄/∂a_{n,n-1} is also always positive. From the above we see that either the entries of ∂x̄/∂a_{k+1,k} in positions k through n−1 are all nonnegative, or there is an index i_0 ≥ k+1 such that the entries in positions k through i_0 are nonnegative while the remaining entries are nonpositive. To see that each such sign pattern can be realized, consider the stochastic Leslie matrix whose top row has α in the (k+1)-th position and 1−α in the n-th position, 0 < α < 1. It is readily verified that each admissible sign pattern can be obtained by a suitable choice of α.

We next investigate the sign pattern of the second partial derivatives of x̄ with respect to the entries of the top row of a stochastic Leslie matrix.

THEOREM 4.2. Let A be an n × n irreducible stochastic Leslie matrix whose top row is given by (a_1, ..., a_n), and let x = x(A) be its right Perron vector normalized so that its first entry is 1. Then

    ( ∂²x̄/∂a_{1,i}² )_l = (2l/(Σ_{j=0}^{n-1}(1−s_j))²) [ (l+1)/2 + i − (Σ_{j=0}^{n-1}(1−s_j)(j+1)) / (Σ_{j=0}^{n-1}(1−s_j)) ]   (4.4)

for all 1 ≤ l ≤ n−1 and 1 ≤ i ≤ n, where the s_i's are as specified in (2.19). Furthermore, when i is fixed, there is at most one sign change, from minus to plus, in the entries of ∂²x̄/∂a_{1,i}² as l increases. Similarly, when l is fixed and i is increased, there is at most one sign change in the sequence formed from the l-th entries of the vectors ∂²x̄/∂a_{1,i}², i = 1,...,n. In particular,

    l + 1 + 2i ≥ n + 1  ⟹  ( ∂²x̄/∂a_{1,i}² )_l ≥ 0,                         (4.5)

and an all-pluses sign pattern is possible when either i or l is sufficiently large.

Proof: As usual let Q = I − A, and let y be the left Perron vector of A normalized so that y^T x = 1. From formula (4.3) in [8] we find that

    ∂²x̄/∂a_{1,i}² = 2 y_1 x_i ( y_1 x_i N_Q^{-1} − Q^#_{i,1} I ) N_Q^{-1} x̄,   i = 1,...,n.   (4.6)

Substituting in the expressions for x, y, and N_Q^{-1} developed in Lemma 4.1, and using the formula for Q^#_{i,1} given in (3.1), yields (4.4) after some simplification. The claims concerning the sign patterns follow readily upon inspecting (4.4). That (4.5) holds can be established by the same argument given in Theorem 3.1 which was used to show that Q^#_{i,1} ≤ 0 for i ≥ (n+1)/2.

As in our remarks following Theorem 3.1 concerning the sign pattern of the entries of the first column of Q^#, examples can be constructed to show that when i is fixed and l is increasing, the switch from minus to nonnegative in (∂²x̄/∂a_{1,i}²)_l can occur at any admissible index. A similar remark holds for the sequence formed from the l-th entries of the vectors ∂²x̄/∂a_{1,i}² when l is fixed and i is increasing.

In the spirit of Corollary 3.2 we can prove the following majorization result for the second derivatives of the Perron vectors corresponding to two stochastic Leslie matrices:

COROLLARY 4.3. Let a = (a_1, ..., a_n) and â = (â_1, ..., â_n) be the top rows of the irreducible stochastic Leslie matrices A and Â, respectively. Suppose that a ⪰ â. Then

    ∂²x̄/∂a_{1,1}² ≥ ∂²x̂̄/∂â_{1,1}²  ⟹  ∂²x̄/∂a_{1,i}² ≥ ∂²x̂̄/∂â_{1,i}²,   1 ≤ i ≤ n,

and

    ∂²x̄/∂a_{1,n}² ≤ ∂²x̂̄/∂â_{1,n}²  ⟹  ∂²x̄/∂a_{1,i}² ≤ ∂²x̂̄/∂â_{1,i}²,   1 ≤ i ≤ n.

Proof: From (4.4) we have that

    ∂²x̄/∂a_{1,i}² − ∂²x̄/∂a_{1,j}² = (2(i−j)/(Σ_{m=0}^{n-1}(1−s_m))²) (1, 2, ..., n−1)^T
                                    ≥ (2(i−j)/(Σ_{m=0}^{n-1}(1−ŝ_m))²) (1, 2, ..., n−1)^T
                                    = ∂²x̂̄/∂â_{1,i}² − ∂²x̂̄/∂â_{1,j}²

for 1 ≤ j ≤ i ≤ n, where the ŝ_m's are determined from â as in (2.19). Consequently

    ∂²x̄/∂a_{1,i}² − ∂²x̂̄/∂â_{1,i}² ≥ ∂²x̄/∂a_{1,j}² − ∂²x̂̄/∂â_{1,j}²,   1 ≤ j ≤ i ≤ n,

from which the results follow.

We now return to a general Leslie matrix Ã. From (2.6) we have that, for the first partial derivatives of the Perron vector with respect to the top row and subdiagonal,

    ∂x̄(Ã)/∂F_i = (P_1 P_2 ⋯ P_{i-1} / λ^i) D̄ ∂x̄(A)/∂a_{1,i},   i = 1,...,n,   (4.7)

and

    ∂x̄(Ã)/∂P_k = (1/P_k) D̄ ∂x̄(A)/∂a_{k+1,k},   k = 1,...,n−1,                (4.8)

where A is given by (2.9) and (2.10) and D̄ = diag(x̃_2, ..., x̃_n). Similarly, for the second partial derivatives of the Perron vector with respect to the first row, we have from (2.7) that

    ∂²x̄(Ã)/∂F_i² = (P_1 P_2 ⋯ P_{i-1} / λ^i)² D̄ ∂²x̄(A)/∂a_{1,i}²,   i = 1,...,n.   (4.9)

From (4.8) we can conclude that if λ ≥ max_{1≤i≤n} P_i, then for l ≥ j ≥ k,

    ( ∂x̄(Ã)/∂P_k )_j ≥ 0  ⟹  ( ∂x̄(Ã)/∂P_k )_j ≥ ( ∂x̄(Ã)/∂P_k )_l,          (4.10)

that is, the nonnegative entries of ∂x̄(Ã)/∂P_k, all of which occur in positions j ≥ k, form a nonincreasing sequence. In particular,

    max_{1≤j≤n−1} ( ∂x̄(Ã)/∂P_k )_j = ( ∂x̄(Ã)/∂P_k )_k.                       (4.11)
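The sign patterns of Lemma 4.1 can be confirmed by differencing the Perron vector itself. A minimal sketch, assuming NumPy and a hypothetical top row: for a stochastic Leslie matrix x(A) = e, so perturbing a top-row entry should push every entry of x̄ down in proportion to (1, 2, ..., n−1), by (4.2), while perturbing a subdiagonal entry should produce the mixed signs of (4.3):

```python
import numpy as np

def perron_x(B):
    """Right Perron vector of B, first entry normalized to 1."""
    w, V = np.linalg.eig(B)
    v = V[:, int(np.argmax(w.real))].real
    return v / v[0]

n, h = 6, 1e-7
A = np.zeros((n, n))
A[0, :] = [0.10, 0.20, 0.25, 0.20, 0.15, 0.10]   # hypothetical top row, sums to 1
A[np.arange(1, n), np.arange(n - 1)] = 1.0       # stochastic Leslie: x(A) = e

x0 = perron_x(A)

# derivative with respect to a top-row entry, here a_{1,3}
Af = A.copy(); Af[0, 2] += h
dx_f = (perron_x(Af) - x0) / h

# derivative with respect to a subdiagonal entry, here a_{3,2} (k = 2)
As = A.copy(); As[2, 1] += h
dx_s = (perron_x(As) - x0) / h
```

For the fecundity perturbation every trailing entry decreases, in the ratios 1 : 2 : ... : n−1; for the survival perturbation the entry in position k−1 is negative, positions k and k+1 are positive, and the sign may turn negative again further down, exactly as described after Lemma 4.1.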

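For the population model, these first-derivative formulas predict the directions in which the stable age distribution moves when a single vital rate is raised. A quick numerical probe (the fecundities and survival rates below are purely illustrative) tracks the age-group ratios x_i/x_1:

```python
import numpy as np

def leslie(F, P):
    """Leslie matrix from fecundities F (length n) and survival rates P (length n - 1)."""
    n = len(F)
    A = np.zeros((n, n))
    A[0, :] = F
    A[np.arange(1, n), np.arange(n - 1)] = P
    return A

def stable_age_distribution(A):
    """Perron vector of A, normalized so that the first age group has size 1."""
    w, V = np.linalg.eig(A)
    x = V[:, np.argmax(w.real)].real   # the Perron root is the eigenvalue of maximal real part
    return x / x[0]

F = np.array([0.4, 0.8, 0.6, 0.2])
P = np.array([0.9, 0.7, 0.5])
x = stable_age_distribution(leslie(F, P))

# raising any fecundity raises the growth rate, so every ratio x_i/x_1, i >= 2, decreases
F_up = F.copy(); F_up[1] += 0.05
x_fec = stable_age_distribution(leslie(F_up, P))
assert np.all(x_fec[1:] < x[1:])

# raising the survival rate P_2 raises x_3/x_1 while lowering x_2/x_1
P_up = P.copy(); P_up[1] += 0.05
x_sur = stable_age_distribution(leslie(F, P_up))
assert x_sur[2] > x[2] and x_sur[1] < x[1]
```

Since x_i/x_1 = P_1 ⋯ P_{i−1} / r^{i−1}, where r is the Perron root, the first assertion reduces to the fact that r is strictly increasing in every fecundity.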
We come now to interpret our results on the Perron vector for the population model. We begin with qualitative observations. For our analysis of the behavior of the asymptotically stable age distribution vector to make sense we have to choose a frame of reference, and the one we select is to compare the size of any age group to the size of the first age group.

(i) Formula (4.1) in Lemma 4 implies that raising the fecundity of any age group decreases the ratio of the size of any age group beyond the first to the size of the first age group.

(ii) Formula (4.3) in Lemma 4 shows that raising the survival rate of the k-th age group has the effect of increasing the ratio of the size of the (k + 1)-th age group to the size of the first age group, and possibly of raising those ratios for subsequent age groups, while diminishing those ratios for age groups 2, …, k.

In the way of quantitative interpretation of our results on the Perron vector for the population model, the only definite conclusion that we can draw follows from (4.3) and (4.11). Here we see that increasing the survival rate at the k-th age group has the effect of raising the ratios of the sizes of some age groups beyond the first to the size of the first age group while decreasing others. However, when max_{1≤i≤n−1} P_i ≤ 1, it follows from (4.8) that the ratio which increases the most corresponds to the (k + 1)-th age group.

ACKNOWLEDGEMENTS

This research was conducted while the first author was a Postdoctoral Fellow at the Institute for Mathematics and its Applications of the University of Minnesota and while the second author was a visitor there. Both authors wish to thank the IMA and its staff for their support.

References

[1] A. Ben-Israel and T. N. E. Greville. Generalized Inverses: Theory and Applications. Academic Press, New York, 1973.
[2] A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979.
[3] J. E. Cohen. Derivatives of the spectral radius as a function of a non-negative matrix. Math. Proc. Cambridge Philos. Soc., 83:183–190, 1978.
[4] J. E. Cohen. Random evolutions and the spectral radius of a non-negative matrix. Math. Proc. Cambridge Philos. Soc., 86:343–350, 1979.
[5] J. E. Cohen. Convexity of the dominant eigenvalue of an essentially non-negative matrix. Proc. Amer. Math. Soc., 81:657–658, 1981.
[6] L. Demetrius. The sensitivity of population growth rate to perturbations in the life cycle components. Math. Biosciences, 4:129–136, 1969.
[7] E. Deutsch and M. Neumann. Derivatives of the Perron root at an essentially nonnegative matrix and the group inverse of an M-matrix. J. Math. Anal. Appl., 102:1–29, 1984.
[8] E. Deutsch and M. Neumann. On the first and second order derivatives of the Perron vector. Lin. Alg. Appl., 71:57–76, 1985.
[9] L. Elsner. On convexity properties of the spectral radius of nonnegative matrices. Lin. Alg. Appl., 61:31–35, 1984.
[10] L. Elsner, C. R. Johnson, and M. Neumann. On the effect of the perturbation of a nonnegative matrix on its Perron eigenvector. Czech. Math. J., 32:99–109, 1982.
[11] S. Friedland. Convex spectral functions. Lin. Multilin. Alg., 9:299–316, 1981.
[12] L. A. Goodman. On the sensitivity of the intrinsic growth rate to changes in the age-specific birth and death rates. Theoretical Population Biology, 2:339–354, 1971.
[13] G. H. Golub and C. D. Meyer, Jr. Using the QR-factorization and group inversion to compute, differentiate, and estimate the sensitivity of stationary probabilities for Markov chains. SIAM J. Alg. Discrete Methods, 7:273–281, 1986.
[14] M. Haviv, Y. Ritov, and U. G. Rothblum. Taylor expansions of eigenvalues of perturbed matrices with applications to spectral radii of nonnegative matrices. Lin. Alg. Appl., 168:159–188, 1992.
[15] R. Lal and D. H. Anderson. Calculation and utilization of component matrices in linear bioscience models. Math. Biosciences, 99:11–29, 1990.
[16] C. D. Meyer, Jr. The role of the group generalized inverse in the theory of finite Markov chains. SIAM Rev., 17:443–464, 1975.
[17] C. D. Meyer, Jr. and G. W. Stewart. Derivatives and perturbations of eigenvectors. SIAM J. Numer. Anal., 25:679–691, 1988.
[18] J. H. Pollard. Mathematical Models for the Growth of Human Populations. Cambridge Univ. Press, Cambridge, 1973.
[19] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, 1973.
[20] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford Univ. Press, London, 1965.


More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

LECTURE 10: REVIEW OF POWER SERIES. 1. Motivation

LECTURE 10: REVIEW OF POWER SERIES. 1. Motivation LECTURE 10: REVIEW OF POWER SERIES By definition, a power series centered at x 0 is a series of the form where a 0, a 1,... and x 0 are constants. For convenience, we shall mostly be concerned with the

More information

Positive entries of stable matrices

Positive entries of stable matrices Positive entries of stable matrices Shmuel Friedland Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago Chicago, Illinois 60607-7045, USA Daniel Hershkowitz,

More information

An alternative proof of the Barker, Berman, Plemmons (BBP) result on diagonal stability and extensions - Corrected Version

An alternative proof of the Barker, Berman, Plemmons (BBP) result on diagonal stability and extensions - Corrected Version 1 An alternative proof of the Barker, Berman, Plemmons (BBP) result on diagonal stability and extensions - Corrected Version Robert Shorten 1, Oliver Mason 1 and Christopher King 2 Abstract The original

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution

Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution 1/29 Matrix analytic methods Lecture 1: Structured Markov chains and their stationary distribution Sophie Hautphenne and David Stanford (with thanks to Guy Latouche, U. Brussels and Peter Taylor, U. Melbourne

More information

Structural identifiability of linear mamillary compartmental systems

Structural identifiability of linear mamillary compartmental systems entrum voor Wiskunde en Informatica REPORTRAPPORT Structural identifiability of linear mamillary compartmental systems JM van den Hof Department of Operations Reasearch, Statistics, and System Theory S-R952

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators

A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators Lei Ni And I cherish more than anything else the Analogies, my most trustworthy masters. They know all the secrets of

More information

Linear estimation in models based on a graph

Linear estimation in models based on a graph Linear Algebra and its Applications 302±303 (1999) 223±230 www.elsevier.com/locate/laa Linear estimation in models based on a graph R.B. Bapat * Indian Statistical Institute, New Delhi 110 016, India Received

More information

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices

Applied Mathematics Letters. Comparison theorems for a subclass of proper splittings of matrices Applied Mathematics Letters 25 (202) 2339 2343 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Comparison theorems for a subclass

More information

On hitting times and fastest strong stationary times for skip-free and more general chains

On hitting times and fastest strong stationary times for skip-free and more general chains On hitting times and fastest strong stationary times for skip-free and more general chains James Allen Fill 1 Department of Applied Mathematics and Statistics The Johns Hopkins University jimfill@jhu.edu

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

a Λ q 1. Introduction

a Λ q 1. Introduction International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

More information

THE BAILEY TRANSFORM AND FALSE THETA FUNCTIONS

THE BAILEY TRANSFORM AND FALSE THETA FUNCTIONS THE BAILEY TRANSFORM AND FALSE THETA FUNCTIONS GEORGE E ANDREWS 1 AND S OLE WARNAAR 2 Abstract An empirical exploration of five of Ramanujan s intriguing false theta function identities leads to unexpected

More information

18.S34 linear algebra problems (2007)

18.S34 linear algebra problems (2007) 18.S34 linear algebra problems (2007) Useful ideas for evaluating determinants 1. Row reduction, expanding by minors, or combinations thereof; sometimes these are useful in combination with an induction

More information

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo ON CONSTRUCTING MATRICES WITH PRESCRIBED SINGULAR VALUES AND DIAGONAL ELEMENTS MOODY T. CHU Abstract. Similar to the well known Schur-Horn theorem that characterizes the relationship between the diagonal

More information

On the asymptotic dynamics of 2D positive systems

On the asymptotic dynamics of 2D positive systems On the asymptotic dynamics of 2D positive systems Ettore Fornasini and Maria Elena Valcher Dipartimento di Elettronica ed Informatica, Univ. di Padova via Gradenigo 6a, 353 Padova, ITALY e-mail: meme@paola.dei.unipd.it

More information