Comparison of perturbation bounds for the stationary distribution of a Markov chain
Linear Algebra and its Applications 335 (2001) 137–150

Comparison of perturbation bounds for the stationary distribution of a Markov chain

Grace E. Cho (a), Carl D. Meyer (b,*)

(a) Mathematics and Engineering Analysis, The Boeing Company, P.O. Box 3707, Seattle, WA, USA
(b) Department of Mathematics, North Carolina State University, P.O. Box 8205, Raleigh, NC, USA

* Corresponding author. E-mail addresses: grace.e.cho@boeing.com (G.E. Cho), meyer@math.ncsu.edu (C.D. Meyer). Supported in part by National Science Foundation grants CCR and DMS.

Received 18 November 1999; accepted November 2000
Submitted by V. Mehrmann

Abstract

The purpose of this paper is to review and compare the existing perturbation bounds for the stationary distribution of a finite, irreducible, homogeneous Markov chain. © 2001 Elsevier Science Inc. All rights reserved.

AMS classification: 65F35; 60J10; 15A51; 15A12; 15A18

Keywords: Markov chains; Stationary distribution; Stochastic matrix; Group inversion; Sensitivity analysis; Perturbation theory; Condition numbers

1. Introduction

Let P be the transition probability matrix of an n-state finite, irreducible, homogeneous Markov chain. The stationary distribution vector of P is the unique positive vector π^T satisfying

    π^T P = π^T,    Σ_{j=1}^n π_j = 1.
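The defining conditions above can be checked numerically. A minimal sketch, assuming NumPy; the 3-state chain below is an illustrative choice, not taken from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix P:
    the left eigenvector of P for eigenvalue 1, scaled to sum to 1."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

# Illustrative 3-state irreducible chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
pi = stationary(P)  # satisfies pi^T P = pi^T, pi > 0, sum(pi) = 1
```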
Suppose P is perturbed to a matrix P̃ that is also the transition probability matrix of an n-state finite, irreducible, homogeneous Markov chain. Denoting the stationary distribution vector of P̃ by π̃, the goal is to describe the change π̃ − π in the stationary distribution in terms of the change E ≡ P̃ − P in the transition probability matrix. For suitable norms,

    ‖π̃ − π‖ ≤ κ ‖E‖

for various different condition numbers κ. We review eight existing condition numbers κ_1, ..., κ_8. Most of the condition numbers we consider are expressed in terms of either the fundamental matrix of the underlying Markov chain or the group inverse of I − P. The condition number κ_8 is expressed in terms of mean first passage times, providing a qualitative interpretation of the error bound. In Section 4, we compare the condition numbers.

2. Notation

Throughout the article the matrix P denotes the transition probability matrix of an n-state finite, irreducible, homogeneous Markov chain C, and π denotes its stationary distribution vector. Then

    π^T P = π^T,    π > 0,    π^T e = 1,

where e is the column vector of all ones. The perturbed matrix P̃ = P + E is the transition probability matrix of another n-state finite, irreducible, homogeneous Markov chain C̃ with stationary distribution vector π̃:

    π̃^T P̃ = π̃^T,    π̃ > 0,    π̃^T e = 1.

The identity matrix of size n is denoted by I. For a matrix B, the (i, j) component of B is denoted by b_ij. The 1-norm ‖v‖_1 of a vector v is its absolute entry sum, and the ∞-norm ‖B‖_∞ of a matrix B is its maximum absolute row sum.

3. Condition numbers of a Markov chain

The norm-wise perturbation bounds we review in this section are of the following form:

    ‖π̃ − π‖_p ≤ κ_l ‖E‖_q,

where (p, q) = (1, ∞) or (∞, ∞), depending on l.
Most of the perturbation bounds we will consider are in terms of one of two matrices related to the chain C: the fundamental matrix and the group inverse of A ≡ I − P. The fundamental matrix of the chain C is defined by

    Z ≡ (A + eπ^T)^{-1}.

The group inverse of A is the unique square matrix A^# satisfying

    A A^# A = A,    A^# A A^# = A^#,    A A^# = A^# A.

The list of the condition numbers κ_l and the corresponding references is as follows:

    κ_1 = ‖Z‖_∞                                  Schweitzer [17]
    κ_2 = ‖A^#‖_∞                                Meyer [12]
    κ_3 = (1/2) max_j (a^#_jj − min_i a^#_ij)    Haviv and Van der Heyden [5], Kirkland et al. [9]
    κ_4 = max_{i,j} |a^#_ij|                     Funderlic and Meyer [4]
    κ_5 = 1/(1 − τ_1(P))                         Seneta [21]
    κ_6 = τ_1(A^#) = τ_1(Z)                      Seneta [22]
    κ_7 = (1/2) min_j ‖A_(j)^{-1}‖_∞             Ipsen and Meyer [6], Kirkland et al. [9]
    κ_8 = (1/2) max_j [max_{i≠j} m_ij / m_jj]    Cho and Meyer [1]

where m_ij, i ≠ j, is the mean first passage time from state i to state j, and m_jj is the mean return time to state j.

3.1. Schweitzer [17]

Kemeny and Snell [8] call Z the fundamental matrix of the chain because most of the questions concerning the chain can be answered in terms of Z. For instance, the stationary distribution vector of the perturbed matrix P̃ can be expressed in terms of π and Z:

    π̃^T = π^T (I − EZ)^{-1}  and  π̃^T − π^T = π̃^T E Z.    (3.1)

Eq. (3.1), by Schweitzer [17], gives the first perturbation bound:

    ‖π̃ − π‖_∞ ≤ ‖Z‖_∞ ‖E‖_∞.
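Both the fundamental matrix and identity (3.1) are easy to exercise numerically. A sketch, in which the chain and the zero-row-sum perturbation E are illustrative choices:

```python
import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to a distribution
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
n = len(P)
e = np.ones(n)
pi = stationary(P)

# fundamental matrix Z = (A + e pi^T)^{-1},  A = I - P
Z = np.linalg.inv(np.eye(n) - P + np.outer(e, pi))

# E has zero row sums, so Pt = P + E is again stochastic (and irreducible here)
E = 1e-3 * np.array([[ 1.0, -1.0,  0.0],
                     [ 0.0,  1.0, -1.0],
                     [-1.0,  0.0,  1.0]])
Pt = P + E
pit = stationary(Pt)

residual = (pit - pi) - pit @ E @ Z     # identity (3.1): should vanish
kappa1 = np.abs(Z).sum(axis=1).max()    # ||Z||_inf
err = np.abs(pit - pi).max()
bound = kappa1 * np.abs(E).sum(axis=1).max()
```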
We define κ_1 ≡ ‖Z‖_∞.

3.2. Meyer [12]

The second matrix related to C is the group inverse of A. In his papers [11,12], Meyer showed that the group inverse A^# can be used in much the same way as Z, and, since Z = A^# + eπ^T, all relevant information is contained in A^#; the term eπ^T is redundant [14]. In fact, in place of (3.1), we have

    π̃^T = π^T (I − EA^#)^{-1}  and  π̃^T − π^T = π̃^T E A^#,    (3.2)

and the resulting perturbation bound is [12]

    ‖π̃ − π‖_∞ ≤ ‖A^#‖_∞ ‖E‖_∞.

We define the second condition number κ_2 ≡ ‖A^#‖_∞.

3.3. Haviv and Van der Heyden [5] and Kirkland et al. [9]

The perturbation bound in this section is derived from (3.2) with the use of the following lemma.

Lemma 3.1. For any vector d and for any vector c such that c^T e = 0,

    |c^T d| ≤ (1/2) (max_i d_i − min_i d_i) ‖c‖_1.

The resulting perturbation bound is

    ‖π̃ − π‖_∞ ≤ (1/2) max_j (a^#_jj − min_i a^#_ij) ‖E‖_∞

(cf. Refs. [5,9]). We define κ_3 ≡ (1/2) max_j (a^#_jj − min_i a^#_ij).
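The group-inverse identities and the two bounds above can be checked on the same illustrative chain. A sketch (A^# is computed from Z, using Z = A^# + eπ^T):

```python
import numpy as np

def stationary(P):
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
n = len(P)
e = np.ones(n)
pi = stationary(P)
A = np.eye(n) - P

# group inverse via the fundamental matrix:  A# = Z - e pi^T
Z = np.linalg.inv(A + np.outer(e, pi))
Ag = Z - np.outer(e, pi)

E = 1e-3 * np.array([[ 1.0, -1.0,  0.0],
                     [ 0.0,  1.0, -1.0],
                     [-1.0,  0.0,  1.0]])
pit = stationary(P + E)
residual = (pit - pi) - pit @ E @ Ag                  # identity (3.2)

kappa2 = np.abs(Ag).sum(axis=1).max()                 # ||A#||_inf
kappa3 = 0.5 * (np.diag(Ag) - Ag.min(axis=0)).max()   # (1/2) max_j (a#_jj - min_i a#_ij)
err = np.abs(pit - pi).max()
normE = np.abs(E).sum(axis=1).max()
```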
3.4. Funderlic and Meyer [4]

Eq. (3.2) provides a component-wise bound for the stationary distribution vector:

    |π̃_j − π_j| ≤ max_i |a^#_ij| ‖E‖_∞,

which leads us to

    ‖π̃ − π‖_∞ ≤ max_{i,j} |a^#_ij| ‖E‖_∞.

Funderlic and Meyer [4] called the number κ_4 ≡ max_{i,j} |a^#_ij| the chain condition number. The behaviour of κ_4 is described by Meyer [13,14]. He showed that the size of κ_4 is primarily governed by how close the subdominant eigenvalues of the chain are to 1. (This is not true for arbitrary matrices. For an example, see [14].) To be precise, denoting the eigenvalues of P by 1, λ_2, ..., λ_n, the lower bound and the upper bound of κ_4 are given by

    1 / (n min_{i≠1} |1 − λ_i|) < max_{i,j} |a^#_ij| ≤ (n − 1) / ∏_{i=2}^n (1 − λ_i).    (3.3)

Hence, if the chain is well-conditioned, then all subdominant eigenvalues must be well separated from 1, and if all subdominant eigenvalues are well separated from 1, then the chain must be well-conditioned. In [13,14], it is indicated that the upper bound (n − 1)/∏_{i=2}^n (1 − λ_i) in (3.3) is a rather conservative estimate of κ_4. If no single eigenvalue of P is extremely close to 1, but enough eigenvalues are within range of 1, then (n − 1)/∏_{i=2}^n (1 − λ_i) is large, even if the chain is not too badly conditioned. Seneta [23] provides a condition number and bounds for it that overcome this problem. (See Section 3.6.)

3.5. Seneta [21]

The condition numbers κ_1, κ_2, and κ_4 are in terms of matrix norms. Seneta [21], however, proposed the ergodicity coefficient instead of the matrix norm. The ergodicity coefficient τ_1(B) of a matrix B with equal row sums b is defined by

    τ_1(B) ≡ sup_{‖v‖_1 = 1, v^T e = 0} ‖v^T B‖_1.
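The supremum defining τ_1 has the well-known explicit form τ_1(B) = (1/2) max_{i,k} ‖B_{i·} − B_{k·}‖_1, which reappears in the proof of Lemma 4.1 below. A sketch, again on the illustrative 3-state chain:

```python
import numpy as np

def tau1(B):
    """Ergodicity coefficient of a matrix with equal row sums:
    tau_1(B) = (1/2) * max over row pairs (i, k) of ||B_i - B_k||_1."""
    n = len(B)
    return 0.5 * max(np.abs(B[i] - B[k]).sum()
                     for i in range(n) for k in range(n))

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
t = tau1(P)

# every eigenvalue of P other than 1 has modulus at most tau_1(P)
lams = np.linalg.eigvals(P)
sub = np.delete(lams, np.argmin(np.abs(lams - 1.0)))
```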
Note that τ_1(B) is an ordinary norm of B regarded as an operator on the hyperplane H_n ≡ {v ∈ R^n : v^T e = 0} of R^n, and every eigenvalue λ of B other than the row-sum eigenvalue b satisfies |λ| ≤ τ_1(B). For more discussion and study, we refer readers to [2,3,7,10,15,16,18–23,25,26].

For the stochastic matrix P, the ergodicity coefficient satisfies 0 ≤ τ_1(P) ≤ 1. In case τ_1(P) < 1, we have a perturbation bound in terms of the ergodicity coefficient of P [21]:

    ‖π̃ − π‖_1 ≤ (1/(1 − τ_1(P))) ‖E‖_∞.    (3.4)

(For the case τ_1(P) = 1, see [21].) We define κ_5 ≡ 1/(1 − τ_1(P)).

3.6. Seneta [22]

In the previous parts we noted that the group inverse A^# of A = I − P can be used in place of Kemeny and Snell's fundamental matrix Z. In fact, if we use ergodicity coefficients as a measure of sensitivity of the stationary distribution, then Z and A^# give exactly the same information:

    κ_6 ≡ τ_1(A^#) = τ_1(Z),

which is the condition number in the perturbation bound given by Seneta [22]:

    ‖π̃ − π‖_1 ≤ τ_1(A^#) ‖E‖_∞  (= τ_1(Z) ‖E‖_∞).

In Section 3.4, we observed that the size of the condition number κ_4 = max_{i,j} |a^#_ij| is governed by the closeness of the subdominant eigenvalues of P to 1, giving the lower and upper bounds for κ_4. However, the upper bound for κ_4 is an overestimate if enough eigenvalues of P are within range of 1, even if no single eigenvalue λ_i of P is close to 1. The following bounds for κ_6 overcome this problem:

    1 / min_{i≠1} |1 − λ_i| ≤ τ_1(A^#) ≤ Σ_{i=2}^n 1/|1 − λ_i| ≤ (n − 1) / min_{i≠1} |1 − λ_i|.

Unlike the upper bound in (3.3) for κ_4, the last upper bound for κ_6 takes only the eigenvalue closest to 1 into account. Hence, it shows that as long as the closest eigenvalue of P is not close to 1, the chain is well-conditioned.
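Both ergodicity-coefficient bounds can be exercised numerically; note that κ_6 is the same whether computed from A^# or from Z, since their corresponding rows differ by the same vector π^T. A sketch on the illustrative chain:

```python
import numpy as np

def stationary(P):
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def tau1(B):
    n = len(B)
    return 0.5 * max(np.abs(B[i] - B[k]).sum()
                     for i in range(n) for k in range(n))

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
n = len(P)
e = np.ones(n)
pi = stationary(P)
Z = np.linalg.inv(np.eye(n) - P + np.outer(e, pi))
Ag = Z - np.outer(e, pi)

kappa5 = 1.0 / (1.0 - tau1(P))   # requires tau_1(P) < 1
kappa6 = tau1(Ag)

E = 1e-3 * np.array([[ 1.0, -1.0,  0.0],
                     [ 0.0,  1.0, -1.0],
                     [-1.0,  0.0,  1.0]])
pit = stationary(P + E)
err1 = np.abs(pit - pi).sum()            # 1-norm of the change
normE = np.abs(E).sum(axis=1).max()

# lower bound on kappa6 from the eigenvalue closest to 1
lams = np.linalg.eigvals(P)
sub = np.delete(lams, np.argmin(np.abs(lams - 1.0)))
low = 1.0 / np.abs(1.0 - sub).min()
```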
3.7. Ipsen and Meyer [6], and Kirkland et al. [9]

Ipsen and Meyer [6] derived a set of perturbation bounds and showed that all stationary probabilities react in a uniform manner to perturbations in the transition probabilities. The main result of their paper is based on the following component-wise perturbation bounds:

    ‖π − π̃‖_∞ ≤ ‖A_(j)^{-1}‖_∞ ‖E‖_∞  for all j = 1, 2, ..., n,  hence  ‖π − π̃‖_∞ ≤ min_j ‖A_(j)^{-1}‖_∞ ‖E‖_∞,
    |π_k − π̃_k| / π_k ≤ ‖A_(k)^{-1}‖_∞ ‖E‖_∞  for all k = 1, 2, ..., n,    (3.5)

where A_(j) is the principal submatrix of A obtained by deleting the jth row and column from A. Kirkland et al. [9] improved the perturbation bound in (3.5) by a factor of 1/2:

    ‖π − π̃‖_∞ < (1/2) min_j ‖A_(j)^{-1}‖_∞ ‖E‖_∞,

giving the condition number κ_7 ≡ (1/2) min_j ‖A_(j)^{-1}‖_∞.

3.8. Cho and Meyer [1]

In the previous sections, we saw a number of perturbation bounds for the stationary distribution vector of an irreducible Markov chain. Unfortunately, they provide little qualitative information about the sensitivity of the underlying Markov chain. Moreover, the actual computation of the corresponding condition number is usually expensive relative to the computation of the stationary distribution vector itself. In this section a perturbation bound is presented in terms of the structure of the underlying Markov chain. To be more precise, the condition number κ_8 is in terms of mean first passage times.

For an n-state irreducible Markov chain C, the mean first passage time m_ij from state i to state j (j ≠ i) is defined to be the expected number of steps to enter state j for the first time, starting in state i. The mean return time m_jj of state j is the expected number of steps to return to state j for the first time, starting in state j. Using the relationships m_jj = 1/π_j and a^#_ij = a^#_jj − π_j m_ij, i ≠ j, we obtain the perturbation bound

    ‖π̃ − π‖_∞ ≤ (1/2) max_j [ max_{i≠j} m_ij / m_jj ] ‖E‖_∞

(cf. [1]).
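Both κ_7 and the mean-first-passage quantities are directly computable: the row sums of A_(j)^{-1} give the m_ij, i ≠ j (a standard fact), and m_jj = 1/π_j. The sketch below, on the illustrative chain used earlier, also checks the relationship a^#_ij = a^#_jj − π_j m_ij and that the half of the maximal ratio just displayed agrees with κ_3:

```python
import numpy as np

def stationary(P):
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
n = len(P)
e = np.ones(n)
pi = stationary(P)
A = np.eye(n) - P
Ag = np.linalg.inv(A + np.outer(e, pi)) - np.outer(e, pi)

M = np.zeros((n, n))        # mean first passage / return times
norms = np.zeros(n)         # ||A_(j)^{-1}||_inf
for j in range(n):
    idx = [i for i in range(n) if i != j]
    Inv = np.linalg.inv(A[np.ix_(idx, idx)])    # A_(j)^{-1}
    M[idx, j] = Inv @ np.ones(n - 1)            # m_ij = (A_(j)^{-1} e)_i
    M[j, j] = 1.0 / pi[j]                       # mean return time
    norms[j] = np.abs(Inv).sum(axis=1).max()

kappa7 = 0.5 * norms.min()

ratios = []
for j in range(n):
    m_off = max(M[i, j] for i in range(n) if i != j)
    ratios.append(m_off / M[j, j])
kappa8 = 0.5 * max(ratios)

kappa3 = 0.5 * (np.diag(Ag) - Ag.min(axis=0)).max()
```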
We define

    κ_8 ≡ (1/2) max_j [ max_{i≠j} m_ij / m_jj ].

Viewing sensitivity in terms of mean first passage times can sometimes help practitioners decide whether or not to expect sensitivity in their Markov chain models merely by observing the structure of the chain, without computing or estimating condition numbers. For example, consider chains consisting of a dominant central state with strong connections to and from all other states. Physical systems of this type have historically been called mammillary systems (see [24,27]). The simplest example of a mammillary Markov chain is one whose (k + 1)-state transition probability matrix has the form

    P = [ p_1   0    ⋯   0     1 − p_1
          0     p_2  ⋯   0     1 − p_2
          ⋮               ⋱    ⋮
          0     0    ⋯   p_k   1 − p_k
          q_1   q_2  ⋯   q_k   1 − q_1 − ⋯ − q_k ]    (3.6)

in which the p_i's and q_i's are not unduly small (say, p_i > .5 and q_i ≥ 1/(2k)). Mean first passage times m_ij, i ≠ j, in mammillary structures are never very large. For example, in the simple mammillary chain defined by (3.6) it can be demonstrated that

    m_ij = { 1/(1 − p_i)                                  when j = k + 1,
           { (1 + σ)/q_j − 1/(1 − p_j)                    when i = k + 1,
           { (1 + σ)/q_j + 1/(1 − p_i) − 1/(1 − p_j)      when i, j ≠ k + 1,

where σ = Σ_{h=1}^k q_h/(1 − p_h). Since each m_jj ≥ 1, it is clear that κ_8 cannot be large, and consequently no stationary probability can be unduly sensitive to perturbations in P. It is apparent that similar remarks hold for more general mammillary structures as well.

4. Comparison of condition numbers

In this section, we compare the condition numbers κ_l. The norm-wise perturbation bounds in Section 3 are of the following form:

    ‖π̃ − π‖_p ≤ κ_l ‖E‖_q,

where (p, q) = (1, ∞) or (∞, ∞), depending on l.
(For κ_4, a strict inequality holds.) The purpose of this section is simply to compare the condition numbers appearing in those norm-wise bounds. Therefore we use the form

    ‖π̃ − π‖_∞ ≤ κ_l ‖E‖_∞

for all condition numbers, although some of the bounds are tighter in this form. (See Remark 4.1.)

Lemma 4.1.
(a) max_j max_{i,k} (a^#_ij − a^#_kj) = max_j (a^#_jj − min_i a^#_ij),
(b) max_j (a^#_jj − min_i a^#_ij) ≤ 2 max_{i,j} |a^#_ij|,  and  max_{i,j} |a^#_ij| < max_j (a^#_jj − min_i a^#_ij),
(c) max_{i,j} |a^#_ij| ≤ ‖A^#‖_∞,
(d) max_j (a^#_jj − min_i a^#_ij) ≤ τ_1(A^#),
(e) τ_1(A^#) ≤ (n/2) max_j (a^#_jj − min_i a^#_ij),
(f) τ_1(A^#) ≤ ‖A^#‖_∞,  τ_1(A^#) ≤ ‖Z‖_∞,  τ_1(A^#) ≤ 1/(1 − τ_1(P)),
(g) ‖A^#‖_∞ ≤ ‖Z‖_∞ ≤ ‖A^#‖_∞ + 1.

Proof. (a) Using a symmetric permutation, we may assume that a particular probability occurs in the last position of π. Partition A and π as follows:

    A = [ A_(n)   c
          d^T     a_nn ],    π = [ π_(n)
                                   π_n ],

where A_(n) is the principal submatrix of A obtained by deleting the nth row and column from A. Since rank A = n − 1, the relationship a_nn = d^T A_(n)^{-1} c holds, so that

    A^# = [ (I − eπ_(n)^T) A_(n)^{-1} (I − eπ_(n)^T)    −π_n (I − eπ_(n)^T) A_(n)^{-1} e
            −π_(n)^T A_(n)^{-1} (I − eπ_(n)^T)           π_n π_(n)^T A_(n)^{-1} e       ].

Hence

    a^#_in = { −π_n e_i^T A_(n)^{-1} e + π_n π_(n)^T A_(n)^{-1} e,   i ≠ n,
             { π_n π_(n)^T A_(n)^{-1} e,                             i = n,    (4.1)
where e_i is the ith column of I. By M-matrix properties it follows that A_(n)^{-1} > 0, so that

    π_n e_i^T A_(n)^{-1} e > 0  and  π_n π_(n)^T A_(n)^{-1} e > 0.

Thus max_i a^#_in = a^#_nn, and the result follows.

(b) For each j, a^#_jj > 0, by (4.1). Since π^T A^# = 0^T, it follows that there exists k_0 such that a^#_{k_0 j} < 0. Furthermore, let i_0 be such that max_i |a^#_ij| = |a^#_{i_0 j}|. Then

    |a^#_{i_0 j}| < { a^#_{i_0 j} − a^#_{k_0 j}   if a^#_{i_0 j} ≥ 0,
                    { a^#_jj − a^#_{i_0 j}        otherwise.

Thus, for all j,

    max_i |a^#_ij| < max_{i,k} (a^#_ij − a^#_kj),

so that

    max_{i,j} |a^#_ij| < max_j max_{i,k} (a^#_ij − a^#_kj).

On the other hand,

    max_j max_{i,k} (a^#_ij − a^#_kj) ≤ max_{i,j} |a^#_ij| + max_{k,j} |a^#_kj| = 2 max_{i,j} |a^#_ij|.

Now the assertion follows from (a).

(c) This follows directly from the definitions of the two norms and their relationship.

(d) For a real number a, define a^+ ≡ max{a, 0}. Then

    max_j max_{i,k} (a^#_ij − a^#_kj) = max_{i,k} max_j (a^#_ij − a^#_kj)^+ ≤ max_{i,k} Σ_j (a^#_ij − a^#_kj)^+ = τ_1(A^#)

(by Seneta [19]), and the assertion follows by (a).

(e) For any vector x ∈ R^n, ‖x‖_1 ≤ n ‖x‖_∞, and the result follows, since
    τ_1(A^#) = (1/2) max_{i,k} ‖A^#_{i·} − A^#_{k·}‖_1

and

    max_j max_{i,k} |a^#_ij − a^#_kj| = max_{i,k} ‖A^#_{i·} − A^#_{k·}‖_∞,

where B_{i·} denotes the ith row of a matrix B.

(f) Since, for any matrix B with equal row sums,

    τ_1(B) = sup_{‖v‖_1 = 1, v^T e = 0} ‖v^T B‖_1,

and ‖y^T B‖_1 ≤ ‖y‖_1 ‖B‖_∞ for any vector y ∈ R^n, we have

    τ_1(B) ≤ ‖B‖_∞.

The inequalities follow from this relationship together with the following facts, proven in [21]: τ_1(A^#) = τ_1(Z), and if τ_1(P) < 1, then τ_1(A^#) ≤ 1/(1 − τ_1(P)).

(g) This follows by applying the triangle inequality to Z = A^# + eπ^T and A^# = Z − eπ^T.

The following list summarizes the relationships between the condition numbers:

    κ_8 = κ_3 ≤ κ_4 < 2κ_3,
    2κ_3 ≤ κ_6 ≤ κ_l for l = 1, 2, 5,
    κ_6 ≤ nκ_3,
    κ_2 ≤ κ_1 ≤ κ_2 + 1.
Remark 4.1. As remarked at the beginning of this section, some of the condition numbers provide tighter bounds than the form used in this section. To be more precise, for l = 1, 2, 5, 6,

    ‖π̃ − π‖_∞ ≤ ‖π̃ − π‖_1 ≤ κ_l ‖E‖_∞.

To give a fairer comparison for these condition numbers, note that (π̃ − π)^T e = 0 implies ‖π̃ − π‖_∞ ≤ (1/2) ‖π̃ − π‖_1, so that

    ‖π̃ − π‖_∞ ≤ κ̂_l ‖E‖_∞,  where κ̂_l = (1/2) κ_l, for l = 1, 2, 5, 6.

The comparison of these new condition numbers κ̂_l with κ_3 is as given above:

    κ_3 ≤ κ̂_l  for l = 1, 2, 5, 6.

5. Concluding remarks

We reviewed eight existing perturbation bounds and compared the corresponding condition numbers. The list at the end of the last section clarifies the relationships between seven of the condition numbers. Among these seven, the smallest condition number is

    κ_3 ≡ (1/2) max_j (a^#_jj − min_i a^#_ij)

by Haviv and Van der Heyden [5] and Kirkland et al. [9], or, equivalently,

    κ_8 ≡ (1/2) max_j [ max_{i≠j} m_ij / m_jj ]

by Cho and Meyer [1]. The only condition number not included in the comparison is κ_7 = (1/2) min_j ‖A_(j)^{-1}‖_∞. (The authors thank the referee for bringing this point to our attention.) Is κ_3 a smaller condition number than κ_7?
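For the rank-one case treated in Lemma 5.1 below, the comparison can be made exactly. A numerical sketch (the distribution below is an illustrative choice), using the identity κ_3 = (1/2) max_j π_j ‖A_(j)^{-1}‖_∞ from this section:

```python
import numpy as np

# Rank-one chain: P = e pi^T has stationary distribution pi.
pi = np.array([0.2, 0.3, 0.5])   # illustrative
n = len(pi)
A = np.eye(n) - np.outer(np.ones(n), pi)

norms = np.zeros(n)              # ||A_(j)^{-1}||_inf
for j in range(n):
    idx = [i for i in range(n) if i != j]
    Inv = np.linalg.inv(A[np.ix_(idx, idx)])
    norms[j] = np.abs(Inv).sum(axis=1).max()

# Here ||A_(j)^{-1}||_inf = 1/pi_j, so kappa3 = (1/2) max_j pi_j/pi_j = 1/2,
# while kappa7 = 1/(2 max_j pi_j) >= kappa3.
kappa3 = 0.5 * (pi * norms).max()
kappa7 = 0.5 * norms.min()
```

So at least in the rank-one case, κ_3 ≤ κ_7.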
Since, by (4.1),

    a^#_jj − min_i a^#_ij = π_j ‖A_(j)^{-1}‖_∞

for each j, the question is whether

    max_j π_j ‖A_(j)^{-1}‖_∞ ≤ min_j ‖A_(j)^{-1}‖_∞    (5.1)

holds. In other words, is π_j always small enough that max_j π_j ‖A_(j)^{-1}‖_∞ is never larger than min_j ‖A_(j)^{-1}‖_∞? This question leads us to the relationship between a stationary probability π_j and the corresponding ‖A_(j)^{-1}‖_∞. In a special case, the relationship is clear.

Lemma 5.1. If P is of rank 1, then ‖A_(j)^{-1}‖_∞ = 1/π_j for all j = 1, 2, ..., n.

The proof appears in [9, Observation 3.3]. For an arbitrary irreducible Markov chain this relationship does not hold. In general, however, ‖A_(j)^{-1}‖_∞ tends to increase as π_j decreases, and vice versa.

Notice that κ_8 is equal to κ_3. However, viewing sensitivity in terms of mean first passage times can sometimes help practitioners decide whether or not to expect sensitivity in their Markov chain models merely by observing the structure of the chain, thus obviating the need for computing or estimating condition numbers.

References

[1] G. Cho, C. Meyer, Markov chain sensitivity measured by mean first passage times, Linear Algebra Appl. 316 (2000) 21–28.
[2] R. Dobrushin, Central limit theorem for nonstationary Markov chains. I, Theory Probab. Appl. 1 (1956).
[3] R. Dobrushin, Central limit theorem for nonstationary Markov chains. II, Theory Probab. Appl. 1 (1956).
[4] R. Funderlic, C. Meyer, Sensitivity of the stationary distribution vector for an ergodic Markov chain. II, Linear Algebra Appl. 76 (1986).
[5] M. Haviv, L. Van der Heyden, Perturbation bounds for the stationary probabilities of a finite Markov chain, Adv. Appl. Probab. 16 (1984).
[6] I. Ipsen, C. Meyer, Uniform stability of Markov chains, SIAM J. Matrix Anal. Appl. 15 (1994).
[7] D. Isaacson, R. Madsen, Markov Chains: Theory and Application, Wiley, New York, 1976.
[8] J. Kemeny, J. Snell, Finite Markov Chains, Van Nostrand, New York, 1960.
[9] S. Kirkland, M. Neumann, B. Shader, Applications of Paz's inequality to perturbation bounds for Markov chains, Linear Algebra Appl. 268 (1998).
[10] A. Lešanovský, Coefficients of ergodicity generated by non-symmetrical vector norms, Czechoslovak Math. J. 40 (1990).
[11] C. Meyer, The role of the group generalized inverse in the theory of finite Markov chains, SIAM Rev. 17 (1975) 443–464.
[12] C. Meyer, The condition of a finite Markov chain and perturbation bounds for the limiting probabilities, SIAM J. Algebraic Discrete Methods 1 (1980) 273–283.
[13] C. Meyer, The character of a finite Markov chain, in: C.D. Meyer, R.J. Plemmons (Eds.), Proceedings of the IMA Workshop on Linear Algebra, Markov Chains, and Queueing Models, Springer, Berlin, 1993.
[14] C. Meyer, Sensitivity of the stationary distribution of a Markov chain, SIAM J. Matrix Anal. Appl. 15 (1994) 715–728.
[15] A. Rhodius, On explicit forms for ergodic coefficients, Linear Algebra Appl. 194 (1993).
[16] U. Rothblum, C. Tan, Upper bounds on the maximum modulus of subdominant eigenvalues of nonnegative matrices, Linear Algebra Appl. 66 (1985).
[17] P. Schweitzer, Perturbation theory and finite Markov chains, J. Appl. Probab. 5 (1968).
[18] E. Seneta, Coefficients of ergodicity: structure and applications, Adv. Appl. Probab. 11 (1979).
[19] E. Seneta, Non-negative Matrices and Markov Chains, Springer, New York, 1981.
[20] E. Seneta, Explicit forms for ergodicity coefficients and spectrum localization, Linear Algebra Appl. 60 (1984).
[21] E. Seneta, Perturbation of the stationary distribution measured by ergodicity coefficients, Adv. Appl. Probab. 20 (1988).
[22] E. Seneta, Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite Markov chains, in: W.J. Stewart (Ed.), Numerical Solution of Markov Chains, Marcel Dekker, New York, 1991.
[23] E. Seneta, Sensitivity of finite Markov chains under perturbation, Statist. Probab. Lett. 17 (1993).
[24] C. Sheppard, A. Householder, The mathematical basis of the interpretation of tracer experiments in closed steady-state systems, J. Appl. Phys. 22 (1951).
[25] C. Tan, A functional form for a coefficient of ergodicity, J. Appl. Probab. 19 (1982).
[26] C. Tan, Coefficients of ergodicity with respect to vector norms, J. Appl. Probab. 20 (1983).
[27] R. Whittaker, Experiments with radiophosphorus tracer in aquarium microcosms, Ecological Monographs 31 (1961).
Updating Markov Chains Carl Meyer Amy Langville Department of Mathematics North Carolina State University Raleigh, NC A. A. Markov Anniversary Meeting June 13, 2006 Intro Assumptions Very large irreducible
More informationOn factor width and symmetric H -matrices
Linear Algebra and its Applications 405 (2005) 239 248 www.elsevier.com/locate/laa On factor width and symmetric H -matrices Erik G. Boman a,,1, Doron Chen b, Ojas Parekh c, Sivan Toledo b,2 a Department
More informationTwo Characterizations of Matrices with the Perron-Frobenius Property
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2009;??:1 6 [Version: 2002/09/18 v1.02] Two Characterizations of Matrices with the Perron-Frobenius Property Abed Elhashash and Daniel
More informationInequalities for the spectra of symmetric doubly stochastic matrices
Linear Algebra and its Applications 49 (2006) 643 647 wwwelseviercom/locate/laa Inequalities for the spectra of symmetric doubly stochastic matrices Rajesh Pereira a,, Mohammad Ali Vali b a Department
More informationSome bounds for the spectral radius of the Hadamard product of matrices
Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard
More informationA note on estimates for the spectral radius of a nonnegative matrix
Electronic Journal of Linear Algebra Volume 13 Volume 13 (2005) Article 22 2005 A note on estimates for the spectral radius of a nonnegative matrix Shi-Ming Yang Ting-Zhu Huang tingzhuhuang@126com Follow
More informationA property concerning the Hadamard powers of inverse M-matrices
Linear Algebra and its Applications 381 (2004 53 60 www.elsevier.com/locate/laa A property concerning the Hadamard powers of inverse M-matrices Shencan Chen Department of Mathematics, Fuzhou University,
More informationOVERCOMING INSTABILITY IN COMPUTING THE FUNDAMENTAL MATRIX FOR A MARKOV CHAIN
SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 534 540, April 1998 015 OVERCOMING INSTABILITY IN COMPUTING THE FUNDAMENTAL MATRIX FOR A MARKOV CHAIN
More informationApplied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system
Applied Mathematics Letters 5 (1) 198 1985 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Stationary distribution, ergodicity
More informationOn Finding Optimal Policies for Markovian Decision Processes Using Simulation
On Finding Optimal Policies for Markovian Decision Processes Using Simulation Apostolos N. Burnetas Case Western Reserve University Michael N. Katehakis Rutgers University February 1995 Abstract A simulation
More informationFinite-Horizon Statistics for Markov chains
Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update
More informationSIMILAR MARKOV CHAINS
SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in
More informationAn estimation of the spectral radius of a product of block matrices
Linear Algebra and its Applications 379 (2004) 267 275 wwwelseviercom/locate/laa An estimation of the spectral radius of a product of block matrices Mei-Qin Chen a Xiezhang Li b a Department of Mathematics
More informationSpectral Properties of Matrix Polynomials in the Max Algebra
Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract
More informationGraphs with given diameter maximizing the spectral radius van Dam, Edwin
Tilburg University Graphs with given diameter maximizing the spectral radius van Dam, Edwin Published in: Linear Algebra and its Applications Publication date: 2007 Link to publication Citation for published
More informationProperties for the Perron complement of three known subclasses of H-matrices
Wang et al Journal of Inequalities and Applications 2015) 2015:9 DOI 101186/s13660-014-0531-1 R E S E A R C H Open Access Properties for the Perron complement of three known subclasses of H-matrices Leilei
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationSeries Expansions in Queues with Server
Series Expansions in Queues with Server Vacation Fazia Rahmoune and Djamil Aïssani Abstract This paper provides series expansions of the stationary distribution of finite Markov chains. The work presented
More informationTridiagonal test matrices for eigenvalue computations: two-parameter extensions of the Clement matrix
Tridiagonal test matrices for eigenvalue computations: two-parameter extensions of the Clement matrix Roy Oste a,, Joris Van der Jeugt a a Department of Applied Mathematics, Computer Science and Statistics,
More informationThe Laplacian spectrum of a mixed graph
Linear Algebra and its Applications 353 (2002) 11 20 www.elsevier.com/locate/laa The Laplacian spectrum of a mixed graph Xiao-Dong Zhang a,, Jiong-Sheng Li b a Department of Mathematics, Shanghai Jiao
More informationSTOCHASTIC COMPLEMENTATION, UNCOUPLING MARKOV CHAINS, AND THE THEORY OF NEARLY REDUCIBLE SYSTEMS
STOCHASTIC COMLEMENTATION, UNCOULING MARKOV CHAINS, AND THE THEORY OF NEARLY REDUCIBLE SYSTEMS C D MEYER Abstract A concept called stochastic complementation is an idea which occurs naturally, although
More informationELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES
Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse
More informationHierarchical open Leontief models
Available online at wwwsciencedirectcom Linear Algebra and its Applications 428 (2008) 2549 2559 wwwelseviercom/locate/laa Hierarchical open Leontief models JM Peña Departamento de Matemática Aplicada,
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationThe Kemeny constant of a Markov chain
The Kemeny constant of a Markov chain Peter Doyle Version 1.0 dated 14 September 009 GNU FDL Abstract Given an ergodic finite-state Markov chain, let M iw denote the mean time from i to equilibrium, meaning
More informationKey words. Feedback shift registers, Markov chains, stochastic matrices, rapid mixing
MIXING PROPERTIES OF TRIANGULAR FEEDBACK SHIFT REGISTERS BERND SCHOMBURG Abstract. The purpose of this note is to show that Markov chains induced by non-singular triangular feedback shift registers and
More informationMajorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design
Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design Rahul Dhal Electrical Engineering and Computer Science Washington State University Pullman, WA rdhal@eecs.wsu.edu
More informationDiagonal matrix solutions of a discrete-time Lyapunov inequality
Diagonal matrix solutions of a discrete-time Lyapunov inequality Harald K. Wimmer Mathematisches Institut Universität Würzburg D-97074 Würzburg, Germany February 3, 1997 Abstract Diagonal solutions of
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 432 2010 661 669 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: wwwelseviercom/locate/laa On the characteristic and
More informationOn the second Laplacian eigenvalues of trees of odd order
Linear Algebra and its Applications 419 2006) 475 485 www.elsevier.com/locate/laa On the second Laplacian eigenvalues of trees of odd order Jia-yu Shao, Li Zhang, Xi-ying Yuan Department of Applied Mathematics,
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationA Geometric Interpretation of the Metropolis Hastings Algorithm
Statistical Science 2, Vol. 6, No., 5 9 A Geometric Interpretation of the Metropolis Hastings Algorithm Louis J. Billera and Persi Diaconis Abstract. The Metropolis Hastings algorithm transforms a given
More informationOn matrix equations X ± A X 2 A = I
Linear Algebra and its Applications 326 21 27 44 www.elsevier.com/locate/laa On matrix equations X ± A X 2 A = I I.G. Ivanov,V.I.Hasanov,B.V.Minchev Faculty of Mathematics and Informatics, Shoumen University,
More informationLinear Algebra and its Applications
Linear Algebra and its Applications xxx (2008) xxx xxx Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: wwwelseviercom/locate/laa Graphs with three distinct
More informationThe minimum rank of matrices and the equivalence class graph
Linear Algebra and its Applications 427 (2007) 161 170 wwwelseviercom/locate/laa The minimum rank of matrices and the equivalence class graph Rosário Fernandes, Cecília Perdigão Departamento de Matemática,
More informationQUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY
QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY D.D. Olesky 1 Department of Computer Science University of Victoria Victoria, B.C. V8W 3P6 Michael Tsatsomeros Department of Mathematics
More informationOn the Perturbation of the Q-factor of the QR Factorization
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. ; :1 6 [Version: /9/18 v1.] On the Perturbation of the Q-factor of the QR Factorization X.-W. Chang McGill University, School of Comptuer
More informationKernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman
Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu
More information1.3 Convergence of Regular Markov Chains
Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain
More informationMATH36001 Perron Frobenius Theory 2015
MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,
More informationThe maximal determinant and subdeterminants of ±1 matrices
Linear Algebra and its Applications 373 (2003) 297 310 www.elsevier.com/locate/laa The maximal determinant and subdeterminants of ±1 matrices Jennifer Seberry a,, Tianbing Xia a, Christos Koukouvinos b,
More informationAppendix A: Matrices
Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows
More informationThe skew-symmetric orthogonal solutions of the matrix equation AX = B
Linear Algebra and its Applications 402 (2005) 303 318 www.elsevier.com/locate/laa The skew-symmetric orthogonal solutions of the matrix equation AX = B Chunjun Meng, Xiyan Hu, Lei Zhang College of Mathematics
More information= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1
Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The
More informationCospectral graphs and the generalized adjacency matrix
Linear Algebra and its Applications 42 2007) 41 www.elsevier.com/locate/laa Cospectral graphs and the generalized adjacency matrix E.R. van Dam a,1, W.H. Haemers a,, J.H. Koolen b,2 a Tilburg University,
More informationChapter 16 focused on decision making in the face of uncertainty about one future
9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account
More informationClassical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems
Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive
More information6.842 Randomness and Computation March 3, Lecture 8
6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n
More informationThe Optimal Stopping of Markov Chain and Recursive Solution of Poisson and Bellman Equations
The Optimal Stopping of Markov Chain and Recursive Solution of Poisson and Bellman Equations Isaac Sonin Dept. of Mathematics, Univ. of North Carolina at Charlotte, Charlotte, NC, 2822, USA imsonin@email.uncc.edu
More informationRESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices
Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,
More informationPersistence and global stability in discrete models of Lotka Volterra type
J. Math. Anal. Appl. 330 2007 24 33 www.elsevier.com/locate/jmaa Persistence global stability in discrete models of Lotka Volterra type Yoshiaki Muroya 1 Department of Mathematical Sciences, Waseda University,
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 433 (200) 867 875 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa On the exponential exponents
More information