SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN*


SIAM J. Matrix Anal. Appl., Vol. 15, No. 3, July 1994
© 1994 Society for Industrial and Applied Mathematics

CARL D. MEYER†

*Received by the editors April 6, 1992; accepted for publication (in revised form) October 30, 1992. This work was supported in part by National Science Foundation grants DMS and DDM.
†North Carolina State University, Mathematics Department, Raleigh, North Carolina (meyer@math.ncsu.edu).

Abstract. It is well known that if the transition matrix of an irreducible Markov chain of moderate size has a subdominant eigenvalue which is close to 1, then the chain is ill conditioned in the sense that there are stationary probabilities which are sensitive to perturbations in the transition probabilities. However, the converse of this statement has heretofore been unresolved. The purpose of this article is to address this issue by establishing upper and lower bounds on the condition number of the chain such that the bounding terms are functions of the eigenvalues of the transition matrix. Furthermore, it is demonstrated how to obtain estimates for the condition number of an irreducible chain with little or no extra computational effort over that required to compute the stationary probabilities by means of an LU or QR factorization.

Key words. Markov chains, stationary distribution, stochastic matrix, sensitivity analysis, perturbation theory, character of a Markov chain, condition numbers

AMS subject classifications. 65U05, 65F35, 60J10, 60J20, 15A51, 15A12, 15A18

1. Introduction. The problem under consideration is that of analyzing the effects of small perturbations to the transition probabilities of a finite, irreducible, homogeneous Markov chain. More precisely, if $P_{n\times n}$ is the transition probability matrix for such a chain, and if $\pi^T = (\pi_1, \pi_2, \ldots, \pi_n)$ is the stationary distribution vector satisfying $\pi^T P = \pi^T$ and $\sum_{i=1}^n \pi_i = 1$, the goal is to describe the effect on $\pi^T$ when $P$ is perturbed by a matrix $E$ such that $\tilde P = P + E$ is the transition probability matrix of another irreducible Markov chain.

Schweitzer (1968) provided the first perturbation analysis in terms of Kemeny and Snell's fundamental matrix $Z = (A + e\pi^T)^{-1}$, in which $A = I - P$ and $e$ is a column of 1's. If $A^\#$ denotes the group inverse of $A$ [Meyer (1975) or Campbell and Meyer (1991)], then $Z = (A + e\pi^T)^{-1} = A^\# + e\pi^T$. But in virtually all applications involving $Z$, the term $e\pi^T$ is redundant; i.e., all relevant information is contained in $A^\#$. In particular, if $\tilde\pi^T = (\tilde\pi_1, \tilde\pi_2, \ldots, \tilde\pi_n)$ is the stationary distribution for $\tilde P = P + E$, then

(1.1)    $\tilde\pi^T = \pi^T (I + EA^\#)^{-1}$

and

(1.2)    $\|\tilde\pi^T - \pi^T\| \le \|E\|\,\|A^\#\|$,

in which $\|\cdot\|$ can be either the 1-, 2-, or $\infty$-norm. If the $j$th column and the $(i,j)$-entry of $A^\#$ are denoted by $A^\#_{*j}$ and $a^\#_{ij}$, respectively, then

(1.3)    $|\tilde\pi_j - \pi_j| \le \|E\|\,\|A^\#_{*j}\|$.
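The identity (1.1) is exact, so it can be verified to machine precision. The following numpy sketch (not part of the original paper; the 3-state chain, the perturbation, and the helper name group_inverse are illustrative inventions) builds $A^\#$ from the fundamental matrix $Z = A^\# + e\pi^T$ and checks (1.1):

```python
import numpy as np

def group_inverse(P):
    """Group inverse A# of A = I - P for an irreducible stochastic P,
    obtained from Kemeny and Snell's fundamental matrix via
    Z = (A + e pi^T)^{-1} = A# + e pi^T."""
    n = P.shape[0]
    A = np.eye(n) - P
    w, V = np.linalg.eig(P.T)                    # left eigenvectors of P
    pi = np.real(V[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()                           # stationary distribution
    epiT = np.outer(np.ones(n), pi)
    Z = np.linalg.inv(A + epiT)
    return Z - epiT, pi

# a small illustrative chain (numbers are hypothetical)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
Asharp, pi = group_inverse(P)

E = 1e-3 * np.array([[-1.0,  0.5,  0.5],   # rows sum to zero, so P + E
                     [ 0.2, -0.4,  0.2],   # is again stochastic
                     [ 0.3,  0.3, -0.6]])
_, pi_new = group_inverse(P + E)
# (1.1): the perturbed distribution is exactly pi^T (I + E A#)^{-1}
print(np.allclose(pi_new, pi @ np.linalg.inv(np.eye(3) + E @ Asharp)))
```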

In addition,

(1.4)    $\max_j |\tilde\pi_j - \pi_j| \le \|E\|_\infty \max_{i,j} |a^\#_{ij}|$.

This bound is about as good as possible; see Ipsen and Meyer (1994) for a discussion of optimal bounds. Moreover, if the transition probabilities are analytic functions of a parameter $t$ so that $P = P(t)$, then

(1.5)    $\frac{d\pi^T}{dt} = \pi^T \frac{dP}{dt} A^\#$  and  $\frac{d\pi_j}{dt} = \pi^T \frac{dP}{dt} A^\#_{*j}$.

The results (1.1) and (1.2) are due to Meyer (1980), and (1.3) appears in Golub and Meyer (1986). The inequality (1.4) was given by Funderlic and Meyer (1986), and the formulas (1.5) are derived in Golub and Meyer (1986) and Meyer and Stewart (1988). Seneta (1991) established an inequality similar to (1.2) using the coefficient of ergodicity $\tau_1(A^\#)$ in place of $\|A^\#\|$.

These facts make it absolutely clear that the entries in $A^\#$ determine the extent to which $\pi^T$ is sensitive to small changes in $P$, so, on the basis of (1.4), it is natural to adopt the following definition of Funderlic and Meyer (1986).

Definition 1.1. The condition of a Markov chain with a transition matrix $P$ is measured by the size of its condition number, which is defined to be

$\kappa = \max_{i,j} |a^\#_{ij}|$,

where $a^\#_{ij}$ is the $(i,j)$-entry in the group inverse $A^\#$ of $A = I - P$.

It is an elementary fact that $\kappa$ is invariant under permutations of the states of the chain. For chains of moderate size, it is not difficult to show (see the proof of Theorem 2.1 given in §4) that if there exists a subdominant eigenvalue of $P$ which is close to 1, then $\kappa$ must be large. However, the converse of this statement has heretofore been unresolved, and our purpose is to focus on this issue. More precisely, we address the following question. If the subdominant eigenvalues of an irreducible Markov chain are well separated from 1, can we be sure that the chain is well conditioned? In other words, do the subdominant eigenvalues of $P$ (or, equivalently, the nonzero eigenvalues of $A$) somehow provide complete information about the sensitivity of the chain, or do we really need to know something about the singular values of $A$?

The conjecture that $\kappa = \max_{i,j}|a^\#_{ij}|$ is somehow controlled by the nonzero eigenvalues of $A$ is contrary to what is generally true; a standard example is the triangular matrix

(1.6)    $T_{n\times n} = \begin{pmatrix} 1 & -2 & & \\ & 1 & \ddots & \\ & & \ddots & -2 \\ & & & 1 \end{pmatrix}$,    $T^{-1} = \begin{pmatrix} 1 & 2 & \cdots & 2^{n-1} \\ & 1 & \ddots & \vdots \\ & & \ddots & 2 \\ & & & 1 \end{pmatrix}$,

for which $\max_{i,j}[T^{-1}]_{ij}$ is immense for even moderate values of $n$, but the eigenvalues of $T$ provide no clue whatsoever that this occurs.
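A quick numerical look at this example (a sketch, not from the paper) makes the point; the eigenvalues of the triangular matrix are read off its diagonal, yet the inverse explodes:

```python
import numpy as np

n = 30
T = np.eye(n) - 2 * np.eye(n, k=1)       # 1's on the diagonal, -2's above
print(np.diag(T))                        # triangular: every eigenvalue is 1
print(np.abs(np.linalg.inv(T)).max())    # yet max |[T^{-1}]_ij| = 2**(n-1), about 5.4e8
```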

The fact that the eigenvalues are repeated or that $T$ is nonsingular is irrelevant; consider a small perturbation of $T$, or the matrices

$\hat T = \begin{pmatrix} 0 & 0 \\ 0 & T \end{pmatrix}$  and  $\hat T^\# = \begin{pmatrix} 0 & 0 \\ 0 & T^{-1} \end{pmatrix}$.

We will prove that, unlike the situation illustrated above, irreducible stochastic matrices $P$ possess enough structure to guarantee that growth of the entries in $A^\#$ is controlled by the nonzero eigenvalues of $A = I - P$. As a consequence, it will follow that the sensitivity of an irreducible Markov chain is governed by the location of its subdominant eigenvalues.

2. The main result. In the sequel, it is convenient to adopt the following terminology and notation.

Definition 2.1. Let $P$ be the transition probability matrix of an $n$-state irreducible Markov chain, and let $\sigma(P) = \{1, \lambda_2, \lambda_3, \ldots, \lambda_n\}$ denote the eigenvalues of $P$. The character¹ of the chain is defined to be the (necessarily real) number

$\Delta = (1-\lambda_2)(1-\lambda_3)\cdots(1-\lambda_n)$.

¹The character was defined by Meyer (1993) to be $n^{-1}(1-\lambda_2)(1-\lambda_3)\cdots(1-\lambda_n)$, which is the normalization of the definition given here.

It will follow from later developments that

(2.1)    $0 < \Delta \le n$.

A chain is said to be of weak character when $\Delta$ is close to 0, and the chain is said to have a strong character when $\Delta$ is significantly larger than 0. If

$P = T\begin{pmatrix} 1 & 0 \\ 0 & C \end{pmatrix}T^{-1}$

(e.g., this may be the reduction to Jordan form), where the spectral radius of $C$ is less than 1, then

$A = T\begin{pmatrix} 0 & 0 \\ 0 & I-C \end{pmatrix}T^{-1}$  and  $A^\# = T\begin{pmatrix} 0 & 0 \\ 0 & (I-C)^{-1} \end{pmatrix}T^{-1}$

[Campbell and Meyer (1991)], so $\Delta = \det(I-C)$ and $1/\Delta = \det\bigl((I-C)^{-1}\bigr)$. In other words, $\Delta$ and $1/\Delta$ are the respective determinants of the nonsingular parts of $A$ and $A^\#$ in the sense that $\Delta = \det\bigl(A_{/R(A)}\bigr)$ and $1/\Delta = \det\bigl(A^\#_{/R(A)}\bigr)$, where $A_{/R(A)}$ denotes the linear operator defined by restricting $A$ to $R(A)$. It is also true that $1/\Delta = \det(Z)$, where $Z$ is Kemeny and Snell's fundamental matrix.
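Since the character is just a product over the spectrum, it is cheap to evaluate for small chains. A minimal sketch (illustrative, not from the paper; the helper name character is an invention):

```python
import numpy as np

def character(P):
    """Delta = product of (1 - lambda) over the eigenvalues of P
    other than the Perron root 1."""
    w = np.linalg.eigvals(P)
    w = np.delete(w, np.argmin(np.abs(w - 1)))  # remove the eigenvalue 1
    return np.real(np.prod(1 - w))              # Delta is necessarily real
```

As a consistency check, 1/character(P) should agree with the determinant of the fundamental matrix $Z$ computed directly.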

The main result of this paper is the following theorem, which establishes the connection between the condition of an irreducible chain and its character.

Theorem 2.1. For an irreducible stochastic matrix $P_{n\times n}$, let $A = I - P$, and for $i \ne j$, let $\delta_{ij}(A)$ denote the deleted product of diagonal entries

$\delta_{ij}(A) = \prod_{k\ne i,j} a_{kk} = \prod_{k\ne i,j} (1 - p_{kk})$.

If $\delta = \max_{i\ne j} \delta_{ij}(A)$ (the product of all but the two smallest diagonal entries), then the condition number $\kappa$ is bounded by

(2.2)    $\frac{1}{n\min_{\lambda_i\ne 1}|1-\lambda_i|} \le \kappa < \frac{2\delta(n-1)}{\Delta} \le \frac{2(n-1)}{\Delta}$.

The proof of this theorem depends on exploiting the rich structure of $A$, some of which is apparent, and some of which requires illumination. Before giving a formal argument, it is necessary to detail the various components of this structure, so the important facets are first laid out in §3 as a sequence of lemmas. After the necessary framework is in place, it will be a simple matter to connect the lemmas together in order to construct a proof; this is contained in §4. By combining Theorem 2.1 with (1.4) and the other facts listed in §1, we arrive at the following conclusion.

Theorem 2.2. The condition of an irreducible Markov chain is primarily governed by how close the subdominant eigenvalues of the chain are to 1. More precisely, if an irreducible chain is well conditioned, then all subdominant eigenvalues must be well separated from 1, and if all subdominant eigenvalues are well separated from 1 in the sense that the chain has a strong character, then it must be well conditioned.

It is a corollary of Theorem 2.1 that if $\max_{\lambda_i\ne 1}|\lambda_i| \ll 1$, then the chain is not overly sensitive, but it is important to underscore the point that the issue of sensitivity is not equivalent to the question of how close $\max_{\lambda_i\ne 1}|\lambda_i|$ is to 1. Knowing that some $|\lambda_i| \approx 1$ is not sufficient to guarantee that the chain is sensitive; e.g., consider the well-conditioned periodic chain (or any small perturbation thereof) for which

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$  and  $A^\# = \frac{1}{4}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$.

3. The underlying structure. The purpose of this section is to organize relevant properties of $A = I - P$ into a sequence of lemmas from which the formal proof of Theorem 2.1 can be constructed. Some of the more transparent or well-known features of $A$ are stated in the first lemma.

Lemma 3.1. If $A = I - P$, where $P_{n\times n}$ is an irreducible stochastic matrix, then the following statements are true.

(3.1) $A$, as well as each principal submatrix of $A$, has strictly positive diagonal entries, and the off-diagonal entries are nonpositive.

(3.2) $A$ is a singular M-matrix of rank $n-1$.

(3.3) If $B_{k\times k}$ ($k < n$) is a principal submatrix of $A$, then each of the following statements is true:
  (a) $B$ is a nonsingular M-matrix.
  (b) $B^{-1} \ge 0$.
  (c) $\det(B) > 0$.
  (d) $B$ is diagonally dominant.
  (e) $\det(B) \le b_{11}b_{22}\cdots b_{kk} \le 1$.

Proof. These facts are either self-evident, or they are direct consequences of well-known results; see Berman and Plemmons (1979) or Horn and Johnson (1991).

Part of the less transparent structure of $A$ is illuminated in the following sequence of lemmas.

Lemma 3.2. If $P_{n\times n}$ is an irreducible stochastic matrix, and if $A_i$ denotes the principal submatrix of $A = I - P$ obtained by deleting the $i$th row and column from $A$, then

$\sum_{i=1}^n \det(A_i) = \Delta$.

Proof. Suppose that the eigenvalues of $A$ are denoted by $\{\mu_1, \mu_2, \ldots, \mu_n\}$, and write the characteristic equation for $A$ as

$x^n + \alpha_{n-1}x^{n-1} + \cdots + \alpha_1 x + \alpha_0 = 0$.

Each coefficient $\alpha_{n-k}$ is given by $(-1)^k$ times the sum of the products of the eigenvalues of $A$ taken $k$ at a time. That is,

(3.4)    $\alpha_{n-k} = (-1)^k \sum_{1\le i_1 < \cdots < i_k \le n} \mu_{i_1}\mu_{i_2}\cdots\mu_{i_k}$.

But it is also a standard result from elementary matrix theory that each coefficient $\alpha_{n-k}$ can be described as

$\alpha_{n-k} = (-1)^k\,(\text{sum of all } k\times k \text{ principal minors of } A)$.

Since 0 is a simple eigenvalue for $A$, there is only one nonzero term in the sum (3.4) when $k = n-1$, and hence

$\alpha_1 = (-1)^{n-1}\mu_2\mu_3\cdots\mu_n = (-1)^{n-1}(1-\lambda_2)(1-\lambda_3)\cdots(1-\lambda_n) = (-1)^{n-1}\sum_{i=1}^n \det(A_i)$.

Therefore,

$\sum_{i=1}^n \det(A_i) = \prod_{k=2}^n (1-\lambda_k) = \Delta$.

Lemma 3.3. If $A_i$ denotes the principal submatrix of $A = I - P$ obtained by deleting the $i$th row and column from $A$, and if $\pi_i$ is the $i$th stationary probability, then the character of the chain is given by

$\Delta = \frac{\det(A_i)}{\pi_i}$ for each $i$.

Proof. This result follows directly from Lemma 3.2 and the fact that the stationary distribution $\pi^T$ is given by the formula

$\pi^T = \frac{1}{\sum_{i=1}^n \det(A_i)}\bigl(\det(A_1), \det(A_2), \ldots, \det(A_n)\bigr)$

[Golub and Meyer (1986) or Iosifescu (1980, p. 123)].
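Lemmas 3.2 and 3.3 translate directly into a (numerically naive but instructive) way of computing both $\pi^T$ and $\Delta$ from principal minors. A sketch, with the function name invented; for serious computation the factorizations of §5 and §6 are the right tools:

```python
import numpy as np

def stationary_from_minors(P):
    """pi and Delta from the principal minors of A = I - P:
    pi_i = det(A_i) / sum_k det(A_k) (Lemma 3.3), and the
    normalizing sum itself is the character Delta (Lemma 3.2)."""
    n = P.shape[0]
    A = np.eye(n) - P
    minors = np.array([np.linalg.det(np.delete(np.delete(A, i, 0), i, 1))
                       for i in range(n)])
    Delta = minors.sum()
    return minors / Delta, Delta
```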

The mean return time for the $k$th state is $R_k = 1/\pi_k$ [Kemeny and Snell (1960)], and, since not all of the $\pi_k$'s can be less than $1/n$, there must exist a state such that $R_k \le n$. By combining this with (3.3c) and (3.3e), an interesting corollary which proves (2.1) is produced.

Corollary 3.1. If $R_k$ denotes the mean return time for the $k$th state, then

$0 < \Delta = \frac{\det(A_i)}{\pi_i} \le \min_k R_k \le n$ for each $i = 1, 2, \ldots, n$.

Lemma 3.4. If $A = I - P$, where $P_{n\times n}$ is an irreducible stochastic matrix, and if $B_{k\times k}$ ($k < n$) is a principal submatrix of $A$, then the largest entry in each column of $B^{-1}$ is the diagonal entry. That is, for $j = 1, 2, \ldots, k$, it must be the case that

$[B^{-1}]_{jj} \ge [B^{-1}]_{ij}$ for each $i \ne j$.

At least two different proofs are possible, and we shall give both because each is instructive in its own right. The first argument is shorter and more probabilistic, but it rests on a result which requires a proof of its own. The second argument involves more algebraic detail, but it is entirely self-contained and depends only on elementary concepts.

Probabilistic proof. Without loss of generality, assume that $B$ is the leading $k\times k$ principal submatrix of $A$, so that $P$ has the form

$P = \begin{pmatrix} I-B & * \\ * & * \end{pmatrix}$.

Consider any pair of states $i$ and $j$ in the set $S = \{1, 2, \ldots, k\}$, and let $N_j$ denote the number of times the process is in state $j$ before first hitting a state in the complement $\bar S = \{k+1, k+2, \ldots, n\}$. If $X_n$ denotes the state of the process after $n$ steps, and if

$h_{ij} = P(\text{hitting state } j \text{ before entering } \bar S \mid X_0 = i)$,

then

(3.5)    $E[N_j \mid X_0 = i] = d_{ij} + h_{ij}\,E[N_j \mid X_0 = j]$, where $d_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$

This statement (which appears without proof on p. 62 in Kemeny and Snell (1960)) is intuitive, but it is not trivial. The theory of absorbing chains says that $[B^{-1}]_{ij} = E[N_j \mid X_0 = i]$, so for $i \ne j$ we have

$[B^{-1}]_{ij} = h_{ij}[B^{-1}]_{jj} \le [B^{-1}]_{jj}$.

Algebraic proof. Assume that $B$ is the leading $k\times k$ principal submatrix of $A$, and suppose the states have been arranged so that the $j$th state is listed first and the $i$th state is listed second. The goal is to prove that $[B^{-1}]_{11} \ge [B^{-1}]_{21}$. Because

$[B^{-1}]_{11} = \frac{\det(B_{11})}{\det(B)}$  and  $[B^{-1}]_{21} = -\frac{\det(B_{12})}{\det(B)}$,

where $B_{ij}$ denotes the submatrix of $B$ obtained by deleting the $i$th row and $j$th column from $B$, and because Lemma 3.1 guarantees that $\det(B) > 0$, it suffices to prove that

$\det(B_{11}) + \det(B_{12}) \ge 0$.

Denote the first unit vector by $e_1^T = (1, 0, \ldots, 0)$, and partition $B$ as

(3.6)    $B = \begin{pmatrix} 1-p_{11} & -p_{12} & \cdots & -p_{1k} \\ -p_{21} & 1-p_{22} & \cdots & -p_{2k} \\ \vdots & \vdots & & \vdots \\ -p_{k1} & -p_{k2} & \cdots & 1-p_{kk} \end{pmatrix} = \begin{pmatrix} 1-p_{11} & -p_{12} & \cdots & -p_{1k} \\ b_1 & b_2 & \cdots & b_k \end{pmatrix}$,

where $b_j$ denotes the $j$th column of $B$ with its first entry deleted, so that $B_{11} = (b_2\ b_3\ \cdots\ b_k)$. In terms of these quantities, $\det(B_{11}) + \det(B_{12})$ is given by

$\det(B_{11}) + \det(B_{12}) = \det(b_2\ b_3\ \cdots\ b_k) + \det(b_1\ b_3\ \cdots\ b_k) = \det(b_2+b_1\ \ b_3\ \cdots\ b_k) = \det\bigl(B_{11} + b_1e_1^T\bigr) = \det(B_{11})\bigl(1 + e_1^TB_{11}^{-1}b_1\bigr)$.

Lemma 3.1 also insures that $\det(B_{11}) > 0$, so the proof can be completed by arguing that $1 + e_1^TB_{11}^{-1}b_1 \ge 0$. To do so, modify the chain by making state 1 as well as states $k+1, k+2, \ldots, n$ absorbing states, so that the transition matrix has the form

$\tilde P = \begin{pmatrix} 1 & 0 & 0 \\ -b_1 & Q & R \\ 0 & 0 & I_{n-k} \end{pmatrix}$, where $-b_1 = \begin{pmatrix} p_{21} \\ \vdots \\ p_{k1} \end{pmatrix}$ and $Q = \begin{pmatrix} p_{22} & \cdots & p_{2k} \\ \vdots & & \vdots \\ p_{k2} & \cdots & p_{kk} \end{pmatrix}$.

It follows from the elementary theory of absorbing chains that the entries in the matrix

$(I-Q)^{-1}\bigl(-b_1 \mid R\bigr) = \bigl(B_{11}^{-1}(-b_1) \mid B_{11}^{-1}R\bigr)$

represent the various absorption probabilities (note that $B_{11} = I - Q$), and consequently all entries in $-B_{11}^{-1}b_1$ are between 0 and 1, so that $e_1^TB_{11}^{-1}b_1 \ge -1$; i.e., $1 + e_1^TB_{11}^{-1}b_1 \ge 0$.

Note. Although it may not be of optimal efficiency, the algebraic argument given above is also a proof of the statement (3.5).

Lemma 3.5. If $A = I - P$, where $P_{n\times n}$ is an irreducible stochastic matrix, and if $B_{k\times k}$ ($k < n$) is a principal submatrix of $A$, then

$0 < \det(B) \le \frac{\max_i \delta_i(B)}{\max_{i,j}[B^{-1}]_{ij}} \le \frac{1}{\max_{i,j}[B^{-1}]_{ij}}$,

where $\delta_r(B)$ denotes the deleted product $\delta_r(B) = b_{11}b_{22}\cdots b_{kk}/b_{rr}$.

Proof. Lemma 3.4 insures that there is some diagonal entry $[B^{-1}]_{rr}$ of $B^{-1}$ such that

(3.7)    $[B^{-1}]_{rr} = \max_{i,j}[B^{-1}]_{ij}$.

If $B_{rr}$ is the principal submatrix of $B$ obtained by deleting the $r$th row and column from $B$, then (3.3e) together with (3.7) produces

$\det(B) = \frac{\det(B_{rr})}{[B^{-1}]_{rr}} \le \frac{\delta_r(B)}{[B^{-1}]_{rr}} = \frac{\delta_r(B)}{\max_{i,j}[B^{-1}]_{ij}} \le \frac{\max_i \delta_i(B)}{\max_{i,j}[B^{-1}]_{ij}} \le \frac{1}{\max_{i,j}[B^{-1}]_{ij}}$.

Lemma 3.6. For an irreducible stochastic matrix $P_{n\times n}$, let $A_j$ be the principal submatrix of $A = I - P$ obtained by deleting the $j$th row and column from $A$, and let $Q$ be the permutation matrix such that

$Q^TAQ = \begin{pmatrix} A_j & c_j \\ d_j^T & a_{jj} \end{pmatrix}$.

If the stationary distribution for $Q^TPQ$ is written as $\psi^T = \pi^TQ = (\tilde\pi^T, \pi_j)$, then the group inverse of $A$ is given by

$A^\# = Q\begin{pmatrix} (I - e\tilde\pi^T)A_j^{-1}(I - e\tilde\pi^T) & -\pi_j(I - e\tilde\pi^T)A_j^{-1}e \\ -\tilde\pi^TA_j^{-1}(I - e\tilde\pi^T) & \pi_j\tilde\pi^TA_j^{-1}e \end{pmatrix}Q^T$,

where $e$ is a column of 1's whose size is determined by the context in which it appears.

Proof. The group inverse possesses the property that $(T^{-1}AT)^\# = T^{-1}A^\#T$ for all nonsingular matrices $T$ [Campbell and Meyer (1991)], so

$Q^TA^\#Q = \begin{pmatrix} A_j & c_j \\ d_j^T & a_{jj} \end{pmatrix}^\#$.

Since $\operatorname{rank}(Q^TAQ) = n-1$, it follows that $a_{jj} - d_j^TA_j^{-1}c_j = 0$, and this is used to verify that

$\begin{pmatrix} A_j & c_j \\ d_j^T & a_{jj} \end{pmatrix}^\# = (I - e\psi^T)\begin{pmatrix} A_j^{-1} & 0 \\ 0 & 0 \end{pmatrix}(I - e\psi^T) = \begin{pmatrix} (I - e\tilde\pi^T)A_j^{-1}(I - e\tilde\pi^T) & -\pi_j(I - e\tilde\pi^T)A_j^{-1}e \\ -\tilde\pi^TA_j^{-1}(I - e\tilde\pi^T) & \pi_j\tilde\pi^TA_j^{-1}e \end{pmatrix}$.
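The block formula of Lemma 3.6 is straightforward to exercise numerically. A sketch for the case $j = n$ (so $Q = I$); the signs of the off-diagonal blocks follow the reconstruction above, and the output can be compared against any independent computation of $A^\#$:

```python
import numpy as np

def group_inverse_partitioned(P, pi):
    """A# via Lemma 3.6 with j = n (delete the last row and column).
    pi is the stationary distribution of the irreducible stochastic P."""
    n = P.shape[0]
    A = np.eye(n) - P
    An, pit, pin = A[:-1, :-1], pi[:-1], pi[-1]
    e = np.ones(n - 1)
    X = np.linalg.inv(An)                    # An^{-1}; An is a nonsingular M-matrix
    proj = np.eye(n - 1) - np.outer(e, pit)  # I - e pit^T
    TL = proj @ X @ proj                     # (1,1) block
    TR = -pin * (proj @ X @ e)               # (1,2) block
    BL = -(pit @ X) @ proj                   # (2,1) block
    BR = pin * (pit @ X @ e)                 # (2,2) entry
    return np.block([[TL, TR[:, None]],
                     [np.append(BL, BR)[None, :]]])
```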

4. Proof of the main theorem. The preceding sequence of lemmas is now connected together to prove the primary results stated in Theorem 2.1.

The upper bound. To derive the inequalities

(4.1)    $\max_{i,j}|a^\#_{ij}| < \frac{2\delta(n-1)}{\Delta} \le \frac{2(n-1)}{\Delta}$,

begin by letting $Q$ be the permutation matrix given in Lemma 3.6, so that for $i \ne j$, the $(i,j)$-entry of $A^\#$ is the $(k,n)$-entry of $Q^TA^\#Q$, where $k \ne n$. In succession, use the formula of Lemma 3.6 and Hölder's inequality followed by the results of Lemmas 3.5 and 3.3 to write

$|a^\#_{ij}| = \pi_j\bigl|e_k^T(I - e\tilde\pi^T)A_j^{-1}e\bigr| \le \pi_j\|e_k - \tilde\pi\|_1\,\|A_j^{-1}e\|_\infty < 2\pi_j\|A_j^{-1}e\|_\infty \le 2\pi_j(n-1)\max_{r,s}\bigl[A_j^{-1}\bigr]_{rs} \le \frac{2\pi_j(n-1)\max_i\delta_i(A_j)}{\det(A_j)} \le \frac{2\pi_j(n-1)\delta}{\det(A_j)} = \frac{2\delta(n-1)}{\Delta} \le \frac{2(n-1)}{\Delta}$.

Now consider the diagonal elements. The $(j,j)$-entry of $A^\#$ is the $(n,n)$-entry of $Q^TA^\#Q$, so proceeding in a manner similar to that above produces

$|a^\#_{jj}| = \pi_j\bigl|\tilde\pi^TA_j^{-1}e\bigr| \le \pi_j\|\tilde\pi\|_1\,\|A_j^{-1}e\|_\infty < \pi_j\|A_j^{-1}e\|_\infty \le \pi_j(n-1)\max_{r,s}\bigl[A_j^{-1}\bigr]_{rs} \le \frac{\pi_j(n-1)\max_i\delta_i(A_j)}{\det(A_j)} \le \frac{\pi_j(n-1)\delta}{\det(A_j)} = \frac{\delta(n-1)}{\Delta}$,

thus proving (4.1).

The lower bound. To establish that

(4.2)    $\frac{1}{n\min_{\lambda_i\ne 1}|1-\lambda_i|} \le \max_{i,j}|a^\#_{ij}|$,

make use of the fact that if $Ax = \mu x$ for $\mu \ne 0$, then $A^\#x = \mu^{-1}x$ [Campbell and Meyer (1991, p. 129)]. In particular, if $\lambda \ne 1$ is an eigenvalue of $P$, and if $x$ is a corresponding eigenvector, then $Ax = (1-\lambda)x$ implies that $A^\#x = (1-\lambda)^{-1}x$, so

$\frac{1}{|1-\lambda|} \le \|A^\#\|_\infty \le n\max_{i,j}|a^\#_{ij}|$.
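Everything in the bounds (4.1) and (4.2) is computable from the spectrum and the diagonal of $P$, so the two sides of Theorem 2.1 can be evaluated directly. A sketch (the function name is invented, and a dense eigenvalue computation is only sensible for small chains):

```python
import numpy as np

def condition_bounds(P):
    """Lower and upper bounds on kappa from Theorem 2.1 / (4.1)-(4.2)."""
    n = P.shape[0]
    w = np.linalg.eigvals(P)
    w = np.delete(w, np.argmin(np.abs(w - 1)))   # subdominant eigenvalues
    Delta = np.real(np.prod(1 - w))              # the character
    d = np.sort(1 - np.diag(P))                  # diagonal entries of A
    delta = np.prod(d[2:])                       # all but the two smallest
    lower = 1 / (n * np.abs(1 - w).min())
    upper = 2 * delta * (n - 1) / Delta
    return lower, upper
```

On the $2\times 2$ periodic chain of §2 this returns lower $= 1/4$ and upper $= 1$, bracketing the true value $\kappa = 1/4$.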

5. Using an LU factorization. Except for chains which are too large to fit into a computer's main memory, the stationary distribution $\pi^T$ is generally computed by direct methods; i.e., either an LU or QR factorization of $A = I - P$ (or $A^T$) is computed [Harrod and Plemmons (1984); Grassmann, Taksar, and Heyman (1985); Funderlic and Meyer (1986); Golub and Meyer (1986); Barlow (1993)]. Even for very large chains which are nearly uncoupled, direct methods are usually involved; they can be the basis of the main algorithm [Stewart and Zhang (1991)], or they can be used to solve the aggregated and coupling chains in iterative aggregation/disaggregation algorithms [Chatelin and Miranker (1982), Haviv (1987)].

In the conclusion of their paper, Golub and Meyer (1986) make the following observation. Computational experience suggests that when a triangular factorization of $A_{n\times n}$ is used to solve an irreducible chain, the condition of the chain seems to be a function of the size of the nonzero pivots, and this means that it should be possible to estimate $\kappa$ with little or no extra cost beyond that incurred in computing $\pi^T$. For large chains, this can be a significant savings over the $O(n^2)$ operations demanded by traditional condition estimators. Of course, this is contrary to the situation which exists for general nonsingular matrices, because the absence of small pivots (or the existence of a large determinant) is not a guarantee of a well-conditioned matrix; consider the matrix in (1.6).

A mathematical formulation and proof (or even an intuitive explanation) of Golub and Meyer's observation has heretofore not been given, but the results of §2 and §3 now make it possible to give a more precise statement and a rigorous proof of the Golub–Meyer observation. The arguments hinge on the fact that whenever $\pi^T$ is computed by means of a triangular factorization of $A$ (or $A^T$), the character of the chain is always an immediate by-product. The results for an LU factorization are given below, and the analogous theory for a QR factorization is given in the next section.

Suppose that the LU factorization² of $A = I - P$ is computed to be

$A = LU = \begin{pmatrix} L_n & 0 \\ r^T & 1 \end{pmatrix}\begin{pmatrix} U_n & c \\ 0 & 0 \end{pmatrix}$.

²Regardless of whether $A$ or $A^T$ is used, Gaussian elimination with finite-precision arithmetic can prematurely produce a zero (or even a negative) pivot, and this can happen for well-conditioned chains. Practical implementation demands a strategy to deal with this situation, and Funderlic and Meyer (1986) and Stewart and Zhang (1991) discuss this problem along with possible remedies. Practical algorithms involve reordering schemes which introduce permutation matrices, but these permutations are not important in the context of this section, so they are suppressed.

If $A_n$ is the principal submatrix of $A$ obtained by deleting the last row and column from $A$, then $A_n$ is a nonsingular M-matrix, and its LU factorization is $A_n = L_nU_n$. Since the LU factors of a nonsingular M-matrix are also nonsingular M-matrices [Berman and Plemmons (1979), Horn and Johnson (1991)], it follows that $L_n$ and $U_n$ are nonsingular M-matrices, and hence $L_n^{-1} \ge 0$ and $U_n^{-1} \ge 0$. Consequently, $r^T \le 0$, so the solution (obtained by a simple substitution process with no divisions) of the nonsingular triangular system $x^TL_n = -r^T$ is nonnegative. This together with the result of Lemma 3.3 and Theorem 2.1 produces the following conclusion.

Theorem 5.1. For an irreducible Markov chain whose transition matrix is $P$, let the LU factorization of $A = I - P$ be given by

$A = LU = \begin{pmatrix} L_n & 0 \\ r^T & 1 \end{pmatrix}\begin{pmatrix} U_n & c \\ 0 & 0 \end{pmatrix}$.

If $x^T$ is the solution of $x^TL_n = -r^T$, then each of the following statements is true.

The stationary distribution of the chain is

(5.1)    $\pi^T = \frac{(x^T, 1)}{1 + \|x\|_1}$.

The character of the chain is

(5.2)    $\Delta = \frac{\det(U_n)}{\pi_n} = (1 + \|x\|_1)\det(U_n)$.

The condition number for the chain is bounded above by

(5.3)    $\kappa < \frac{2\delta(n-1)\pi_n}{\det(U_n)} = \frac{2\delta(n-1)}{(1 + \|x\|_1)\det(U_n)} \le \frac{2(n-1)}{(1 + \|x\|_1)\det(U_n)}$.

The condition number for the chain is bounded below by

(5.4)    $\pi_n\sum_{i=1}^{n-1}\frac{\pi_i}{u_{ii}} = \frac{1}{(1 + \|x\|_1)^2}\sum_{i=1}^{n-1}\frac{x_i}{u_{ii}} \le \kappa$,

where $u_{ii}$ is the $i$th pivot in $U_n$.

Proof. Statements (5.1), (5.2), and (5.3) are straightforward consequences of the previous discussion. To establish (5.4), first recall from Lemma 3.6 that

$a^\#_{nn} = \pi_n\tilde\pi^TA_n^{-1}e = \pi_n\tilde\pi^TU_n^{-1}L_n^{-1}e > 0$.

Since $U_n^{-1} \ge 0$ and $L_n^{-1} \ge 0$, it follows that $\tilde\pi^TU_n^{-1}$ and $L_n^{-1}e$ can be written as

$\tilde\pi^TU_n^{-1} = \left(\frac{\pi_1}{u_{11}},\ \frac{\pi_2}{u_{22}} + \alpha_2,\ \ldots,\ \frac{\pi_{n-1}}{u_{n-1,n-1}} + \alpha_{n-1}\right)$  and  $L_n^{-1}e = (1,\ 1+\beta_2,\ \ldots,\ 1+\beta_{n-1})^T$,

where each $\alpha_i$ and $\beta_i$ is nonnegative, and consequently (setting $\alpha_1 = \beta_1 = 0$)

$\tilde\pi^TA_n^{-1}e = \tilde\pi^TU_n^{-1}L_n^{-1}e = \sum_{i=1}^{n-1}\left(\frac{\pi_i}{u_{ii}} + \alpha_i\right)(1 + \beta_i) \ge \sum_{i=1}^{n-1}\frac{\pi_i}{u_{ii}}$.

Therefore,

$\kappa \ge a^\#_{nn} = \pi_n\tilde\pi^TU_n^{-1}L_n^{-1}e \ge \pi_n\sum_{i=1}^{n-1}\frac{\pi_i}{u_{ii}} = \frac{1}{(1 + \|x\|_1)^2}\sum_{i=1}^{n-1}\frac{x_i}{u_{ii}}$.

As mentioned before, the pivots or the determinant need not be indicators of the condition of a general nonsingular matrix. In particular, the absence of small pivots (or the existence of a large determinant) is not a guarantee of a well-conditioned matrix. However, for our special matrices $A = I - P$, the bounds in Theorem 5.1 allow the pivots to be used as condition estimators.
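Theorem 5.1 translates into a short algorithm: one elimination pass yields $\pi^T$, $\Delta$, and both condition bounds simultaneously. A sketch under stated assumptions (no pivoting, with the exact-arithmetic pivots assumed nonzero; see footnote 2 for why production code needs more care; the function name and the use of the weakened bound $2(n-1)/\Delta$ in place of $2\delta(n-1)/\Delta$ are choices of this illustration):

```python
import numpy as np

def lu_markov(P):
    """pi, the character Delta, and the bounds (5.4) <= kappa < (5.3)
    from an LU factorization of A = I - P without pivoting."""
    n = P.shape[0]
    A = np.eye(n) - P
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):                     # Doolittle elimination
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    Ln, Un, r = L[:-1, :-1], U[:-1, :-1], L[-1, :-1]
    x = np.linalg.solve(Ln.T, -r)              # x^T Ln = -r^T, so x >= 0
    pi = np.append(x, 1.0) / (1.0 + x.sum())            # (5.1)
    Delta = (1.0 + x.sum()) * np.prod(np.diag(Un))      # (5.2)
    lower = pi[-1] * np.sum(pi[:-1] / np.diag(Un))      # (5.4)
    upper = 2 * (n - 1) / Delta                         # weakened (5.3)
    return pi, Delta, lower, upper
```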

Corollary 5.1. For an irreducible Markov chain whose transition matrix is $P$, suppose that the LU factorization of $A = I - P$ and the stationary distribution $\pi^T$ have been computed as described in Theorem 5.1.

If the pivots $u_{ii}$ are large relative to $\pi_n$ in the sense that $\pi_n/\det(U_n)$ is not too large, then the chain is well conditioned.

If there are pivots $u_{ii}$ which are small relative to $\pi_n\pi_i$ in the sense that $\pi_n\sum_{i=1}^{n-1}\pi_i/u_{ii}$ is large, then the chain is ill conditioned.

6. Using a QR factorization. The utility of orthogonal triangularization is well documented in the vast literature on matrix computations, and the use of a QR factorization to solve and analyze Markov chains is discussed by Golub and Meyer (1986). The following theorem shows that the character of an irreducible chain can be directly obtained from the diagonal entries of $R$ and the last column of $Q$, and this will establish an upper bound using a QR factorization which is analogous to that in Theorem 5.1 for an LU factorization. A lower bound analogous to the one in Theorem 5.1 is not readily available.

Theorem 6.1. For an irreducible Markov chain whose transition matrix is $P$, let the QR factorization of $A = I - P$ be given by

$A = QR = \begin{pmatrix} Q_n & c \\ d^T & q_{nn} \end{pmatrix}\begin{pmatrix} R_n & -R_ne \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} Q_nR_n & -Q_nR_ne \\ d^TR_n & -d^TR_ne \end{pmatrix}$.

If $q$ denotes the last column of $Q$, then each of the following statements is true.

The stationary distribution of the chain is

(6.1)    $\pi^T = \frac{q^T}{\sum_{i=1}^n q_{in}}$.

The character of the chain is

(6.2)    $\Delta = \|q\|_1\,\bigl|\det(R_n)\bigr|$.

The condition number for the chain is bounded above by

(6.3)    $\kappa < \frac{2\delta(n-1)}{\|q\|_1|\det(R_n)|} \le \frac{2(n-1)}{\|q\|_1|\det(R_n)|}$.

Proof. The formula (6.1) for $\pi^T$ is derived in Golub and Meyer (1986). To prove (6.2), first recall the result of Lemma 3.3, and observe that

$\Delta^2 = \left(\frac{\det(A_n)}{\pi_n}\right)^2 = \frac{(\det Q_nR_n)^2}{\pi_n^2} = \frac{(\det Q_n)^2(\det R_n)^2}{q_{nn}^2/\|q\|_1^2}$.

Use the fact that $QQ^T = I$ implies $Q_nQ_n^T + cc^T = I$ to obtain

$(\det Q_n)^2 = \det(Q_nQ_n^T) = \det(I - cc^T) = 1 - c^Tc = q_{nn}^2$,

and substitute this into the previous expression to obtain (6.2). The bound (6.3) is now a consequence of the result of Theorem 2.1.
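As with the LU case, the quantities in Theorem 6.1 fall out of a single factorization. A sketch (illustrative; numpy's Householder-based qr is assumed, and in floating point the last diagonal entry of $R$ is merely tiny rather than exactly zero):

```python
import numpy as np

def qr_markov(P):
    """pi, Delta, and the weakened upper bound of (6.3) from a full
    QR factorization of A = I - P (Theorem 6.1)."""
    n = P.shape[0]
    A = np.eye(n) - P
    Q, R = np.linalg.qr(A, mode='complete')
    q = Q[:, -1]                     # last column of Q spans null(A^T)
    pi = q / q.sum()                 # (6.1); the entries of q share one sign
    Delta = np.abs(q).sum() * np.abs(np.prod(np.diag(R)[:-1]))   # (6.2)
    upper = 2 * (n - 1) / Delta      # (6.3) with delta replaced by 1
    return pi, Delta, upper
```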

7. Concluding remarks. It has been argued that the sensitivity of an irreducible chain is primarily governed by how close the subdominant eigenvalues are to 1, in the sense that the condition number of the chain is bounded by

(7.1)    $\frac{1}{n\min_{\lambda_i\ne 1}|1-\lambda_i|} \le \kappa < \frac{2\delta(n-1)}{\Delta}$.

Although the upper bound explicitly involves $n$, it is generally not the case that $2\delta(n-1)/\Delta$ grows in proportion to $n$. Except in the special case when the diagonal entries of $P$ are 0, the term $\delta$ somewhat mitigates the presence of $n$, because as $n$ becomes larger, $\delta$ becomes smaller. Computational experience suggests that $2\delta(n-1)/\Delta$ is usually a rather conservative estimate of $\kappa$, and the term $\delta/\Delta$ by itself, although not always an upper bound for $\kappa$, is often of the same order of magnitude as $\kappa$. However, there exist pathological cases for which even $\delta/\Delta$ severely overestimates $\kappa$. This seems to occur for chains which are not too badly conditioned and no single eigenvalue is extremely close to 1, but enough eigenvalues are within range of 1 to force $1/\Delta$ to be too large. This suggests that, for the purposes of bounding $\kappa$ above, perhaps not all of the subdominant eigenvalues need to be taken into account. In a forthcoming article, Seneta (1993) addresses this issue by an analysis involving coefficients of ergodicity.

When direct methods are used to solve an irreducible chain, standard condition estimators can be used to produce reliable estimates for $\kappa$, but the cost of doing so is $O(n^2)$ operations beyond the solution process. The results of Theorems 5.1 and 6.1 make it possible to estimate $\kappa$ with the same computations which produce $\pi^T$. Although the bounds for $\kappa$ produced by Theorem 5.1 are sometimes rather loose, they are nevertheless virtually free. One must balance the cost of obtaining condition estimates against the information one desires to obtain from these estimates.

8. Acknowledgments. The exposition of this article was enhanced by suggestions provided by Dianne O'Leary, Guy Latouche, and Paul Schweitzer.

REFERENCES

J. L. Barlow (1993), Error bounds for the computation of null vectors with applications to Markov chains, SIAM J. Matrix Anal. Appl., 14.
A. Berman and R. J. Plemmons (1979), Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York.
S. L. Campbell and C. D. Meyer (1991), Generalized Inverses of Linear Transformations, Dover Publications, New York (1979 edition by Pitman Pub. Ltd., London).
F. Chatelin and W. L. Miranker (1982), Acceleration by aggregation of successive approximation methods, Linear Algebra Appl., 43.
R. E. Funderlic and C. D. Meyer (1986), Sensitivity of the stationary distribution vector for an ergodic Markov chain, Linear Algebra Appl., 76, pp. 1-17.
G. H. Golub and C. D. Meyer (1986), Using the QR factorization and group inversion to compute, differentiate, and estimate the sensitivity of stationary probabilities for Markov chains, SIAM J. Algebraic Discrete Meth., 7.
W. K. Grassmann, M. I. Taksar, and D. P. Heyman (1985), Regenerative analysis and steady state distributions for Markov chains, Oper. Res., 33.
W. J. Harrod and R. J. Plemmons (1984), Comparison of some direct methods for computing stationary distributions of Markov chains, SIAM J. Sci. Statist. Comput., 5.
M. Haviv (1987), Aggregation/disaggregation methods for computing the stationary distribution of a Markov chain, SIAM J. Numer. Anal., 22.
R. A. Horn and C. R. Johnson (1991), Topics in Matrix Analysis, Cambridge University Press, Cambridge.

M. Iosifescu (1980), Finite Markov Processes and Their Applications, John Wiley and Sons, New York.
I. C. F. Ipsen and C. D. Meyer (1994), Uniform stability of Markov chains, SIAM J. Matrix Anal. Appl., 15.
J. G. Kemeny and J. L. Snell (1960), Finite Markov Chains, D. Van Nostrand, New York.
C. D. Meyer (1975), The role of the group generalized inverse in the theory of finite Markov chains, SIAM Rev., 17.
C. D. Meyer (1980), The condition of a finite Markov chain and perturbation bounds for the limiting probabilities, SIAM J. Algebraic Discrete Meth., 1.
C. D. Meyer (1993), The character of a finite Markov chain, in Linear Algebra, Markov Chains, and Queueing Models, C. D. Meyer and R. J. Plemmons, eds., IMA Volumes in Mathematics and its Applications, Vol. 48, Springer-Verlag, New York.
C. D. Meyer and G. W. Stewart (1988), Derivatives and perturbations of eigenvectors, SIAM J. Numer. Anal., 25.
P. J. Schweitzer (1968), Perturbation theory and finite Markov chains, J. Appl. Probab., 5.
E. Seneta (1991), Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite Markov chains, in Numerical Solution of Markov Chains, W. J. Stewart, ed., Probability: Pure and Applied, No. 8, Marcel Dekker, New York.
E. Seneta (1993), Sensitivity of finite Markov chains under perturbation, Statist. Probab. Lett., 17, to appear.
G. W. Stewart and G. Zhang (1991), On a direct method for the solution of nearly uncoupled Markov chains, Numer. Math., 59, pp. 1-11.


ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM OLGA SLYUSAREVA AND MICHAEL TSATSOMEROS Abstract. The principal pivot transform (PPT) is a transformation of a matrix A tantamount to exchanging

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Spectral Properties of Matrix Polynomials in the Max Algebra

Spectral Properties of Matrix Polynomials in the Max Algebra Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

CLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES

CLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES Bull Korean Math Soc 45 (2008), No 1, pp 95 99 CLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES In-Jae Kim and Bryan L Shader Reprinted

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Cheat Sheet for MATH461

Cheat Sheet for MATH461 Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A

More information