Perturbation results for nearly uncoupled Markov chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.


Perturbation results for nearly uncoupled Markov chains with applications to iterative methods

Jesse L. Barlow*

December 9, 1992

Abstract

The standard perturbation theory for linear equations states that nearly uncoupled Markov chains (NUMCs) are very sensitive to small changes in the elements. Indeed, some algorithms, such as standard Gaussian elimination, will obtain poor results for such problems. A structured perturbation theory is given that shows that NUMCs usually lead to well conditioned problems. It is shown that, with appropriate stopping criteria, iterative aggregation/disaggregation algorithms will achieve these structured error bounds. A variant of Gaussian elimination due to Grassman, Taksar, and Heyman was recently shown by O'Cinneide to achieve such bounds.

Keywords: Structured error bounds, aggregation/disaggregation, eigenvector, stopping criteria.

AMS(MOS) Subject Classifications: 65F15, 65F05, 65F20, 65G05.

1 Introduction

We consider the problem of solving the homogeneous system of linear equations

$$Ap = 0 \qquad (1)$$

*Department of Computer Science, The Pennsylvania State University, University Park, PA, barlow@cs.psu.edu. Supported by the National Science Foundation under grant CCR-9526 and its renewal. This research was done in part during the author's visit to the Institute for Mathematics and its Applications, 514 Vincent Hall, 206 Church St. S.E., University of Minnesota, Minneapolis, MN 55455.

subject to the constraint

$$c^T p = 1, \qquad (2)$$

where $A \in \mathbb{R}^{n\times n}$ is a singular M-matrix of rank $n-1$ and

$$c^T A = 0, \qquad c, p \in \mathbb{R}^n.$$

Here $c = (1, 1, \ldots, 1)^T$ and $A = I - P^T$, where $P$ is a row stochastic matrix. Thus $p = (\pi_1, \pi_2, \ldots, \pi_n)^T$ is a right eigenvector of $P^T$ and $c$ is a left eigenvector, both corresponding to the eigenvalue one. The application is that of finding the stationary distribution of a Markov chain. Our special interest in this paper is in "nearly uncoupled Markov chains (NUMCs)." For the NUMC problem, the transition matrix $P$ has the form

$$P^T = \begin{pmatrix} P_{11} & E_{12} & \cdots & E_{1t} \\ E_{21} & P_{22} & \cdots & E_{2t} \\ \vdots & & \ddots & \vdots \\ E_{t1} & E_{t2} & \cdots & P_{tt} \end{pmatrix},$$

where all of the elements of the off-diagonal blocks $E_{ij}$ are small. Here each $P_{ii}$ is an $m_i \times m_i$ matrix and each $E_{ij}$ is an $m_i \times m_j$ matrix. Let $\varepsilon$ be defined by

$$\varepsilon = \max_{1\le j\le t} \sum_{i\ne j} \|E_{ij}\|_1,$$

and, for convenience, let

$$F_{ij} = \varepsilon^{-1} E_{ij}.$$

Clearly,

$$\sum_{i\ne j} \|F_{ij}\|_1 \le 1.$$

Let

$$B_{ii} = I - P_{ii}, \quad i = 1, 2, \ldots, t,$$

and thus

$$A = \begin{pmatrix} B_{11} & -E_{12} & \cdots & -E_{1t} \\ -E_{21} & B_{22} & \cdots & -E_{2t} \\ \vdots & & \ddots & \vdots \\ -E_{t1} & -E_{t2} & \cdots & B_{tt} \end{pmatrix}. \qquad (3)$$

We discuss how accurately we can expect to solve the NUMC and show how this can be applied to aggregation/disaggregation methods for these problems.
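As a concrete illustration (not from the paper), the block structure (3) can be generated and solved numerically: build a small row-stochastic $P$ whose off-diagonal coupling blocks are damped by $\varepsilon$, then recover the stationary vector $p$ from the null space of $A = I - P^T$. The group sizes, the value of $\varepsilon$, and the random block contents below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def numc_transition(sizes, eps):
    """Row-stochastic transition matrix whose off-diagonal
    (coupling) blocks are damped by the factor eps."""
    n = sum(sizes)
    P = rng.random((n, n))
    starts = np.cumsum([0] + list(sizes))
    for i in range(len(sizes)):
        for j in range(len(sizes)):
            if i != j:
                P[starts[i]:starts[i + 1], starts[j]:starts[j + 1]] *= eps
    return P / P.sum(axis=1, keepdims=True)

P = numc_transition([3, 4, 2], eps=1e-6)
A = np.eye(P.shape[0]) - P.T          # singular M-matrix of rank n - 1

# Stationary distribution: null vector of A, normalized so c^T p = 1.
w, V = np.linalg.eig(A)
p = np.real(V[:, np.argmin(np.abs(w))])
p /= p.sum()

print(np.linalg.norm(A @ p, 1))       # residual near machine precision
```

Even with $\varepsilon = 10^{-6}$ every group retains an $O(1)$ share of the stationary mass, which is the regime the structured perturbation theory below addresses.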

Stewart, Stewart, and McAllister [12, 6] have shown that such methods converge very quickly for the NUMC. Quite recently, O'Cinneide [9] has shown that the variant of Gaussian elimination due to Grassman, Taksar, and Heyman [5] obtains small relative errors in the components of $p$ for all irreducible, acyclic Markov chains, thus satisfying even better error bounds than are given in this paper. However, iterative methods can be used to solve larger problems, thus a more general perturbation theory is necessary.

The conditioning aspects will be described in terms of the group inverse. For a matrix $A$, the group inverse $A^\#$ is the unique matrix such that

1. $A A^\# A = A$;  2. $A^\# A A^\# = A^\#$;  3. $A A^\# = A^\# A$.

The group inverse exists if and only if $\mathrm{rank}(A) = \mathrm{rank}(A^2)$, and the latter condition holds here since zero is a simple eigenvalue of $A$. As shown by Meyer [7], it yields a more elegant characterization of the problem (1)-(2) than does the Moore-Penrose inverse. The group inverse equals the Moore-Penrose inverse if and only if $c = p$.

In [2] (see also Geurts [4]) we used the fact that if $\tilde p$ is a solution of

$$A \tilde p = r, \qquad c^T \tilde p = 1 + \omega,$$

and $\delta p = \tilde p - p$, then

$$\delta p = A^\# r + \omega p.$$

That leads to the normwise bound

$$\|\delta p\|_1 \le \|A^\#\|_1 \|r\|_1 + |\omega|\,\|p\|_1. \qquad (4)$$

Thus the error characterization is quite elegant. There is a similar, but less elegant, characterization using the Moore-Penrose inverse [1]. Meyer and Stewart [8] describe the strong relationship between $\|A^\#\|_2$ and the $\mathrm{sep}(\cdot)$ function that is commonly used to bound the error in eigenvectors [11]. Funderlic and Meyer [3] were the first to use this to characterize the condition of a Markov chain.

For the NUMC problem, $\|A^\#\|_1$ will be $O(\varepsilon^{-1})$, thus bounds from (4) will be much too conservative. In section two, we give conditioning bounds on this problem as $\varepsilon \to 0$. Some of the results in this section extend those of Zhang [14]. In section three, we demonstrate the relevance of our analysis to aggregation/disaggregation methods with appropriate stopping criteria.
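For a small chain, $A^\#$ can be formed explicitly. The sketch below uses the rank-one shift $A^\# = (A + pc^T)^{-1} - pc^T$, a column-vector analogue of Meyer's fundamental-matrix formula; this closed form is a standard identity for matrices with this rank structure and is used here as an assumption of the example, not a formula from this paper. The three defining identities of the group inverse are then checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small irreducible chain: A = I - P^T, with stationary vector p.
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
A = np.eye(n) - P.T

w, V = np.linalg.eig(A)
p = np.real(V[:, np.argmin(np.abs(w))])
p /= p.sum()
c = np.ones(n)

# Group inverse via a rank-one shift: A + p c^T is nonsingular and
# A# = (A + p c^T)^{-1} - p c^T  (assumed identity for this structure).
Ag = np.linalg.inv(A + np.outer(p, c)) - np.outer(p, c)

print(np.allclose(A @ Ag @ A, A),     # 1. A A# A = A
      np.allclose(Ag @ A @ Ag, Ag),   # 2. A# A A# = A#
      np.allclose(A @ Ag, Ag @ A))    # 3. A A# = A# A
```

Given $A^\#$, the right-hand side of (4) can be evaluated directly for any computed $\tilde p$.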
2 Conditioning Aspects of NUMCs

We discuss the condition of the asymptotic stationary distribution of the NUMC (1)-(2), that is, the stationary distribution as $\varepsilon \to 0$.
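To see numerically why the unstructured bound (4) is pessimistic in this limit, one can watch $\|A^\#\|_1$ blow up as $\varepsilon \to 0$. The two-group chain, the sizes, and the random data below are illustrative assumptions, and the group inverse is again formed via the rank-one-shift identity assumed above.

```python
import numpy as np

rng = np.random.default_rng(2)

def chain(eps, m=3):
    """Two groups of size m with coupling blocks damped by eps."""
    P = rng.random((2 * m, 2 * m))
    P[:m, m:] *= eps
    P[m:, :m] *= eps
    return P / P.sum(axis=1, keepdims=True)

def group_inverse_norm(P):
    n = P.shape[0]
    A = np.eye(n) - P.T
    w, V = np.linalg.eig(A)
    p = np.real(V[:, np.argmin(np.abs(w))])
    p /= p.sum()
    shift = np.outer(p, np.ones(n))
    return np.linalg.norm(np.linalg.inv(A + shift) - shift, 1)

norms = [group_inverse_norm(chain(eps)) for eps in (1e-2, 1e-4, 1e-6)]
print(norms)   # grows roughly like 1/eps
```

The structured bounds developed in this section stay $O(1)$ in the same limit, which is the point of the analysis that follows.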

First, let $c^T = (c_1^T, c_2^T, \ldots, c_t^T)$ conformally with the partitioning of $A$. We have that

$$c_i^T B_{ii} = \sum_{j\ne i} c_j^T E_{ji} = \varepsilon \sum_{j\ne i} c_j^T F_{ji}.$$

The following lemma contains a result that was also observed by Zhang [14].

Lemma 2.1 The diagonal blocks $B_{ii}$, $i = 1, 2, \ldots, t$, have the form

$$B_{ii} = \bar B_{ii} + \frac{\varepsilon}{m_i}\, c_i d_i^T,$$

where $\bar B_{ii}$ is a singular M-matrix of rank $m_i - 1$ such that

$$c_i^T \bar B_{ii} = 0,$$

and

$$d_i = \sum_{j\ne i} F_{ji}^T c_j.$$

Proof: Let

$$\bar B_{ii} = B_{ii} - \frac{\varepsilon}{m_i}\, c_i d_i^T.$$

Clearly,

$$c_i^T \bar B_{ii} = c_i^T B_{ii} - \frac{\varepsilon}{m_i}(c_i^T c_i)\, d_i^T = c_i^T B_{ii} - \varepsilon d_i^T = 0$$

by definition. Note that the off-diagonal elements of $\bar B_{ii}$ are negative. Since $c = (1, 1, \ldots, 1)^T$, $\bar B_{ii}$ is diagonally dominant and thus is an M-matrix. It has rank $m_i - 1$, since if $g \ne 0$ is orthogonal to $c_i$ then

$$g^T \bar B_{ii} = g^T B_{ii} \ne 0$$

since $B_{ii}$ is nonsingular. □

From Lemma 2.1, $\bar B_{ii}$ has a unique Perron vector $q_i$ such that

$$\bar B_{ii} q_i = 0, \qquad (5)$$
$$c_i^T q_i = 1. \qquad (6)$$

We now connect this Perron vector to the solution of (1)-(2).

Lemma 2.2 The $i$th block component $p_i \in \mathbb{R}^{m_i}$ of $p$ has the form

$$p_i = \varepsilon \bar B_{ii}^\# w_i + \alpha_i(\varepsilon)\, q_i,$$

where

$$w_i = z_i - \frac{\|z_i\|_1}{m_i}\, c_i, \qquad z_i = \sum_{j\ne i} F_{ij} p_j,$$

and $\alpha_i(\varepsilon)$ is given by

$$\alpha_i(\varepsilon) = \frac{\|z_i\|_1 - \varepsilon\, d_i^T \bar B_{ii}^\# w_i}{d_i^T q_i}. \qquad (7)$$

Moreover, if $\varepsilon \|\bar B_{ii}^\#\|_1 \le 1/4$ then

$$\|\alpha_i(\varepsilon)^{-1} p_i - q_i\|_1 \le 4\varepsilon \|\bar B_{ii}^\#\|_1.$$

Also, if $\varepsilon \|\bar B_{ii}^\#\|_1 \le 1/8$ and $s_i = p_i / \|p_i\|_1$, then

$$\|s_i - q_i\|_1 \le 8\varepsilon \|\bar B_{ii}^\#\|_1. \qquad (8)$$

Proof: The $i$th block equation of (1) is

$$B_{ii} p_i = \varepsilon z_i.$$

Note that if $z_i = 0$ then $B_{ii}$ would be singular. Thus $z_i \ne 0$. We have

$$\|z_i\|_1 \le \sum_{j\ne i} \|F_{ij} p_j\|_1 \le \sum_{j\ne i} \|F_{ij}\|_1 \|p_j\|_1 \le \sum_{j\ne i} \|p_j\|_1 = 1 - \|p_i\|_1 < 1.$$

Since $c_i^T B_{ii} p_i = \varepsilon\, d_i^T p_i$ and $c_i^T z_i = \|z_i\|_1$, we have $d_i^T p_i = \|z_i\|_1$; adding this extra equation yields

$$\begin{pmatrix} B_{ii} \\ \varepsilon\, d_i^T \end{pmatrix} p_i = \begin{pmatrix} \varepsilon\, z_i \\ \varepsilon \|z_i\|_1 \end{pmatrix}.$$

Using the $(m_i+1)\times(m_i+1)$ Householder transformation $H$ that rotates the extended vector built from $c_i$ into its last coordinate, we obtain

$$H \begin{pmatrix} B_{ii} \\ \varepsilon\, d_i^T \end{pmatrix} = \begin{pmatrix} \bar B_{ii} \\ \sqrt{m_i}\, \varepsilon\, d_i^T \end{pmatrix}, \qquad H \begin{pmatrix} \varepsilon\, z_i \\ \varepsilon \|z_i\|_1 \end{pmatrix} = \begin{pmatrix} \varepsilon\, w_i \\ \sqrt{m_i}\, \varepsilon \|z_i\|_1 \end{pmatrix}.$$

The system

$$\begin{pmatrix} \bar B_{ii} \\ \sqrt{m_i}\, \varepsilon\, d_i^T \end{pmatrix} p_i = \begin{pmatrix} \varepsilon\, w_i \\ \sqrt{m_i}\, \varepsilon \|z_i\|_1 \end{pmatrix}$$

is overdetermined with a zero residual. The solution $p_i$ has the form

$$p_i = \varepsilon \bar B_{ii}^- w_i + \alpha_i(\varepsilon)\, q_i,$$

where $\bar B_{ii}^-$ is any matrix such that $\bar B_{ii} \bar B_{ii}^- \bar B_{ii} = \bar B_{ii}$. We will suppress the argument $\varepsilon$ in $\alpha_i(\varepsilon)$. For consistency with other arguments given here we choose $\bar B_{ii}^- = \bar B_{ii}^\#$, the group inverse. Simply solving the last equation yields the expression (7) for $\alpha_i$.

From the above we note that

$$\alpha_i^{-1} p_i - q_i = \frac{\varepsilon}{\alpha_i}\, \bar B_{ii}^\# w_i.$$

From standard norm bounds, using $\|w_i\|_1 \le 2\|z_i\|_1$, $d_i^T q_i \le 1$, and $|\alpha_i| \ge \|z_i\|_1 (1 - 2\varepsilon\|\bar B_{ii}^\#\|_1)/(d_i^T q_i)$,

$$\|\alpha_i^{-1} p_i - q_i\|_1 \le \frac{2\varepsilon \|\bar B_{ii}^\#\|_1 \|z_i\|_1\, (d_i^T q_i)}{\|z_i\|_1 (1 - 2\varepsilon\|\bar B_{ii}^\#\|_1)} \le 4\varepsilon \|\bar B_{ii}^\#\|_1,$$

since $\varepsilon \|\bar B_{ii}^\#\|_1 \le 1/4$. Finally, we show (8). We have

$$\|s_i - q_i\|_1 \le \|s_i - \alpha_i^{-1} p_i\|_1 + \|\alpha_i^{-1} p_i - q_i\|_1.$$

Since

$$\|s_i - \alpha_i^{-1} p_i\|_1 = \bigl|\, 1 - \alpha_i^{-1}\|p_i\|_1 \,\bigr| = \bigl|\, \|q_i\|_1 - \|\alpha_i^{-1} p_i\|_1 \,\bigr| \le \|\alpha_i^{-1} p_i - q_i\|_1 \le 4\varepsilon \|\bar B_{ii}^\#\|_1,$$

we have (8). □

Note that if we use the Moore-Penrose inverse instead of the group inverse,

$$p_i = \varepsilon \bar B_{ii}^\dagger w_i + \bar\alpha_i q_i = \varepsilon \bar B_{ii}^\dagger z_i + \bar\alpha_i q_i$$

since $\bar B_{ii}^\dagger c_i = 0$. A similar argument, with the assumption $\varepsilon \|\bar B_{ii}^\dagger\|_1 \le 1/2$, leads to

$$\|\bar\alpha_i^{-1} p_i - q_i\|_1 \le 2\varepsilon \|\bar B_{ii}^\dagger\|_1.$$

This can be a slightly tighter bound, but the group inverse bound is more consistent with the analysis in the remainder of this paper.
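Lemma 2.2 says the normalized block $s_i$ should agree with the Perron vector $q_i$ to $O(\varepsilon)$. The sketch below checks this for a hypothetical two-group chain; the projected block $\bar B_{11} = (I - c_1 c_1^T/m_1) B_{11}$ follows Lemma 2.1 as reconstructed here, and the sizes, seed, and $\varepsilon$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

m, eps = 4, 1e-5
P = rng.random((m + 3, m + 3))
P[:m, m:] *= eps                       # two weakly coupled groups
P[m:, :m] *= eps
P /= P.sum(axis=1, keepdims=True)
A = np.eye(m + 3) - P.T

w, V = np.linalg.eig(A)
p = np.real(V[:, np.argmin(np.abs(w))])
p /= p.sum()

# Projected diagonal block: c_1^T Bbar = 0 exactly (Lemma 2.1, as
# reconstructed: Bbar = B_11 - (eps/m) c_1 d_1^T = (I - c_1 c_1^T/m) B_11).
B11 = A[:m, :m]
Bbar = B11 - np.ones((m, m)) @ B11 / m

wq, Vq = np.linalg.eig(Bbar)
q = np.real(Vq[:, np.argmin(np.abs(wq))])
q /= q.sum()                           # Perron vector, c_1^T q = 1

s = p[:m] / p[:m].sum()                # normalized block direction
print(np.linalg.norm(s - q, 1))        # small, consistent with (8)
```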

Thus if $\varepsilon \|\bar B_{ii}^\#\|_1$ is bounded away from one, the direction $\alpha_i^{-1} p_i$ should be close to the asymptotic direction $q_i$. Therefore the condition of the direction $p_i$ is dependent upon the condition of $\bar B_{ii}$. We let $\kappa_B$ be defined by

$$\kappa_B = \max_{1\le j\le t} \|\bar B_{jj}^\#\|_1.$$

We define the vector $x = (\alpha_1, \ldots, \alpha_t)^T \in \mathbb{R}^t$ and the aggregation matrix $\bar A$ as follows. Let $\bar A = (\bar a_{ij})$ satisfy

$$\bar a_{ii} = d_i^T q_i, \qquad \bar a_{ij} = -c_i^T F_{ij} q_j, \quad j \ne i.$$

Then the vector $x = (\alpha_1, \alpha_2, \ldots, \alpha_t)^T$ satisfies

$$\bar A x = 0, \qquad \bar c^T x = 1, \qquad (9)$$

where $\bar c = (1, 1, \ldots, 1)^T \in \mathbb{R}^t$. We note that $\bar A$ is an M-matrix since we have

$$\bar a_{ii} > 0, \quad i = 1, 2, \ldots, t; \qquad \bar a_{ij} \le 0, \quad i \ne j.$$

Also, we have

$$\bar c^T \bar A = 0,$$

thus it is diagonally dominant. We now give a lemma that relates $x = (\alpha_1, \ldots, \alpha_t)^T$ and $x(\varepsilon) = (\alpha_1(\varepsilon), \ldots, \alpha_t(\varepsilon))^T$.

Lemma 2.3 The vector $x(\varepsilon) = (\alpha_1(\varepsilon), \ldots, \alpha_t(\varepsilon))^T$ defined in Lemma 2.2 and the vector $x = (\alpha_1, \ldots, \alpha_t)^T$ satisfy

$$\|x - x(\varepsilon)\|_1 \le 4\varepsilon \kappa_B \left( \|\bar A^\#\|_1 + \tfrac{1}{2} \right).$$

Thus clearly,

$$\lim_{\varepsilon\to 0} x(\varepsilon) = x.$$

Proof: Again, we suppress the argument $\varepsilon$ in $\alpha_i(\varepsilon)$. From (7) we have

$$\bar a_{ii}\, \alpha_i = \|z_i\|_1 - \varepsilon\, d_i^T \bar B_{ii}^\# w_i. \qquad (10)$$

Also, the definition of $z_i$ gives us

$$\|z_i\|_1 = \sum_{j\ne i} \alpha_j\, c_i^T F_{ij} q_j + \varepsilon \sum_{j\ne i} c_i^T F_{ij} \bar B_{jj}^\# w_j. \qquad (11)$$

Combining (10) and (11) yields

$$\bar A x(\varepsilon) = \varepsilon r, \qquad (12)$$

where

$$r = (\rho_1, \rho_2, \ldots, \rho_t)^T, \qquad \rho_i = \sum_{j\ne i} c_i^T F_{ij} \bar B_{jj}^\# w_j - d_i^T \bar B_{ii}^\# w_i.$$

Equation (12) is consistent because (11) is consistent. Now to bound $\|r\|_1$. We have

$$|\rho_i| \le \sum_{j\ne i} \|F_{ij}\|_1 \|\bar B_{jj}^\# w_j\|_1 + |d_i^T \bar B_{ii}^\# w_i| \le 2 \sum_{j\ne i} \|F_{ij}\|_1 \|\bar B_{jj}^\#\|_1 \|z_j\|_1 + 2\|\bar B_{ii}^\#\|_1 \|z_i\|_1 \le 2\kappa_B \Bigl[ \sum_{j\ne i} \|F_{ij}\|_1 \|z_j\|_1 + \|z_i\|_1 \Bigr].$$

Thus

$$\|r\|_1 \le 2\kappa_B \Bigl[ \sum_{i=1}^t \sum_{j\ne i} \|F_{ij}\|_1 \|z_j\|_1 + \sum_{i=1}^t \|z_i\|_1 \Bigr] = 2\kappa_B \Bigl[ \sum_{j=1}^t \|z_j\|_1 \sum_{i\ne j} \|F_{ij}\|_1 + \sum_{i=1}^t \|z_i\|_1 \Bigr] \le 4\kappa_B \sum_{i=1}^t \|z_i\|_1.$$

Since

$$\sum_{i=1}^t \|z_i\|_1 \le \sum_{i=1}^t \sum_{j\ne i} \|F_{ij} p_j\|_1 \le \sum_{j=1}^t \|p_j\|_1 \sum_{i\ne j} \|F_{ij}\|_1 \le 1,$$

we have $\|r\|_1 \le 4\kappa_B$. Now we look at the side condition on $\bar c^T x(\varepsilon)$. We have

$$1 = c^T p = \sum_{i=1}^t c_i^T p_i = \sum_{i=1}^t \alpha_i\, c_i^T q_i + \varepsilon \sum_{i=1}^t c_i^T \bar B_{ii}^\# w_i = \bar c^T x(\varepsilon) + \varepsilon \sum_{i=1}^t c_i^T \bar B_{ii}^\# w_i.$$

Thus

$$|\bar c^T x(\varepsilon) - 1| \le \varepsilon \sum_{i=1}^t \|\bar B_{ii}^\#\|_1 \|w_i\|_1 \le 2\varepsilon\kappa_B \sum_{i=1}^t \|z_i\|_1 \le 2\varepsilon\kappa_B,$$

so that

$$|\bar c^T (x - x(\varepsilon))| \le 2\varepsilon\kappa_B. \qquad (13)$$

Combining (12) and (13) with equation (4) yields

$$\|x - x(\varepsilon)\|_1 \le \|\bar A^\#\|_1\, \varepsilon \|r\|_1 + |\bar c^T(x - x(\varepsilon))|\, \|x\|_1 \le 4\varepsilon\kappa_B \left( \|\bar A^\#\|_1 + \tfrac{1}{2} \right). \ \Box$$

We will consider the accuracy of the approximation to $x$ rather than $x(\varepsilon)$. That error is simpler to characterize. It is likely that there are slightly better error bounds in terms of $x(\varepsilon)$, but they are harder to understand.

The condition of the vectors $q_1, q_2, \ldots, q_t$ and the vector $x$ are characterized separately. Consider

$$(A + \delta A)\,\tilde p = 0, \qquad c^T \tilde p = 1, \qquad (14)$$

where

$$\delta A = \begin{pmatrix} \delta B_{11} & -\varepsilon\,\delta F_{12} & \cdots & -\varepsilon\,\delta F_{1t} \\ -\varepsilon\,\delta F_{21} & \delta B_{22} & \cdots & -\varepsilon\,\delta F_{2t} \\ \vdots & & \ddots & \vdots \\ -\varepsilon\,\delta F_{t1} & -\varepsilon\,\delta F_{t2} & \cdots & \delta B_{tt} \end{pmatrix}.$$

Here we let

$$\max_{1\le j\le t} \|\delta B_{jj}\|_1 = \delta_B, \qquad (15)$$
$$\max_{1\le j\le t} \sum_{i\ne j} \|\delta F_{ij}\|_1 = \delta_F. \qquad (16)$$

Thus the perturbation is "structured" in the sense that the off-diagonal blocks have perturbations of order $\varepsilon$. We express our main result in this section in two perturbation theorems: one for the directions $q_1, q_2, \ldots, q_t$, and one for $x = (\alpha_1, \alpha_2, \ldots, \alpha_t)^T$.

Theorem 2.1 Let $\tilde p$ be the solution to (14). Assume $\max\{2\varepsilon, \delta_B\}\,\kappa_B \le 1/2$. Then

$$\tilde p = (\tilde\alpha_1 \tilde q_1^T, \tilde\alpha_2 \tilde q_2^T, \ldots, \tilde\alpha_t \tilde q_t^T)^T + v,$$

where

$$\|v\|_1 \le 4\varepsilon\kappa_B (1 + \delta_F)$$

and

$$\|\tilde q_i - q_i\|_1 \le \kappa_B\, \delta_B, \quad i = 1, 2, \ldots, t,$$

for suitable constants $\tilde\alpha_1, \tilde\alpha_2, \ldots, \tilde\alpha_t$.

Proof: From applying the analysis of Lemma 2.2 to $A + \delta A$ we have that $\tilde p = (\tilde p_1^T, \tilde p_2^T, \ldots, \tilde p_t^T)^T$ satisfies

$$\tilde p_i = \tilde\alpha_i \tilde q_i + v_i$$

where

$$\tilde{\bar B}_{ii}\, \tilde q_i = 0, \qquad (17)$$
$$c_i^T \tilde q_i = 1, \qquad (18)$$

with

$$\tilde{\bar B}_{ii} = (B_{ii} + \delta B_{ii}) - \frac{\varepsilon}{m_i}\, c_i \tilde d_i^T, \qquad v_i = \varepsilon\, \tilde{\bar B}_{ii}^\# \tilde w_i,$$

$$\tilde w_i = \tilde z_i - \frac{\|\tilde z_i\|_1}{m_i}\, c_i, \qquad \tilde z_i = \sum_{j\ne i} (F_{ij} + \delta F_{ij})\,\tilde p_j.$$

We now need bounds on $\|\tilde q_i - q_i\|_1$ and $\|v\|_1$. The perturbation preserves the rank of $\tilde{\bar B}_{ii}$, since the results of Lemma 2.1 apply to $\tilde{\bar B}_{ii}$. Thus, from a bound due to Schweitzer [10],

$$\|\tilde{\bar B}_{ii}^\#\|_1 = \|\bar B_{ii}^\# (I + (\tilde{\bar B}_{ii} - \bar B_{ii})\bar B_{ii}^\#)^{-1}\|_1 \le \frac{\|\bar B_{ii}^\#\|_1}{1 - \delta_B \kappa_B} \le 2\|\bar B_{ii}^\#\|_1 \le 2\kappa_B.$$

We can bound $\|v\|_1$, where $v = (v_1^T, v_2^T, \ldots, v_t^T)^T$, from

$$\|v_i\|_1 \le \varepsilon\, \|\tilde{\bar B}_{ii}^\#\|_1 \|\tilde w_i\|_1 \le 2\varepsilon\kappa_B \|\tilde w_i\|_1.$$

Since $\tilde w_i = \tilde z_i - (\|\tilde z_i\|_1/m_i)\, c_i$, we have $\|\tilde w_i\|_1 \le 2\|\tilde z_i\|_1$, thus

$$\|v\|_1 = \sum_{i=1}^t \|v_i\|_1 \le 4\varepsilon\kappa_B \sum_{i=1}^t \|\tilde z_i\|_1.$$

So we have

$$\|v\|_1 \le 4\varepsilon\kappa_B \sum_{i=1}^t \sum_{j\ne i} (\|F_{ij}\|_1 + \|\delta F_{ij}\|_1)\,\|\tilde p_j\|_1 \qquad (19)$$
$$= 4\varepsilon\kappa_B \sum_{j=1}^t \|\tilde p_j\|_1 \Bigl[ \sum_{i\ne j} (\|F_{ij}\|_1 + \|\delta F_{ij}\|_1) \Bigr] \qquad (20)$$
$$\le 4\varepsilon\kappa_B (1 + \delta_F). \qquad (21)$$

To get the bound on $\delta q_i = \tilde q_i - q_i$, we use (17)-(18) to obtain

$$\bar B_{ii}\, \delta q_i = (\bar B_{ii} - \tilde{\bar B}_{ii})\,\tilde q_i, \qquad (22)$$
$$c_i^T \delta q_i = 0. \qquad (23)$$

Since (17)-(18) is consistent, so is (22)-(23), thus

$$\|\delta q_i\|_1 \le \|\bar B_{ii}^\#\|_1 \|\bar B_{ii} - \tilde{\bar B}_{ii}\|_1 \|\tilde q_i\|_1 \le \kappa_B\, \delta_B. \ \Box$$

We now give the perturbation theorem for $x$ from an aggregation step.
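One aggregation/disaggregation pass suggested by Lemmas 2.2-2.3 can be sketched as follows: compute the block directions $q_i$, form the aggregation matrix $\bar A$ of (9) (as reconstructed here), solve $\bar A x = 0$ with $\sum_i x_i = 1$, and assemble $p \approx (x_1 q_1^T, \ldots, x_t q_t^T)^T$. All problem data below (sizes, $\varepsilon$, random blocks) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

t, m, eps = 3, 3, 1e-6
n = t * m
blk = lambda k: slice(m * k, m * (k + 1))
P = rng.random((n, n))
for i in range(t):
    for j in range(t):
        if i != j:
            P[blk(i), blk(j)] *= eps
P /= P.sum(axis=1, keepdims=True)
A = np.eye(n) - P.T
# Scaled coupling blocks: the (i,j) off-diagonal block of A is -eps*F[i,j].
F = {(i, j): -A[blk(i), blk(j)] / eps
     for i in range(t) for j in range(t) if i != j}

# Step 1: asymptotic block directions q_i from the projected blocks.
q = []
for i in range(t):
    B = A[blk(i), blk(i)]
    Bbar = B - np.ones((m, m)) @ B / m
    wv, Vv = np.linalg.eig(Bbar)
    v = np.real(Vv[:, np.argmin(np.abs(wv))])
    q.append(v / v.sum())

# Step 2: aggregation matrix (9): abar_ii = d_i^T q_i, abar_ij = -c^T F_ij q_j.
Abar = np.zeros((t, t))
for i in range(t):
    for j in range(t):
        if i != j:
            Abar[i, j] = -np.sum(F[i, j] @ q[j])
            Abar[i, i] += np.sum(F[j, i] @ q[i])

# Step 3: weights x with Abar x = 0, sum(x) = 1; then disaggregate.
wv, Vv = np.linalg.eig(Abar)
x = np.real(Vv[:, np.argmin(np.abs(wv))])
x /= x.sum()
p_approx = np.concatenate([x[i] * q[i] for i in range(t)])

# Compare with the stationary vector computed from A directly.
wv, Vv = np.linalg.eig(A)
p = np.real(Vv[:, np.argmin(np.abs(wv))])
p /= p.sum()
print(np.linalg.norm(p - p_approx, 1))   # small, consistent with Lemma 2.3
```

The small $t \times t$ solve in step 3 is where the GTH variant of Gaussian elimination would be used in practice.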

Theorem 2.2 Assume the hypothesis and terminology of Theorem 2.1. Let $y$ be the solution of the system

$$\tilde A y = \tilde r, \qquad \|\tilde r\|_1 \le \varepsilon_r, \qquad \bar c^T y = 1,$$

where $\tilde A = (\tilde a_{ij})$ and

$$\tilde a_{ij} = -c_i^T (F_{ij} + \delta F_{ij})\,\tilde s_j, \quad i \ne j, \qquad \tilde s_j = \frac{\tilde p_j}{\|\tilde p_j\|_1},$$

$$\tilde a_{ii} = \tilde d_i^T \tilde s_i, \qquad \tilde d_i = \sum_{j\ne i} (F_{ji} + \delta F_{ji})^T c_j.$$

Then

$$\|y - x\|_1 \le \bigl[ (16\varepsilon + 2\delta_B)\kappa_B + 2\delta_F + \varepsilon_r \bigr]\, \|\bar A^\#\|_1.$$

Proof: We first show that $\tilde A = \bar A + \delta\bar A$, where

$$\|\delta\bar A\|_1 \le (16\varepsilon + 2\delta_B)\kappa_B + 2\delta_F. \qquad (24)$$

The main result comes right out of the perturbation theory. For $i \ne j$, we have

$$|\delta\bar a_{ij}| = |\tilde a_{ij} - \bar a_{ij}| = |c_i^T (F_{ij} + \delta F_{ij})\,\tilde s_j - c_i^T F_{ij} q_j| = |c_i^T \delta F_{ij}\,\tilde s_j + c_i^T F_{ij}(\tilde s_j - q_j)| \le \|\delta F_{ij}\|_1 + \|F_{ij}\|_1 \|\tilde s_j - q_j\|_1,$$

where, by Lemma 2.2 and Theorem 2.1,

$$\|\tilde s_j - q_j\|_1 \le \|\tilde s_j - \tilde q_j\|_1 + \|\tilde q_j - q_j\|_1 \le (8\varepsilon + \delta_B)\kappa_B.$$

Therefore

$$\sum_{i\ne j} |\delta\bar a_{ij}| \le \sum_{i\ne j} \bigl[\, \|\delta F_{ij}\|_1 + (8\varepsilon + \delta_B)\kappa_B \|F_{ij}\|_1 \,\bigr] \le \delta_F + (8\varepsilon + \delta_B)\kappa_B. \qquad (25)$$

For the diagonal elements, we have

$$|\delta\bar a_{ii}| = |\tilde a_{ii} - \bar a_{ii}| = |\tilde d_i^T \tilde s_i - d_i^T q_i|$$

$$\le |(\tilde d_i - d_i)^T \tilde s_i| + |d_i^T(\tilde s_i - q_i)| \le \|\tilde d_i - d_i\|_\infty \|\tilde s_i\|_1 + \|d_i\|_\infty \|\tilde s_i - q_i\|_1 \le \delta_F + (8\varepsilon + \delta_B)\kappa_B. \qquad (26)$$

Thus from (25) and (26) we have (24). Now from

$$\bar A y = \tilde A y - \delta\bar A\, y = \tilde r - \delta\bar A\, y, \qquad \bar c^T y = 1,$$

and our perturbation theory (4), we have

$$\|y - x\|_1 \le \|\bar A^\#\|_1 \bigl[ \|\delta\bar A\|_1 \|y\|_1 + \|\tilde r\|_1 \bigr] \le \bigl[ 2\delta_F + \varepsilon_r + (16\varepsilon + 2\delta_B)\kappa_B \bigr]\, \|\bar A^\#\|_1. \ \Box$$

Thus the condition of $q_1, q_2, \ldots, q_t$ is related solely to the condition of the diagonal blocks, whereas the condition of $x$ appears to be bounded by the product of the condition of the diagonal blocks and that of the aggregate matrix $\bar A$. Note that both conditions are independent of $\varepsilon$. The main purpose of these bounds is to give us a framework for analyzing algorithms. We now turn our attention to the accuracy of an aggregation/disaggregation algorithm.

3 Use of the above results in iterative aggregation/disaggregation methods

Suppose that we have a computed solution $\tilde p$ to (1)-(2) such that

$$A\tilde p = r, \qquad c^T \tilde p = 1, \qquad (27)$$

where

$$\tilde p = (\tilde p_1^T, \tilde p_2^T, \ldots, \tilde p_t^T)^T, \quad \tilde p_i \in \mathbb{R}^{m_i},$$
$$r = (r_1^T, r_2^T, \ldots, r_t^T)^T, \quad r_i \in \mathbb{R}^{m_i}, \qquad (28)$$
$$\|r_i\|_1 \le \delta_B\, \|\tilde p_i\|_1, \quad i = 1, 2, \ldots, t. \qquad (29)$$

Moreover, assume that $y = (\|\tilde p_1\|_1, \|\tilde p_2\|_1, \ldots, \|\tilde p_t\|_1)^T = (\eta_1, \eta_2, \ldots, \eta_t)^T$ satisfies

$$\tilde A y = \tilde r, \qquad \|\tilde r\|_1 \le \varepsilon_r, \qquad (30)$$
$$\bar c^T y = 1. \qquad (31)$$

Here $\tilde A = (\tilde a_{ij})$,

$$\tilde a_{ij} = -c_i^T (F_{ij} + \delta F_{ij})\,\tilde s_j, \quad i \ne j, \qquad \tilde a_{ii} = \tilde d_i^T \tilde s_i, \quad i = 1, 2, \ldots, t,$$

and $\delta F_{ij}$, $i = 1, 2, \ldots, t$, $j \ne i$, satisfies the hypothesis of Theorem 2.1. Then we can use our perturbation theory to insure that there is a "reasonable" solution to (1)-(2).

The stopping criteria (27)-(29) could be used with almost any iterative method that is coupled with an aggregation step, for instance, that in [12, 6], provided that the aggregation conforms with the pattern of "near uncoupling." Note that $\tilde p$ satisfies

$$(A + \delta A)\,\tilde p = 0, \qquad c^T \tilde p = 1,$$

where $\|\delta B_{ii}\|_1 \le \delta_B$. Thus $\tilde p$ satisfies the hypothesis of Theorem 2.1 and $y$ satisfies the hypothesis of Theorem 2.2. Equations (27)-(29) give a simple stopping criterion for an iterative method, whereas (30)-(31) can be achieved by Gaussian elimination on $\tilde A$ [12] or, even better, by the GTH variant of Gaussian elimination [5, 13, 9]. Even when we use iterative methods both for the problem (1)-(2) and for the solution of (9), the above characterization can be used to determine stopping criteria for the iterations for both problems.

Thus, the results in this paper show that iterative aggregation/disaggregation methods will obtain an accurate solution to (1)-(2). The directions $s_1, s_2, \ldots, s_t$ are stable without the aggregation step. The norms $\eta_i = \|\tilde p_i\|_1$, $i = 1, 2, \ldots, t$, are stabilized by an aggregation step.

Acknowledgements

The author had valuable discussions with Pete Stewart and Dan Heyman during the course of this research. The Institute for Mathematics and Its Applications at the University of Minnesota provided the author with a very hospitable and stimulating environment during its special year in Applied Linear Algebra.

References

[1] J.L. Barlow. On the smallest positive singular value of a singular M-matrix with applications to ergodic Markov chains. SIAM J. Alg. Disc. Methods, 7:414-424, 1986.

[2] J.L. Barlow. Error bounds and condition estimates for the computation of null vectors with applications to Markov chains. Technical Report CS-91-2, The Pennsylvania State University, Department of Computer Science, University Park, PA, 1991. To appear, SIAM J. Matrix Anal. Appl.

[3] R.E. Funderlic and C.D. Meyer, Jr. Sensitivity of the stationary distribution for an ergodic Markov chain. Linear Alg. Appl., 76:1-17, 1986.

[4] A.J. Geurts. A contribution to the theory of condition. Numerische Mathematik, 39:85-96, 1982.

[5] W.K. Grassman, M.I. Taksar, and D.P. Heyman. Regenerative analysis and steady state distributions for Markov chains. Operations Research, 33:1107-1116, 1985.

[6] D.F. McAllister, G.W. Stewart, and W.J. Stewart. On a Rayleigh-Ritz refinement technique for nearly uncoupled stochastic matrices. Lin. Alg. Appl., 60:1-25, 1984.

[7] C.D. Meyer, Jr. The role of the group inverse in the theory of finite Markov chains. SIAM Review, 17:443-464, 1975.

[8] C.D. Meyer, Jr. and G.W. Stewart. Derivatives and perturbations of eigenvectors. SIAM J. Numer. Anal., 25:679-691, 1988.

[9] C.A. O'Cinneide. Error analysis of a variant of Gaussian elimination for steady-state distributions of Markov chains. Technical report, Purdue University, West Lafayette, IN, 1992.

[10] P.J. Schweitzer. Perturbation theory and finite Markov chains. Journal of Applied Probability, 5:401-413, 1968.

[11] G.W. Stewart. Error bounds for approximate invariant subspaces of closed linear operators. SIAM J. Numer. Anal., 8:796-808, 1971.

[12] G.W. Stewart, W.J. Stewart, and D.F. McAllister. A two-stage iteration for solving nearly uncoupled Markov chains. Technical Report CSC TR-38, Department of Computer Science, University of Maryland, College Park, MD, April 1984.

[13] G.W. Stewart and G. Zhang. On a direct method for the solution of nearly uncoupled Markov chains. Numerische Mathematik, 59:1-11, 1991.

[14] G. Zhang. On the sensitivity of the solution of nearly uncoupled Markov chains. Technical Report UMIACS TR 9-8, Institute for Advanced Computer Studies, University of Maryland, College Park, MD, February 1991.


More information

A Residual Inverse Power Method

A Residual Inverse Power Method University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR 2007 09 TR 4854 A Residual Inverse Power Method G. W. Stewart February 2007 ABSTRACT The inverse

More information

MTH 5102 Linear Algebra Practice Final Exam April 26, 2016

MTH 5102 Linear Algebra Practice Final Exam April 26, 2016 Name (Last name, First name): MTH 5 Linear Algebra Practice Final Exam April 6, 6 Exam Instructions: You have hours to complete the exam. There are a total of 9 problems. You must show your work and write

More information

Lecture XI. Approximating the Invariant Distribution

Lecture XI. Approximating the Invariant Distribution Lecture XI Approximating the Invariant Distribution Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Invariant Distribution p. 1 /24 SS Equilibrium in the Aiyagari model G.

More information

that of the SVD provides new understanding of left and right generalized singular vectors. It is shown

that of the SVD provides new understanding of left and right generalized singular vectors. It is shown ON A VARIAIONAL FORMULAION OF HE GENERALIZED SINGULAR VALUE DECOMPOSIION MOODY.CHU,ROBER. E. FUNDERLIC y AND GENE H. GOLUB z Abstract. A variational formulation for the generalized singular value decomposition

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Preliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100

Preliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100 Preliminary Linear Algebra 1 Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 100 Notation for all there exists such that therefore because end of proof (QED) Copyright c 2012

More information

Perron Frobenius Theory

Perron Frobenius Theory Perron Frobenius Theory Oskar Perron Georg Frobenius (1880 1975) (1849 1917) Stefan Güttel Perron Frobenius Theory 1 / 10 Positive and Nonnegative Matrices Let A, B R m n. A B if a ij b ij i, j, A > B

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

Math Camp Notes: Linear Algebra II

Math Camp Notes: Linear Algebra II Math Camp Notes: Linear Algebra II Eigenvalues Let A be a square matrix. An eigenvalue is a number λ which when subtracted from the diagonal elements of the matrix A creates a singular matrix. In other

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

University of Minnesota. Michael Neumann. Department of Mathematics. University of Connecticut

University of Minnesota. Michael Neumann. Department of Mathematics. University of Connecticut ONVEXITY AND ONAVITY OF THE PERRON ROOT AND VETOR OF LESLIE MATRIES WITH APPLIATIONS TO A POPULATION MODEL Stephen J Kirkland Institute for Mathematics and its Applications University of Minnesota Minneapolis,

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

a Λ q 1. Introduction

a Λ q 1. Introduction International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Censoring Technique in Studying Block-Structured Markov Chains

Censoring Technique in Studying Block-Structured Markov Chains Censoring Technique in Studying Block-Structured Markov Chains Yiqiang Q. Zhao 1 Abstract: Markov chains with block-structured transition matrices find many applications in various areas. Such Markov chains

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

An exploration of matrix equilibration

An exploration of matrix equilibration An exploration of matrix equilibration Paul Liu Abstract We review three algorithms that scale the innity-norm of each row and column in a matrix to. The rst algorithm applies to unsymmetric matrices,

More information

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University)

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) The authors explain how the NCut algorithm for graph bisection

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract UMIACS-TR-9-86 July 199 CS-TR 2494 Revised January 1991 An Updating Algorithm for Subspace Tracking G. W. Stewart abstract In certain signal processing applications it is required to compute the null space

More information

}, (n 0) be a finite irreducible, discrete time MC. Let S = {1, 2,, m} be its state space. Let P = [p ij. ] be the transition matrix of the MC.

}, (n 0) be a finite irreducible, discrete time MC. Let S = {1, 2,, m} be its state space. Let P = [p ij. ] be the transition matrix of the MC. Abstract Questions are posed regarding the influence that the colun sus of the transition probabilities of a stochastic atrix (with row sus all one) have on the stationary distribution, the ean first passage

More information

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are Numer. Math. 68: 215{223 (1994) Numerische Mathemati c Sringer-Verlag 1994 Electronic Edition Bacward errors for eigenvalue and singular value decomositions? S. Chandrasearan??, I.C.F. Isen??? Deartment

More information

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for

Ole Christensen 3. October 20, Abstract. We point out some connections between the existing theories for Frames and pseudo-inverses. Ole Christensen 3 October 20, 1994 Abstract We point out some connections between the existing theories for frames and pseudo-inverses. In particular, using the pseudo-inverse

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

UNIFYING LEAST SQUARES, TOTAL LEAST SQUARES AND DATA LEAST SQUARES

UNIFYING LEAST SQUARES, TOTAL LEAST SQUARES AND DATA LEAST SQUARES UNIFYING LEAST SQUARES, TOTAL LEAST SQUARES AND DATA LEAST SQUARES Christopher C. Paige School of Computer Science, McGill University, Montreal, Quebec, Canada, H3A 2A7 paige@cs.mcgill.ca Zdeněk Strakoš

More information

Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y

Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y November 4, 994 Abstract The preconditioned conjugate

More information

On the Solution of Constrained and Weighted Linear Least Squares Problems

On the Solution of Constrained and Weighted Linear Least Squares Problems International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued) 1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

More information

Control Systems. Linear Algebra topics. L. Lanari

Control Systems. Linear Algebra topics. L. Lanari Control Systems Linear Algebra topics L Lanari outline basic facts about matrices eigenvalues - eigenvectors - characteristic polynomial - algebraic multiplicity eigenvalues invariance under similarity

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2 HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

More information

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues

More information

Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution

Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution 1/29 Matrix analytic methods Lecture 1: Structured Markov chains and their stationary distribution Sophie Hautphenne and David Stanford (with thanks to Guy Latouche, U. Brussels and Peter Taylor, U. Melbourne

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

A NOTE ON MATRIX REFINEMENT EQUATIONS. Abstract. Renement equations involving matrix masks are receiving a lot of attention these days.

A NOTE ON MATRIX REFINEMENT EQUATIONS. Abstract. Renement equations involving matrix masks are receiving a lot of attention these days. A NOTE ON MATRI REFINEMENT EQUATIONS THOMAS A. HOGAN y Abstract. Renement equations involving matrix masks are receiving a lot of attention these days. They can play a central role in the study of renable

More information

New concepts: rank-nullity theorem Inverse matrix Gauss-Jordan algorithm to nd inverse

New concepts: rank-nullity theorem Inverse matrix Gauss-Jordan algorithm to nd inverse Lesson 10: Rank-nullity theorem, General solution of Ax = b (A 2 R mm ) New concepts: rank-nullity theorem Inverse matrix Gauss-Jordan algorithm to nd inverse Matrix rank. matrix nullity Denition. The

More information

Main matrix factorizations

Main matrix factorizations Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Lecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n.

Lecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n. Lecture # 11 The Power Method for Eigenvalues Part II The power method find the largest (in magnitude) eigenvalue of It makes two assumptions. 1. A is diagonalizable. That is, A R n n. A = XΛX 1 for some

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Dr Gerhard Roth COMP 40A Winter 05 Version Linear algebra Is an important area of mathematics It is the basis of computer vision Is very widely taught, and there are many resources

More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

Computational Methods. Least Squares Approximation/Optimization

Computational Methods. Least Squares Approximation/Optimization Computational Methods Least Squares Approximation/Optimization Manfred Huber 2011 1 Least Squares Least squares methods are aimed at finding approximate solutions when no precise solution exists Find the

More information

Paul Schrimpf. September 10, 2013

Paul Schrimpf. September 10, 2013 Systems of UBC Economics 526 September 10, 2013 More cardinality Let A and B be sets, the Cartesion product of A and B is A B := {(a, b) : a A, b B} Question If A and B are countable is A B countable?

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information