# Invertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices



## Lecture 8: Outline

- Chapter 6 + Appendix D: Location and perturbation of eigenvalues
- Some other results on perturbed eigenvalue problems
- Chapter 8: Nonnegative matrices

KTH Signal Processing, Magnus Jansson/Mats Bengtsson

## Geršgorin circles

Geršgorin's Thm: Let $A = D + B$, where $D = \mathrm{diag}(d_1,\dots,d_n)$ and $B = [b_{ij}] \in M_n$ has zeros on the diagonal. Define

$$r_i(B) = \sum_{j \ne i} |b_{ij}|, \qquad C_i(D,B) = \{ z \in \mathbb{C} : |z - d_i| \le r_i(B) \}.$$

Then all eigenvalues of $A$ are located in

$$\lambda_k(A) \in G(A) = \bigcup_{i=1}^n C_i(D,B).$$

The $C_i(D,B)$ are called Geršgorin circles. If $G(A)$ contains a region of $k$ circles that is disjoint from the rest, then there are exactly $k$ eigenvalues in that region.

## Geršgorin, improvements

Since $A^T$ has the same eigenvalues as $A$, we can do the same but sum over columns instead of rows. We conclude that

$$\lambda_i(A) \in G(A) \cap G(A^T).$$

Since $S^{-1}AS$ has the same eigenvalues as $A$, the above can be improved:

$$\lambda_i(A) \in G(S^{-1}AS) \cap G\big((S^{-1}AS)^T\big)$$

for any choice of $S$. For this to be useful, $S$ should be simple, e.g., diagonal (see Corollary 6.1.6).

## Eigenvalue perturbation results, motivation

We know from a previous lecture that $\rho(A) \le \|A\|$ for any matrix norm. That is, all eigenvalues lie in a circular disk whose radius is upper bounded by any matrix norm. Can we find better results? What can be said about the eigenvalues and eigenvectors of $A + \epsilon B$ when $\epsilon$ is small?
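As a quick sanity check (a NumPy sketch, not part of the original slides; the matrix is an arbitrary example), we can compute the Geršgorin discs of a small matrix and verify that every eigenvalue lies in their union:

```python
import numpy as np

# Verify Geršgorin's theorem numerically: each eigenvalue of A must lie
# in at least one disc |z - d_i| <= r_i, with d_i the diagonal entries
# and r_i the off-diagonal absolute row sums.
rng = np.random.default_rng(0)
A = np.diag([5.0, -3.0, 1.0]) + 0.4 * rng.standard_normal((3, 3))

d = np.diag(A)                                # disc centres
r = np.sum(np.abs(A), axis=1) - np.abs(d)     # radii r_i = sum_{j != i} |a_ij|
eigs = np.linalg.eigvals(A)

# check containment in the union of discs for every eigenvalue
in_union = [any(abs(lam - d[i]) <= r[i] for i in range(len(d)))
            for lam in eigs]
print(in_union)   # expected: [True, True, True]
```

The theorem only guarantees containment in the union of the discs; an individual disc is only guaranteed to hold an eigenvalue when it is disjoint from the others.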

## Invertibility and stability

If $A \in M_n$ is strictly diagonally dominant, i.e.,

$$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \quad \text{for all } i,$$

then

1. $A$ is invertible.
2. If all main diagonal elements are real and positive, then all eigenvalues are in the right half plane.
3. If $A$ is Hermitian with all diagonal elements positive, then all eigenvalues are real and positive.

## Reducible matrices

A matrix $A \in M_n$ is called reducible if $n = 1$ and $A = 0$, or if $n \ge 2$ and there is a permutation matrix $P \in M_n$ such that

$$P^T A P = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix}, \qquad B \in M_r, \; D \in M_{n-r},$$

for some integer $1 \le r \le n-1$. A matrix $A \in M_n$ that is not reducible is called irreducible. A matrix is irreducible iff it describes a strongly connected directed graph; we then say that $A$ has the SC property.

## Irreducibly diagonally dominant

$A \in M_n$ is called irreducibly diagonally dominant if

i) $A$ is irreducible (= $A$ has the SC property),
ii) $A$ is diagonally dominant, $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$ for all $i$,
iii) for at least one row $i$, $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$.

## Invertibility and stability, stronger result

If $A \in M_n$ is irreducibly diagonally dominant, then

1. $A$ is invertible.
2. If all main diagonal elements are real and positive, then all eigenvalues are in the right half plane.
3. If $A$ is Hermitian with all diagonal elements positive, then all eigenvalues are real and positive.
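The first result is easy to illustrate numerically (a NumPy sketch; the matrix below is an arbitrary example with positive diagonal, not from the slides):

```python
import numpy as np

# A strictly diagonally dominant matrix with positive real diagonal:
# invertible, with all eigenvalues in the right half plane.
A = np.array([[4.0, 1.0, -2.0],
              [0.5, 3.0,  1.0],
              [1.0, 1.0,  5.0]])

diag = np.abs(np.diag(A))
off = np.sum(np.abs(A), axis=1) - diag
assert np.all(diag > off)            # strict diagonal dominance, row-wise

eigs = np.linalg.eigvals(A)
rank = np.linalg.matrix_rank(A)
print(rank)                          # 3: A is invertible
print(np.all(eigs.real > 0))         # True: eigenvalues in right half plane
```

This is just Geršgorin again: dominance with positive diagonal places every disc, and hence every eigenvalue, strictly in the open right half plane.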

## Perturbation theorems

Thm: Let $A, E \in M_n$ and let $A$ be diagonalizable, $A = S\Lambda S^{-1}$. Further, let $\hat\lambda$ be an eigenvalue of $A + E$. Then there is some eigenvalue $\lambda_i$ of $A$ such that

$$|\hat\lambda - \lambda_i| \le \|S\|\,\|S^{-1}\|\,\|E\| = \kappa(S)\,\|E\|$$

for some particular matrix norms (e.g., $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$).

Cor: If $A$ is a normal matrix, $S$ is unitary, so $\|S\|_2 = \|S^{-1}\|_2 = 1$. This gives

$$|\hat\lambda - \lambda_i| \le \|E\|_2,$$

indicating that normal matrices are perfectly conditioned for eigenvalue computations.

## Perturbation cont'd

If both $A$ and $E$ are Hermitian, we can use Weyl's theorem (here we assume the eigenvalues are indexed in non-decreasing order):

$$\lambda_1(E) \le \lambda_k(A+E) - \lambda_k(A) \le \lambda_n(E).$$

We also have for this case

$$\left[ \sum_{k=1}^n |\lambda_k(A+E) - \lambda_k(A)|^2 \right]^{1/2} \le \|E\|_2,$$

where $\|\cdot\|_2$ here denotes the Frobenius norm.

## Perturbation of a simple eigenvalue

Let $\lambda$ be a simple eigenvalue of $A \in M_n$ and let $y$ and $x$ be the corresponding left and right eigenvectors. Then $y^* x \ne 0$.

Thm: Let $A(t) \in M_n$ be differentiable at $t = 0$ and assume $\lambda$ is a simple eigenvalue of $A(0)$ with left and right eigenvectors $y$ and $x$. If $\lambda(t)$ is an eigenvalue of $A(t)$ for small $t$ such that $\lambda(0) = \lambda$, then

$$\lambda'(0) = \frac{y^* A'(0)\, x}{y^* x}.$$

Example: $A(t) = A + tE$ gives $\lambda'(0) = \dfrac{y^* E x}{y^* x}$.

## Perturbation of eigenvalues cont'd

Errors in eigenvalues may also be related to the residual $r = A\hat x - \hat\lambda \hat x$. Assume, for example, that $A$ is diagonalizable, $A = S\Lambda S^{-1}$, and let $\hat x$ and $\hat\lambda$ be a given complex vector and scalar, respectively. Then there is some eigenvalue $\lambda_i$ of $A$ such that

$$|\hat\lambda - \lambda_i| \le \kappa(S)\, \frac{\|r\|}{\|\hat x\|}$$

(for details and conditions, see the book). We conclude that a small residual implies a good approximation of the eigenvalue.
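A minimal numerical check of the first theorem (a NumPy sketch, not from the slides; for simplicity $A$ is taken symmetric, so it is normal and $\kappa(S) = 1$ in the 2-norm):

```python
import numpy as np

# Check the bound |λ̂ - λ_i| <= κ(S) ||E||_2 for a normal (here symmetric)
# A, where κ(S) = 1: every eigenvalue of A + E lies within ||E||_2 of
# some eigenvalue of A.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T                          # symmetric => normal
E = 1e-3 * rng.standard_normal((4, 4))

lam = np.linalg.eigvals(A)
lam_hat = np.linalg.eigvals(A + E)

bound = np.linalg.norm(E, 2)         # spectral norm of the perturbation
dist = [min(abs(lh - l) for l in lam) for lh in lam_hat]
print(all(d <= bound for d in dist))   # True
```

Repeating the experiment with a highly non-normal $A$ (large $\kappa(S)$) shows why the condition number of the eigenvector matrix, not just $\|E\|$, governs eigenvalue sensitivity.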

## Literature with perturbation results

- J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons Ltd., 1988, rev. 1999.
- H. Krim and P. Forster. Projections on unstructured subspaces. IEEE Trans. SP, 44(10), Oct.
- J. Moro, J. V. Burke, and M. L. Overton. On the Lidskii–Vishik–Lyusternik perturbation theory for eigenvalues of matrices with arbitrary Jordan structure. SIAM Journal on Matrix Analysis and Applications, 18(4).
- F. Rellich. Perturbation Theory of Eigenvalue Problems. Gordon & Breach.
- J. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press.

## Perturbation of eigenvectors with simple eigenvalues

Thm: Let $A(t) \in M_n$ be differentiable at $t = 0$ and assume $\lambda_0$ is a simple eigenvalue of $A(0)$ with left and right eigenvectors $y_0$ and $x_0$. If $\lambda(t)$ is an eigenvalue of $A(t)$, it has a right eigenvector $x(t)$ for small $t$, normalized such that $x_0^* x(t) = 1$, with derivative

$$x'(0) = \big(\lambda_0 I - A(0)\big)^{\dagger} \left( I - \frac{x_0 y_0^*}{y_0^* x_0} \right) A'(0)\, x_0,$$

where $B^{\dagger}$ denotes the Moore–Penrose pseudoinverse of a matrix $B$. (See, e.g., J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons Ltd., 1988, rev. 1999.)

## Perturbation of eigenvectors with simple eigenvalues: the real symmetric case

Assume that $A \in M_n(\mathbb{R})$ is a real symmetric matrix with normalized eigenvectors $x_i$ and eigenvalues $\lambda_i$. Further assume that $\lambda_1$ is a simple eigenvalue. Let $\hat A = A + \epsilon B$, where $\epsilon$ is a small scalar and $B$ is real symmetric, and let $\hat x_1$ be an eigenvector of $\hat A$ that approaches $x_1$ as $\epsilon \to 0$. Then a first-order approximation (in $\epsilon$) is

$$\hat x_1 - x_1 = \epsilon \sum_{k=2}^{n} \frac{x_k^T B x_1}{\lambda_1 - \lambda_k}\, x_k.$$

Warning: non-unique derivative in the complex-valued case!
Warning: no extension to multiple eigenvalues!

## Chapter 8: Nonnegative matrices

Def: A matrix $A = [a_{ij}] \in M_{n,r}$ is nonnegative if $a_{ij} \ge 0$ for all $i, j$, and we write this as $A \ge 0$. (Note that this should not be confused with the matrix being nonnegative definite!) If $a_{ij} > 0$ for all $i, j$, we say that $A$ is positive and write this as $A > 0$. (We write $A > B$ to mean $A - B > 0$, etc.) We also define $|A| = [\,|a_{ij}|\,]$.

Typical applications where nonnegative or positive matrices occur are problems in which the matrix elements correspond to

- probabilities (e.g., Markov chains),
- power levels or power gain factors (e.g., in power control for wireless systems), or
- any other application where only nonnegative quantities appear.
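The first-order formula for the real symmetric case can be checked numerically. The sketch below (NumPy, with arbitrary random symmetric $A$ and $B$ of my own choosing) compares the perturbed eigenvector from `eigh` against the first-order prediction; the residual should be of second order in $\epsilon$:

```python
import numpy as np

# First-order eigenvector perturbation, real symmetric case:
#   x̂_1 - x_1 ≈ ε Σ_{k>=2} (x_k^T B x_1)/(λ_1 - λ_k) x_k
rng = np.random.default_rng(2)
n, eps = 4, 1e-5
M = rng.standard_normal((n, n)); A = M + M.T
N = rng.standard_normal((n, n)); B = N + N.T

lam, X = np.linalg.eigh(A)           # columns of X: orthonormal eigenvectors
x1, l1 = X[:, 0], lam[0]             # smallest eigenvalue, simple generically

# first-order correction term from the formula above
dx = sum((X[:, k] @ B @ x1) / (l1 - lam[k]) * X[:, k] for k in range(1, n))

lam_hat, X_hat = np.linalg.eigh(A + eps * B)
x1_hat = X_hat[:, 0]
if x1_hat @ x1 < 0:                  # resolve the eigenvector sign ambiguity
    x1_hat = -x1_hat

err = np.linalg.norm(x1_hat - (x1 + eps * dx))
print(err)                           # small: the residual is O(eps**2)
```

Note how the denominators $\lambda_1 - \lambda_k$ make the correction blow up as eigenvalues cluster, which is exactly why the formula does not extend to multiple eigenvalues.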

## Nonnegative matrices: some properties

Let $A, B \in M_n$ and $x \in \mathbb{C}^n$. Then

- $|Ax| \le |A|\,|x|$
- $|AB| \le |A|\,|B|$
- If $A \ge 0$, then $A^m \ge 0$; if $A > 0$, then $A^m > 0$.
- If $A \ge 0$, $x > 0$, and $Ax = 0$, then $A = 0$.
- If $|A| \le |B|$, then $\|A\| \le \|B\|$ for any absolute norm $\|\cdot\|$; that is, a norm for which $\|A\| = \|\,|A|\,\|$.

## Nonnegative matrices: spectral radius

Thm: Let $A \in M_n$ and $A \ge 0$. Then

$$\min_i \sum_{j=1}^n a_{ij} \;\le\; \rho(A) \;\le\; \max_i \sum_{j=1}^n a_{ij}, \qquad \min_j \sum_{i=1}^n a_{ij} \;\le\; \rho(A) \;\le\; \max_j \sum_{i=1}^n a_{ij}.$$

Thm: Let $A \in M_n$ and $A \ge 0$. If $Ax = \lambda x$ and $x > 0$, then $\lambda = \rho(A)$.

Lemma: If $A \in M_n$, $A \ge 0$, and the row sums of $A$ are constant, then $\rho(A) = \|A\|_\infty$. If the column sums are constant, then $\rho(A) = \|A\|_1$.

The following theorem can be used to give upper and lower bounds on the spectral radius of arbitrary matrices.

Thm: Let $A, B \in M_n$. If $|A| \le B$, then $\rho(A) \le \rho(|A|) \le \rho(B)$.

## Positive matrices

For positive matrices we can say a little more.

Perron's theorem: If $A \in M_n$ and $A > 0$, then

1. $\rho(A) > 0$
2. $\rho(A)$ is an eigenvalue of $A$
3. There is an $x \in \mathbb{R}^n$ with $x > 0$ such that $Ax = \rho(A)x$
4. $\rho(A)$ is an algebraically (and geometrically) simple eigenvalue of $A$
5. $|\lambda| < \rho(A)$ for every eigenvalue $\lambda \ne \rho(A)$ of $A$
6. $[A/\rho(A)]^m \to L$ as $m \to \infty$, where $L = xy^T$, $Ax = \rho(A)x$, $y^T A = \rho(A)y^T$, $x > 0$, $y > 0$, and $x^T y = 1$.

The root $\rho(A)$ is sometimes called the Perron root, and the vector $x = [x_i]$ a Perron vector if it is scaled such that $\sum_{i=1}^n x_i = 1$.
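A small NumPy sketch (not from the slides; the positive matrix below is a made-up example) illustrating the row-sum bounds and the positive Perron vector from Perron's theorem:

```python
import numpy as np

# Row-sum bounds on the spectral radius of a nonnegative matrix, and the
# positive Perron eigenvector of a positive matrix.
A = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.0, 1.2],
              [2.0, 0.1, 0.7]])     # A > 0

rho = max(abs(np.linalg.eigvals(A)))
rows = A.sum(axis=1)
print(rows.min() <= rho <= rows.max())   # True

# Perron vector: eigenvector of ρ(A), scaled so its entries sum to 1
lam, V = np.linalg.eig(A)
x = V[:, np.argmax(abs(lam))].real
x = x / x.sum()                      # division also fixes the overall sign
print(np.all(x > 0))                 # True: the Perron vector is positive
print(abs(A @ x - rho * x).max() < 1e-10)   # True: Ax = ρ(A)x
```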

## Nonnegative matrices

Can Perron's theorem be generalized to general nonnegative matrices?

Thm: If $A \in M_n$ and $A \ge 0$, then

1. $\rho(A)$ is an eigenvalue of $A$
2. There is a non-zero $x \in \mathbb{R}^n$ with $x \ge 0$ such that $Ax = \rho(A)x$

For stronger results, we need a stronger assumption on $A$.

## Irreducible matrices

Reminder: A matrix $A \in M_n$, $n \ge 2$, is called reducible if there is a permutation matrix $P \in M_n$ such that

$$P^T A P = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix}, \qquad B \in M_r, \; D \in M_{n-r},$$

for some integer $1 \le r \le n-1$. A matrix $A \in M_n$ that is not reducible is called irreducible.

Thm: A matrix $A \in M_n$ with $A \ge 0$ is irreducible iff $(I + A)^{n-1} > 0$.

Frobenius' theorem: If $A \in M_n$, $A \ge 0$ is irreducible, then

1. $\rho(A) > 0$
2. $\rho(A)$ is an eigenvalue of $A$
3. There is an $x \in \mathbb{R}^n$ with $x > 0$ such that $Ax = \rho(A)x$
4. $\rho(A)$ is an algebraically (and geometrically) simple eigenvalue of $A$
5. If there are exactly $k$ eigenvalues with $|\lambda_p| = \rho(A)$, then (suitably ordered) $\lambda_p = \rho(A)\,e^{i2\pi p/k}$, $p = 0, 1, \dots, k-1$. Moreover:
   - If $\lambda$ is any eigenvalue of $A$, then $\lambda e^{i2\pi p/k}$ is also an eigenvalue of $A$ for all $p = 0, 1, \dots, k-1$.
   - $\mathrm{diag}[A^m] = 0$ for all $m$ that are not multiples of $k$ (e.g., $m = 1$).

## Primitive matrices

A matrix $A \in M_n$, $A \ge 0$, is called primitive if $A$ is irreducible and $\rho(A)$ is the only eigenvalue with $|\lambda| = \rho(A)$.

Thm: If $A \in M_n$, $A \ge 0$ is primitive, then

$$\lim_{m \to \infty} [A/\rho(A)]^m = L,$$

where $L = xy^T$, $Ax = \rho(A)x$, $y^T A = \rho(A)y^T$, $x > 0$, $y > 0$, and $x^T y = 1$.

Thm: If $A \in M_n$, $A \ge 0$, then $A$ is primitive iff $A^m > 0$ for some $m \ge 1$.
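Both characterizations are easy to test numerically. The sketch below (NumPy; the example matrices are my own choice, not from the slides) uses the 3-cycle permutation matrix, which is irreducible but not primitive, and shows that adding a self-loop makes it primitive:

```python
import numpy as np

# Irreducibility test: (I + A)^(n-1) > 0.
# Primitivity test: A^m > 0 for some m >= 1.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)   # directed 3-cycle (permutation)
n = P.shape[0]

irreducible = np.all(np.linalg.matrix_power(np.eye(n) + P, n - 1) > 0)
# powers of a permutation matrix are permutations, never all-positive
primitive = any(np.all(np.linalg.matrix_power(P, m) > 0)
                for m in range(1, (n - 1) ** 2 + 2))   # Wielandt-type bound
print(irreducible, primitive)            # True False

# adding a self-loop (nonzero diagonal) makes the cycle primitive
Q = P + np.eye(n)
primitive_Q = any(np.all(np.linalg.matrix_power(Q, m) > 0)
                  for m in range(1, (n - 1) ** 2 + 2))
print(primitive_Q)                       # True
```

The 3-cycle has the three eigenvalues $e^{i2\pi p/3}$, all of modulus $\rho(P) = 1$, matching item 5 of Frobenius' theorem with $k = 3$.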

## Stochastic matrices

A nonnegative matrix with all its row sums equal to 1 is called a (row) stochastic matrix. A column stochastic matrix is the transpose of a row stochastic matrix. If a matrix is both row and column stochastic, it is called doubly stochastic.

## Stochastic matrices cont'd

The set of stochastic matrices in $M_n$ is a compact convex set. Let $\mathbf{1} = [1, 1, \dots, 1]^T$. A matrix $A$ is stochastic if and only if $A\mathbf{1} = \mathbf{1}$, i.e., $\mathbf{1}$ is an eigenvector with eigenvalue $+1$ of every stochastic matrix.

An example of a doubly stochastic matrix is $A = [\,|u_{ij}|^2\,]$, where $U = [u_{ij}]$ is a unitary matrix. Also, notice that all permutation matrices are doubly stochastic.

Thm: A matrix is doubly stochastic if and only if it can be written as a convex combination of a finite number of permutation matrices.

Cor: The maximum of a convex function on the set of doubly stochastic matrices is attained at a permutation matrix!

## Example: Markov processes

Consider a discrete stochastic process that at each time instant is in one of the states $S_1, \dots, S_n$. Let $p_{ij}$ be the probability of changing from state $S_i$ to state $S_j$. Note that the transition matrix $P = [p_{ij}]$ is a stochastic matrix. Let $\mu_i(t)$ denote the probability of being in state $S_i$ at time $t$ and $\mu(t) = [\mu_1(t), \dots, \mu_n(t)]$; then $\mu(t+1) = \mu(t)P$ contains the corresponding probabilities for time $t+1$.

If $P$ is primitive (other terms are used in the statistics literature), then $\mu(t) \to \bar\mu$ as $t \to \infty$, where $\bar\mu = \bar\mu P$, no matter what $\mu(0)$ is. $\bar\mu$ is called the stationary distribution.

Nice article: "The Perron–Frobenius Theorem: Some of its applications", S. U. Pillai, T. Suel, S. Cha, IEEE Signal Processing Magazine, Mar.

## Further results

In Matrix Theory, vol. II by Gantmacher, for example, you can find results such as:

Thm: If $A \in M_n$, $A \ge 0$ is irreducible, then

$$(\alpha I - A)^{-1} > 0$$

for all $\alpha > \rho(A)$. (Useful, for example, in connection with power control of wireless systems.)


### Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

### Lecture 2: September 8

CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 2: September 8 Lecturer: Prof. Alistair Sinclair Scribes: Anand Bhaskar and Anindya De Disclaimer: These notes have not been

### Lecture 3: graph theory

CONTENTS 1 BASIC NOTIONS Lecture 3: graph theory Sonia Martínez October 15, 2014 Abstract The notion of graph is at the core of cooperative control. Essentially, it allows us to model the interaction topology

### Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

### Z-Pencils. November 20, Abstract

Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

### On the convergence of weighted-average consensus

On the convergence of weighted-average consensus Francisco Pedroche Miguel Rebollo Carlos Carrascosa Alberto Palomares arxiv:307.7562v [math.oc] 29 Jul 203 Abstract In this note we give sufficient conditions

### Linear Algebra Primer

Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

### NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES

NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES MIKE BOYLE. Introduction By a nonnegative matrix we mean a matrix whose entries are nonnegative real numbers. By positive matrix we mean a matrix

### Diagonalization of Matrix

of Matrix King Saud University August 29, 2018 of Matrix Table of contents 1 2 of Matrix Definition If A M n (R) and λ R. We say that λ is an eigenvalue of the matrix A if there is X R n \ {0} such that

### Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

### Ma/CS 6b Class 23: Eigenvalues in Regular Graphs

Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues

### Discrete Math Class 19 ( )

Discrete Math 37110 - Class 19 (2016-12-01) Instructor: László Babai Notes taken by Jacob Burroughs Partially revised by instructor Review: Tuesday, December 6, 3:30-5:20 pm; Ry-276 Final: Thursday, December

### An estimation of the spectral radius of a product of block matrices

Linear Algebra and its Applications 379 (2004) 267 275 wwwelseviercom/locate/laa An estimation of the spectral radius of a product of block matrices Mei-Qin Chen a Xiezhang Li b a Department of Mathematics

### Computational Linear Algebra

Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

### Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

### UCSD ECE269 Handout #18 Prof. Young-Han Kim Monday, March 19, Final Examination (Total: 130 points)

UCSD ECE269 Handout #8 Prof Young-Han Kim Monday, March 9, 208 Final Examination (Total: 30 points) There are 5 problems, each with multiple parts Your answer should be as clear and succinct as possible

### From nonnegative matrices to nonnegative tensors

From nonnegative matrices to nonnegative tensors Shmuel Friedland Univ. Illinois at Chicago 7 October, 2011 Hamilton Institute, National University of Ireland Overview 1 Perron-Frobenius theorem for irreducible

### AN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b

AN ITERATION In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b In this, A is an n n matrix and b R n.systemsof this form arise

### Wavelets and Linear Algebra

Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza

### Up to this point, our main theoretical tools for finding eigenvalues without using det{a λi} = 0 have been the trace and determinant formulas

Finding Eigenvalues Up to this point, our main theoretical tools for finding eigenvalues without using det{a λi} = 0 have been the trace and determinant formulas plus the facts that det{a} = λ λ λ n, Tr{A}