
BLOCK NORMAL MATRICES AND GERSHGORIN-TYPE DISCS

JAKUB KIERZKOWSKI AND ALICJA SMOKTUNOWICZ

(Faculty of Mathematics and Information Science, Warsaw University of Technology, Pl. Politechniki 1, Warsaw, Poland; JKierzkowski@mini.pw.edu.pl, ASmoktunowicz@mini.pw.edu.pl. Received by the editors on July 14, 2010. Accepted for publication on October 5, 2011. Handling Editor: Richard Brualdi.)

Abstract. The block analogues of the theorems on inclusion regions for the eigenvalues of normal matrices are given. By an inclusion region for a given matrix A we mean a region of the complex plane that contains at least one of the eigenvalues of A. Some nonsingularity results for partitioned matrices are also presented.

Key words. Normal matrices, Block matrices, Eigenvalues, Gershgorin's discs, Nonsingular matrices, Hermitian positive definite matrices, Strong square-sum criterion.

AMS subject classifications. 15A12, 15A18.

1. Introduction. The purpose of this paper is to derive new inclusion regions of Gershgorin type for partitioned normal matrices; see Theorems 2.6 and 2.7. Theorem 2.7 improves the results of Meyer and Veselić; see [5, p. 436]. We also give block versions of nonsingularity results; see Theorem 2.9. Theorem 2.9 is an analogue of Varga's Theorem 6.2 for strictly block diagonally dominant matrices; see [11, p. 157]. We review some of the standard facts on the location of eigenvalues of block matrices, which are useful in the error analysis of numerical algorithms and in the study of convergence of block iterative methods for solving linear systems of equations.

The set of all $n$-by-$n$ matrices over $C$ is denoted by $M_n$. We recall that a normal matrix $A \in M_n$ is any matrix satisfying $AA^* = A^*A$, where $A^*$ is the conjugate transpose of $A$. Normal matrices $A$ can be Hermitian ($A^* = A$), skew-Hermitian ($A^* = -A$), or unitary ($A^*A = I_n$), where $I_n$ is the $n \times n$ identity matrix. We consider only the spectral matrix norm and the Euclidean (second) vector norm. Throughout the paper, $\sigma(A)$ denotes the set of all eigenvalues of $A$, $\rho(A) = \max\{|\lambda| : \lambda \in \sigma(A)\}$ is the spectral radius of $A$, and $R(x,A) = x^*Ax$ is the Rayleigh quotient of $x \in C^n$ with $\|x\|_2 = 1$.

First, we review some important facts on the location of eigenvalues of matrices.

Theorem 1.1 (Gershgorin, 1931). The eigenvalues of $A \in M_n$ lie in the union $\bigcup_{i=1}^{n} G_i$ of the $n$ discs

  $G_i = \{ z \in C : |z - a_{ii}| \le r_i(A) \}$,   (1.1)

where $r_i(A)$ denotes the deleted absolute row sums of $A$, i.e.,

  $r_i(A) = \sum_{j \ne i} |a_{ij}|$.   (1.2)

However, some Gershgorin discs may contain no eigenvalues. In the case of normal matrices, such a situation does not occur.

Theorem 1.2 ([7]). Let $A \in M_n$ be normal. Then each of the discs

  $E_i = \{ z \in C : |z - a_{ii}| \le s_i(A) \}$,   (1.3)

where

  $s_i(A) = \sqrt{\sum_{j \ne i} |a_{ij}|^2}$,   (1.4)

must contain an eigenvalue of $A$.

Using the Cauchy-Schwarz inequality, we get

  $s_i(A) \le r_i(A) \le \sqrt{n-1}\, s_i(A)$,   $i = 1,\ldots,n$.   (1.5)

Thus, $E_i \subseteq G_i$ for all $i = 1,\ldots,n$, so the Euclidean discs $E_i$ are smaller than the Gershgorin discs $G_i$. Unfortunately, some eigenvalues of the normal matrix $A$ can be located outside the set $\bigcup_{i=1}^{n} E_i$.

Theorem 1.3. For any $\lambda \in C$ and $n > 2$, there exists a normal matrix $A \in M_n$ such that $\lambda$ is an eigenvalue of $A$ and $|\lambda - a_{ii}| > s_i(A)$ ($i = 1,\ldots,n$).

Proof. Let $A = (\lambda - n)I_n + J$, where $J$ has each entry equal to $1$. It is easily seen that $\sigma(J) = \{0, n\}$. Therefore, $\sigma(A) = \{\lambda, \lambda - n\}$. From this it follows that $|\lambda - a_{ii}| = n - 1 > s_i(A) = \sqrt{n-1}$ for all $i$. This completes the proof.

It is well known that if $A \in M_n$ is strictly diagonally dominant (i.e., $A$ satisfies the strong row-sum criterion: $|a_{ii}| > r_i(A)$ for all $i = 1,\ldots,n$), then $A$ is nonsingular; see, for example, [1]. Some other interesting nonsingularity results were given in [1].
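To make the radii $r_i(A)$ and $s_i(A)$ concrete, the following minimal MATLAB sketch computes both for a small normal (here, real symmetric) matrix and checks that every Euclidean disc $E_i$ contains an eigenvalue, as Theorem 1.2 asserts. The particular matrix is an illustrative choice, not one taken from the paper.

    A = [4 1 0; 1 5 2; 0 2 6];                     % real symmetric, hence normal (chosen for illustration)
    n = size(A,1);
    lam = eig(A);                                  % eigenvalues of A
    r = sum(abs(A),2) - abs(diag(A));              % Gershgorin radii r_i(A), cf. (1.2)
    s = sqrt(sum(abs(A).^2,2) - abs(diag(A)).^2);  % Euclidean radii s_i(A), cf. (1.4)
    for i = 1:n
        d = min(abs(lam - A(i,i)));                % distance from a_ii to the nearest eigenvalue
        fprintf('i=%d: r_i=%.4f, s_i=%.4f, nearest eigenvalue at distance %.4f\n', i, r(i), s(i), d);
    end

For a normal input the printed distances never exceed the corresponding $s_i$, in agreement with Theorem 1.2.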

Now we present a new nonsingularity result involving the radii $s_i(A)$.

Theorem 1.4. If $A \in M_n$ satisfies the strong square-sum criterion, i.e.,

  $\omega(A) = \sum_{i=1}^{n} \frac{s_i^2(A)}{|a_{ii}|^2} < 1$,   (1.6)

then $A$ is nonsingular.

Moreover, if $A$, satisfying the strong square-sum criterion, is Hermitian and all of its diagonal elements $a_{ii}$ are positive, then $A$ is positive definite.

We give a proof of this theorem in Section 2; see Theorem 2.9. It is known that the strong square-sum criterion and the strong row-sum criterion are sufficient conditions for the convergence of the Jacobi method for solving a linear system of equations $Ax = b$; see [4, p. 359]. Notice that these criteria are not equivalent.

Example 1.5. One can choose matrices $A_1$ and $A_2$ such that $A_1$ is not strictly diagonally dominant but satisfies the strong square-sum criterion ($\omega(A_1) < 1$), while $A_2$ does not satisfy the strong square-sum criterion ($\omega(A_2) \ge 1$) and yet is strictly diagonally dominant.

We extend these results to partitioned matrices in Section 2. Some of them can be derived from the following residual theorem; see, e.g., [4, p. 104].

Theorem 1.6. If $A \in M_n$ is a normal matrix with eigenvalues $\lambda_1,\ldots,\lambda_n$, then for any $\alpha \in C$ and any vector $x \in C^n$ with $\|x\|_2 = 1$,

  $\min_{i=1,\ldots,n} |\lambda_i - \alpha| \le \|Ax - \alpha x\|_2$.

To illustrate the theory, we present numerical tests in Section 3.
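Before moving on, here is a small numerical illustration of how the strong square-sum criterion (1.6) can be evaluated in MATLAB; the test matrix below is an assumption chosen for illustration and is neither $A_1$ nor $A_2$ of Example 1.5.

    A = [3 1 1; 1 4 2; 0 1 2];                     % assumed test matrix
    s = sqrt(sum(abs(A).^2,2) - abs(diag(A)).^2);  % Euclidean radii s_i(A)
    omega = sum(s.^2 ./ abs(diag(A)).^2);          % omega(A) from (1.6)
    r = sum(abs(A),2) - abs(diag(A));              % Gershgorin radii r_i(A)
    fprintf('omega(A) = %.4f (strong square-sum criterion holds if < 1)\n', omega);
    fprintf('strictly diagonally dominant: %d\n', all(abs(diag(A)) > r));

Running the same two checks on other matrices shows that neither criterion implies the other, as noted above.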

2. Inclusion theorems for block matrices. Let the matrix $A \in M_n$ be partitioned into $q \times q$ blocks

  $A = [A_{ij}] = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & & \vdots \\ A_{q1} & A_{q2} & \cdots & A_{qq} \end{pmatrix}$,   (2.1)

where $A_{ij} \in C^{n_i \times n_j}$ is the $(i,j)$ block of $A$, $\{n_1,\ldots,n_q\}$ is a given set of positive integers, and $n_1 + \cdots + n_q = n$.

First, we recall very useful estimations of the eigenvalues of partitioned matrices, frequently used in the error analysis of numerical algorithms and in the study of convergence of iterative methods for solving large linear systems of equations.

Theorem 2.1. Assume that $A \in M_n$ is partitioned as in (2.1). If $\lambda \in \sigma(A)$, then

  $|\lambda| \le \rho(\mu(A)) \le \|\mu(A)\|_2$,   (2.2)

where $\mu(A)$ is a matricial norm of $A$, defined as

  $\mu(A) = \begin{pmatrix} \|A_{11}\|_2 & \|A_{12}\|_2 & \cdots & \|A_{1q}\|_2 \\ \|A_{21}\|_2 & \|A_{22}\|_2 & \cdots & \|A_{2q}\|_2 \\ \vdots & \vdots & & \vdots \\ \|A_{q1}\|_2 & \|A_{q2}\|_2 & \cdots & \|A_{qq}\|_2 \end{pmatrix}$.   (2.3)

Theorem 2.1 is a generalization of the Perron-Frobenius inequality and was first proved by Ostrowski; see [2], [6] and [10].
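A minimal MATLAB sketch of the bound (2.2) follows: it forms the matricial norm $\mu(A)$ for an assumed $2 \times 2$ block partition of a small matrix and compares $\rho(A)$ with $\rho(\mu(A))$ and $\|\mu(A)\|_2$. Both the matrix and the block sizes are illustrative assumptions.

    A = [2 1 0 1; 1 3 1 0; 0 1 4 1; 1 0 1 5];      % assumed 4-by-4 test matrix
    nb = [2 2];                                    % assumed block sizes n_1, n_2
    idx = [0 cumsum(nb)]; q = numel(nb);
    mu = zeros(q);
    for i = 1:q
        for j = 1:q
            mu(i,j) = norm(A(idx(i)+1:idx(i+1), idx(j)+1:idx(j+1)));   % ||A_ij||_2
        end
    end
    fprintf('rho(A)=%.4f, rho(mu(A))=%.4f, ||mu(A)||_2=%.4f\n', ...
            max(abs(eig(A))), max(abs(eig(mu))), norm(mu));

The printed values appear in nondecreasing order, as (2.2) predicts.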

If, additionally, the matrix $A$ is normal, one can estimate the real and imaginary parts of the eigenvalues of $A$ in terms of the eigenvalues of the diagonal blocks $A_{kk}$; see also [8], where an interesting analogue of the min-max theorem for normal matrices was presented.

Theorem 2.2. Assume that $A \in M_n$ is normal and partitioned as in (2.1). Let $\sigma(A) = \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$. Then, for any $k \in \{1,2,\ldots,q\}$ and any $\alpha \in \sigma(A_{kk})$, the following inequalities hold:

  $\min_{i=1,\ldots,n} \operatorname{Re} \lambda_i \le \operatorname{Re} \alpha \le \max_{i=1,\ldots,n} \operatorname{Re} \lambda_i$   (2.4)

and

  $\min_{i=1,\ldots,n} \operatorname{Im} \lambda_i \le \operatorname{Im} \alpha \le \max_{i=1,\ldots,n} \operatorname{Im} \lambda_i$.   (2.5)

Proof. By the normality of $A$, there exist a unitary matrix $U$ and a diagonal matrix $D$ such that $A = UDU^*$ with $D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Let $y = U^* x$, where $x \in C^n$ and $\|x\|_2 = 1$. Then $\|y\|_2 = 1$ and the Rayleigh quotient $R(x,A) = x^* A x$ can be written as $R(x,A) = R(y,D) = \sum_{i=1}^{n} \lambda_i |y_i|^2$. From this we obtain

  $\min_{i=1,\ldots,n} \operatorname{Re} \lambda_i \le \operatorname{Re} R(x,A) \le \max_{i=1,\ldots,n} \operatorname{Re} \lambda_i$   (2.6)

and

  $\min_{i=1,\ldots,n} \operatorname{Im} \lambda_i \le \operatorname{Im} R(x,A) \le \max_{i=1,\ldots,n} \operatorname{Im} \lambda_i$.   (2.7)

Assume now that $u$ is an eigenvector of $A_{kk}$ such that $\|u\|_2 = 1$ and $A_{kk} u = \alpha u$. Define $x = [x_1^T, x_2^T, \ldots, x_q^T]^T$, where $x_k \in C^{n_k}$, such that $x_i = 0$ for $i \ne k$ and $x_k = u$. Then $R(x,A) = R(u, A_{kk}) = \alpha$. From this and (2.6)-(2.7), we get (2.4)-(2.5).

Notice that the normality of $A$ in Theorem 2.2 is crucial; see the following easy example.

Example 2.3. There is a (non-normal) $2 \times 2$ matrix $A$ with eigenvalues $\lambda_{1,2} = 2 \pm i\sqrt{3}$ for which (2.4) fails; since both eigenvalues have real part $2$, it suffices that some diagonal entry of $A$ has real part different from $2$. We see that (2.4) is not valid in this case.

Feingold and Varga proved the following Gershgorin-type theorem for partitioned matrices; see [3] and [11].

Theorem 2.4. Assume that $A \in M_n$ is partitioned as in (2.1). If $\lambda \in \sigma(A)$ and $\lambda \notin \sigma(A_{kk})$ for all $k$, then there exists $i \in \{1,\ldots,q\}$ such that

  $\frac{1}{\|(\lambda I - A_{ii})^{-1}\|_2} \le \sum_{j \ne i} \|A_{ij}\|_2$.   (2.8)

However, it is difficult to compute this inclusion region in practice. If all diagonal blocks $A_{kk}$ are normal, then the following theorem holds; see [5].

Corollary 2.5. Assume that $A \in M_n$ is partitioned as in (2.1). Suppose that all diagonal blocks $A_{kk}$ of $A$ are normal. If $\lambda$ is an eigenvalue of $A$, then there exists $i \in \{1,2,\ldots,q\}$ such that

  $\min_{\alpha \in \sigma(A_{ii})} |\lambda - \alpha| \le \sum_{j \ne i} \|A_{ij}\|_2$.   (2.9)
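The inclusion region of Corollary 2.5 is easy to evaluate numerically when the diagonal blocks are normal. The MATLAB sketch below verifies (2.9) for every eigenvalue of an assumed block-partitioned matrix with symmetric (hence normal) diagonal blocks; the matrix and partition are illustrative assumptions only.

    A = [5 1 0.2 0; 1 6 0 0.3; 0.2 0 2 0.5; 0 0.3 0.5 3];   % assumed symmetric test matrix
    nb = [2 2]; idx = [0 cumsum(nb)]; q = numel(nb);
    lam = eig(A);
    for t = 1:numel(lam)
        ok = false;
        for i = 1:q
            Ii = idx(i)+1:idx(i+1);
            radius = 0;
            for j = [1:i-1, i+1:q]
                radius = radius + norm(A(Ii, idx(j)+1:idx(j+1)));      % sum of ||A_ij||_2 over j ~= i
            end
            d = min(abs(lam(t) - eig(A(Ii,Ii))));                      % distance from lambda to sigma(A_ii)
            ok = ok || (d <= radius + 1e-12);
        end
        fprintf('lambda=%.4f satisfies (2.9) for some i: %d\n', lam(t), ok);
    end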

In [5], Meyer and Veselić obtained the following generalization of Theorem 1.2.

Theorem 2.6. Assume that a normal matrix $A \in M_n$ is partitioned as in (2.1). Let

  $\hat{S}_i(A) = \sqrt{\sum_{j \ne i} \|A_{ji}\|_2^2}$   ($i = 1,\ldots,q$).   (2.10)

Then each disc of the form $|z - \alpha| \le \hat{S}_i(A)$, where $\alpha$ is an eigenvalue of $A_{ii}$, contains at least one eigenvalue of $A$.

Now we show that in (2.10) one can replace $\hat{S}_i(A)$ by the smaller radii $S_i(A)$.

Theorem 2.7. Assume that a normal matrix $A \in M_n$ is partitioned as in (2.1). Let

  $S_i(A) = \|[A_{1i}^T, A_{2i}^T, \ldots, A_{i-1,i}^T, A_{i+1,i}^T, \ldots, A_{qi}^T]^T\|_2$   ($i = 1,\ldots,q$).   (2.11)

Then each disc of the form $|z - \alpha| \le S_i(A)$, where $\alpha$ is an eigenvalue of $A_{ii}$, contains at least one eigenvalue of $A$.

Proof. Let $\alpha$ be an eigenvalue of $A_{ii}$. Then there exists a vector $u \in C^{n_i}$ such that $A_{ii} u = \alpha u$ and $\|u\|_2 = 1$. Define $x = [x_1^T, x_2^T, \ldots, x_q^T]^T$, where $x_k \in C^{n_k}$, such that $x_k = 0$ for $k \ne i$ and $x_i = u$. Then

  $Ax - \alpha x = \begin{pmatrix} A_{1i} u \\ \vdots \\ A_{i-1,i} u \\ A_{ii} u - \alpha u \\ A_{i+1,i} u \\ \vdots \\ A_{qi} u \end{pmatrix} = \begin{pmatrix} A_{1i} \\ \vdots \\ A_{i-1,i} \\ 0 \\ A_{i+1,i} \\ \vdots \\ A_{qi} \end{pmatrix} u$.

Taking norms, we get the desired inequality. We have

  $\|Ax - \alpha x\|_2 \le \|[A_{1i}^T, A_{2i}^T, \ldots, A_{i-1,i}^T, 0, A_{i+1,i}^T, \ldots, A_{qi}^T]^T\|_2 \, \|u\|_2$.

Thus,

  $\|Ax - \alpha x\|_2 \le S_i(A) \, \|u\|_2 = S_i(A)$.

From this and Theorem 1.6, we conclude that

  $\min_{i=1,\ldots,n} |\lambda_i - \alpha| \le \|Ax - \alpha x\|_2 \le S_i(A)$.

This finishes the proof.

Notice that by Ostrowski's theorem we have $S_i(A) \le \hat{S}_i(A)$ for all $i = 1,\ldots,q$.
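To illustrate Theorem 2.7 and the comparison $S_i(A) \le \hat{S}_i(A)$, the following MATLAB sketch computes both radii for an assumed normal (symmetric) matrix with an assumed $2 \times 2$ block partition and checks that every disc centred at an eigenvalue of a diagonal block contains an eigenvalue of $A$.

    A = [4 1 0.5 0; 1 5 0 0.5; 0.5 0 7 1; 0 0.5 1 8];   % symmetric, hence normal (assumed test matrix)
    nb = [2 2]; idx = [0 cumsum(nb)]; q = numel(nb);
    lam = eig(A);
    for i = 1:q
        Ii = idx(i)+1:idx(i+1);
        B = A(:, Ii);  B(Ii, :) = [];                    % i-th block column with A_ii deleted
        Si = norm(B);                                    % S_i(A), cf. (2.11)
        Sihat = 0;
        for j = [1:i-1, i+1:q]
            Sihat = Sihat + norm(A(idx(j)+1:idx(j+1), Ii))^2;
        end
        Sihat = sqrt(Sihat);                             % hat S_i(A), cf. (2.10)
        for alpha = eig(A(Ii,Ii)).'
            fprintf('i=%d: S_i=%.4f <= hatS_i=%.4f, dist(alpha,sigma(A))=%.4f\n', ...
                    i, Si, Sihat, min(abs(lam - alpha)));
        end
    end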

Now we give a block analogue of the inequality involving diagonal elements and eigenvalues of $A$; see [9] for the unpartitioned case.

Theorem 2.8. Assume that $A \in M_n$ is partitioned as in (2.1) and define

  $R_i(A) = \sqrt{\sum_{j \ne i} \|A_{ij}\|_2^2}$   ($i = 1,\ldots,q$).   (2.12)

Then each eigenvalue $\lambda$ of $A$ such that $\lambda \notin \sigma(A_{ii})$ for all $i = 1,\ldots,q$ satisfies

  $\sum_{i=1}^{q} \|(\lambda I_{n_i} - A_{ii})^{-1}\|_2^2 \, R_i^2(A) \ge \frac{q}{q-1}$.   (2.13)

Proof. Let $\lambda$ be an eigenvalue of $A$, and suppose $Ax = \lambda x$ with $\|x\|_2 = 1$. Let $x = [x_1^T, x_2^T, \ldots, x_q^T]^T$, where $x_i \in C^{n_i}$. We have

  $\sum_{j=1}^{q} A_{ij} x_j = \lambda x_i$,   $i = 1,\ldots,q$,

and thus,

  $(\lambda I_{n_i} - A_{ii}) x_i = \sum_{j \ne i} A_{ij} x_j$,   $i = 1,\ldots,q$.

Since $\lambda \notin \sigma(A_{ii})$, all matrices $\lambda I_{n_i} - A_{ii}$ are nonsingular. From this, it follows that

  $x_i = (\lambda I_{n_i} - A_{ii})^{-1} \sum_{j \ne i} A_{ij} x_j$,   $i = 1,\ldots,q$.

Taking norms, we obtain

  $\|x_i\|_2 \le \|(\lambda I_{n_i} - A_{ii})^{-1}\|_2 \sum_{j \ne i} \|A_{ij}\|_2 \|x_j\|_2$.   (2.14)

By the Cauchy-Schwarz inequality, we have

  $\Big( \sum_{j \ne i} \|A_{ij}\|_2 \|x_j\|_2 \Big)^2 \le \sum_{j \ne i} \|A_{ij}\|_2^2 \sum_{j \ne i} \|x_j\|_2^2 = R_i^2(A) \sum_{j \ne i} \|x_j\|_2^2$,

so we can rewrite (2.14) as follows:

  $\|x_i\|_2^2 \le \|(\lambda I_{n_i} - A_{ii})^{-1}\|_2^2 \, R_i^2(A) \sum_{j \ne i} \|x_j\|_2^2$,   $i = 1,\ldots,q$.   (2.15)

For simplicity of notation, let $b_i$ stand for

  $b_i = \sum_{j \ne i} \|x_j\|_2^2$.   (2.16)

Since $1 = \|x\|_2^2 = \|x_1\|_2^2 + \cdots + \|x_q\|_2^2$, the quantities we have just defined satisfy

  $b_i = 1 - \|x_i\|_2^2$   (2.17)

and

  $\sum_{i=1}^{q} b_i = \sum_{i=1}^{q} (1 - \|x_i\|_2^2) = q - \sum_{i=1}^{q} \|x_i\|_2^2 = q - 1$.   (2.18)

Thus, we can rewrite (2.15) as follows:

  $\|x_i\|_2^2 \le \|(\lambda I_{n_i} - A_{ii})^{-1}\|_2^2 \, R_i^2(A) \, b_i$,   $i = 1,\ldots,q$.   (2.19)

Notice that by the assumptions of our theorem, all $b_i$ are positive. From this and (2.17)-(2.19), we get

  $\sum_{i=1}^{q} \|(\lambda I_{n_i} - A_{ii})^{-1}\|_2^2 \, R_i^2(A) \ge \sum_{i=1}^{q} \frac{\|x_i\|_2^2}{b_i} = \sum_{i=1}^{q} \frac{1 - b_i}{b_i} = \sum_{i=1}^{q} \frac{1}{b_i} - q$.   (2.20)

By an application of the harmonic-arithmetic mean inequality and from (2.18), we obtain

  $\sum_{i=1}^{q} \frac{1}{b_i} \ge \frac{q^2}{\sum_{i=1}^{q} b_i} = \frac{q^2}{q-1}$.

This together with (2.20) gives the desired inequality (2.13).

As a corollary we obtain the following generalization of Theorem 1.4.

Theorem 2.9. If $A \in M_n$ satisfies the strong block square-sum criterion, i.e.,

  $\sum_{i=1}^{q} \|A_{ii}^{-1}\|_2^2 \, R_i^2(A) < 1$,   (2.21)

then $A$ is nonsingular.

Moreover, if $A$ is Hermitian, satisfies the strong block square-sum criterion, and all diagonal blocks $A_{ii}$ are positive definite, then $A$ is positive definite.

Proof. Let us first prove the nonsingularity of $A$. On the contrary, suppose that $A$ is singular. Then $\lambda = 0$ is an eigenvalue of $A$ (note that (2.21) presupposes that all $A_{ii}$ are nonsingular, so $0 \notin \sigma(A_{ii})$). From Theorem 2.8, taking $\lambda = 0$ in (2.13) and using (2.21), we deduce that $q/(q-1) < 1$, which is impossible.

Now assume that $A$ is Hermitian, satisfies the strong block square-sum criterion, and all diagonal blocks $A_{ii}$ are positive definite. Since $A$ is Hermitian, all its eigenvalues are real. It is sufficient to prove that they are positive. Conversely, suppose that there exists a negative eigenvalue $\hat{\lambda}$ of $A$. By assumption, all matrices $A_{ii}$ are Hermitian and positive definite, so $\hat{\lambda} \notin \sigma(A_{ii})$. Since $\hat{\lambda}$ is negative, we conclude that

  $\|(\hat{\lambda} I_{n_i} - A_{ii})^{-1}\|_2 < \|A_{ii}^{-1}\|_2$,   $i = 1,\ldots,q$.

By Theorem 2.8, we get

  $\sum_{i=1}^{q} \|A_{ii}^{-1}\|_2^2 \, R_i^2(A) \ge \sum_{i=1}^{q} \|(\hat{\lambda} I_{n_i} - A_{ii})^{-1}\|_2^2 \, R_i^2(A) \ge \frac{q}{q-1}$,

but this contradicts our assumption (2.21). This finishes the proof.

Remark 2.10. Theorem 2.9 is an analogue of Varga's Theorem 6.2 for strictly block diagonally dominant matrices; see [11, p. 157]. It is clear that, using $A^T$ instead of $A$ in the above theorems, a new set of radii can be obtained. It is easy to prove, by Ostrowski's theorem, that the strong block square-sum criterion (2.21) is a sufficient condition for the convergence of the block Jacobi method for solving a linear system of equations $Ax = b$.
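A small MATLAB sketch of the strong block square-sum criterion (2.21) follows; the block matrix and partition sizes are assumptions chosen only to show how the left-hand side of (2.21) is formed.

    A = [6 1 0.5 0; 1 7 0 0.5; 0.5 0 8 1; 0 0.5 1 9];   % assumed Hermitian test matrix
    nb = [2 2]; idx = [0 cumsum(nb)]; q = numel(nb);
    lhs = 0;
    for i = 1:q
        Ii = idx(i)+1:idx(i+1);
        Ri2 = 0;
        for j = [1:i-1, i+1:q]
            Ri2 = Ri2 + norm(A(Ii, idx(j)+1:idx(j+1)))^2;    % R_i^2(A), cf. (2.12)
        end
        lhs = lhs + norm(inv(A(Ii,Ii)))^2 * Ri2;             % ||A_ii^{-1}||_2^2 R_i^2(A)
    end
    fprintf('left-hand side of (2.21) = %.4f (criterion holds if < 1)\n', lhs);
    fprintf('smallest eigenvalue of A = %.4f\n', min(eig(A)));

For this Hermitian example with positive definite diagonal blocks, a value below 1 is consistent with the positive definiteness of $A$ asserted by Theorem 2.9.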

3. Computational examples. Numerical tests were performed in MATLAB (R13) with unit roundoff $\epsilon$ in IEEE double precision. The eigenvalues of $A$ were plotted as crosses.

Example 3.1. Let $A$ be a normal $7 \times 7$ test matrix with $\sigma(A) \approx \{3.747,\ 1.224,\ 3.029,\ 7.295,\ 10.100,\ 10.120,\ 11.390\}$.

Table 3.1: Results for the computed radii $r_i(A)$ and $s_i(A)$ for the matrix $A$.

Picture 1: Gershgorin's discs $G_i$ (on the left) and the Euclidean discs $E_i$ (on the right). Notice that some eigenvalues of $A$ are outside the set $\bigcup_{i=1}^{n} E_i$.

Example 3.2. We generated random matrices with entries from the distribution N(0,1) using MATLAB's function randn. We use the following MATLAB code to produce the matrix $A$ ($20 \times 20$), which is unitary in floating point arithmetic:

    randn('state',0)
    n=20; A=orth(randn(n)+i*randn(n));

Picture 2: Gershgorin's discs $G_i$ (on the left) and the Euclidean discs $E_i$ (on the right).
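The pictures themselves are not reproduced here; the following MATLAB sketch shows one way such disc plots can be generated for the unitary matrix of Example 3.2. The plotting details are assumptions, not the authors' original script.

    randn('state',0);
    n = 20; A = orth(randn(n) + i*randn(n));         % the matrix of Example 3.2
    lam = eig(A);
    r = sum(abs(A),2) - abs(diag(A));                % Gershgorin radii r_i(A)
    s = sqrt(sum(abs(A).^2,2) - abs(diag(A)).^2);    % Euclidean radii s_i(A)
    t = linspace(0, 2*pi, 200);
    subplot(1,2,1); hold on; axis equal;
    for k = 1:n
        plot(real(A(k,k)) + r(k)*cos(t), imag(A(k,k)) + r(k)*sin(t));   % disc G_k
    end
    plot(real(lam), imag(lam), 'x'); title('Gershgorin discs G_i');
    subplot(1,2,2); hold on; axis equal;
    for k = 1:n
        plot(real(A(k,k)) + s(k)*cos(t), imag(A(k,k)) + s(k)*sin(t));   % disc E_k
    end
    plot(real(lam), imag(lam), 'x'); title('Euclidean discs E_i');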

REFERENCES

[1] L. Cvetković, V. Kostić, and R. Varga. A new Geršgorin-type eigenvalue inclusion set. Electron. Trans. Numer. Anal., 18:73-80, 2004.
[2] E. Deutsch. Matricial norms. Numer. Math., 16:73-84, 1970.
[3] D.G. Feingold and R.S. Varga. Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem. Pacific J. Math., 12, 1962.
[4] G. Hämmerlin and K.-H. Hoffmann. Numerical Mathematics. Springer-Verlag, 1991.
[5] D. Meyer and K. Veselić. On some new inclusion theorems for the eigenvalues of partitioned matrices. Numer. Math., 34, 1980.
[6] A.M. Ostrowski. On some metrical properties of operator matrices and matrices partitioned into blocks. J. Math. Anal. Appl., 2, 1961.
[7] V. Pták and J. Zemánek. Continuité Lipschitzienne du spectre comme fonction d'un opérateur normal. Commentationes Mathematicae Universitatis Carolinae, 17(3), 1976.
[8] J.F. Queiró and A.L. Duarte. Imbedding conditions for normal matrices. Linear Algebra Appl., 430, 2009.
[9] A. Smoktunowicz. An inequality involving diagonal elements and eigenvalues. Solution, IMAGE: The Bulletin of the International Linear Algebra Society, 24:11-13, 2000.
[10] A. Smoktunowicz. Block matrices and symmetric perturbations. Linear Algebra Appl., 429, 2008.
[11] R.S. Varga. Geršgorin and His Circles. Springer-Verlag, Berlin, 2004.
