Journal of Computational and Applied Mathematics 236 (2012) 3218–3227

Contents lists available at SciVerse ScienceDirect. Journal of Computational and Applied Mathematics. Journal homepage: www.elsevier.com/locate/cam

A sharp version of Bauer–Fike's theorem

Xinghua Shi a, Yimin Wei b,c

a Institute of Mathematics, School of Mathematical Sciences, Fudan University, Shanghai 200433, PR China
b School of Mathematical Sciences, Fudan University, Shanghai 200433, PR China
c Shanghai Key Laboratory of Contemporary Applied Mathematics, Fudan University, Shanghai 200433, PR China

Article history: Received 18 June 2011; received in revised form 9 November 2011.

Dedicated to Prof. Zhi-hao Cao on the occasion of his 75th birthday.

Abstract. In this paper, we present a sharp version of Bauer–Fike's theorem. We replace the matrix norm with the spectral radius or the sign-complex spectral radius for diagonalizable matrices, and with the 1-norm and the $\infty$-norm for non-diagonalizable matrices. We also give applications to the pole placement problem and to the singular system. © 2012 Elsevier B.V. All rights reserved.

MSC: 15A09; 65F20

Keywords: Bauer–Fike's theorem; sign-real spectral radius; sign-complex spectral radius; diagonalizable; regular matrix pair; single-input pole placement

1. Introduction

In this paper, we consider matrix norms on the algebra $\mathbb{C}^{n\times n}$ of complex $n\times n$ matrices with the (multiplicative) unit $I$ (the identity matrix), which satisfy

$$\|D\| = \max_{1\le i\le n}|d_i|$$

for all diagonal matrices $D=\mathrm{diag}(d_1,d_2,\dots,d_n)\in\mathbb{C}^{n\times n}$. We are interested in spectral perturbation bounds for diagonalizable and non-diagonalizable elements of $\mathbb{C}^{n\times n}$. The set of all complex eigenvalues of a matrix $A\in\mathbb{C}^{n\times n}$, also known as the spectrum of $A$, is denoted by $\sigma(A)$, and we denote the spectral radius of $A$ by $\rho(A)$. For any positive integer $n$, we denote the set $\{1,2,\dots,n\}$ by $\langle n\rangle$. The set of all real numbers is denoted by $\mathbb{R}$. In the linear spaces $\mathbb{R}^n$ and $\mathbb{C}^n$, the zero vector is denoted by $o$.
Corresponding author. E-mail addresses: 1111831@fudan.edu.cn (X. Shi), ymwei@fudan.edu.cn, yi.wei@gmail.com (Y. Wei).

0377-0427/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2012.02.021

We denote the set of all $n\times n$ matrices with real entries by $\mathbb{R}^{n\times n}$. For any $A=(a_{ij})\in\mathbb{C}^{m\times n}$, we denote the matrix $(|a_{ij}|)$ by $|A|$. If $A=(a_{ij}),B=(b_{ij})\in\mathbb{C}^{m\times n}$ and $a_{ij}\le b_{ij}$ for all $i\in\langle m\rangle$ and $j\in\langle n\rangle$, we write $A\le B$.

An interesting classical problem in perturbation theory is to investigate the relationship between the spectra of a matrix $A\in\mathbb{C}^{n\times n}$ and of a perturbation $A+E$. When $A$ is diagonalizable with $X^{-1}AX$ diagonal, Bauer and Fike [1, Theorem IIIa]
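The componentwise notation just introduced, and the monotonicity of the spectral radius under it (used repeatedly in Section 2), can be made concrete in a few lines. A minimal numpy sketch (the variable and helper names are ours, not the paper's):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, -4.0]])
B = np.array([[1.5, 2.0], [3.0, 5.0]])

absA = np.abs(A)                  # |A| = (|a_ij|), the entrywise absolute value
assert np.all(absA <= B)          # the partial order |A| <= B is entrywise

# For nonnegative matrices the spectral radius respects this order:
rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
assert rho(A) <= rho(absA) + 1e-12
assert rho(absA) <= rho(B) + 1e-12
```

The chain $\rho(A)\le\rho(|A|)\le\rho(B)$ illustrated here is exactly the Perron–Frobenius monotonicity behind Lemma 2.3 below.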

present an upper bound for the distance between a point $\mu\in\sigma(A+E)$ and the spectrum $\sigma(A)$:

$$\min_{\lambda\in\sigma(A)}|\mu-\lambda| \le \kappa(X)\,\|E\|, \qquad (1.1)$$

where $\kappa(X)=\|X\|\,\|X^{-1}\|$ is the condition number of the matrix $X$. Other upper bounds for $|\mu-\lambda|$ are developed by Deif [2,3] and by Golub and Van Loan [4] (see Lemmas 2.1 and 2.2). When $A$ is both nonsingular and diagonalizable and some $\lambda\in\sigma(A)$ is small, the estimate of the absolute error $|\mu-\lambda|$ by the upper bound in (1.1) does not provide satisfactory results; instead, the relative error $|\mu-\lambda|/|\lambda|$ is estimated. Eisenstat and Ipsen [5, Corollary 2.2] display the upper bound, with $X^{-1}AX$ diagonal,

$$\min_{\lambda\in\sigma(A)}\frac{|\mu-\lambda|}{|\lambda|} \le \kappa(X)\,\|A^{-1}E\|$$

for the relative error. For more related results on this topic, see [6–11].

We also consider the generalized eigenvalue problem $Ax=\lambda Bx$. If $\det(A-\lambda B)\not\equiv 0$ as a function of $\lambda\in\mathbb{C}$, where $A,B\in\mathbb{C}^{n\times n}$, then the matrix pair $\{A,B\}$ is called regular [6,11]. Stewart [12] investigates an eigenvalue $\lambda=\alpha/\beta$ of the generalized eigenvalue problem $\beta Ax=\alpha Bx$, $(\alpha,\beta)\neq(0,0)$, as a point in the projective complex plane

$$G=\{(\alpha,\beta)\neq(0,0):\ \alpha,\beta\in\mathbb{C}\},$$

and measures the distance between two points $\lambda=\alpha/\beta$ and $\tilde\lambda=\tilde\alpha/\tilde\beta$ in the chordal metric

$$\rho((\alpha,\beta),(\tilde\alpha,\tilde\beta)) = \frac{|\alpha\tilde\beta-\beta\tilde\alpha|}{\sqrt{|\alpha|^2+|\beta|^2}\,\sqrt{|\tilde\alpha|^2+|\tilde\beta|^2}}, \qquad (1.2)$$

which, for finite $\lambda$ and $\tilde\lambda$, reduces to

$$\rho(\lambda,\tilde\lambda) = \frac{|\lambda-\tilde\lambda|}{\sqrt{1+|\lambda|^2}\,\sqrt{1+|\tilde\lambda|^2}}. \qquad (1.3)$$

If $\{A,B\}$ and $\{C,D\}$ are regular matrix pairs, we measure their distance by $d_2(Z,W)$, the gap between the corresponding subspaces, where $Z=(A,B)$ and $W=(C,D)$. Here the gap is defined in the usual way as the norm of the difference of two orthogonal projectors, which are given by [11]

$$P_Z = Z^*(ZZ^*)^{-1}Z, \qquad P_W = W^*(WW^*)^{-1}W,$$

so that we have the metric [11] $d_2(Z,W)=\|P_Z-P_W\|_2$. For two regular matrix pairs $\{A,B\}$ and $\{C,D\}$ with eigenvalues $(\alpha_i,\beta_i)$ and $(\gamma_i,\delta_i)$, respectively, and $Z=(A,B)$, $W=(C,D)$, the generalized spectral variation of $W$ with respect to $Z$ is defined by [11]

$$S_Z(W) = \max_{j\in\langle n\rangle}\min_{i\in\langle n\rangle}\rho((\alpha_i,\beta_i),(\gamma_j,\delta_j)).$$
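The chordal metric (1.2) is easy to evaluate directly; a minimal numpy sketch (the function name is ours) that also covers the infinite eigenvalue $\beta=0$:

```python
import numpy as np

def chordal(alpha, beta, gamma, delta):
    """Chordal distance (1.2) between eigenvalues given as projective pairs."""
    return abs(alpha * delta - beta * gamma) / (
        np.hypot(abs(alpha), abs(beta)) * np.hypot(abs(gamma), abs(delta)))

# For finite eigenvalues lam = alpha/beta and mu = gamma/delta this matches (1.3):
lam, mu = 2.0, 3.0
assert np.isclose(chordal(lam, 1.0, mu, 1.0),
                  abs(lam - mu) / np.sqrt((1 + lam**2) * (1 + mu**2)))

# Unlike |lam - mu|, the pair form stays finite at the infinite eigenvalue:
assert np.isclose(chordal(1.0, 0.0, 1.0, 1.0), 1 / np.sqrt(2))
```

This boundedness (every chordal distance lies in $[0,1]$) is what makes the metric convenient for regular matrix pairs.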
Elsner and Sun [13] extend the classical Bauer–Fike theorem to the generalized eigenvalue problem $\beta Ax=\alpha Bx$ ($A,B\in\mathbb{C}^{n\times n}$, $(\alpha,\beta)\neq(0,0)$).

Proposition 1.1 ([13, Theorem 2.1]). Let $\{A,B\}$ be a diagonalizable regular matrix pair and $Z=(A,B)$, with $A=P\,\mathrm{diag}(\alpha_1,\alpha_2,\dots,\alpha_n)Q$ and $B=P\,\mathrm{diag}(\beta_1,\beta_2,\dots,\beta_n)Q$ ($P$ and $Q$ invertible). Let $\{C,D\}$ be a regular pair and $W=(C,D)$. Then

$$S_Z(W) \le \|Q^{-1}\|_2\,\|Q\|_2\, d_2(Z,W).$$

For a generalization of the above result to regular matrix pairs in the $p$-norm ($1\le p\le\infty$), we refer to Li [10,14].

In this paper, we investigate sharp versions of Bauer–Fike's theorem. We replace the matrix norm with the spectral radius or the sign-complex spectral radius for diagonalizable matrices in Section 2, and with the 1-norm and $\infty$-norm for non-diagonalizable matrices in Section 4. We present an example for diagonalizable matrices in Section 2 and conclude with remarks in Section 6. We also improve the perturbation bound of the generalized eigenvalue problem by the sign-complex spectral radius in Section 3, and present applications to the perturbation bound of the pole placement problem and to the stability of the singular system in Section 5.

2. Diagonalizable matrix

First we recall two important lemmas on the absolute error of eigenvalues.

Lemma 2.1 (Bauer–Fike [15, Theorem 6.3.2]). Let $A\in\mathbb{C}^{n\times n}$ be diagonalizable with $A=X\Lambda X^{-1}$, $\Lambda=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$, and $\kappa(X)=\|X\|\,\|X^{-1}\|$. Suppose that $E\in\mathbb{C}^{n\times n}$, and let $\|\cdot\|$ be a matrix norm which satisfies $\|\mathrm{diag}(d_i)\|=\max_i|d_i|$. If $\mu\in\sigma(A+E)$, then

$$\min_{i\in\langle n\rangle}|\mu-\lambda_i| \le \kappa(X)\,\|E\|.$$

A slightly different version is developed by Deif [2,3] and by Golub and Van Loan [4, page 328, Problem 7.2.8], respectively.
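Lemma 2.1 can be exercised numerically. The sketch below (numpy, with our own variable names and a randomly generated diagonalizable $A$) compares the worst eigenvalue drift of $A+E$ against the Bauer–Fike bound $\kappa(X)\|E\|_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n))               # eigenvector matrix
lam = rng.standard_normal(n)                  # real spectrum, for simplicity
A = X @ np.diag(lam) @ np.linalg.inv(X)       # diagonalizable A = X Lam X^{-1}
E = 1e-6 * rng.standard_normal((n, n))        # small perturbation

kappa = np.linalg.cond(X, 2)                  # kappa(X) = ||X||_2 ||X^{-1}||_2
bound = kappa * np.linalg.norm(E, 2)

# worst-case distance of an eigenvalue of A + E to the spectrum of A
worst = max(min(abs(mu - l) for l in lam)
            for mu in np.linalg.eigvals(A + E))
assert worst <= bound * (1 + 1e-8)            # Lemma 2.1
```

The 2-norm qualifies here because it satisfies $\|\mathrm{diag}(d_i)\|_2=\max_i|d_i|$.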

Lemma 2.2. Let $A\in\mathbb{C}^{n\times n}$ be diagonalizable with $A=X\Lambda X^{-1}$ and $\Lambda=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$. Suppose that $E\in\mathbb{C}^{n\times n}$, and let $\|\cdot\|$ be a matrix norm. If $\mu\in\sigma(A+E)$, then

$$\min_{i\in\langle n\rangle}|\mu-\lambda_i| \le \bigl\|\,|X^{-1}|\,|E|\,|X|\,\bigr\|.$$

There are some classical results on the spectral radius of nonnegative matrices.

Lemma 2.3 ([15, Theorem 8.1.18]). Let $A$, $B$ and $C$ be $n\times n$ complex matrices such that $|A|\le B$. Then

$$\rho(AC) \le \rho(|AC|) \le \rho(|A|\,|C|) \le \rho(B\,|C|).$$

Now we introduce the real spectral radius due to Rohn [16, Chapter 5] in 1989.

Definition 2.4. The real spectral radius of a matrix $A\in\mathbb{C}^{n\times n}$ is defined by

$$\rho_0(A) := \max\{|\lambda| : \lambda \text{ is a real eigenvalue of } A\}.$$

If $A$ has no real eigenvalues, then we set $\rho_0(A)=0$.

A complex diagonal matrix $S=\mathrm{diag}(s_1,s_2,\dots,s_n)$ is called a complex signature matrix if $|s_i|=1$ for all $i\in\langle n\rangle$. We denote by $CS_n$ the set of all $n\times n$ complex signature matrices. It is obvious that $S$ is a unitary matrix with $|S|=I$ and $\|S\|=\max_i|s_i|=1$ (cf. [4,17]). Especially, when the entries of the diagonal matrix $S\in CS_n$ are real, i.e., $s_i\in\{-1,1\}$, it is called a real signature matrix. We denote by $RS_n$ the set of all $n\times n$ real signature matrices. It is clear that $RS_n\subset CS_n$.

Definition 2.5 ([16, Proposition 7.3], [18, Proposition 2.6]). The sign-complex spectral radius of a square matrix $A$ is defined by

$$\rho^{CS_n}(A) = \max_{S\in CS_n}\rho_0(SA).$$

Lemma 2.6 ([19, Lemma 2.3]). Suppose that $A\in\mathbb{C}^{n\times n}$, $o\neq x\in\mathbb{C}^n$ and $t\ge 0$. Then

$$|Ax| \ge t\,|x| \ \Longrightarrow\ \rho^{CS_n}(A) \ge t.$$

Remark 2.7. The real version of Lemma 2.6 can be expressed by the sign-real spectral radius $\rho^{RS_n}(A)=\max_{S\in RS_n}\rho_0(SA)$ (cf. [20]): if $A\in\mathbb{R}^{n\times n}$, $o\neq x\in\mathbb{R}^n$ and $t\in\mathbb{R}_+$ (the nonnegative real numbers), then

$$|Ax| \ge t\,|x| \ \Longrightarrow\ \rho^{RS_n}(A) \ge t.$$

Now we can develop a sharp version of Bauer–Fike's theorem with the sign-complex spectral radius.

Theorem 2.8. Let $A\in\mathbb{C}^{n\times n}$ be a diagonalizable matrix with $\sigma(A)=\{\lambda_1,\lambda_2,\dots,\lambda_n\}\subset\mathbb{C}$, and let $\mu$ be an eigenvalue of $A+E$. In addition, assume that there is an $X\in\mathbb{C}^{n\times n}$ with $X^{-1}AX=D=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$ and $X^{-1}EX\in\mathbb{C}^{n\times n}$.
Then

$$\min_{i\in\langle n\rangle}|\mu-\lambda_i| \le \rho^{CS_n}(X^{-1}EX) \le \min\bigl\{\rho(|X^{-1}EX|),\ \|X^{-1}EX\|\bigr\}. \qquad (2.1)$$

If, in addition, $A$ is nonsingular, then

$$\min_{i\in\langle n\rangle}\frac{|\mu-\lambda_i|}{|\lambda_i|} \le \rho^{CS_n}(X^{-1}A^{-1}EX) \le \min\bigl\{\rho(|X^{-1}A^{-1}EX|),\ \|X^{-1}A^{-1}EX\|\bigr\}. \qquad (2.2)$$

Proof. If $\mu\in\sigma(A)$, then the result is trivial. Assume that $\mu\notin\sigma(A)$. Since $\mu\in\sigma(A+E)$ and $X^{-1}AX=D$, we see that $\mu\in\sigma(D+X^{-1}EX)$, i.e., $\mu I-D-X^{-1}EX$ is singular. Thus, from the invertibility of $\mu I-D$, there exists a nonzero vector $a\in\mathbb{C}^n$ such that

$$a = (\mu I-D)^{-1}(X^{-1}EX)a.$$

Hence

$$\Bigl(\min_{i\in\langle n\rangle}|\mu-\lambda_i|\Bigr)\,|a| \le |(X^{-1}EX)a|,$$

and it follows from Lemma 2.6 that

$$\min_{i\in\langle n\rangle}|\lambda_i-\mu| \le \rho^{CS_n}(X^{-1}EX). \qquad (2.3)$$

Let $S\in CS_n$. Since $|S|=I$ and $\|S\|=1$, we see from Lemma 2.3, Definition 2.4 and the fact that $\rho(B)\le\|B\|$ for all $B\in\mathbb{C}^{n\times n}$ that

$$\rho_0(SX^{-1}EX) \le \rho(SX^{-1}EX) \le \min\bigl\{\rho(|SX^{-1}EX|),\ \|SX^{-1}EX\|\bigr\} \le \min\bigl\{\rho(|X^{-1}EX|),\ \|X^{-1}EX\|\bigr\}.$$

Hence, from the definition of $\rho^{CS_n}(X^{-1}EX)$ and Eq. (2.3), Eq. (2.1) follows.

Now assume, in addition, that $A$ is invertible. Following the approach of [5, Corollary 2.2], we rewrite $(A+E)\hat x=\mu\hat x$ as $(\bar A+\bar E)\hat x=\hat x$, where $\bar A=\mu A^{-1}$ and $\bar E=-A^{-1}E$. It is obvious that $1$ is an eigenvalue of $\bar A+\bar E$, and the matrix $\bar A=\mu A^{-1}$ (which is also diagonalizable) has the same eigenvector matrix as $A$, with eigenvalues $\mu/\lambda_i$. Applying the result of Eq. (2.1) to $\bar A$ and the eigenvalue $1$ of $\bar A+\bar E$, and noting that $\rho^{CS_n}(-M)=\rho^{CS_n}(M)$ since $-I\in CS_n$, Eq. (2.2) follows. $\square$
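The maximum in Definition 2.5 ranges over a continuum, but for real data the sign-real analogue of Remark 2.7 can be computed by brute force over the $2^n$ real signature matrices when $n$ is small. The sketch below (numpy; the function names and the random test data are ours) checks the ordering of Eq. (2.1) in the real case:

```python
import itertools
import numpy as np

def rho_0(M, tol=1e-9):
    """Real spectral radius (Definition 2.4): max |lambda| over real eigenvalues."""
    ev = np.linalg.eigvals(M)
    return max((abs(l.real) for l in ev if abs(l.imag) <= tol * (1 + abs(l))),
               default=0.0)

def rho_RS(M):
    """Sign-real spectral radius via brute force over all 2^n signature matrices."""
    n = M.shape[0]
    return max(rho_0(np.diag(s) @ M)
               for s in itertools.product([1.0, -1.0], repeat=n))

rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n))
lam = np.array([2.0, 1.0, 6.0])
A = X @ np.diag(lam) @ np.linalg.inv(X)       # real, diagonalizable, real spectrum
E = 1e-8 * rng.standard_normal((n, n))
M = np.linalg.inv(X) @ E @ X                  # X^{-1} E X

err = max(min(abs(mu - l) for l in lam) for mu in np.linalg.eigvals(A + E))
r = rho_RS(M)
upper = min(np.max(np.abs(np.linalg.eigvals(np.abs(M)))), np.linalg.norm(M, 2))

# ordering of Eq. (2.1), real version: error <= rho^{RS} <= min{rho(|M|), ||M||}
assert err <= r * (1 + 1e-4)
assert r <= upper * (1 + 1e-6)
```

For the complex radius $\rho^{CS_n}$ no such finite enumeration exists; Rump's work cited in the paper discusses how it can be characterized and bounded.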

Remark 2.9. If $A$ and $E$ are real matrices, $A$ is diagonalizable and $\sigma(A)\subset\mathbb{R}$, we can derive the upper bounds by the sign-real spectral radius:

$$\min_{i\in\langle n\rangle}|\mu-\lambda_i| \le \rho^{RS_n}(X^{-1}EX) \le \min\bigl\{\rho(|X^{-1}EX|),\ \|X^{-1}EX\|\bigr\}. \qquad (2.4)$$

Similarly, we can prove, for the relative error,

$$\min_{i\in\langle n\rangle}\frac{|\mu-\lambda_i|}{|\lambda_i|} \le \rho^{RS_n}(X^{-1}A^{-1}EX) \le \min\bigl\{\rho(|X^{-1}A^{-1}EX|),\ \|X^{-1}A^{-1}EX\|\bigr\}.$$

Here we present an example to show the sharpness of our new bounds for diagonalizable matrices.

Example 2.10 ([2,3]). Consider the real $3\times 3$ matrix $A$ of [2,3], which is diagonalizable with eigenvalues $\lambda_1=2$, $\lambda_2=1$ and $\lambda_3=6$, so that $X^{-1}AX=\mathrm{diag}(2,1,6)$, and take the real perturbation $E$ of [2,3], which satisfies $|E|\le 7.5\times10^{-7}|A|$.

First we compare the absolute perturbation bounds of Lemmas 2.1 and 2.2 and Remark 2.9, estimated in the 2-norm:

$$\kappa(X)\|E\|_2 = 7.4246\times10^{-4} \ \ge\ \bigl\||X^{-1}||E||X|\bigr\|_2 = 2.3569\times10^{-5} \ >\ \|X^{-1}EX\|_2 = \bigl\||X^{-1}EX|\bigr\|_2 = 1.0\times10^{-6}$$
$$\ge\ \rho(|X^{-1}EX|) = 2\times10^{-11} \ \ge\ \rho^{RS_3}(X^{-1}EX) = 2\times10^{-11}.$$

Second we study the relative perturbation bounds of Eq. (2.2) and Remark 2.9; in the 2-norm,

$$\kappa(X)\|A^{-1}E\|_2 = 3.7123\times10^{-4} \ \ge\ \|X^{-1}A^{-1}EX\|_2 = 5.0\times10^{-7} \ \ge\ \rho(|X^{-1}A^{-1}EX|) = 1.5\times10^{-11} \ >\ \rho^{RS_3}(X^{-1}A^{-1}EX) = 1.288\times10^{-11}.$$

The computed eigenvalues of $A+E$ are

$$\hat\lambda_1 = 2 + 1.0\times10^{-11}, \qquad \hat\lambda_2 = 1 + 1.0\times10^{-11}, \qquad \hat\lambda_3 = 6 + 1.0\times10^{-16},$$

so that

$$|\lambda_1-\hat\lambda_1| = 1.0\times10^{-11}, \qquad |\lambda_2-\hat\lambda_2| = 1.0\times10^{-11}, \qquad |\lambda_3-\hat\lambda_3| = 1.0\times10^{-16},$$

and

$$\frac{|\lambda_1-\hat\lambda_1|}{|\lambda_1|} = 5.0\times10^{-12}, \qquad \frac{|\lambda_2-\hat\lambda_2|}{|\lambda_2|} = 1.0\times10^{-11}, \qquad \frac{|\lambda_3-\hat\lambda_3|}{|\lambda_3|} = \tfrac{1}{6}\times10^{-16}.$$

Our perturbation bounds are sharper than the known results.

3. Perturbation bound for the generalized eigenvalue problem

In this section, we present a sharp version of Bauer–Fike's theorem for the generalized eigenvalue problem.

3.1. Either the matrix A or B is invertible

Assume that we have a diagonal matrix $\tilde D\in\mathbb{C}^{n\times n}$ and a matrix $\tilde X\in\mathbb{C}^{n\times n}$ such that $A\tilde X\approx B\tilde X\tilde D$, which means $A\tilde x_i\approx\tilde\lambda_i B\tilde x_i$ for all $i\in\langle n\rangle$, where $\tilde\lambda_i$ and $\tilde x_i$ denote the $(i,i)$-entry of $\tilde D$ and the $i$-th column of $\tilde X$, respectively.

Theorem 3.1. Let $Y$ be an arbitrary $n\times n$ complex matrix and let $R_1$, $R_2$ be defined by

$$R_1 := Y(A\tilde X-B\tilde X\tilde D), \qquad R_2 := YB\tilde X-I.$$

If $\rho(R_2)<1$, then $B$, $\tilde X$ and $Y$ are nonsingular, and every eigenvalue $\lambda$ of $Ax=\lambda Bx$ satisfies

$$\min_{i\in\langle n\rangle}|\lambda-\tilde\lambda_i| \le \rho^{CS_n}\bigl((I+R_2)^{-1}R_1\bigr) \le \min\bigl\{\rho(|(I+R_2)^{-1}R_1|),\ \|(I+R_2)^{-1}R_1\|\bigr\} \le \frac{\|R_1\|}{1-\|R_2\|}. \qquad (3.1)$$

Proof. From $\rho(R_2)<1$ and $YB\tilde X=I+R_2$, it is obvious that $B$, $\tilde X$ and $Y$ are nonsingular. Since $B$ is invertible, the generalized eigenvalue problem $Ax=\lambda Bx$ is equivalent to the standard eigenvalue problem $B^{-1}Ax=\lambda x$, so $B^{-1}A-\lambda I$ is singular. If $\lambda\in\sigma(\tilde D)$, the result is trivial. Assume that $\lambda\notin\sigma(\tilde D)$; then $\tilde D-\lambda I$ is nonsingular, and

$$(\tilde D-\lambda I)^{-1}\tilde X^{-1}(B^{-1}A-\lambda I)\tilde X = (\tilde D-\lambda I)^{-1}\tilde X^{-1}\bigl[\tilde X(\tilde D-\lambda I)-\tilde X(\tilde D-\lambda I)+(B^{-1}A-\lambda I)\tilde X\bigr] = I - (\tilde D-\lambda I)^{-1}\tilde X^{-1}B^{-1}(B\tilde X\tilde D-A\tilde X). \qquad (3.2)$$

From the singularity of the left-hand side of (3.2) and the invertibility of $\tilde D-\lambda I$, there exists a nonzero vector $b\in\mathbb{C}^n$ such that

$$b = (\tilde D-\lambda I)^{-1}\tilde X^{-1}B^{-1}(B\tilde X\tilde D-A\tilde X)b.$$

Thus

$$\Bigl(\min_{i\in\langle n\rangle}|\tilde\lambda_i-\lambda|\Bigr)|b| \le \bigl|\tilde X^{-1}B^{-1}(B\tilde X\tilde D-A\tilde X)b\bigr|,$$

and it follows from Lemma 2.6 that

$$\min_{i\in\langle n\rangle}|\tilde\lambda_i-\lambda| \le \rho^{CS_n}\bigl(\tilde X^{-1}B^{-1}(B\tilde X\tilde D-A\tilde X)\bigr). \qquad (3.3)$$

It is easy to show that

$$\tilde X^{-1}B^{-1}(A\tilde X-B\tilde X\tilde D) = \bigl[I+(YB\tilde X-I)\bigr]^{-1}Y(A\tilde X-B\tilde X\tilde D) = (I+R_2)^{-1}R_1, \qquad (3.4)$$

and $\rho^{CS_n}$ is unchanged by the sign flip, since $-I\in CS_n$. Since moreover

$$\|(I+R_2)^{-1}R_1\| \le \|(I+R_2)^{-1}\|\,\|R_1\| \le \frac{\|R_1\|}{1-\|R_2\|},$$

Eq. (3.1) follows from Eqs. (3.3) and (3.4). $\square$

Remark 3.2. Comparing Theorem 3.1 with [21, Theorem 1], it is obvious that the condition of Theorem 3.1 is weaker than that of [21, Theorem 1], but our result is stronger.

3.2. Diagonalizable regular matrix pair

In this subsection, we derive a sharp version of Bauer–Fike's theorem for the diagonalizable regular matrix pair.

Definition 3.3 ([11]).
A regular matrix pair $\{A,B\}$ is called diagonalizable if there exist nonsingular matrices $P$ and $Q$ such that, for any $\mu\in\mathbb{C}$,

$$\mu B-A = P(\mu I-D)Q,$$

where $D=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$ and the $\lambda_i$ are the eigenvalues of the regular matrix pair $\{A,B\}$.

Now we develop a sharp version of Bauer–Fike's theorem for the diagonalizable regular matrix pair.

Theorem 3.4. Let $\{A,B\}$ be a diagonalizable regular matrix pair with $\sigma(\{A,B\})=\{\lambda_1,\lambda_2,\dots,\lambda_n\}\subset\mathbb{C}$, and let $\mu$ be an eigenvalue of $(A+\delta A)y=\mu(B+\delta B)y$. In addition, assume that there are nonsingular matrices $P$ and $Q$ with $\mu B-A=P(\mu I-D)Q$. Then

$$\min_{i\in\langle n\rangle}|\lambda_i-\mu| \le \rho^{CS_n}(P^{-1}EQ^{-1}), \qquad \text{where } E=\mu\,\delta B-\delta A.$$

Proof. If $\mu\in\sigma(\{A,B\})$, then the result is trivial. Assume that $\mu\notin\sigma(\{A,B\})$. Since $\mu\in\sigma(\{A+\delta A,B+\delta B\})$ and $P^{-1}(\mu B-A)Q^{-1}=\mu I-D$, the matrix $\mu(B+\delta B)-(A+\delta A)=P\bigl[(\mu I-D)+P^{-1}EQ^{-1}\bigr]Q$ is singular, hence so is $(\mu I-D)+P^{-1}EQ^{-1}$. From the invertibility of $\mu I-D$, there exists a nonzero vector $z\in\mathbb{C}^n$ such that

$$z = -(\mu I-D)^{-1}(P^{-1}EQ^{-1})z.$$

Thus

$$\Bigl(\min_{i\in\langle n\rangle}|\mu-\lambda_i|\Bigr)|z| \le |(P^{-1}EQ^{-1})z|,$$

and it follows from Lemma 2.6 that

$$\min_{i\in\langle n\rangle}|\lambda_i-\mu| \le \rho^{CS_n}(P^{-1}EQ^{-1}). \qquad \square$$
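Theorem 3.1 is a posteriori in nature: $\tilde D$ and $\tilde X$ can come from a floating-point eigensolver, and $R_1$, $R_2$ are computable residuals. A minimal numpy sketch (our setup; $Y$ is taken as a computed inverse of $B\tilde X$, and $B$ is shifted to be safely invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)     # B safely nonsingular

# Approximate eigenpairs of A x = lambda B x, computed in floating point:
lam_t, X_t = np.linalg.eig(np.linalg.solve(B, A))   # tilde(D), tilde(X)
D_t = np.diag(lam_t)
Y = np.linalg.inv(B @ X_t)                          # a natural choice of Y

R1 = Y @ (A @ X_t - B @ X_t @ D_t)                  # residual R_1
R2 = Y @ B @ X_t - np.eye(n)                        # residual R_2

assert np.linalg.norm(R2, 2) < 1                    # implies rho(R_2) < 1
bound = np.linalg.norm(R1, 2) / (1 - np.linalg.norm(R2, 2))
# By (3.1), every exact eigenvalue of the pencil lies within `bound`
# of some computed lam_t[i].
print("a posteriori enclosure radius:", bound)
```

Because the residuals reflect only rounding error here, the enclosure radius is tiny; with genuinely approximate eigenpairs the same formula still gives a rigorous radius as long as $\rho(R_2)<1$.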

4. Non-diagonalizable case

We are now in a position to develop another version of Bauer–Fike's theorem for the non-diagonalizable case [22,23]. Suppose $C_n$ is the set of $n\times n$ column stochastic matrices, $R_n$ is the set of $n\times n$ row stochastic matrices, $E_n$ is the set of $n\times n$ nonnegative matrices in which each column has exactly one entry equal to one and all other entries zero, and $F_n$ is the set of $n\times n$ nonnegative matrices in which each row has exactly one entry equal to one and all other entries zero.

We recall a useful lemma on nonnegative matrices [24].

Lemma 4.1 ([24, Corollary 3.2]). Let $A\in\mathbb{R}^{n\times n}$ be a nonnegative matrix. Then there exists a matrix $E\in E_n$ such that $\rho(EA)=\max_{C\in C_n}\rho(CA)$.

Corollary 4.2. Let $B\in\mathbb{R}^{n\times n}$ be a nonnegative matrix. Then there exists a matrix $F\in F_n$ such that $\rho(BF)=\max_{R\in R_n}\rho(BR)$.

Proof. Let $A=B^T$. By Lemma 4.1 there exists $E\in E_n$ with $\rho(EA)=\max_{C\in C_n}\rho(CA)$. Set $F=E^T\in F_n$. Since $\rho(M)=\rho(M^T)$,

$$\rho(BF) = \rho(F^TB^T) = \rho(EA) = \max_{C\in C_n}\rho(CA) = \max_{C\in C_n}\rho(BC^T) = \max_{R\in R_n}\rho(BR). \qquad \square$$

Now we present a sharp version of Bauer–Fike's theorem for non-diagonalizable matrices with the 1-norm. Assume the perturbation of the generalized eigenvalue problem $Ax=\lambda Bx$ is $(A+\delta A)y=\mu(B+\delta B)y$. For the regular matrix pair $\{A,B\}$, we consider the Kronecker canonical form [25, page 264, Chapter 8.7.2] of $\mu B-A$:

$$P^{-1}(\mu B-A)Q^{-1} \equiv J = \mathrm{diag}(M_1,M_2,\dots,M_b), \qquad (4.1)$$

where $P$ and $Q$ are nonsingular matrices and each $M_i$ must be either $J(\lambda_i)$ ($i=1,2,\dots,s$) or $N_i$ ($i=s+1,s+2,\dots,b$), with

$$J(\lambda_i)=\begin{pmatrix}\lambda_i-\mu & 1 & & \\ & \lambda_i-\mu & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i-\mu\end{pmatrix}, \qquad N_i=\begin{pmatrix}1 & \mu & & \\ & 1 & \ddots & \\ & & \ddots & \mu \\ & & & 1\end{pmatrix}. \qquad (4.2)$$

$J(\lambda_i)$ is a block corresponding to the finite eigenvalue $\lambda_i$ and $N_i$ is a block corresponding to the infinite eigenvalue. Next we present another version of the non-diagonalizable Bauer–Fike theorem for the $n$-by-$n$ generalized eigenvalue problem $Ax=\lambda Bx$.

Theorem 4.3.
Let the regular matrix pair $\{A,B\}$ satisfy

$$\mu B-A = P\,\mathrm{diag}(J(\lambda_1),\dots,J(\lambda_s),N_{s+1},\dots,N_b)\,Q = P\,\mathrm{diag}(M_1,M_2,\dots,M_b)\,Q,$$

where the block $M_i$ has size $m_i\times m_i$ ($i=1,2,\dots,b$) and $m_1+m_2+\cdots+m_b=n$. Suppose that $\delta A,\delta B\in\mathbb{C}^{n\times n}$ and $\mu\in\sigma(\{A+\delta A,B+\delta B\})$. Then there exists $T\in E_n$ such that

$$\min_{i\in\langle b\rangle}\sigma_{i,m_i}^{-1} \le \rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr) \le \|P^{-1}EQ^{-1}\|_1, \qquad (4.3)$$

where $E=\mu\,\delta B-\delta A$ and

$$\sigma_{ij}=\begin{cases}\displaystyle\sum_{k=1}^{j}|\lambda_i-\mu|^{-k}, & i=1,2,\dots,s,\\[2mm] \displaystyle\sum_{k=0}^{j-1}|\mu|^{k}, & i=s+1,s+2,\dots,b.\end{cases}$$

Proof. If $\mu\in\sigma(\{A,B\})$, then the result is trivial. Assume that $\mu\notin\sigma(\{A,B\})$; then $J$ is invertible. Since $\mu\in\sigma(\{A+\delta A,B+\delta B\})$ and $P^{-1}(\mu B-A)Q^{-1}=J$, the matrix $\mu(B+\delta B)-(A+\delta A)=P\bigl(J+P^{-1}EQ^{-1}\bigr)Q$ is singular. Hence $J+P^{-1}EQ^{-1}$ is singular, and we deduce that $I+J^{-1}(P^{-1}EQ^{-1})$ is singular. Thus

$$\rho\bigl(J^{-1}(P^{-1}EQ^{-1})\bigr) \ge 1. \qquad (4.4)$$

Next, denote $J^{-1}=\mathrm{diag}(J_1,J_2,\dots,J_b)$. Taking entrywise absolute values, we have

$$|J_i| = |J(\lambda_i)^{-1}| = \begin{pmatrix}|\lambda_i-\mu|^{-1} & |\lambda_i-\mu|^{-2} & \cdots & |\lambda_i-\mu|^{-m_i}\\ & |\lambda_i-\mu|^{-1} & \cdots & |\lambda_i-\mu|^{-m_i+1}\\ & & \ddots & \vdots\\ & & & |\lambda_i-\mu|^{-1}\end{pmatrix} \quad (i=1,2,\dots,s),$$

and

$$|J_i| = |N_i^{-1}| = \begin{pmatrix}1 & |\mu| & \cdots & |\mu|^{m_i-1}\\ & 1 & \cdots & |\mu|^{m_i-2}\\ & & \ddots & \vdots\\ & & & 1\end{pmatrix} \quad (i=s+1,s+2,\dots,b).$$

Suppose $\Sigma=\mathrm{diag}(\Sigma_1,\Sigma_2,\dots,\Sigma_b)$, where $\Sigma_i=\mathrm{diag}(\sigma_{i1},\sigma_{i2},\dots,\sigma_{i,m_i})\in\mathbb{R}^{m_i\times m_i}$. We can rewrite

$$|J^{-1}| = \mathrm{diag}(S_1,S_2,\dots,S_b)\,\Sigma,$$

where $S_i$ is an $m_i\times m_i$ column stochastic matrix; e.g., for $i=1,2,\dots,s$,

$$S_i = \begin{pmatrix}\dfrac{|\lambda_i-\mu|^{-1}}{\sigma_{i1}} & \dfrac{|\lambda_i-\mu|^{-2}}{\sigma_{i2}} & \cdots & \dfrac{|\lambda_i-\mu|^{-m_i}}{\sigma_{i,m_i}}\\ & \dfrac{|\lambda_i-\mu|^{-1}}{\sigma_{i2}} & \cdots & \dfrac{|\lambda_i-\mu|^{-m_i+1}}{\sigma_{i,m_i}}\\ & & \ddots & \vdots\\ & & & \dfrac{|\lambda_i-\mu|^{-1}}{\sigma_{i,m_i}}\end{pmatrix} \in C_{m_i}.$$

Now we apply Lemma 2.3 and Lemma 4.1 to Eq. (4.4): there exists a matrix $T\in E_n$ such that

$$1 \le \rho\bigl[J^{-1}(P^{-1}EQ^{-1})\bigr] \le \rho\bigl[|J^{-1}|\,|P^{-1}EQ^{-1}|\bigr] = \rho\bigl(\mathrm{diag}(S_1,\dots,S_b)\,\Sigma\,|P^{-1}EQ^{-1}|\bigr) \le \rho\bigl(T\,\Sigma\,|P^{-1}EQ^{-1}|\bigr)$$
$$= \rho\bigl(\Sigma\,|P^{-1}EQ^{-1}|\,T\bigr) \le \max_{i\in\langle b\rangle}\max_{j\in\langle m_i\rangle}\sigma_{ij}\cdot\rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr) = \max_{i\in\langle b\rangle}\sigma_{i,m_i}\,\rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr).$$

Thus

$$\min_{i\in\langle b\rangle}\sigma_{i,m_i}^{-1} \le \rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr) \le \|P^{-1}EQ^{-1}\|_1. \qquad \square$$

Next we make the left-hand side of Eq. (4.3) more explicit. Since $\mu\notin\sigma(\{A,B\})$ and we suppose the matrix pair $\{A+\delta A,B+\delta B\}$ is also regular, $\mu$ is finite. When $\lambda_i$ is infinite, we cannot estimate the bound of $|\lambda_i-\mu|$; we only consider the case when $\lambda_i$ is finite. If the perturbation matrices $\delta A$ and $\delta B$ are not large, then $|\lambda_i-\mu|$ is small enough that $|\lambda_i-\mu|^{-1}\ge|\mu|$, whence $\sum_{k=1}^{j}|\lambda_i-\mu|^{-k}\ge\sum_{k=0}^{j-1}|\mu|^{k}$ and the minimum in (4.3) is attained at a finite eigenvalue. Suppose the largest size of the blocks $J(\lambda_i)$ is $m\times m$; since

$$\sigma_{i,m_i}^{-1} = \frac{|\lambda_i-\mu|^{m_i}}{1+|\lambda_i-\mu|+\cdots+|\lambda_i-\mu|^{m_i-1}} \ge \frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+\cdots+|\lambda_i-\mu|^{m-1}}$$

for small $|\lambda_i-\mu|$, we can get a simpler result from Eq. (4.3):

$$\min_{i\in\langle s\rangle}\frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+\cdots+|\lambda_i-\mu|^{m-1}} \le \rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr) \le \|P^{-1}EQ^{-1}\|_1.$$

In conclusion, we obtain a useful corollary.

Corollary 4.4. Suppose the $\lambda_i$ are the eigenvalues of the regular matrix pair $\{A,B\}$ and $\mu$ is an eigenvalue of the regular matrix pair $\{A+\delta A,B+\delta B\}$. There exist nonsingular matrices $P$ and $Q$ such that $\mu B-A=P\,\mathrm{diag}(J(\lambda_1),\dots,J(\lambda_s),N_{s+1},\dots,N_b)\,Q$. Let the largest size of the blocks $J(\lambda_i)$ be $m\times m$. Then there exists a matrix $T\in E_n$ such that
$$\min_{i\in\langle s\rangle}\frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+|\lambda_i-\mu|^{2}+\cdots+|\lambda_i-\mu|^{m-1}} \le \rho\bigl(|P^{-1}EQ^{-1}|\,T\bigr) \le \|P^{-1}EQ^{-1}\|_1, \qquad (4.5)$$

where $E=\mu\,\delta B-\delta A$.

Similarly, we can obtain the upper bound with the $\infty$-norm.
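As a sanity check of (4.5) in the special case $B=I$, $\delta B=0$ (so the pencil reduces to a standard eigenproblem and $P$, $Q$ come from the Jordan form; here $P=Q=I$), the following numpy sketch perturbs a single $m\times m$ Jordan block. The eigenvalue moves like $\|E\|^{1/m}$, yet the left-hand side of (4.5) stays below $\|E\|_1$ (the setup and names are ours):

```python
import numpy as np

m, lam0 = 4, 0.5
A = lam0 * np.eye(m) + np.diag(np.ones(m - 1), 1)   # one m x m Jordan block, X = I
rng = np.random.default_rng(4)
E = 1e-10 * rng.standard_normal((m, m))

def f(x):
    # left-hand side of (4.5): x^m / (1 + x + ... + x^{m-1})
    return x**m / sum(x**k for k in range(m))

for mu in np.linalg.eigvals(A + E):
    x = abs(mu - lam0)           # only one eigenvalue, so the min over i is this
    assert f(x) <= np.linalg.norm(E, 1) * (1 + 1e-6)
    # note: x itself is ~ ||E||^(1/m), far larger than ||E||_1
```

This illustrates why the bound is stated through $f(x)=x^m/(1+x+\cdots+x^{m-1})$ rather than through $x$ directly: for defective eigenvalues only the $m$-th power of the distance is controlled by the perturbation size.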

Corollary 4.5. Suppose the $\lambda_i$ are the eigenvalues of the regular matrix pair $\{A,B\}$ and $\mu$ is an eigenvalue of the regular matrix pair $\{A+\delta A,B+\delta B\}$. There exist nonsingular matrices $P$ and $Q$ such that $\mu B-A=P\,\mathrm{diag}(J(\lambda_1),\dots,J(\lambda_s),N_{s+1},\dots,N_b)\,Q$. Let the largest size of the blocks $J(\lambda_i)$ be $m\times m$. Then there exists a matrix $W\in F_n$ such that

$$\min_{i\in\langle s\rangle}\frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+|\lambda_i-\mu|^{2}+\cdots+|\lambda_i-\mu|^{m-1}} \le \rho\bigl(W\,|P^{-1}EQ^{-1}|\bigr) \le \|P^{-1}EQ^{-1}\|_\infty, \qquad (E=\mu\,\delta B-\delta A). \qquad (4.6)$$

It follows from [14, Lemma 3.1] that we can deduce the next corollary.

Corollary 4.6. Let $s=\max_{j\in\langle n\rangle}\min_{i\in\langle n\rangle}|\lambda_i-\mu_j|$, where $\lambda_i\in\sigma(\{A,B\})$ and $\mu_j\in\sigma(\{A+\delta A,B+\delta B\})$. Then, for $p=1,\infty$,

$$s \le \begin{cases} m\,\|P^{-1}EQ^{-1}\|_p^{1/m}, & \text{if } \|P^{-1}EQ^{-1}\|_p \le 1,\\ m\,\|P^{-1}EQ^{-1}\|_p, & \text{if } \|P^{-1}EQ^{-1}\|_p > 1.\end{cases}$$

Corollary 4.7. If $\mu$ is any eigenvalue of $B=A+E$, and $A=XJX^{-1}$ is the Jordan canonical form of $A$ with $m$ the largest size of a Jordan block, then, for $p=1,\infty$,

$$\min_{i}\frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+|\lambda_i-\mu|^{2}+\cdots+|\lambda_i-\mu|^{m-1}} \le \|X^{-1}EX\|_p.$$

Proof. This is the special case of Eqs. (4.5) and (4.6) with $B=I$ and $\delta B=0$. $\square$

Remark 4.8 ([26]). If $\mu$ is any eigenvalue of $B=A+E\in\mathbb{R}^{n\times n}$, and $A=XJX^{-1}$ is the Jordan canonical form of $A$, then there exists an eigenvalue $\lambda\in\sigma(A)$ such that

$$\frac{|\mu-\lambda|^{m}}{(1+|\mu-\lambda|)^{m-1}} \le \|X^{-1}EX\|_2,$$

where $m$ is the largest Jordan block with respect to $\lambda$. If $\|X^{-1}EX\|_2\le 1/2^{m-1}$, then

$$\min_i|\lambda_i-\mu| \le 2^{1-\frac1m}\,\|X^{-1}EX\|_2^{1/m}.$$

Stewart and Sun [11, page 174, Theorem 1.12] present a sharp bound with respect to the 2-norm:

$$\frac{|\lambda-\mu|^{m}}{1+|\lambda-\mu|+|\lambda-\mu|^{2}+\cdots+|\lambda-\mu|^{m-1}} \le \|X^{-1}EX\|_2.$$

5. Applications to the pole placement problem and the singular system

In this section, we give applications of the sharp versions of Bauer–Fike's theorem to the pole placement problem [27,28] and to the singular system.

Proposition 5.1 ([27], single-input pole placement (SIPP)). Given a set of $n$ numbers $\mathcal{P}=\{\lambda_1,\lambda_2,\dots,\lambda_n\}$, find a vector $f$ such that the spectrum of $A-bf^T$ is equal to $\mathcal{P}$.
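Proposition 5.1 can be realized constructively. The paper does not prescribe an algorithm, so the sketch below uses Ackermann's formula (a standard SIPP construction, not taken from [27]; all names are ours) to compute $f$ and verify the placed spectrum:

```python
import numpy as np

def sipp_gain(A, b, poles):
    """Ackermann's formula: return f with sigma(A - b f^T) = poles,
    assuming the pair (A, b) is controllable."""
    n = len(b)
    # controllability matrix C = [b, A b, ..., A^{n-1} b]
    C = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    q = np.poly(poles)                     # desired characteristic polynomial (monic)
    qA = np.zeros_like(A)
    for c in q:                            # Horner's scheme for q(A)
        qA = qA @ A + c * np.eye(n)
    e_n = np.zeros(n); e_n[-1] = 1.0
    return np.linalg.solve(C.T, e_n) @ qA  # f^T = e_n^T C^{-1} q(A)

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
poles = np.array([-1.0, -2.0, -3.0])

f = sipp_gain(A, b, poles)
placed = np.linalg.eigvals(A - np.outer(b, f))
assert np.allclose(np.sort(placed.real), np.sort(poles), atol=1e-6)
assert np.max(np.abs(placed.imag)) < 1e-6
```

Perturbing $A$, $b$ and the poles by $\epsilon$ and re-solving gives exactly the setting whose eigenvalue sensitivity Lemma 5.2 and Theorem 5.3 quantify.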
It has been proved [27] that the vector $f$ exists for every set $\mathcal{P}$ if and only if $(A,b)$ is controllable, i.e., $\mathrm{rank}[b,\ A-\lambda I]=n$ for all $\lambda\in\mathbb{C}$. Moreover, if $A-bf^T$ had two linearly independent eigenvectors corresponding to the same eigenvalue $\lambda$, then $\mathrm{rank}(A-bf^T-\lambda I)\le n-2$, and thus $\mathrm{rank}[b,\ A-bf^T-\lambda I]\le n-1$, which contradicts the controllability of $(A,b)$. Hence, when the poles in $\mathcal{P}$ are pairwise distinct, $A-bf^T$ is diagonalizable, and we may write $A-bf^T=GDG^{-1}$ with $D=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$ a diagonal matrix.

Lemma 5.2 ([27, Theorem 3.3]). Consider the SIPP problem with data $A$, $b$, $\mathcal{P}=\{\lambda_1,\lambda_2,\dots,\lambda_n\}$, and consider a perturbed problem with data $\hat A:=A+\delta A$, $\hat b:=b+\delta b$, $\hat{\mathcal{P}}:=\{\lambda_1+\delta\lambda_1,\lambda_2+\delta\lambda_2,\dots,\lambda_n+\delta\lambda_n\}$. Assume that the desired poles $\lambda_j$ and the perturbed poles $\lambda_j+\delta\lambda_j$ ($j=1,2,\dots,n$) are each pairwise different. Suppose further that $\|\delta A\|,\|\delta b\|,|\delta\lambda|\le\epsilon$ for sufficiently small $\epsilon$. Let $f$, $\hat f$ be the feedback gains of the unperturbed and the perturbed system, respectively, and let $\hat\kappa$ be the spectral condition number of the perturbed closed-loop system $\hat A-\hat b\hat f^T$, with spectral set $\{\hat\lambda_1,\hat\lambda_2,\dots,\hat\lambda_n\}$. Then for each eigenvalue $\mu$ of $A-b\hat f^T$ there is a pole $\lambda_i$ of the unperturbed closed-loop system $A-bf^T$ such that

$$|\lambda_i-\mu| \le \epsilon\bigl[1+(1+\|\hat f\|)\hat\kappa\bigr]. \qquad (5.1)$$

We can improve the above result as follows.

Theorem 5.3. Under the same conditions as Lemma 5.2, suppose $\hat A-\hat b\hat f^T=\hat G\hat D\hat G^{-1}$ with $\hat D=\mathrm{diag}(\hat\lambda_1,\hat\lambda_2,\dots,\hat\lambda_n)$ and $\hat\kappa=\|\hat G\|_2\|\hat G^{-1}\|_2$. Then

$$\min_i|\lambda_i-\mu| \le \rho^{CS_n}\bigl[\hat G^{-1}(\delta A-\delta b\hat f^T)\hat G\bigr]+\epsilon \le \epsilon\bigl[1+(1+\|\hat f\|)\hat\kappa\bigr]. \qquad (5.2)$$

Proof. Since $A-b\hat f^T=(\hat A-\hat b\hat f^T)-(\delta A-\delta b\hat f^T)$, we can obtain

$$\min_i|\mu-\lambda_i| \le \min_i|\mu-\hat\lambda_i|+\max_i|\delta\lambda_i| \le \rho^{CS_n}\bigl[\hat G^{-1}(\delta A-\delta b\hat f^T)\hat G\bigr]+\epsilon \le \bigl\|\hat G^{-1}(\delta A-\delta b\hat f^T)\hat G\bigr\|+\epsilon \le \hat\kappa\,\|\delta A-\delta b\hat f^T\|+\epsilon \le \epsilon\bigl[1+(1+\|\hat f\|)\hat\kappa\bigr]. \qquad \square$$

Similarly, we can present a sharp bound for the multi-input pole placement (MIPP) problem of [29]:

$$\min_i|\lambda_i-\mu| \le \rho^{CS_n}\bigl[\hat G^{-1}(\delta A-\delta B\hat F)\hat G\bigr]+\epsilon \le \epsilon\bigl[1+(1+\|\hat F\|)\hat\kappa\bigr].$$

Next we discuss an application of the sharp version of the non-diagonalizable Bauer–Fike theorem. Dai [30] reveals the fact that a stable homogeneous singular system

$$B\dot x = Ax, \qquad (5.3)$$

where $x\in\mathbb{R}^n$ and $\mathrm{rank}(B)<n$, may not be structurally stable. The following proposition gives the condition for structural stability of a stable homogeneous singular system.

Proposition 5.4 ([30, Proposition 4.1]). Let the system (5.3) be stable, i.e., $\sigma(\{A,B\})\subset\mathbb{C}^-$ (eigenvalues in the open left complex plane). Then (5.3) is structurally stable if and only if

$$\deg(\det(\lambda B-A)) = \mathrm{rank}(B), \qquad (5.4)$$

where $\deg(\cdot)$ means the degree of a polynomial.

It follows from (5.4) that the matrix pair $\{A,B\}$ must be regular; if the matrix pair $\{A,B\}$ is singular [6,11], i.e., $\det(\lambda B-A)\equiv 0$, then the matrix $B$ is degenerate. We can use the above results to give a simple perturbation bound for a structurally stable homogeneous singular system. The problem is given as follows. Consider the homogeneous singular system (5.3); assume that $\sigma(\{A,B\})\subset\mathbb{C}^-$ and (5.4) holds. Which condition should the perturbation matrix $\delta A$ satisfy to preserve the stability, i.e., $\sigma(\{A+\delta A,B\})\subset\mathbb{C}^-$? Dai [30, Theorem 4.1] gives a complicated result; we manage to give another version.

Theorem 5.5.
Consider the structurally stable homogeneous singular system (5.3). If

$$\|P^{-1}\delta A\,Q^{-1}\|_p < \frac{\Delta^{m}}{1+\Delta+\Delta^{2}+\cdots+\Delta^{m-1}}, \qquad (p=1,\infty), \qquad (5.5)$$

then the stability is preserved, where $\Delta=\min\{|\mathrm{Re}(\lambda_i)| : \lambda_i\in\sigma(\{A,B\})\}$, $\mathrm{Re}(\lambda_i)$ is the real part of $\lambda_i$, and $m$, $P$ and $Q$ are the same as in Eq. (4.1).

Proof. Applying Corollaries 4.4 and 4.5 with $\delta B=0$, we get, for every eigenvalue $\mu$ of $\{A+\delta A,B\}$,

$$\min_i\frac{|\lambda_i-\mu|^{m}}{1+|\lambda_i-\mu|+|\lambda_i-\mu|^{2}+\cdots+|\lambda_i-\mu|^{m-1}} \le \|P^{-1}\delta A\,Q^{-1}\|_p.$$

Because the function $f(x)=\dfrac{x^{m}}{1+x+x^{2}+\cdots+x^{m-1}}$ is increasing for $x>0$, condition (5.5) yields $\min_i|\lambda_i-\mu|<\Delta$, hence $\mathrm{Re}(\mu)<\mathrm{Re}(\lambda_i)+\Delta\le 0$ for some $i$, and the stability is preserved. $\square$

6. Concluding remarks

We present sharp versions of Bauer–Fike's theorem for diagonalizable and non-diagonalizable matrices, in which the matrix norm is replaced by the spectral radius or the sign-complex spectral radius, and by the 1-norm and $\infty$-norm, respectively. If the matrix $A$ is invertible, we can also provide the relative error bound. It is natural to ask whether our results can be extended to the singular case [31–33] for a sharp relative error bound; this will be our future research topic.

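For the diagonalizable case summarized in the concluding remarks, the classical Bauer–Fike radius $\hat\kappa\,\|\delta A\|_2$, which the sharp versions refine, can be sanity-checked numerically. The sketch below is our own illustration; the matrices and names (`A_hat`, `dA`, the seed, the eigenvalue range) are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Bauer-Fike check: if A_hat = G D G^{-1} is diagonalizable, every eigenvalue
# mu of A_hat + dA lies within kappa(G) * ||dA||_2 of some eigenvalue of A_hat.
rng = np.random.default_rng(0)

n = 6
G = rng.standard_normal((n, n)) + 2.0 * np.eye(n)  # well-conditioned eigenvectors
D = np.diag(rng.uniform(-5.0, -1.0, n))            # stable spectrum (Re < 0)
A_hat = G @ D @ np.linalg.inv(G)

eps = 1e-3
dA = eps * rng.standard_normal((n, n))

kappa = np.linalg.cond(G)              # ||G||_2 * ||G^{-1}||_2
bound = kappa * np.linalg.norm(dA, 2)  # classical Bauer-Fike radius

mu = np.linalg.eigvals(A_hat + dA)
lam = np.diag(D)
# Distance from each perturbed eigenvalue to the nearest exact eigenvalue.
dist = np.min(np.abs(mu[:, None] - lam[None, :]), axis=1)

print(f"max eigenvalue shift: {dist.max():.3e}  <=  bound: {bound:.3e}")
```

The observed shifts are typically far below the radius $\hat\kappa\,\|\delta A\|_2$, which is exactly the slack the sharper spectral-radius bounds of this paper exploit.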
Acknowledgments

We would like to thank the editor M.K. Ng and the referee for their useful comments. We also thank Professors Eric K. Chu, Moody T. Chu, Ren-cang Li and Hong-guo Xu for their valuable suggestions. The first author is supported by the Programme for Cultivating Innovative Students in Key Disciplines of Fudan University, the Doctoral Program of the Ministry of Education under grant 2971113 and the 973 Program Project (No. 21CB3279). The second author is supported by the National Natural Science Foundation of China under grant 187151, the Doctoral Program of the Ministry of Education under grant 2971113, the Shanghai Science & Technology Committee and the Shanghai Education Committee (Dawn Project, 8SG1).

References

[1] F.L. Bauer, C.T. Fike, Norms and exclusion theorems, Numer. Math. 2 (1960) 137–141.
[2] A.S. Deif, Realistic a priori and a posteriori error bounds for computed eigenvalues, IMA J. Numer. Anal. 10 (1990) 323–329.
[3] A.S. Deif, Rigorous perturbation bounds for eigenvalues and eigenvectors of a matrix, J. Comput. Appl. Math. 57 (1995) 403–412.
[4] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[5] S.C. Eisenstat, I.C.F. Ipsen, Three absolute perturbation bounds for matrix eigenvalues imply relative bounds, SIAM J. Matrix Anal. Appl. 20 (1998) 149–158.
[6] R. Bhatia, Perturbation Bounds for Matrix Eigenvalues, second ed., SIAM, Philadelphia, PA, 2007.
[7] Z.-H. Cao, J. Xie, R.-C. Li, A sharp version of Kahan's theorem on clustered eigenvalues, Linear Algebra Appl. 245 (1996) 147–155.
[8] N. Higham, A survey of componentwise perturbation theory in numerical linear algebra, in: W. Gautschi (Ed.), Mathematics of Computation 1943–1993: A Half Century of Computational Mathematics, in: Proceedings of Symposia in Applied Mathematics, vol. 48, American Mathematical Society, Providence, RI, USA, 1994, pp. 49–77.
[9] I.C.F. Ipsen, Relative perturbation results for matrix eigenvalues and singular values, in: Acta Numerica, vol. 7, Cambridge University Press, Cambridge, 1998, pp. 151–201.
[10] R.-C. Li, Matrix perturbation theory, in: L. Hogben, R. Brualdi, A. Greenbaum, R. Mathias (Eds.), Handbook of Linear Algebra, Chapman & Hall/CRC, New York, 2007 (Chapter 15).
[11] G.W. Stewart, J.-G. Sun, Matrix Perturbation Theory, Academic Press, Inc., Boston, MA, 1990.
[12] G.W. Stewart, On the sensitivity of the eigenvalue problem Ax = λBx, SIAM J. Numer. Anal. 9 (1972) 669–689.
[13] L. Elsner, J. Sun, Perturbation theorems for the generalized eigenvalue problem, Linear Algebra Appl. 48 (1982) 341–357.
[14] R.-C. Li, On the perturbation for the generalized eigenvalues of regular matrix pairs, Math. Numer. Sin. 1 (1989) 1–19 (in Chinese).
[15] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, New York, 1985.
[16] J. Rohn, Systems of linear interval equations, Linear Algebra Appl. 126 (1989) 39–78.
[17] O.H. Hald, A converse to the Bauer–Fike theorem, Linear Algebra Appl. 9 (1974) 267–274.
[18] S.M. Rump, Almost sharp bounds for the componentwise distance to the nearest singular matrix, Linear Multilinear Algebra 42 (1997) 93–107.
[19] S.M. Rump, Perron–Frobenius theory for complex matrices, Linear Algebra Appl. 363 (2003) 251–273.
[20] S.M. Rump, Ill-conditioned matrices are componentwise near to singularity, SIAM Rev. 41 (1999) 102–112.
[21] S. Miyajima, Fast enclosure for all eigenvalues in generalized eigenvalue problems, J. Comput. Appl. Math. 233 (2010) 2994–3004.
[22] E.K. Chu, Generalization of the Bauer–Fike theorem, Numer. Math. 49 (1986) 685–691.
[23] E.K. Chu, Exclusion theorems and the perturbation analysis of the generalized eigenvalue problem, SIAM J. Numer. Anal. 24 (1987) 1114–1125.
[24] M. Neumann, N. Sze, Optimization of the spectral radius of nonnegative matrices, Operators and Matrices 1 (2007) 593–601.
[25] Z. Bai, J. Demmel, J. Dongarra, H. van der Vorst (Eds.), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia, 2000.
[26] W. Kahan, B.N. Parlett, E. Jiang, Residual bounds on approximate eigensystems of nonnormal matrices, SIAM J. Numer. Anal. 19 (1982) 470–484.
[27] V. Mehrmann, H.-G. Xu, An analysis of the pole placement problem. I. The single-input case, Electron. Trans. Numer. Anal. 4 (1996) 89–105.
[28] V. Mehrmann, H.-G. Xu, Choosing poles so that the single-input pole placement problem is well conditioned, SIAM J. Matrix Anal. Appl. 19 (1998) 664–681.
[29] V. Mehrmann, H.-G. Xu, An analysis of the pole placement problem. II. The multi-input case, Electron. Trans. Numer. Anal. 5 (1997) 77–97.
[30] L.-Y. Dai, Structural stability for singular systems: a quantitative approach, SIAM J. Control Optim. 26 (1988) 557–568.
[31] S.C. Eisenstat, A perturbation bound for the eigenvalues of a singular diagonalizable matrix, Linear Algebra Appl. 416 (2006) 742–744.
[32] Y. Wei, X. Li, F. Bu, F. Zhang, Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices: application of perturbation theory for simple invariant subspaces, Linear Algebra Appl. 419 (2006) 765–771.
[33] Y. Wei, C. Deng, A note on additive results for the Drazin inverse, Linear Multilinear Algebra 59 (2011) 1319–1329.