PERTURBATION ANALYSIS OF THE EIGENVECTOR MATRIX AND SINGULAR VECTOR MATRICES. Xiao Shan Chen, Wen Li and Wei Wei Xu 1. INTRODUCTION A = UΛU H,
|
|
- Branden Skinner
- 6 years ago
- Views:
Transcription
1 TAIWAESE JOURAL O MATHEMATICS Vol. 6, o., pp , ebruary 0 This paper is available online at PERTURBATIO AALYSIS O THE EIGEVECTOR MATRIX AD SIGULAR VECTOR MATRICES Xiao Shan Chen, Wen Li and Wei Wei Xu Abstract. Let A be an n n Hermitian matrix and A = UΛU H be its spectral decomposition, where U is a unitary matrix of order n and Λ is a diagonal matrix. In this note we present the perturbation bound and condition number of the eigenvector matrix U of A with distinct eigenvalues. A perturbation bound of singular vector matrices is also given for a real n n or (n +) n matrix. The results are illustrated by numerical examples.. ITRODUCTIO Let A be a n n Hermitian matrix with the spectral decomposition (.) A = UΛU H, where U is an n n unitary matrix, U H denotes the conjugate transpose of U and Λ=diag(λ,λ,...,λ n ). Let B be an m n(m n) complex matrix with the singular value decomposition (.) B = W ΣV H, where the m m left singular vector matrix W and the n n right singular vector matrix V are unitary, and ( ) Σ (.3) Σ=, Σ 0 = diag(σ,...,σ n ), σ i 0, i=,...,n. Received ebruary 3, 00, accepted October 3, 00. Communicated by Wen-Wei Lin. 00 Mathematics Subject Classification: 655, Key words and phrases: Eigenvector matrix, Singular vector matrix, Condition number, robenius norm. This work is supported by the atural Science oundation of Guangdong Province ( , ) and by the ational atural Science oundation of China (097075). 79
2 80 Xiao Shan Chen, Wen Li and Wei Wei Xu The spectral decomposition of a Hermitian matrix and the singular value decomposition of a general matrix are very useful tools in many matrix computation problems (see, e.g.[, 5, 6]). In this paper we focus on perturbation analysis for the eigenvector matrix U in (.) and singular vector matrices W and V in (.). As to perturbation of eigenvalues, singular values, eigenspaces and singular subspaces have been studied [,, 4, 9, 0,, 5, 6, 7]. We know that eigenspaces and singular subspaces are usually much less sensitive to perturbations compared with the corresponding basis. It is well-known that the maximum eigenspace spanned by column vectors of U and the maximum singular subspaces spanned by column vectors of W and V are is not sensitive at all, but the eigenvector matrix U and the singular vector matrices W and V may be infinitely sensitive. The following simple example illustrates this case. Example.. Let A = I (i.e. identity matrix). Then A has the eigendecomposition (.) with U = I, Λ=A. If its perturbed matrix à has following form ( ) ε à =, ε where ε is a nonzero real number. It is easy to see that the eigenvector matrix of à has the following form d d Ũ = d d, where d = d =. Then for any nonzero real number ε, we have Ũ U =4 (d + d + d + d ) 4, where a denotes the conjugate of a complex number a. This example shows that the eigenvector matrix U isn t a continuous function of matrix elements. Based on the technique of splitting operators and Lyapunov majorants, perturbation bounds of four kinds of Schur decompositions have been presented by some authors(see, e.g.[3, 7, 8, 3, 4]). In this paper, we give the absolute and relative perturbation bounds and condition numbers of the eigenvector matrix U of the matrix A with distinct eigenvalues under Hermitian perturbations. We also present absolute and relative perturbation bounds and condition numbers of singular vector matrices W and V for the singular value decomposition of a real n n matrix with
3 Eigenvector Matrix and Singular Vector Matrices 8 distinct singular values and a real (n +) n matrix with distinct nonzero singular values, respectively. We use the following notation: Let A H and A T stand for the conjugate transpose and transpose of a matrix A, respectively. I n is the identity matrix of order n. The robenius norm and spectral norm of a matrix A are denoted by A and A, respectively. Let C m n (R m n ), C m n D (Rm n D ) and Cm n (Rm n ) denote the sets of complex(real) m n matrices, complex(real) m n diagonal matrices, and complex(real) m n matrices whose diagonal entries are zeros, respectively. Let X =(x ij ) C m n (R m n ), we defined X and X D by { { xij i j 0 i j (X ) ij = and (X 0 i = j D ) ij = i = j respectively. It is evident that any X C m n can be split uniquely as X = X + X D, or example we have a c e x ij X C m n, X D C m n D. b 0 b d = c 0 + f e f a 0 0 d A PERTURBATIO BOUD O THE EIGEVECTOR MATRIX In this section we study the perturbation bound of the eigenvector matrix U of A in the decomposition (.). irst we define the linear operator L : C n n C n n by ( ) (.) L τ X = (XΛ ΛX), ν where Λ=diag(λ,λ,...,λ n ) and τ,ν are positive parameters. The following lemma, whose proof is omitted, is similar to Theorem in [6]. Lemma.. The linear operator L is defined by (.). Then L is nonsingular if and only if all eigenvalues of Λ are simple, i. e. λ i λ j,i j, i, j =,,...,n. Let Λ=diag(λ,λ,...,λ n ), and let L be the operator defined by (.). ow we define the function { } η(λ) = min (XΛ ΛX) ν : X C n n, τ X =. It is easy to see that if L is nonsingular then
4 8 Xiao Shan Chen, Wen Li and Wei Wei Xu (.) η(λ) = L = τ ν min i j λ i λ j, where L is defined by L =max{ L (X) : X C n n, X =}. Lemma.. Let X C n n. Then we have (.3) (X H X) n X. Proof. [7]. oting that X H X is a Hermitian matrix, (.3) follows from (6) in In this section we have the main theorem below. Theorem.. Let A C n n be a Hermitian matrix with distinct eigenvalues and have the spectral decomposition (.). Let à = A+ A be a Hermitian matrix, and let ( ɛ = A, α = τ ) (.4) + A, η(λ) ν νη(λ) n where η(λ) is defined by (.). If ɛ satisfies the following condition (.5) ɛ α + τ +4α, then à has a spectral decomposition à = Ũ ΛŨ H such that (.6) Ũ U τ ɛ αɛ + 4αɛ τ ɛ b u(ɛ). Proof. otice that the robenius norm is unitarily invariant norm. Hence without loss of generality we may assume that U = I n in (.). Write Λ = Λ Λ and U = Ũ U = Ũ I n. Then by ÃŨ AU = Ũ Λ UΛ, we can obtain (.7) AŨ + A U = Ũ Λ + UΛ. Let (.8) E = Ũ H AŨ. oting that Ũ H = I n + U H, then from (.7) we get (.9) UΛ Λ U = E + U H Λ U U H UΛ Λ.
5 Eigenvector Matrix and Singular Vector Matrices 83 By the definition (.) of the operator L and (.9), we can get ( ) (.0) L τ U = ν E + ν ( U H Λ U U H UΛ). Choose a spectral decomposition of à so that the diagonal elements of Ũ are real. Then by U + U H + U H U =0, we get (.) U D = diag( U H U). rom Lemma., we know that L is nonsingular. Hence (.0)-(.) can be rewritten as a continuous mapping Φ:C n n C n n expressed by (.) τ U = ν L (E )+ ν L ( ( U H Λ U U H UΛ) ), U D = diag( U H U). Since ( U H Λ U U H UΛ) Lemma. we have = ( U H Λ U) ( U H U) Λ, from ( U H Λ U U H UΛ) ( U H Λ U) + ( U H U) Λ ) τ (+ A n τ U. otice that diag( U H U) U, E A. Hence the mapping Φ expressed by (.) satisfies τ U ɛ + α τ U, τ U D τ τ U (.3) where ɛ, α are defined by (.4). Let z =(c,c ) T C. Consider the system, (.4) c = ɛ + α(c + c ), c = τ (c + c ). rom (.4) we get the equation (.5) ( 4α τ If ɛ satisfies (.5), then (.6) + τ)c +(αɛ )c + τɛ =0. c = τɛ αɛ + 4αɛ τ ɛ
6 84 Xiao Shan Chen, Wen Li and Wei Wei Xu is a solution to (.5). By c = ɛ+ α τ c and (.6), we get a solution c =(c,c )T to the system (.4). Let X c = {X : X C n n, X τc, X D τc }. Then, X c is a bounded closed convex set of C n n, and the relation (.3) shows that the continuous mapping Φ maps X c into X c. By the Brouwer fixed-point theorem, the mapping Φ has a fixed point U X c such that τ U c = τ c, which yields the desired result (.6). Remark.. Taking τ = ν =and ν = A,τ = U = n, the bound in (.6) is called the absolute and relative perturbation bound, respectively. Remark.. or small ɛ the upper bound b u (ɛ) defined by (.6) has the Taylor expansion ( ) τ b u (ɛ) =ɛ + αɛ (.7) + 8 +α ɛ 3 + O(ɛ 4 ), ɛ 0. Combining (.7) with (.4) we see that the quantities A η(λ) = min i j n λ i λ j for ν = τ =and η(λ) = n min λ for ν = A i λ j,τ = U = n can be i j n regarded as absolute and relative condition numbers of the eigenvector matrix U of the spectral decomposition (.), respectively. Remark.3. Let A C n n have the Schur decomposition A = UTU H, where U is an n n unitary matrix and T is an n n upper triangular matrix. Let Y be strictly lower triangular matrix and the quantity η A is defined by η A =min{ low(ty YT) : Y =}. Konstantinov, Petkov and Christov[7] show that absolute and relative condition numbers of the Schur factor U are A η A and n η A, respectively. When both A and its perturbed matrix are Hermitian, from Remark. it is obvious that condition numbers of the eigenvector matrix U in (.) improve ones of the Schur factor by improving a numerical factor.
7 Eigenvector Matrix and Singular Vector Matrices A PERTURBATIO BOUD O SIGULAR VECTOR MATRICES In this section we investigate perturbation analysis of singular vector matrices for a m n real matrix. Let B R m n (m n) have the singular value decomposition (.)-(.3). Since B is a real matrix, V,Σ, and W may all be taken to be real. irst we define the linear operator P : R m m ( P δ X, ) ( (3.) ω Y = µ (ΣT X Y Σ T ), R n n R n m (ΣY XΣ) µ ), R m n where Σ is defined by (.3), and δ, ω, µ are positive parameters. Let T =(t ij ) R n m,r =(r ij) R m n (m n). or i =,,...,n we define row vectors l i with respect to T and R by (3.) l i =(t i,r i,...,t i,i,r i,i,t i,i+,r i,i+,...,t in,r in ), i =,,...,n. The following lemma determine when the operator P is nonsingular. Lemma 3.. Let P be the linear operator defined by (3.). () If m = n, then P is nonsingular if and only if all the singular values of B are simple, i. e. σ i σ j, i j, i, j =,...,n. () If m = n +, then P is nonsingular if and only if all the singular values of B are simple and σ i > 0,i =,,...,n. (3) If m>n+, then P is singular. by (3.3) and Proof. Let d ij = ( δσi ωσ j µ δσ j ωσ i ),i j, i, j =,...,n (3.4) P =diag(d,...,d n,...,d i,...,d i,i,d i,i+,...,d in...,d n,...,d n,n ). () If m = n, we define a column vector nvec(t,r) of T =(t ij ),R=(r ij ) C n n by nvec(t,r)=(l,l,...,l n ) T, where l i are defined by (3.). Then we have ( (3.5) nvec µ (ΣT X Y Σ T ), ) ( (ΣY XΣ) = P nvec µ δ X, ) ω Y. Hence the operator P is nonsingular if and only if P is a nonsingular matrix. Obviously, P is nonsingular if and only if σ i σ j,i j, i, j =,...,n.
8 86 Xiao Shan Chen, Wen Li and Wei Wei Xu () If m = n +, we define nvec(t,r) of T =(t ij ) R n (n+) and R =(r ij ) R (n+) n by nvec(t,r)=(l,...,l n,t,n+,...,t n,n+,r n+,,...,r n+,n ) T and also define nvec(t,r) of T =(t ij ) R (n+) (n+) and R =(r ij ) R n n by nvec(t,r)=(l,...,l n,t,n+,...,t n,n+,t n+,,...,t n+,n ) T, where l i,i=,,...,nare defined by (3.). Hence we have ( ) ( (3.6) nvec µ (ΣT X Y Σ T ), (ΣY XΣ) = Q nvec µ δ X, ) ω Y, where (3.7) ( Q = diag P, δ µ Σ, ) δ µ Σ. So the operator P is nonsingular if and only if Q is a nonsingular matrix. Obviously, Q is nonsingular if and only if σ i σ j,i j, i, j =,...,n and σ i > 0,i =,,...,n. (3) If m>n+, we take X =(x ij ) R m m and Y R n n as the following form x ij = { 0, (i, j) (n +,n+), (i, j)=(n +,n+), Y =0. It is easy to verify that P annihilates the nonzero matrix pair ( δ X, ω Y ) and must be singular. The proof is complete. Let P be the operator defined by (3.). ow we define the function { ( η(σ) = min µ (ΣT X Y Σ T XΣ)) ), µ (ΣY : ( X R m m,y R n n, δ X, ) } ω Y =. When the operator P is nonsingular, from (3.3)-(3.7) we can obtain (3.8) η(σ)= P µ min ϕ(σ i,σ j ), i j n { = µ min min ϕ(σ i,σ j ), i j n min i=,,...,n δσ i m=n, }, m=n+,
9 Eigenvector Matrix and Singular Vector Matrices 87 where P is defined by P =max{ P (X, Y ) : X R n m,y Rm n, (X, Y ) =} and δω(σi +σ j ) σ i σ j (3.9) ϕ(σ i,σ j )=. (δ +ω )(σ i +σj )+ (σi +σ j ) (δ ω ) +6δ ω σi σ j ow we give the following main result. Theorem 3.. Let m = n or m = n +and B R m n have the singular value decomposition (.)-(.3) with for m = n, σ i σ j, i j n and for m = n +, σ i σ j,σ i > 0, i j n. Moreover, let B = B + B R m n, and B ε =, ζ =max{δ, ω}, η(σ) µ (3.0) ( ) γ = max{δ,ω } m µη(σ) + δω B, where η(σ) is defined by (3.8). If ε satisfies the following condition (3.) ε γ + ζ +4γ, then B has the singular value decomposition (3.) B = W ΣṼ T, where W and Ṽ are m m and n n orthogonal matrices, respectively, and ) Σ Σ =(, Σ = diag( σ, σ,..., σ n ), σ i 0,i=,,...,n 0 such that (3.3) W W δ + Ṽ V ε ω γε+ b w,v (ε). 4γε ζ ε Proof. otice that the robenius norm is unitarily invariant norm. Hence without loss of generality we may assume that W = I m and V = I n in (.). By (.) and (3.) we obtain (3.4) ( B B)Ṽ + B(Ṽ I n)= W ( Σ Σ) + ( W I m )Σ.
10 88 Xiao Shan Chen, Wen Li and Wei Wei Xu Let (3.5) = W T BṼ, Σ = Σ Σ, W = W I m, V = Ṽ I n. (3.4) can be written as the following form (3.6) + W T Σ V = Σ+ W T W Σ. Similarly, from W T ( B B)+( W T I m )B =( Σ Σ)Ṽ T +Σ(Ṽ T I n ) we have (3.7) + W T ΣṼ = Σ+Σ V T Ṽ. Since W = I m + W and Ṽ = I n + V, from (3.6)-(3.7) we can obtain (3.8) Σ T W V Σ T = T + Σ T + V T V Σ T V T Σ T W, Σ V W Σ= + Σ+ W T W Σ W T Σ V. Moreover, the matrices W and V satisfy (3.9) W + W T + W T W =0, V + V T + V T V =0. In terms of the definition (3.) of the operator P, we can obtain from (3.8) ( P δ W, ) ω V ( ) (3.0) = µ ( V T V Σ T V T Σ T W ), µ ( W T W Σ W T Σ V ) rom (3.9) we have µ (( T ), ). (3.) ( W D, V D )= ( ( W T W ) D, ( V T ) V ) D. Since the operator P is nonsingular, then (3.0) can be rewritten as ( δ W, ) ω V (3.) = µ P ( ( V T V Σ T V T Σ T W ), ( W T W Σ W T Σ V ) ) µ P ( ( T ), ). By Lemma. we have
11 Eigenvector Matrix and Singular Vector Matrices 89 ( V T V Σ T ), ( W T W Σ) and = (( V T V ) Σ T, ( W T W ) Σ) max{δ,ω } Σ ( ω ( V T V ), δ ( W T W ) ) max{δ,ω } B ( m δ W, ) ω V ( V T Σ W ), ( W T Σ V ) δω B ( δ W, ω V ). Hence we get ( ( V T V Σ T V T Σ W ), ( W T W Σ W T ) Σ V ) ( ) (3.3) max{δ,ω } m ( + δω B δ W, ) ω V. rom (3.5) we have (3.4) (( T ), ) B. Hence by (3.)-(3.4), we get ( ( δ W, ) ω V ε + γ δ W, ) (3.5) ω V. By (3.) we get ( (3.6) δ W D, ω V D) ( ) ζ δ W, ω V. where ζ is defined by (3.0). ext in terms of (3.5) and (3.6), we take the similar proof as Theorem.3 to get the bound (3.3). The proof is complete. Remark 3.. Taking δ = ω = µ = and µ = B,δ = W,ω = V, the bound in (3.3) is called the absolute and relative perturbation bounds, respectively. Remark 3.. or small ε the upper bound b w,v (ε) defined by (3.3) has the Taylor expansion ( ) ζ b w,v (ε) =ε + γε (3.7) + 8 +γ ε 3 + O(ε 4 ), ε 0.
12 90 Xiao Shan Chen, Wen Li and Wei Wei Xu Combining (3.7) with (3.8)-(3.9) we see that the quantity η(σ) can be regarded as a condition number of the left singular vector matrix W and the right singular vector matrix V of the singular value decomposition (.) of a real n n or (n +) n matrix B. Taking µ = δ = ω =, the quantity min η(σ) = σ i σ j, m = n, i j n { max min σ i σ j, min σ i i j n i=,,...,n }, m = n + is regarded as the absolute condition number; Taking µ = B,δ = W,ω = V, the quantity B n min σ i σ j, m = n, η(σ) = i j n { } B max min min n+σi, m = n +, i j n ϕ(σ i,σ j ), i=,,...,n where ϕ(σ i,σ j )= n(n +)(σi + σ j ) σ i σ j (n +)(σ i + σj )+ (σi + σ j ) +6n(n +)σi σ j regarded as the relative condition number. 4. UMERICAL EXAMPLES In this section we use two simple examples to illustrate the results of previous two sections. All computations were performed by using MATLAB 6.5. The relative machine precision is Example 4.. Let A = diag(,,.00) and its perturbed matrix 0. 5 A = Taking U = I, Λ=A, then A + A has the spectral decomposition A + A = Ũ Ũ H, where Ũ = and
13 Eigenvector Matrix and Singular Vector Matrices 9 = diag( , , ). Computation gives Ũ U = , Ũ U U = Taking ν = τ = in Theorem.3, we obtain the following absolute perturbation bound b u (ɛ) = ; Taking ν = A,τ = U relative perturbation bound = 3 in Theorem.3, we obtain the following b u (ɛ) = The numerical results shows that the upper bound of (.6) is fairly sharp. Example 4.. Let B = , B = , and B =B+ B. Let B =W ΣV T be the singular value decomposition of B, where W =I 4,V =I 3. Then B has the singular value decomposition B = W ΣṼ T, where W = , Σ = and Ṽ =
14 9 Xiao Shan Chen, Wen Li and Wei Wei Xu Computation gives Ũ U + Ṽ V = , Ũ U U + Ṽ V V = Taking µ = δ = ω =in Theorem 3., we obtain the absolute perturbation bound b w,v (ε) = ; Taking µ = B,δ = U =,ω = V = 3 in Theorem 3., we obtain the relative perturbation bound b w,v (ε) = The numerical results show that the upper bound of (3.3) is correct. 5. COCLUSIO In this paper we have presented perturbation bounds of the eigenvector matrix of a Hermitian matrix and the singular vector matrices of a n n or (n +) n real matrix. The analysis is based on techniques proposed in [3, 7, 8, 3, 4]. ote that for the singular value decomposition B = W ΣV H of a complex matrix B, the diagonal elements of W and V may not be real at the same time. Hence for complex matrices, (3.) maybe not holds. It remains an open problem how the result in Theorem 3. is extended to the complex case. ACKOWLEDGMETS The authors are grateful to the referee for his/her many useful suggestions. REERECES. Z. Bai, J. Demmel, J. Dongarra, A. Ruhe and H. Van Der Vorst, Templates for the solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia, R. Bhatia, C. Davis and A. Mcintosh, Erturbation of spectral subspaces and solution of linear operator equations, PLinear Algebra Appl., 5/53 (983), X. S. Chen, Perturbation bounds for the periodic Schur decomposition, BIT umer. Math., 50 (00), 4-58.
15 Eigenvector Matrix and Singular Vector Matrices C. Davis and W. Kahan, The rotation of eigenvectors by a perturbation, III, SIAM J. umer. Anal., 7 (970), R. A. Horn and C. R. Johnson, Matrix Analysis, Cambrdge University Press, ew York, R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, MA, M. Konstantinov, P. Hr. Petkov and. D. Christov, on-local perturbation analysis of the Schur system of a matrix, SIAM J. Matrix Anal. Appl., 5 (994), M. Konstantiov, V. Mehrmann and P. Petkov, Perturbation analysis of Hamilton Schur and block-schur forms, SIAM J. Matrix Anal. Appl., 3 (00), R. C. Li, On perturbations of matrix pencils with real spectra, Math. Compt., 6 (994), R. C. Li, Relative perturbation theory: II. Eigenspace and singular subspace variations, SIAM J. Matrix Anal. Appl., 0 (998), J. M. Ortega and W. C. Rheinboldt, Iterative solution of nonlinear equations in several variables, Academic Press, ew York, B.. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, J, J. G. Sun, Perturbation bounds for the Schur decomposition, Report UMI-9.0, ISS Institute of Information Processing, University of Umea, Sweden, J. G. Sun, Perturbation bounds for the generalized Shur decomposition, SIAM J. Matrix Anal. Appl., 6 (995), G. W. Stewart and J. G. Sun, Matrix Perturbation Theory, Academic Press, Boston, G. W. Stewart, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Rev., 5 (973), P. A. Wedin, Perturbation bounds in connection with singular value decomposition, BIT umer. Math., (97), 99-. Xiao Shan Chen and Wen Li School of Mathematics South China ormal University Guangzhou 5063 P. R. China chenxs33@63.com liwen@scnu.edu.cn
16 94 Xiao Shan Chen, Wen Li and Wei Wei Xu Wei Wei Xu Institute of Computational Mathematics and Scientific/Engineering Computing Academy of Mathematics and Systems Science Chinese Academy of Sciences Beijing P. R. China
ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationA Note on Simple Nonzero Finite Generalized Singular Values
A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More informationThe Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation
The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationMultiplicative Perturbation Analysis for QR Factorizations
Multiplicative Perturbation Analysis for QR Factorizations Xiao-Wen Chang Ren-Cang Li Technical Report 011-01 http://www.uta.edu/math/preprint/ Multiplicative Perturbation Analysis for QR Factorizations
More informationEIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..
EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationEigenvalue and Eigenvector Problems
Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems
More informationA DIVIDE-AND-CONQUER METHOD FOR THE TAKAGI FACTORIZATION
SIAM J MATRIX ANAL APPL Vol 0, No 0, pp 000 000 c XXXX Society for Industrial and Applied Mathematics A DIVIDE-AND-CONQUER METHOD FOR THE TAKAGI FACTORIZATION WEI XU AND SANZHENG QIAO Abstract This paper
More informationRITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY
RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationOn the Local Convergence of an Iterative Approach for Inverse Singular Value Problems
On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.
More informationLecture 10 - Eigenvalues problem
Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationMULTIPLICATIVE PERTURBATION ANALYSIS FOR QR FACTORIZATIONS. Xiao-Wen Chang. Ren-Cang Li. (Communicated by Wenyu Sun)
NUMERICAL ALGEBRA, doi:10.3934/naco.011.1.301 CONTROL AND OPTIMIZATION Volume 1, Number, June 011 pp. 301 316 MULTIPLICATIVE PERTURBATION ANALYSIS FOR QR FACTORIZATIONS Xiao-Wen Chang School of Computer
More informationOn condition numbers for the canonical generalized polar decompostion of real matrices
Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 57 2013 On condition numbers for the canonical generalized polar decompostion of real matrices Ze-Jia Xie xiezejia2012@gmail.com
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture
More informationIterative methods for symmetric eigenvalue problems
s Iterative s for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 11, 2008 s 1 The power and its variants Inverse power Rayleigh quotient
More informationON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES
olume 10 2009, Issue 2, Article 41, 10 pp. ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG, AND HUA SHAO COLLEGE OF MATHEMATICS AND PHYSICS CHONGQING UNIERSITY
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationSome bounds for the spectral radius of the Hadamard product of matrices
Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard
More informationYimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract
Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory
More informationON THE QR ITERATIONS OF REAL MATRICES
Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix
More informationOn the Modification of an Eigenvalue Problem that Preserves an Eigenspace
Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov
More information1 Quasi-definite matrix
1 Quasi-definite matrix The matrix H is a quasi-definite matrix, if there exists a permutation matrix P such that H qd P T H11 H HP = 1 H1, 1) H where H 11 and H + H1H 11 H 1 are positive definite. This
More informationFor δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])
LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and
More informationLinear Algebra using Dirac Notation: Pt. 2
Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES
ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,
More informationFrame Diagonalization of Matrices
Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)
More informationSensitivity analysis of the differential matrix Riccati equation based on the associated linear differential system
Advances in Computational Mathematics 7 (1997) 295 31 295 Sensitivity analysis of the differential matrix Riccati equation based on the associated linear differential system Mihail Konstantinov a and Vera
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationDirect methods for symmetric eigenvalue problems
Direct methods for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 4, 2008 1 Theoretical background Posing the question Perturbation theory
More informationPolynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems
Polynomial Jacobi Davidson Method for Large/Sparse Eigenvalue Problems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan April 28, 2011 T.M. Huang (Taiwan Normal Univ.)
More informationMajorization for Changes in Ritz Values and Canonical Angles Between Subspaces (Part I and Part II)
1 Majorization for Changes in Ritz Values and Canonical Angles Between Subspaces (Part I and Part II) Merico Argentati (speaker), Andrew Knyazev, Ilya Lashuk and Abram Jujunashvili Department of Mathematics
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationLinear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.
Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear
More informationMath 489AB Exercises for Chapter 2 Fall Section 2.3
Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial
More informationStructured Condition Numbers of Symmetric Algebraic Riccati Equations
Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations
More informationLinear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices
Linear maps preserving rank of tensor products of matrices Journal: Manuscript ID: GLMA-0-0 Manuscript Type: Original Article Date Submitted by the Author: -Aug-0 Complete List of Authors: Zheng, Baodong;
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationSchur s Triangularization Theorem. Math 422
Schur s Triangularization Theorem Math 4 The characteristic polynomial p (t) of a square complex matrix A splits as a product of linear factors of the form (t λ) m Of course, finding these factors is a
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationTHE RELATION BETWEEN THE QR AND LR ALGORITHMS
SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More informationHere is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J
Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.
More informationOUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact
Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu
More informationAbstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.
HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization
More informationThe University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.
The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationOn the Hermitian solutions of the
Journal of Applied Mathematics & Bioinformatics vol.1 no.2 2011 109-129 ISSN: 1792-7625 (print) 1792-8850 (online) International Scientific Press 2011 On the Hermitian solutions of the matrix equation
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of
More informationInterpolating the arithmetic geometric mean inequality and its operator version
Linear Algebra and its Applications 413 (006) 355 363 www.elsevier.com/locate/laa Interpolating the arithmetic geometric mean inequality and its operator version Rajendra Bhatia Indian Statistical Institute,
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationON PERTURBATIONS OF MATRIX PENCILS WITH REAL SPECTRA. II
MATEMATICS OF COMPUTATION Volume 65, Number 14 April 1996, Pages 637 645 ON PERTURBATIONS OF MATRIX PENCILS WIT REAL SPECTRA. II RAJENDRA BATIA AND REN-CANG LI Abstract. A well-known result on spectral
More informationMath 577 Assignment 7
Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the
More informationMATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006
MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 Sherod Eubanks HOMEWORK 2 2.1 : 2, 5, 9, 12 2.3 : 3, 6 2.4 : 2, 4, 5, 9, 11 Section 2.1: Unitary Matrices Problem 2 If λ σ(u) and U M n is unitary, show that
More informationData Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples
Data Analysis and Manifold Learning Lecture 2: Properties of Symmetric Matrices and Examples Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline
More informationOff-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems
Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationbe a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u
MATH 434/534 Theoretical Assignment 7 Solution Chapter 7 (71) Let H = I 2uuT Hu = u (ii) Hv = v if = 0 be a Householder matrix Then prove the followings H = I 2 uut Hu = (I 2 uu )u = u 2 uut u = u 2u =
More informationSymmetric and self-adjoint matrices
Symmetric and self-adjoint matrices A matrix A in M n (F) is called symmetric if A T = A, ie A ij = A ji for each i, j; and self-adjoint if A = A, ie A ij = A ji or each i, j Note for A in M n (R) that
More informationZ-Pencils. November 20, Abstract
Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is
More informationThe Hermitian R-symmetric Solutions of the Matrix Equation AXA = B
International Journal of Algebra, Vol. 6, 0, no. 9, 903-9 The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B Qingfeng Xiao Department of Basic Dongguan olytechnic Dongguan 53808, China
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationMatrix Perturbation Theory
5 Matrix Perturbation Theory Ren-Cang Li University of Texas at Arlington 5 Eigenvalue Problems 5-5 Singular Value Problems 5-6 53 Polar Decomposition 5-7 54 Generalized Eigenvalue Problems 5-9 55 Generalized
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationACI-matrices all of whose completions have the same rank
ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices
More informationCAAM 454/554: Stationary Iterative Methods
CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are
More informationIntroduction to Iterative Solvers of Linear Systems
Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationModified Gauss Seidel type methods and Jacobi type methods for Z-matrices
Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,
More informationHomework 1. Yuan Yao. September 18, 2011
Homework 1 Yuan Yao September 18, 2011 1. Singular Value Decomposition: The goal of this exercise is to refresh your memory about the singular value decomposition and matrix norms. A good reference to
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationThe semi-convergence of GSI method for singular saddle point problems
Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More information2 Computing complex square roots of a real matrix
On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b
More informationELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES
Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse
More informationMATH36001 Generalized Inverses and the SVD 2015
MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications
More informationBanach Journal of Mathematical Analysis ISSN: (electronic)
Banach J. Math. Anal. 6 (2012), no. 1, 139 146 Banach Journal of Mathematical Analysis ISSN: 1735-8787 (electronic) www.emis.de/journals/bjma/ AN EXTENSION OF KY FAN S DOMINANCE THEOREM RAHIM ALIZADEH
More informationNotes on Eigenvalues, Singular Values and QR
Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square
More informationNotes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.
Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where
More informationDepartment of Mathematics Technical Report May 2000 ABSTRACT. for any matrix norm that is reduced by a pinching. In addition to known
University of Kentucky Lexington Department of Mathematics Technical Report 2000-23 Pinchings and Norms of Scaled Triangular Matrices 1 Rajendra Bhatia 2 William Kahan 3 Ren-Cang Li 4 May 2000 ABSTRACT
More informationWe first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix
BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity
More informationMultiple eigenvalues
Multiple eigenvalues arxiv:0711.3948v1 [math.na] 6 Nov 007 Joseph B. Keller Departments of Mathematics and Mechanical Engineering Stanford University Stanford, CA 94305-15 June 4, 007 Abstract The dimensions
More informationON MATRIX BALANCING AND EIGENVECTOR COMPUTATION
ON MATRIX BALANCING AND EIGENVECTOR COMPUTATION RODNEY JAMES, JULIEN LANGOU, AND BRADLEY R. LOWERY arxiv:40.5766v [math.na] Jan 04 Abstract. Balancing a matrix is a preprocessing step while solving the
More informationA MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS
A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed
More information