Linear Algebra and its Applications
Linear Algebra and its Applications 437 (2012)

Contents lists available at SciVerse ScienceDirect: Linear Algebra and its Applications

Perturbation of multiple eigenvalues of Hermitian matrices

Ren-Cang Li a,1, Yuji Nakatsukasa b,2, Ninoslav Truhar c,*,3, Wei-guo Wang d,4

a Department of Mathematics, University of Texas at Arlington, P.O. Box 19408, Arlington, TX, USA
b School of Mathematics, The University of Manchester, Manchester M13 9PL, UK
c Department of Mathematics, J.J. Strossmayer University of Osijek, Trg Lj. Gaja 6, Osijek, Croatia
d School of Mathematical Sciences, Ocean University of China, 238 Songling Road, Qingdao, PR China

ARTICLE INFO

Article history: Received 25 September 2011; accepted 26 January 2012; available online 29 February 2012. Submitted by V. Mehrmann.

AMS classification: 15A22, 15A42, 65F15

Keywords: Graded perturbation; Multiple eigenvalue; Generalized eigenvalue problem

ABSTRACT

This paper is concerned with the perturbation of a multiple eigenvalue $\mu$ of the Hermitian matrix $A = \operatorname{diag}(\mu I, A_{22})$ when it undergoes an off-diagonal perturbation $E$ whose columns have widely varying magnitudes. When some of $E$'s columns are much smaller than the others, some copies of $\mu$ are much less sensitive than any existing bound suggests. We explain this phenomenon by establishing individual perturbation bounds for the different copies of $\mu$. They show that when $A_{22} - \mu I$ is definite the $i$th bound scales quadratically with the norm of the $i$th column, and in the indefinite case the bound is necessarily proportional to the product of $E$'s $i$th column norm and $E$'s norm. An extension to the generalized Hermitian eigenvalue problem is also presented. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

Consider the eigenvalue problem for the Hermitian matrix $\tilde A$:

$$\tilde A = \begin{pmatrix} A_{11} & E^* \\ E & A_{22} \end{pmatrix}, \qquad A_{11} = \mu I_m, \tag{1.1}$$

where $A_{11}$ is $m\times m$ and $A_{22}$ is $n\times n$.

* Corresponding author. E-mail addresses: rcli@uta.edu (R.-C. Li), yuji.nakatsukasa@manchester.ac.uk (Y. Nakatsukasa), ntruhar@mathos.hr (N. Truhar), wgwang@ouc.edu.cn (W.-g. Wang).
1 Supported in part by National Science Foundation Grants DMS- and DMS- .
2 Supported in part by Engineering and Physical Sciences Research Council Grant EP/I005293/1.
3 Supported in part by the "Passive control of mechanical models" Grant No. of the Croatian MZOS.
4 Supported in part by the National Natural Science Foundation of China under Grants and , the China Scholarship Council, and the Shandong Province Natural Science Foundation (Y2008A07). This work was initiated while this author was visiting the University of Texas at Arlington from September 2010 to August .
where the superscript $*$ takes the complex conjugate transpose of a matrix or a vector, and $I_m$ (or simply $I$ later if its dimension is clear from the context) is the $m\times m$ identity matrix. If $E$ is a zero block, then $\mu$ is a multiple eigenvalue of $\tilde A$ with multiplicity $m$. In general, if $E$ is small then $\tilde A$ has $m$ eigenvalues close to $\mu$. In fact more can be said quantitatively. Let $\eta$ be the eigenvalue gap between $A_{11} = \mu I$ and $A_{22}$, defined as

$$\eta = \min_{\nu\in\operatorname{eig}(A_{22})} |\mu - \nu|, \tag{1.2}$$

where $\operatorname{eig}(A_{22})$ is the set of the eigenvalues of $A_{22}$, and let

$$\varepsilon = \|E\|_2, \tag{1.3}$$

where $\|\cdot\|_2$ is either the spectral norm of a matrix or the $\ell_2$-norm of a vector. The main result in [1] says that $\tilde A$ has $m$ eigenvalues $\theta_1,\dots,\theta_m$ such that

$$|\mu - \theta_j| \le \frac{2\varepsilon^2}{\eta + \sqrt{\eta^2 + 4\varepsilon^2}} \quad\text{for } 1\le j\le m. \tag{1.4}$$

The right-hand side of (1.4) is of second order in $\varepsilon$ if $\eta > 0$ and is never larger than $\varepsilon$. As confirmed by the 2-by-2 example in [1], in general these inequalities cannot be improved without knowing more about $E$ than just $\varepsilon = \|E\|_2$.

Suppose now that we do have additional information on $E$. For example, consider the case where one of the columns of $E$ is zero, for which $\theta_i = \mu$ for some $i$. Can we derive bounds that reflect the fact that a zero column leads to some $\theta_i$ being $\mu$? A possible and quick answer can be given as follows: first zero out the $j$th column of $E$, and then use the well-known perturbation theorem (attributed to Lidskii, Weyl, Wielandt and Mirsky in various forms [2, pp. ]) to conclude that $\tilde A$ has an eigenvalue $\theta$ that differs from $\mu$ by no more than $\|E_{(:,j)}\|_2$, where $E_{(:,j)}$ denotes the $j$th column of $E$. This obviously implies that if $E$'s $j$th column is a zero column, then $\mu$ must be an eigenvalue of $\tilde A$. But there are two drawbacks to this quick answer:

1. $\|E_{(:,j)}\|_2$ can potentially be (much) larger than the right-hand side of (1.4), making the estimate less favorable than (1.4).
2.
This does not imply that $\tilde A$ has $m$ eigenvalues $\theta_j$ such that $|\mu - \theta_j| \le \|E_{(:,j)}\|_2$, because some of the $\theta$ produced by this argument could be the same eigenvalue of $\tilde A$, as mentioned in [3, Section 11.5].

The purpose of this article is to develop a theory that, unlike (1.4), reflects the effect of disparity in the magnitudes of the columns of $E$ on the eigenvalues of $\tilde A$, through establishing different bounds for the $m$ eigenvalues of $\tilde A$ closest to $\mu$. For the sake of convenience, throughout this paper $\eta$ and $\varepsilon$ are always defined by (1.2) and (1.3), respectively, and we set

$$\epsilon_j = \|E_{(:,i_j)}\|_2 \quad\text{for } 1\le j\le m, \tag{1.5}$$

where $\{i_1, i_2, \dots, i_m\}$ is the permutation of $\{1, 2, \dots, m\}$ such that

$$\epsilon_1 \le \epsilon_2 \le \cdots \le \epsilon_m. \tag{1.6}$$

It is well known that $\epsilon_m \le \varepsilon \le \sqrt{m}\,\epsilon_m$. The eigenvalues of $E^*E$ are $\tau_1, \tau_2, \dots, \tau_m$, arranged in ascending order:

$$0 \le \tau_1 \le \tau_2 \le \cdots \le \tau_m. \tag{1.7}$$

We will also use $X \prec Y$ ($X \preceq Y$) for two Hermitian matrices of the same size to mean that $Y - X$ is positive definite (semi-definite), and $X \succ Y$ ($X \succeq Y$) to mean $Y \prec X$ ($Y \preceq X$). In particular, $X \succ 0$ ($X \succeq 0$) means that $X$ is positive definite (semi-definite).
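The quantities just defined, the gap (1.2), $\varepsilon$ in (1.3), and the uniform bound (1.4), are easy to experiment with numerically. The following sketch in Python with NumPy does so on a small example; all matrices below are illustrative choices made for this sketch, not data from the paper.

```python
import numpy as np

# Numerical illustration of the gap eta (1.2), eps = ||E||_2 (1.3), and the
# uniform bound (1.4). The matrices are illustrative choices for this sketch.
rng = np.random.default_rng(0)
m, n = 2, 3
mu = 0.0
A22 = np.diag([1.0, -2.0, 3.0])            # eigenvalues well separated from mu
E = 1e-3 * rng.standard_normal((n, m))     # small off-diagonal perturbation

A_tilde = np.block([[mu * np.eye(m), E.T],
                    [E,              A22]])

eta = np.min(np.abs(mu - np.linalg.eigvalsh(A22)))         # gap (1.2)
eps = np.linalg.norm(E, 2)                                 # spectral norm (1.3)
bound = 2 * eps**2 / (eta + np.sqrt(eta**2 + 4 * eps**2))  # rhs of (1.4)

# The m eigenvalues of A_tilde nearest mu all satisfy (1.4).
theta = np.sort(np.abs(np.linalg.eigvalsh(A_tilde) - mu))[:m]
assert np.all(theta <= bound + 1e-12)
```

Since $\eta = 1$ here, the bound is essentially $\varepsilon^2$; note that this single quantity bounds all $m$ copies of $\mu$ alike, which is exactly the limitation the paper addresses.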
Our perturbation bounds are actually presented in terms of $\tau_j$, the eigenvalues of $E^*E$. Because of Lemma 3.1 below, they can be easily turned into bounds in terms of $\epsilon_j$, in order to serve our purpose of developing a theory that reflects the effect of disparity in the magnitudes of the columns of $E$.

The rest of this paper is organized as follows. We first investigate specific examples in Section 2, which provide insights into possible bounds that could be expected. In Section 3 we give our main results, in which we separately deal with the cases where $A_{22} - \mu I$ is definite or indefinite. For the indefinite case, we give asymptotic estimates that are correct up to fourth-order terms, as well as strict bounds that are slightly larger than the asymptotic estimates. In Section 4 we describe how our bounds can be extended to the generalized eigenvalue problem. Finally we summarize our conclusions in Section 5.

2. Motivational examples

The examples below will shape our expectations of the possible effects of different magnitudes of the columns of $E$ on the eigenvalues of $\tilde A$ nearest $\mu$.

Example 2.1. Consider the 4-by-4 matrix $\tilde A$ given by

$$E = \begin{pmatrix} \dots \end{pmatrix}, \qquad A_{22} = \begin{pmatrix} 1 & 0 \\ 0 & \dots \end{pmatrix}, \qquad \tilde A = \begin{pmatrix} 0 & E^* \\ E & A_{22} \end{pmatrix}.$$

In this case $A_{11} = 0$, i.e., $\mu = 0$ in (1.1), and $\eta = 1$. The two eigenvalues of $\tilde A$ closest to 0 are

$$\dots \quad\text{and}\quad \dots, \tag{2.1}$$

which are about $-\epsilon_1^2 = -\|E_{(:,1)}\|_2^2$ and $-\epsilon_2^2 = -\|E_{(:,2)}\|_2^2$, respectively. The inequality (1.4) says $\tilde A$ has two eigenvalues that differ from 0 by no more than its common right-hand side. This estimate is very good for the second eigenvalue in (2.1) but not so for the first one, which is about the square of the estimate or less. The quick answer, on the other hand, says $\tilde A$ has an eigenvalue that differs from 0 by no more than $\epsilon_1$ and an eigenvalue that differs from 0 by no more than $\epsilon_2$, providing even worse estimates than (1.4).
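An experiment in the spirit of Example 2.1 can be run in a few lines of Python. The column norms $\epsilon_1 = 10^{-4}$, $\epsilon_2 = 10^{-2}$ and the diagonal $A_{22}$ below are illustrative assumptions of this sketch, not the paper's data; they reproduce the same phenomenon, each copy of $\mu = 0$ moving by roughly the square of its own column norm.

```python
import numpy as np

# An experiment in the spirit of Example 2.1. The numbers eps1, eps2 and the
# diagonal A22 are illustrative assumptions, not the paper's data.
eps1, eps2 = 1e-4, 1e-2            # widely varying column norms of E
E = np.diag([eps1, eps2])
A22 = np.diag([1.0, 2.0])          # mu = 0 and gap eta = 1
A_tilde = np.block([[np.zeros((2, 2)), E.T],
                    [E,               A22]])

near0 = np.sort(np.abs(np.linalg.eigvalsh(A_tilde)))[:2]   # eigenvalues nearest 0

# Each copy of mu = 0 moves by roughly eps_j^2 / (A22)_jj, the square of "its
# own" column norm, not by the single uniform amount that (1.4) predicts.
assert abs(near0[0] / eps1**2 - 1) < 1e-3        # ~ eps1^2
assert abs(near0[1] / (eps2**2 / 2) - 1) < 1e-3  # ~ eps2^2 / 2
```

Because this $E$ and $A_{22}$ are diagonal, the problem decouples into two 2-by-2 blocks, which makes the per-column quadratic scaling easy to verify by hand as well.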
Example 2.1 may lead us to believe that there are $m$ properly ordered eigenvalues $\theta_1,\dots,\theta_m$ of $\tilde A$ with each difference $|\mu - \theta_j|$ being of second order in $\epsilon_j = \|E_{(:,j)}\|_2$ if $\eta > 0$. Later we will show that this is indeed true if $A_{22} - \mu I$ is definite, but not so in the general case, as we can see from the next example.

Example 2.2. Consider the 4-by-4 matrix $\tilde A$ given by

$$E = \begin{pmatrix} \delta_1 & 0 \\ 0 & \delta_2 \end{pmatrix}, \qquad A_{22} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \tilde A = \begin{pmatrix} 0 & E^* \\ E & A_{22} \end{pmatrix},$$

where both $\delta_i$ are real numbers and $|\delta_i| \ll 1$. The characteristic equation of $\tilde A$ is

$$\lambda^4 - (\delta_1^2 + \delta_2^2 + 1)\lambda^2 + \delta_1^2\delta_2^2 = 0,$$

whose two smallest roots in magnitude satisfy

$$2\lambda^2 = \delta_1^2 + \delta_2^2 + 1 - \sqrt{\big[1 + (\delta_1 + \delta_2)^2\big]\big[1 + (\delta_1 - \delta_2)^2\big]} \approx 2\,\delta_1^2\delta_2^2.$$

Thus $|\lambda| / |\delta_1\delta_2| = 1 + O(\delta_1^2 + \delta_2^2)$. It follows that the smaller eigenvalue in magnitude can be made arbitrarily larger than $O(\min\{\delta_1^2, \delta_2^2\})$.
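Example 2.2 is concrete enough to check directly. In the sketch below the values $\delta_1 \ll \delta_2$ are illustrative choices; the assertion confirms that with an indefinite $A_{22}$ the eigenvalue nearest 0 has magnitude about $\delta_1\delta_2$, the product of a column norm and $\|E\|_2$, rather than anything quadratic in $\delta_1$.

```python
import numpy as np

# Numerical check of Example 2.2 with illustrative values delta1 << delta2.
# A22 is indefinite, and the eigenvalue of A_tilde nearest 0 has magnitude
# about delta1*delta2, far larger than min(delta1, delta2)**2.
d1, d2 = 1e-6, 1e-2
E = np.diag([d1, d2])
A22 = np.array([[0.0, 1.0],
                [1.0, 0.0]])                 # indefinite, eta = 1
A_tilde = np.block([[np.zeros((2, 2)), E.T],
                    [E,               A22]])

lam = np.min(np.abs(np.linalg.eigvalsh(A_tilde)))
assert abs(lam / (d1 * d2) - 1) < 1e-3       # |lambda| ~ d1*d2
assert lam > 100 * min(d1, d2)**2            # far exceeds the quadratic scale
```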
3. Main results

Throughout this section, $\tilde A$ is Hermitian and given by (1.1). Without loss of generality, we assume $\mu = 0$. Since by assumption $\mu$ is not an eigenvalue of $A_{22}$, $A_{22}$ is non-singular as a result of assuming $\mu = 0$, and the gap $\eta$ as defined by (1.2) now is $\eta = 1/\|A_{22}^{-1}\|_2$.

For any $\lambda \notin \operatorname{eig}(A_{22})$, set

$$X = \begin{pmatrix} I & -E^*(A_{22} - \lambda I)^{-1} \\ 0 & I \end{pmatrix}.$$

Then

$$X(\tilde A - \lambda I)X^* = \begin{pmatrix} -E^*(A_{22} - \lambda I)^{-1}E - \lambda I & 0 \\ 0 & A_{22} - \lambda I \end{pmatrix}, \tag{3.1}$$

and thus

$$\det(\tilde A - \lambda I) = \det\!\big({-E^*(A_{22} - \lambda I)^{-1}E - \lambda I}\big)\,\det(A_{22} - \lambda I). \tag{3.2}$$

From this we see that any eigenvalue $\lambda$ of $\tilde A$ not in $\operatorname{eig}(A_{22})$ is a root of

$$\det\!\big({-E^*(A_{22} - \lambda I)^{-1}E - \lambda I}\big). \tag{3.3}$$

Recall from (1.1) that for $E$ sufficiently small in magnitude, the eigenvalues of $\tilde A$ consist of two subsets: one spawned from the $m$ copies of $\mu$ and another from the eigenvalues in $\operatorname{eig}(A_{22})$ upon being moved by $E$. Hence $\tilde A$ has $m$ eigenvalues close to 0, and these $m$ eigenvalues are zeros of (3.3) near 0. Note that for $|\lambda|\,\|A_{22}^{-1}\|_2 = |\lambda|/\eta < 1$ we can write $(A_{22} - \lambda I)^{-1} = \sum_{j=0}^{\infty} \lambda^j A_{22}^{-j-1}$, so for such $\lambda$ we have

$$-E^*(A_{22} - \lambda I)^{-1}E - \lambda I = -\sum_{j=0}^{\infty} \lambda^j E^* A_{22}^{-j-1} E - \lambda I. \tag{3.4}$$

Theorem 3.1. Let $\tilde A$ be a Hermitian matrix of the form (1.1) with $\mu = 0$.

1. Assume $\varepsilon < \sqrt{3/4}\,\eta$. Then
   (a) $\tilde A$ has exactly $m$ eigenvalues $\theta_j$ in the open interval $(-\eta/2, \eta/2)$, and moreover
   $$|\theta_j| \le \frac{2\varepsilon^2}{\eta + \sqrt{\eta^2 + 4\varepsilon^2}} \quad\text{for } 1\le j\le m; \tag{3.5}$$
   (b) the function (3.3) has exactly $m$ zeros in $(-\eta/2, \eta/2)$, and these zeros are precisely the eigenvalues $\theta_j$ of $\tilde A$.
2. $\tilde A$ has $m$ eigenvalues $\theta_j = \vartheta_j + O(\varepsilon^4/\eta^2)$, where $\vartheta_j$ for $1\le j\le m$ are the eigenvalues of $-E^*A_{22}^{-1}E$. In particular, if $\eta = O(1)$, then $\theta_j = \vartheta_j + O(\varepsilon^4)$.

Proof. Since $4t^2 / \big(1 + \sqrt{1 + 4t^2}\,\big) < 1$ if $t^2 < 3/4$, we have

$$\frac{2\varepsilon^2}{\eta + \sqrt{\eta^2 + 4\varepsilon^2}} < \frac{\eta}{2} \quad\text{if } \left(\frac{\varepsilon}{\eta}\right)^2 < \frac34.$$
By the main result of [1], we conclude that $\tilde A$ has exactly $m$ eigenvalues $\theta_j$ in the open interval $(-\eta/2, \eta/2)$ and that (3.5) holds. Item 1(b) is a consequence of Item 1(a), (3.2), and $\det(A_{22} - \lambda I) \ne 0$ for $\lambda\in(-\eta/2, \eta/2)$.

The expression in (3.4) is equal to $-E^*A_{22}^{-1}E - \lambda I$, up to $O(\varepsilon^4/\eta^2)$, for $|\lambda| = O(\varepsilon^2/\eta)$. Since by (1.4) $\tilde A$ has exactly $m$ eigenvalues no larger than $O(\varepsilon^2/\eta)$ in magnitude, we conclude that $\theta_j = \vartheta_j + O(\varepsilon^4/\eta^2)$ for $1\le j\le m$.

Example 2.1 (revisited). The eigenvalues of $-E^*A_{22}^{-1}E$ are $\dots$ and $\dots$, which are extremely close to the exact values given in (2.1).

Theorem 3.1 gives asymptotic estimates for $\theta_j$ in terms of $\vartheta_j$. In the subsections that follow, we will establish bounds that reflect the effect of disparity in the magnitudes of the columns of $E$. To this end, we normalize the columns of $E$ by their $\ell_2$-norms to get

$$E = E_0 D, \tag{3.6}$$

where

$$D = \operatorname{diag}\big(\|E_{(:,1)}\|_2, \|E_{(:,2)}\|_2, \dots, \|E_{(:,m)}\|_2\big), \tag{3.7a}$$

$$(E_0)_{(:,j)} = \begin{cases} E_{(:,j)} / \|E_{(:,j)}\|_2, & \text{if } E_{(:,j)} \ne 0, \\ 0, & \text{if } E_{(:,j)} = 0. \end{cases} \tag{3.7b}$$

Lemma 3.1. Let $\tau_1, \tau_2, \dots, \tau_m$ be the eigenvalues of $E^*E$, arranged in ascending order as in (1.7), and let $\epsilon_j$ be defined as in (1.5) and (1.6). Then

$$\tau_j \le \|E_0\|_2^2\,\epsilon_j^2 \le m\,\epsilon_j^2. \tag{3.8}$$

Proof. Use $0 \preceq E^*E = D E_0^* E_0 D \preceq \|E_0\|_2^2\, D^2$ to get $\tau_j \le \|E_0\|_2^2\,(D^2)_{(i_j,i_j)} = \|E_0\|_2^2\,\epsilon_j^2$. The second inequality is due to $\|E_0\|_2^2 \le m$.

Next, we separately consider the cases according to whether $A_{22}$ is definite or not. All bounds will be given in terms of $\tau_j$. Corresponding bounds in terms of $\epsilon_j$ can then be easily derived by using (3.8).

3.1. Positive (negative) definite $A_{22}$

Theorem 3.2. For a Hermitian matrix $\tilde A$ as in (1.1) with $\mu = 0$, suppose $\varepsilon < \sqrt{3/4}\,\eta$. If $A_{22}$ is positive (negative) definite, then $\tilde A$ has $m$ nonpositive (nonnegative) eigenvalues $\theta_1, \dots, \theta_m$ arranged in ascending order satisfying, for $1\le j\le m$,

$$0 \ge \theta_{m-j+1} \ge -\frac{2\tau_j}{\eta + \sqrt{\eta^2 + 4\tau_j}}, \quad\text{if } A_{22}\succ 0, \tag{3.9a}$$

$$0 \le \theta_j \le \frac{2\tau_j}{\eta + \sqrt{\eta^2 + 4\tau_j}}, \quad\text{if } A_{22}\prec 0. \tag{3.9b}$$

Proof. The case in which $A_{22}\prec 0$ can be turned into the case in which $A_{22}\succ 0$ by considering $-\tilde A$ instead. Suppose therefore that $A_{22}\succ 0$, i.e., $A_{22}$ is positive definite. Set
$$B(t) = -E^*(A_{22} - tI)^{-1}E \tag{3.10}$$

for $t\in\mathbb R$. By Theorem 3.1 and the assumption $\varepsilon < \sqrt{3/4}\,\eta$, we know $\tilde A$ has exactly $m$ eigenvalues in $(-\eta/2, \eta/2)$, and these $m$ eigenvalues are the zeros of $\det(B(t) - tI)$ in $(-\eta/2, \eta/2)$. Since for any $t\in(-\eta/2, \eta/2)$ we have $0 \prec A_{22} - tI$ and thus $B(t) \preceq 0$, it follows that $B(t) - tI \prec 0$ for $t\in(0, \eta/2)$. Therefore the $m$ eigenvalues of $\tilde A$ are in $(-\eta/2, 0]$. Denote them by

$$-\eta/2 < \theta_1 \le \theta_2 \le \cdots \le \theta_m \le 0.$$

Also denote by

$$\lambda_1(t) \le \lambda_2(t) \le \cdots \le \lambda_m(t) \le 0 \tag{3.11}$$

the $m$ eigenvalues of $B(t)$ for $t\in(-\eta/2, 0]$. They are continuous. The fixed points of the $\lambda_i(t)$ within $t\in(-\eta/2, 0]$ give all the $\theta_j$. In fact, we have $\lambda_j(\theta_j) = \theta_j$. This is because $\lambda_j(t)$ is a decreasing function for $t\in(-\eta/2, 0]$, and thus $\lambda_j(t) = t$ has a unique solution on $(-\eta/2, 0]$. Hence $\theta_j$ is the $j$th smallest eigenvalue of $B(\theta_j)$. This implies that $|\theta_j| = -\theta_j$ is the $j$th largest eigenvalue of $-B(\theta_j)$. Since

$$-B(\theta_j) = E^*(A_{22} - \theta_j I)^{-1}E \preceq \frac{E^*E}{\eta + |\theta_j|},$$

we have

$$|\theta_j| \le \frac{\tau_{m-j+1}}{\eta + |\theta_j|}, \quad\text{implying}\quad |\theta_j| \le \frac{2\tau_{m-j+1}}{\eta + \sqrt{\eta^2 + 4\tau_{m-j+1}}},$$

which gives (3.9a).

Remark 3.1. Since the right-hand sides in (3.9) increase as $\tau_j$ does, replacing $\tau_j$ by its upper bound in (3.8) yields bounds on $|\theta_j|$ in terms of $\epsilon_j$, the norms of $E$'s columns.

Example 3.1. Consider the 4-by-4 matrix $\tilde A$ given by

$$E = \begin{pmatrix} \dots \end{pmatrix}, \qquad A_{22} = \begin{pmatrix} 1 & 0 \\ 0 & \dots \end{pmatrix}, \qquad \tilde A = \begin{pmatrix} 0 & E^* \\ E & A_{22} \end{pmatrix}. \tag{3.12}$$

Fig. 1. The log-log scale plot of $\lambda_i(t)$ for $\tilde A$ in (3.12).
In this case $A_{11} = 0$, i.e., $\mu = 0$ in (1.1), and $\eta = 1$. The following table displays the eigenvalues $\theta_j$ of $\tilde A$ nearest to 0, the eigenvalues $\vartheta_j$ of $-E^*A_{22}^{-1}E$, and the upper bounds in (3.9) together with those obtained after $\tau_j$ is replaced by $m\epsilon_j^2$:

| $\theta_j$ | $\vartheta_j$ | $\dfrac{2\tau_j}{\eta + \sqrt{\eta^2 + 4\tau_j}}$ | $\dfrac{2m\epsilon_j^2}{\eta + \sqrt{\eta^2 + 4m\epsilon_j^2}}$ |
| $\dots$ | $\dots$ | $\dots$ | $\dots$ |

Thus our bounds are remarkably sharp. Let $\lambda_i(t)$ be as in the proof of Theorem 3.2 for this example. Fig. 1 plots $\lambda_1(t)$ and $\lambda_2(t)$ as functions of $t$. The intersections with the curve for $t$ are the eigenvalues $\theta_1$ and $\theta_2$. Note that in Fig. 1, $\lambda_1(t)$ and $\lambda_2(t)$ appear to be nearly constant. That is because they decrease very slowly, which is the typical behavior of $\lambda_i(t)$ when $\varepsilon \ll \eta/2$. In fact it can be shown that

$$-\frac{\varepsilon^2}{\eta^2} \le \frac{d\lambda_i(t)}{dt} \le 0 \quad\text{for } t\in(-\eta/2, 0] \text{ and } 1\le i\le m.$$

3.2. Indefinite $A_{22}$

Consider now that $A_{22} - \mu I$ is indefinite. We will use the following result, which is a direct consequence of the proof of [4, Theorem 1].

Lemma 3.2. Let $W$ be an $l$-by-$l$ Hermitian matrix, and let $D = \operatorname{diag}(\delta_1, \delta_2, \dots, \delta_l)$ with $\delta_1 \le \delta_2 \le \cdots \le \delta_l$. Denote the eigenvalues of $D^* W D$ by $\omega_1, \dots, \omega_l$, arranged such that $|\omega_1| \le |\omega_2| \le \cdots \le |\omega_l|$. Then for $1\le i\le l$

$$|\omega_i| \le \min_{1\le j\le l-i+1} \delta_{l-j+1}\,\delta_{i+j-1}\,\|W\|_2 \le \delta_l\,\delta_i\,\|W\|_2.$$

Two types of bounds will be proven for the eigenvalues $\theta_j$ of interest of $\tilde A$: asymptotic bounds correct up to $O(\varepsilon^4)$, and strict bounds, at the tradeoff of being slightly larger than the asymptotic bounds if the higher-order terms $O(\varepsilon^4)$ are ignored.

Lemma 3.3. Let $\vartheta_j$ for $1\le j\le m$ be the eigenvalues of $-E^*A_{22}^{-1}E$, arranged such that $|\vartheta_1| \le |\vartheta_2| \le \cdots \le |\vartheta_m|$. Then

$$|\vartheta_j| \le \zeta_j \overset{\text{def}}{=} \frac1\eta \min_{1\le k\le m-j+1} \sqrt{\tau_{m+1-k}\,\tau_{j+k-1}} \tag{3.13a}$$

$$\le \frac1\eta \sqrt{\tau_m\,\tau_j}, \tag{3.13b}$$

where the $\tau_i$ ($1\le i\le m$) are the eigenvalues of $E^*E$ as in Lemma 3.1.

Proof. Inequality (3.13b) follows from (3.13a) by simply picking $k = 1$ without the minimization. We now prove (3.13a). Let $E = U\Sigma V^*$ be the SVD of $E$, where $U$ and $V$ are unitary and

$$\Sigma = \begin{cases} \begin{pmatrix} \operatorname{diag}(\sqrt{\tau_1}, \sqrt{\tau_2}, \dots, \sqrt{\tau_m}) \\ 0_{(n-m)\times m} \end{pmatrix}, & \text{if } n \ge m, \\[2mm] \begin{pmatrix} 0_{n\times(m-n)} & \operatorname{diag}(\sqrt{\tau_{m-n+1}}, \sqrt{\tau_{m-n+2}}, \dots, \sqrt{\tau_m}) \end{pmatrix}, & \text{if } n < m. \end{cases}$$
Note that in the case when $n < m$, $\tau_1 = \cdots = \tau_{m-n} = 0$. We have $-E^*A_{22}^{-1}E = -V\Sigma^* U^* A_{22}^{-1} U \Sigma V^*$, which has the same eigenvalues as $-\Sigma^* U^* A_{22}^{-1} U \Sigma$. It can be proven that for either $n\ge m$ or $n < m$,

$$-\Sigma^* U^* A_{22}^{-1} U \Sigma = D^* W D$$

for some matrix $W$ satisfying $\|W\|_2 \le 1/\eta$ and $D = \operatorname{diag}(\sqrt{\tau_1}, \dots, \sqrt{\tau_m})$. Now apply Lemma 3.2 to complete the proof.

Remark 3.2. When $A_{22}$ is definite, we can get $|\vartheta_j| \le \tau_j/\eta$, which is stronger than (3.13a) and thus (3.13b).

Theorem 3.3. For a Hermitian matrix $\tilde A$ as in (1.1) with $\mu = 0$, suppose $\varepsilon < \sqrt{3/4}\,\eta$. Then $\tilde A$ has $m$ eigenvalues $\theta_1, \dots, \theta_m$ arranged such that

$$|\theta_1| \le |\theta_2| \le \cdots \le |\theta_m| \tag{3.14}$$

satisfying

$$|\theta_j| \le \zeta_j + O(\varepsilon^4), \tag{3.15}$$

where $\zeta_j$ is defined by (3.13a).

Proof. It is a consequence of Theorem 3.1 and Lemma 3.3.

Next we derive strict bounds, i.e., without the term $O(\varepsilon^4)$ in (3.15). One difficulty here is that $\lambda_i(t)$ is no longer monotonic. However, it remains true that if $\theta_i \in (-\eta/2, \eta/2)$ is an eigenvalue of $B(\theta_i) = -E^*(A_{22} - \theta_i I)^{-1}E$, then $\theta_i$ is also an eigenvalue of $\tilde A$.

Lemma 3.4. Let $B(t)$ be defined as in (3.10) with eigenvalues

$$\lambda_1(t) \le \lambda_2(t) \le \cdots \le \lambda_m(t), \tag{3.16}$$

each of which is piecewise differentiable.5 If $\varepsilon < \eta/2$, then

$$\left\| \frac{dB(t)}{dt} \right\|_2 \le \frac{4\varepsilon^2}{\eta^2} < 1 \quad\text{and}\quad \left| \frac{d\lambda_j(t)}{dt} \right| \le \frac{4\varepsilon^2}{\eta^2} < 1 \quad\text{for } t\in(-\eta/2, \eta/2). \tag{3.17}$$

Proof. We have

$$B(t) - B(t + \Delta t) = -E^*\big\{(A_{22} - tI)^{-1} - [A_{22} - (t+\Delta t)I]^{-1}\big\}E = -E^*(A_{22} - tI)^{-1}\big\{I - [I - \Delta t (A_{22} - tI)^{-1}]^{-1}\big\}E.$$

Therefore

$$\|B(t) - B(t + \Delta t)\|_2 \le \varepsilon^2\, \|(A_{22} - tI)^{-1}\|_2\, \big\|I - [I - \Delta t (A_{22} - tI)^{-1}]^{-1}\big\|_2.$$

5 By [5, Theorem 4.8], there are countably many points in $(-\eta/2, \eta/2)$ such that between any two nearby points each $\lambda_i(t)$ is differentiable.
Noting that for $t\in(-\eta/2, \eta/2)$ we have

$$\|(A_{22} - tI)^{-1}\|_2 < \frac2\eta, \qquad \big\|I - [I - \Delta t (A_{22} - tI)^{-1}]^{-1}\big\|_2 \le \frac{|\Delta t|\,2/\eta}{1 - |\Delta t|\,2/\eta},$$

we obtain

$$\|B(t) - B(t + \Delta t)\|_2 \le \varepsilon^2\,\frac2\eta \cdot \frac{|\Delta t|\,2/\eta}{1 - |\Delta t|\,2/\eta} = \frac{4\varepsilon^2\,|\Delta t|}{\eta^2\,(1 - |\Delta t|\,2/\eta)}.$$

Let $\Delta t \to 0$ to get

$$\left\| \frac{dB(t)}{dt} \right\|_2 \le \frac{4\varepsilon^2}{\eta^2} < 1,$$

since $\varepsilon < \eta/2$. Finally, we use the well-known perturbation theorem (attributed to Lidskii, Weyl, Wielandt and Mirsky in various forms [2, pp. ]) to conclude that

$$\left| \frac{d\lambda_j(t)}{dt} \right| \le \left\| \frac{dB(t)}{dt} \right\|_2 \le \frac{4\varepsilon^2}{\eta^2} < 1,$$

as expected.

Theorem 3.4. For a Hermitian $\tilde A$ as in (1.1), if $\varepsilon < \eta/2$, then $\tilde A$ has $m$ eigenvalues $\theta_1, \dots, \theta_m$ (arranged as in (3.14)) satisfying

$$|\theta_j| \le \frac{\zeta_j}{1 - 4\rho^2} \quad\text{for } 1\le j\le m, \tag{3.18}$$

where $\rho = \varepsilon/\eta < 1/2$ and $\zeta_j$ is defined by (3.13a).

Proof. Instead of proving (3.18) directly, we shall prove that for any given $j\in\{1,\dots,m\}$ there are $j$ of the $\theta_i$ satisfying $|\theta_i| \le \zeta_j/(1 - 4\rho^2)$. Thus (3.18) must hold.

Adopt the notation of Lemmas 3.3 and 3.4. By (3.17), for any $t\in(-\eta/2, \eta/2)$ we have

$$|\lambda_i(t) - \lambda_i(0)| = \left| \int_0^t \frac{d\lambda_i(\tau)}{d\tau}\, d\tau \right| \le \frac{4\varepsilon^2}{\eta^2}\,|t| = 4\rho^2\,|t| \quad\text{for } 1\le i\le m. \tag{3.19}$$

Let $\delta_j = \dfrac{\zeta_j}{1 - 4\rho^2}$. We claim that there are at least $j$ of the $\lambda_i(t)$ such that

$$\lambda_i(t) \in [-\delta_j, \delta_j] \quad\text{for all } t\in[-\delta_j, \delta_j]. \tag{3.20}$$

This means that each such function $\lambda_i(t)$ maps the interval $[-\delta_j, \delta_j]$ into itself. By Brouwer's fixed point theorem, each such $\lambda_i(t)$ has a fixed point $t_i\in[-\delta_j, \delta_j]$ with $\lambda_i(t_i) = t_i$. Hence, recalling (3.2), we see that $t_i$ is an eigenvalue of $\tilde A$. Note that the second inequality in (3.17) implies that $t_i$ is the unique fixed point of $\lambda_i(t)$ in $(-\eta/2, \eta/2)$. Therefore, all counted, $\tilde A$ has at least $j$ eigenvalues in $[-\delta_j, \delta_j]$.

It remains to show that there are at least $j$ of the $\lambda_i(t)$ satisfying (3.20). To see this, we notice that $\vartheta_k \in [-\zeta_k, \zeta_k] \subseteq [-\zeta_j, \zeta_j] \subseteq [-\delta_j, \delta_j]$ for $1\le k\le j$. These $\vartheta_k$ for $1\le k\le j$ are taken by $j$ different $\lambda_i(t)$ at $t = 0$, i.e., $\vartheta_k = \lambda_{l_k}(0)$, where the $l_k\in\{1,\dots,m\}$ are distinct for $k\in\{1,\dots,j\}$. We now prove that the $\lambda_{l_k}(t)$ for $k\in\{1,\dots,j\}$ are $j$ of the $\lambda_i(t)$ satisfying (3.20). In fact, for $t\in[-\delta_j, \delta_j]$ and $k\in\{1,\dots,j\}$,
$$|\lambda_{l_k}(t)| \le |\lambda_{l_k}(0)| + |\lambda_{l_k}(t) - \lambda_{l_k}(0)| = |\vartheta_k| + |\lambda_{l_k}(t) - \lambda_{l_k}(0)| \le \zeta_j + 4\rho^2\,\delta_j = \delta_j,$$

as expected.

Remark 3.3. Compared with (3.15) in Theorem 3.3, the bound in (3.18) removes the term $O(\varepsilon^4)$ at the expense of the factor $(1 - 4\rho^2)^{-1}$.

Example 2.1 (revisited). The following table displays the eigenvalues $\theta_j$ of $\tilde A$ nearest to 0, the eigenvalues $\vartheta_j$ of $-E^*A_{22}^{-1}E$, and the upper bounds in (3.18) together with those obtained after $\tau_j$ is replaced by $m\epsilon_j^2$:

| $\theta_j$ | $\vartheta_j$ | $\dfrac{\zeta_j}{1 - 4\rho^2}$ | $\dfrac{m\,\epsilon_j\,\epsilon_m}{1 - 4\rho^2}$ |
| $\dots$ | $\dots$ | $\dots$ | $\dots$ |

The bounds are rather sharp.

4. Possible extensions to the generalized eigenvalue problem

So far we have focused on the Hermitian eigenvalue problem (1.1). We now consider the following Hermitian definite generalized eigenvalue problem:

$$\tilde A = \begin{pmatrix} \mu B_{11} & E^* \\ E & A_{22} \end{pmatrix}, \qquad \tilde B = \begin{pmatrix} B_{11} & F^* \\ F & B_{22} \end{pmatrix}, \tag{4.1}$$

where $B_{ii} \succ 0$ and $\|F\|_2$ is sufficiently small6 so that $\tilde B \succ 0$ also. If $E = F = 0$, then $\mu$ is an eigenvalue of the pencil $\tilde A - \lambda \tilde B$ of multiplicity $m$. In this section we outline how to develop perturbation bounds using what we have obtained in Section 3.

4.1. Special case: $B_{ii} = I$ and $\mu = 0$

In this case,

$$\tilde A = \begin{pmatrix} 0 & E^* \\ E & A_{22} \end{pmatrix}, \qquad \tilde B = \begin{pmatrix} I_m & F^* \\ F & I_n \end{pmatrix}. \tag{4.2}$$

Assume that $\|F\|_2 < 1$. A similar approach to the one in [6, Section 2.1] can be applied as follows. We first let

$$X = \begin{pmatrix} I_m & -F^* \\ 0 & I_n \end{pmatrix}, \qquad W = \begin{pmatrix} I_m & 0 \\ 0 & [I - FF^*]^{1/2} \end{pmatrix}, \tag{4.3}$$

and then let

$$\hat B \overset{\text{def}}{=} X^* \tilde B X = \begin{pmatrix} I_m & 0 \\ 0 & I - FF^* \end{pmatrix} = W^2, \tag{4.4a}$$

6 For example, $\|F\|_2 < \min_i \{\sigma_{\min}(B_{ii})\}$ guarantees $\tilde B \succ 0$, where $\sigma_{\min}(B_{ii})$ is the smallest singular value of $B_{ii}$.
$$\hat A \overset{\text{def}}{=} X^* \tilde A X = \begin{pmatrix} 0 & E^* \\ E & \hat A_{22} \end{pmatrix}, \tag{4.4b}$$

where $W$ is the unique Hermitian positive definite square root [7, Chapter 6] of $\hat B$, and $\hat A_{22} = A_{22} - EF^* - FE^*$. The pencil $\tilde A - \lambda \tilde B$ has the same eigenvalues as $W^{-1}\hat A W^{-1} - \lambda I_N$. Since $W^{-1}\hat A W^{-1}$ takes the form of (1.1), our theory in Section 3 applies to $W^{-1}\hat A W^{-1}$, leading to various bounds.

4.2. General case

Now we consider the general case (4.1). Assume $\mu = 0$; otherwise we shall consider $(\tilde A - \mu\tilde B) - \lambda\tilde B$ instead. Suppose

$$\tilde A = \begin{pmatrix} 0 & E^* \\ E & A_{22} \end{pmatrix}, \qquad \tilde B = \begin{pmatrix} B_{11} & F^* \\ F & B_{22} \end{pmatrix}. \tag{4.5}$$

Set $Y = \operatorname{diag}\big(B_{11}^{-1/2}, B_{22}^{-1/2}\big)$ to get

$$Y^* \tilde A Y = \begin{pmatrix} 0 & \hat E^* \\ \hat E & \hat A_{22} \end{pmatrix}, \qquad Y^* \tilde B Y = \begin{pmatrix} I_m & \hat F^* \\ \hat F & I_n \end{pmatrix}, \tag{4.6}$$

which reduces to the case in Section 4.1, where

$$\hat A_{22} = B_{22}^{-1/2} A_{22} B_{22}^{-1/2}, \qquad \hat F = B_{22}^{-1/2} F B_{11}^{-1/2}, \qquad \hat E = B_{22}^{-1/2} E B_{11}^{-1/2}. \tag{4.7}$$

5. Conclusion

We established perturbation bounds for the multiple eigenvalue $\mu$ of a Hermitian matrix $A$ under a perturbation in the off-diagonal block:

$$A = \begin{pmatrix} \mu I_m & 0 \\ 0 & A_{22} \end{pmatrix} \quad\text{perturbed to}\quad \tilde A = \begin{pmatrix} \mu I_m & E^* \\ E & A_{22} \end{pmatrix},$$

with an emphasis on the case where the magnitudes of the columns of $E$ vary widely. We showed that whether $A_{22} - \mu I_m$ is definite or not plays a major role: if it is (positive or negative) definite, then $\tilde A$ has $m$ eigenvalues $\theta_i$ ($1\le i\le m$) such that the $i$th difference $|\theta_i - \mu|$ is bounded by a quantity proportional to the square of the norm of $E$'s $i$th column, but when $A_{22} - \mu I_m$ is indefinite the quantity is proportional to the product of the $i$th column norm and the norm of $E$. We also outlined a possible extension to the Hermitian definite generalized eigenvalue problem.

Acknowledgment

We would like to thank the anonymous reviewers for their valuable comments and suggestions, as these have improved the paper.

References

[1] C.-K. Li, R.-C. Li, A note on eigenvalues of perturbed Hermitian matrices, Linear Algebra Appl. 395 (2005).
[2] G.W. Stewart, J.-G. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.
[3] B.N. Parlett, The Symmetric Eigenvalue Problem, SIAM, Philadelphia.
[4] Y. Nakatsukasa, On the condition numbers of a multiple eigenvalue of a generalized eigenvalue problem, Numer. Math., in press. arXiv:
[5] K.R. Parthasarathy, Eigenvalues of matrix-valued analytic maps, J. Austral. Math. Soc. Ser. A 26 (1978).
[6] R.-C. Li, Y. Nakatsukasa, N. Truhar, S.-F. Xu, Perturbation of partitioned Hermitian generalized eigenvalue problem, SIAM J. Matrix Anal. Appl. 32 (2) (2011).
[7] N.J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, PA, USA, 2008.
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationSome bounds for the spectral radius of the Hadamard product of matrices
Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard
More informationBasic Elements of Linear Algebra
A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More informationJournal of Mathematical Analysis and Applications
J. Math. Anal. Appl. 381 (2011) 506 512 Contents lists available at ScienceDirect Journal of Mathematical Analysis and Applications www.elsevier.com/locate/jmaa Inverse indefinite Sturm Liouville problems
More informationEquivalence constants for certain matrix norms II
Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,
More informationOn the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination
On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices
More informationLaplacian spectral radius of trees with given maximum degree
Available online at www.sciencedirect.com Linear Algebra and its Applications 429 (2008) 1962 1969 www.elsevier.com/locate/laa Laplacian spectral radius of trees with given maximum degree Aimei Yu a,,1,
More informationInequalities for the spectra of symmetric doubly stochastic matrices
Linear Algebra and its Applications 49 (2006) 643 647 wwwelseviercom/locate/laa Inequalities for the spectra of symmetric doubly stochastic matrices Rajesh Pereira a,, Mohammad Ali Vali b a Department
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 36 (01 1960 1968 Contents lists available at SciVerse ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa The sum of orthogonal
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 437 (2012) 2719 2726 Contents lists available at SciVerse ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Lie derivations
More informationforms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms
Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.
More informationMinimizing the Laplacian eigenvalues for trees with given domination number
Linear Algebra and its Applications 419 2006) 648 655 www.elsevier.com/locate/laa Minimizing the Laplacian eigenvalues for trees with given domination number Lihua Feng a,b,, Guihai Yu a, Qiao Li b a School
More informationSpectral inequalities and equalities involving products of matrices
Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 436 (2012) 3954 3973 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Hermitian matrix polynomials
More informationThe University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.
The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational
More informationAn Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R. M.THAMBAN NAIR (I.I.T. Madras)
An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R M.THAMBAN NAIR (I.I.T. Madras) Abstract Let X 1 and X 2 be complex Banach spaces, and let A 1 BL(X 1 ), A 2 BL(X 2 ), A
More informationImproved Newton s method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan
More informationAn Even Order Symmetric B Tensor is Positive Definite
An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationSubset selection for matrices
Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationMath 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.
Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses
More informationMultiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group
More informationNOTES ON LINEAR ODES
NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary
More informationLinear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices
Linear maps preserving rank of tensor products of matrices Journal: Manuscript ID: GLMA-0-0 Manuscript Type: Original Article Date Submitted by the Author: -Aug-0 Complete List of Authors: Zheng, Baodong;
More informationOn the Hermitian solutions of the
Journal of Applied Mathematics & Bioinformatics vol.1 no.2 2011 109-129 ISSN: 1792-7625 (print) 1792-8850 (online) International Scientific Press 2011 On the Hermitian solutions of the matrix equation
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationOn the second Laplacian eigenvalues of trees of odd order
Linear Algebra and its Applications 419 2006) 475 485 www.elsevier.com/locate/laa On the second Laplacian eigenvalues of trees of odd order Jia-yu Shao, Li Zhang, Xi-ying Yuan Department of Applied Mathematics,
More informationBlock Lanczos Tridiagonalization of Complex Symmetric Matrices
Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic
More informationEigenvectors and Reconstruction
Eigenvectors and Reconstruction Hongyu He Department of Mathematics Louisiana State University, Baton Rouge, USA hongyu@mathlsuedu Submitted: Jul 6, 2006; Accepted: Jun 14, 2007; Published: Jul 5, 2007
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationLecture 8: Linear Algebra Background
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationA property of orthogonal projectors
Linear Algebra and its Applications 354 (2002) 35 39 www.elsevier.com/locate/laa A property of orthogonal projectors Jerzy K. Baksalary a,, Oskar Maria Baksalary b,tomaszszulc c a Department of Mathematics,
More informationNotes on Linear Algebra
1 Notes on Linear Algebra Jean Walrand August 2005 I INTRODUCTION Linear Algebra is the theory of linear transformations Applications abound in estimation control and Markov chains You should be familiar
More informationPOSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS
POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS RICHARD J. CARON, HUIMING SONG, AND TIM TRAYNOR Abstract. Let A and E be real symmetric matrices. In this paper we are concerned with the determination
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationPositive entries of stable matrices
Positive entries of stable matrices Shmuel Friedland Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago Chicago, Illinois 60607-7045, USA Daniel Hershkowitz,
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of
More informationInequalities for the numerical radius, the norm and the maximum of the real part of bounded linear operators in Hilbert spaces
Available online at www.sciencedirect.com Linear Algebra its Applications 48 008 980 994 www.elsevier.com/locate/laa Inequalities for the numerical radius, the norm the maximum of the real part of bounded
More informationc 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March
SIAM REVIEW. c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp. 93 97, March 1995 008 A UNIFIED PROOF FOR THE CONVERGENCE OF JACOBI AND GAUSS-SEIDEL METHODS * ROBERTO BAGNARA Abstract.
More informationA New Shuffle Convolution for Multiple Zeta Values
January 19, 2004 A New Shuffle Convolution for Multiple Zeta Values Ae Ja Yee 1 yee@math.psu.edu The Pennsylvania State University, Department of Mathematics, University Park, PA 16802 1 Introduction As
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 430 (2009) 579 586 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Low rank perturbation
More informationAbstract. 1. Introduction
Journal of Computational Mathematics Vol.28, No.2, 2010, 273 288. http://www.global-sci.org/jcm doi:10.4208/jcm.2009.10-m2870 UNIFORM SUPERCONVERGENCE OF GALERKIN METHODS FOR SINGULARLY PERTURBED PROBLEMS
More informationSingular Value Inequalities for Real and Imaginary Parts of Matrices
Filomat 3:1 16, 63 69 DOI 1.98/FIL16163C Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Singular Value Inequalities for Real Imaginary
More informationMassachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra
Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used
More informationThe maximal determinant and subdeterminants of ±1 matrices
Linear Algebra and its Applications 373 (2003) 297 310 www.elsevier.com/locate/laa The maximal determinant and subdeterminants of ±1 matrices Jennifer Seberry a,, Tianbing Xia a, Christos Koukouvinos b,
More informationSensitivity analysis for the multivariate eigenvalue problem
Electronic Journal of Linear Algebra Volume 6 Volume 6 (03) Article 43 03 Sensitivity analysis for the multivariate eigenvalue problem Xin-guo Liu wgwang@ouc.edu.cn Wei-guo Wang Lian-hua Xu Follow this
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:
More informationHigh Dimensional Covariance and Precision Matrix Estimation
High Dimensional Covariance and Precision Matrix Estimation Wei Wang Washington University in St. Louis Thursday 23 rd February, 2017 Wei Wang (Washington University in St. Louis) High Dimensional Covariance
More informationMaximizing the numerical radii of matrices by permuting their entries
Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture
More informationMath 408 Advanced Linear Algebra
Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x
More informationThe Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)
Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationThe effect on the algebraic connectivity of a tree by grafting or collapsing of edges
Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 855 864 www.elsevier.com/locate/laa The effect on the algebraic connectivity of a tree by grafting or collapsing
More informationOn the Schur Complement of Diagonally Dominant Matrices
On the Schur Complement of Diagonally Dominant Matrices T.-G. Lei, C.-W. Woo,J.-Z.Liu, and F. Zhang 1 Introduction In 1979, Carlson and Markham proved that the Schur complements of strictly diagonally
More information