Matrix Perturbation Theory


Ren-Cang Li, University of Texas at Arlington

15.1 Eigenvalue Problems
15.2 Singular Value Problems
15.3 Polar Decomposition
15.4 Generalized Eigenvalue Problems
15.5 Generalized Singular Value Problems
15.6 Relative Perturbation Theory for Eigenvalue Problems
15.7 Relative Perturbation Theory for Singular Value Problems
References

There is a vast amount of material in matrix (operator) perturbation theory. Related books that are worth mentioning are [SS90], [Par98], [Bha96], [Bau85], and [Kat70]. In this chapter we attempt to include the most fundamental results to date, except those for linear systems and least squares problems, for which the reader is referred to the chapters of this handbook on those topics. Throughout this chapter, ‖·‖_UI denotes a general unitarily invariant norm. Two commonly used ones are the spectral norm ‖·‖_2 and the Frobenius norm ‖·‖_F.

15.1 Eigenvalue Problems

The reader is referred to the chapters of this handbook on eigenvalues and their locations for more background.

Definitions:

Let A ∈ C^{n×n}.

A scalar–vector pair (λ, x) ∈ C × C^n is an eigenpair of A if x ≠ 0 and Ax = λx.

A vector–scalar–vector triplet (y, λ, x) ∈ C^n × C × C^n is an eigentriplet if x ≠ 0, y ≠ 0, and Ax = λx, y*A = λy*.

The quantity cond(λ) = ‖x‖_2 ‖y‖_2 / |y*x| is the individual condition number for λ, where (y, λ, x) ∈ C^n × C × C^n is an eigentriplet.

Let σ(A) = {λ_1, λ_2, ..., λ_n}, the multiset of A's eigenvalues, and set

    Λ = diag(λ_1, λ_2, ..., λ_n),    Λ_τ = diag(λ_{τ(1)}, λ_{τ(2)}, ..., λ_{τ(n)}),

where τ is a permutation of {1, 2, ..., n}. For real Λ, i.e., all λ_j's real, Λ↑ = diag(λ↑_1, λ↑_2, ..., λ↑_n) denotes the diagonal of the eigenvalues arranged in ascending order; Λ↑ is in fact a Λ_τ for the permutation τ that makes λ_{τ(j)} = λ↑_j for all j.

Given two square matrices A_1 and A_2, the separation sep(A_1, A_2) between A_1 and A_2 is defined as [SS90]

    sep(A_1, A_2) = inf_{‖X‖_F = 1} ‖X A_1 − A_2 X‖_F.

A is perturbed to à = A + ΔA. The same notation is adopted for Ã, except all symbols with tildes.

Let X, Y ∈ C^{n×k} with rank(X) = rank(Y) = k. The canonical angles between their column spaces are θ_i = arccos σ_i, where {σ_i}_{i=1}^k are the singular values of (Y*Y)^{−1/2} Y*X (X*X)^{−1/2}. Define the canonical angle matrix between X and Y as

    Θ(X, Y) = diag(θ_1, θ_2, ..., θ_k).

For k = 1, i.e., x, y ∈ C^n (both nonzero), we use θ(x, y) instead to denote the canonical angle between the two vectors.

Facts:

1. [SS90] (Elsner) max_i min_j |λ_i − λ̃_j| ≤ (‖A‖_2 + ‖Ã‖_2)^{1−1/n} ‖ΔA‖_2^{1/n}.

2. [SS90] (Elsner) There exists a permutation τ of {1, 2, ..., n} such that

    ‖Λ − Λ̃_τ‖_2 ≤ 2n (‖A‖_2 + ‖Ã‖_2)^{1−1/n} ‖ΔA‖_2^{1/n}.

3. [SS90] Let (y, µ, x) be an eigentriplet of A. The perturbation ΔA changes µ to µ̃ = µ + Δµ with

    Δµ = (y*(ΔA)x)/(y*x) + O(‖ΔA‖_2²),    and so    |Δµ| ≤ cond(µ) ‖ΔA‖_2 + O(‖ΔA‖_2²).

4. [SS90] If A and A + ΔA are Hermitian, then ‖Λ↑ − Λ̃↑‖_UI ≤ ‖ΔA‖_UI.

5. [Bha96] (Hoffman–Wielandt) If A and A + ΔA are normal, then there exists a permutation τ of {1, 2, ..., n} such that ‖Λ − Λ̃_τ‖_F ≤ ‖ΔA‖_F.

6. [Sun96] If A is normal, then there exists a permutation τ of {1, 2, ..., n} such that ‖Λ − Λ̃_τ‖_F ≤ √n ‖ΔA‖_F.

7. [SS90] (Bauer–Fike) If A is diagonalizable and A = XΛX^{−1} is its eigendecomposition, then

    max_i min_j |λ̃_i − λ_j| ≤ ‖X^{−1}(ΔA)X‖_p ≤ κ_p(X) ‖ΔA‖_p.

8. [BKL97] Suppose both A and à are diagonalizable, with eigendecompositions A = XΛX^{−1} and à = X̃Λ̃X̃^{−1}.
(a) There exists a permutation τ of {1, 2, ..., n} such that ‖Λ − Λ̃_τ‖_F ≤ κ_2(X) κ_2(X̃) ‖ΔA‖_F.
(b) ‖Λ↑ − Λ̃↑‖_UI ≤ κ_2(X) κ_2(X̃) ‖ΔA‖_UI for real Λ and Λ̃.
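Facts 4 and 7 lend themselves to a quick numerical check. The sketch below is an illustration only (it is not part of the original chapter, and it uses NumPy rather than the MATLAB mentioned later in the examples); it verifies the Hermitian bound of Fact 4 and the Bauer–Fike bound of Fact 7 on small random matrices.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6

    # Fact 4 (Hermitian case): ascendingly ordered eigenvalues move by at most ||dA||_2.
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    dA = 1e-3 * rng.standard_normal((n, n)); dA = (dA + dA.T) / 2
    lam = np.linalg.eigvalsh(A)             # ascending order
    lamt = np.linalg.eigvalsh(A + dA)
    assert np.max(np.abs(lam - lamt)) <= np.linalg.norm(dA, 2) + 1e-12

    # Fact 7 (Bauer-Fike): for diagonalizable A = X Lambda X^{-1}, every eigenvalue of
    # A + dA lies within kappa_2(X) * ||dA||_2 of some eigenvalue of A.
    A = rng.standard_normal((n, n))
    w, X = np.linalg.eig(A)
    dA = 1e-3 * rng.standard_normal((n, n))
    wt = np.linalg.eigvals(A + dA)
    dist = max(np.min(np.abs(w - mu)) for mu in wt)
    assert dist <= np.linalg.cond(X, 2) * np.linalg.norm(dA, 2) + 1e-12
    print("Fact 4 and Fact 7 bounds hold on this sample")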

9. [KPJ82] Let residuals r = Ax̃ − µ̃x̃ and s* = ỹ*A − µ̃ỹ*, where ‖x̃‖_2 = ‖ỹ‖_2 = 1, and let ε = max{‖r‖_2, ‖s‖_2}. The smallest error matrix ΔA in the 2-norm for which (ỹ, µ̃, x̃) is an exact eigentriplet of à = A + ΔA satisfies ‖ΔA‖_2 = ε, and

    |µ̃ − µ| ≤ cond(µ̃) ε + O(ε²)    for some µ ∈ σ(A).

10. [KPJ82], [DK70], [Par98] Suppose A is Hermitian, and let residual r = Ax̃ − µ̃x̃ with ‖x̃‖_2 = 1.
(a) The smallest Hermitian error matrix ΔA (in the 2-norm) for which (µ̃, x̃) is an exact eigenpair of à = A + ΔA satisfies ‖ΔA‖_2 = ‖r‖_2.
(b) |µ − µ̃| ≤ ‖r‖_2 for some eigenvalue µ of A.
(c) Let µ be the closest eigenvalue in σ(A) to µ̃ and x its associated eigenvector with ‖x‖_2 = 1, and let η = min |µ̃ − λ| over all λ ∈ σ(A) with λ ≠ µ. If η > 0, then

    |µ − µ̃| ≤ ‖r‖_2²/η,    sin θ(x, x̃) ≤ ‖r‖_2/η.

11. Let A be Hermitian, let X ∈ C^{n×k} have full column rank, and let M ∈ C^{k×k} be Hermitian with eigenvalues µ_1 ≤ µ_2 ≤ ··· ≤ µ_k. Set R = AX − XM. There exist k eigenvalues λ_{i_1}, λ_{i_2}, ..., λ_{i_k} of A such that the following inequalities hold. Note that the subset {λ_{i_j}}_{j=1}^k may be different at different occurrences.
(a) [Par98], [SS90] (Kahan; Cao–Xie–Li)

    max_{1≤j≤k} |µ_j − λ_{i_j}| ≤ ‖R‖_2/σ_min(X),    [∑_{j=1}^k (µ_j − λ_{i_j})²]^{1/2} ≤ ‖R‖_F/σ_min(X).

(b) [SS90], [Sun91] If X*X = I_k and M = X*AX, and if all but k of A's eigenvalues differ from every one of M's by at least η > 0 and ε_F = ‖R‖_F/η < 1, then

    [∑_{j=1}^k (µ_j − λ_{i_j})²]^{1/2} ≤ ‖R‖_F²/(η √(1 − ε_F²)).

(c) [SS90], [Sun91] If X*X = I_k and M = X*AX, and if there is a number η > 0 such that either all but k of A's eigenvalues lie outside the open interval (µ_1 − η, µ_k + η), or all but k of A's eigenvalues lie inside the closed interval [µ_l + η, µ_{l+1} − η] for some 1 ≤ l < k, and ε = ‖R‖_2/η < 1, then

    max_{1≤j≤k} |µ_j − λ_{i_j}| ≤ ‖R‖_2²/(η √(1 − ε²)).

12. [DK70] Let A be Hermitian and have the decomposition

    [X_1 X_2]* A [X_1 X_2] = [A_1 0; 0 A_2],

where [X_1, X_2] is unitary and X_1 ∈ C^{n×k}. Let Q ∈ C^{n×k} have orthonormal columns, and for a k×k Hermitian matrix M set R = AQ − QM. Let η = min |µ − ν| over all µ ∈ σ(M) and ν ∈ σ(A_2). If η > 0, then

    ‖sin Θ(X_1, Q)‖_F ≤ ‖R‖_F/η.

13. [LL05] Let

    A = [M E*; E H],    Ã = [M 0; 0 H]

be Hermitian, and set η = min |µ − ν| over all µ ∈ σ(M) and ν ∈ σ(H). Then

    max_{1≤j≤n} |λ_j − λ̃_j| ≤ 2‖E‖_2²/(η + √(η² + 4‖E‖_2²)).

14. [SS90] Let [X, Y] be unitary with X ∈ C^{n×k}, and let

    [X Y]* A [X Y] = [A_1 G; E A_2].

Assume σ(A_1) ∩ σ(A_2) = ∅ and set η = sep(A_1, A_2). If ‖G‖ ‖E‖ < η²/4, then there is a unique W ∈ C^{(n−k)×k} satisfying ‖W‖ ≤ 2‖E‖/η such that [X̃, Ỹ] is unitary and

    [X̃ Ỹ]* A [X̃ Ỹ] = [Ã_1 G̃; 0 Ã_2],

where

    X̃ = (X + YW)(I + W*W)^{−1/2},    Ỹ = (Y − XW*)(I + WW*)^{−1/2},
    Ã_1 = (I + W*W)^{1/2}(A_1 + GW)(I + W*W)^{−1/2},    Ã_2 = (I + WW*)^{−1/2}(A_2 − WG)(I + WW*)^{1/2}.

Thus

    ‖tan Θ(X, X̃)‖ ≤ 2‖E‖/η.

Examples:

1. Bounds on ‖Λ − Λ̃_τ‖_UI are in fact bounds on the individual differences |λ_j − λ̃_{τ(j)}| in disguise, only more convenient and concise. For example, for ‖·‖_UI = ‖·‖_2 (spectral norm),

    ‖Λ − Λ̃_τ‖_2 = max_j |λ_j − λ̃_{τ(j)}|,

and for ‖·‖_UI = ‖·‖_F (Frobenius norm),

    ‖Λ − Λ̃_τ‖_F = [∑_{j=1}^n |λ_j − λ̃_{τ(j)}|²]^{1/2}.

2. Let A, Ã ∈ C^{n×n} be as follows, where ε > 0: A is the n×n Jordan block with eigenvalue µ (µ on the diagonal, 1 on the superdiagonal, 0 elsewhere), and à is identical to A except that its (n, 1) entry is ε instead of 0. It can be seen that σ(A) = {µ, ..., µ} (µ repeated n times) and that the characteristic polynomial is det(tI − Ã) = (t − µ)^n − ε, which gives

    σ(Ã) = {µ + ε^{1/n} e^{2jπi/n} : 0 ≤ j ≤ n − 1}.

Thus

every λ̃ ∈ σ(Ã) satisfies |λ̃ − µ| = ε^{1/n} = ‖ΔA‖_2^{1/n}. This shows that the fractional power ‖ΔA‖_2^{1/n} in Facts 1 and 2 cannot be removed in general.

3. Consider a 3×3 upper triangular matrix A whose eigenvalues λ_1, λ_2, λ_3 are read off from its diagonal, with λ_2 and λ_3 relatively close, and whose eigentriplets (y_j, λ_j, x_j) are known; perturb it by a ΔA whose norm is comparable to the gap |λ_3 − λ_2|. Comparing the eigenvalues λ̃_j of à = A + ΔA (computed, e.g., by MATLAB's eig) with λ_j and with the first-order bounds of Fact 3 shows that cond(λ_j)‖ΔA‖_2 gives a fairly good error bound for j = 1, 2 but a dramatically worse one for j = 3. There are two reasons for this: one lies in the particular choice of ΔA, and the other is that ‖ΔA‖_2 is too large for the first-order bound cond(λ_j)‖ΔA‖_2 to be effective for j = 3, since ‖ΔA‖_2 has the same order of magnitude as the difference between λ_2 and λ_3, and that is usually too big. For a better understanding of this first-order error bound, the reader may experiment with perturbations of the form ΔA = ε_j y_j x_j* for various tiny parameters ε_j.

4. Let C = diag(c_1, c_2, ..., c_k) and Γ = diag(s_1, s_2, ..., s_k), where c_j, s_j ≥ 0 and c_j² + s_j² = 1 for all j. The canonical angles between

    X = Q [I_k; 0; 0] V*,    Y = Q [C; Γ; 0] U*

are θ_j = arccos c_j, j = 1, ..., k, where Q, U, V are unitary. On the other hand, every pair X, Y ∈ C^{n×k} with 2k ≤ n and X*X = Y*Y = I_k having canonical angles arccos c_j can be represented this way [SS90].

5. Fact 13 is most useful when ‖E‖_2 is tiny, for then the computation of A's eigenvalues decouples into two smaller problems. In eigenvalue computations we often seek a unitary [X_1, X_2] such that

    [X_1 X_2]* A [X_1 X_2] = [M E*; E H],    [X_1 X_2]* Ã [X_1 X_2] = [M 0; 0 H],

and E is tiny. Since a unitary similarity transformation does not alter eigenvalues, Fact 13 still applies.

6. [LL05] Consider the Hermitian matrix

    A = [α ε; ε β],

where α > β and ε > 0. It has the two eigenvalues

    λ_± = (α + β ± √((α − β)² + 4ε²))/2,

and

    0 < λ_+ − α = β − λ_− = 2ε²/((α − β) + √((α − β)² + 4ε²)).

The inequalities in Fact 13 become equalities for this example.

15.2 Singular Value Problems

The reader is referred to the chapters of this handbook on the singular value decomposition for more background.

Definitions:

B ∈ C^{m×n} has a (first standard form) SVD B = UΣV*, where U ∈ C^{m×m} and V ∈ C^{n×n} are unitary and Σ = diag(σ_1, σ_2, ...) ∈ R^{m×n} is leading diagonal (σ_j starts in the top-left corner) with all σ_j ≥ 0.

Let SV(B) = {σ_1, σ_2, ..., σ_{min{m,n}}}, the set of B's singular values, where σ_1 ≥ σ_2 ≥ ··· ≥ 0, and let SV_ext(B) = SV(B) unless m > n, in which case SV_ext(B) = SV(B) ∪ {0, ..., 0} (m − n additional zeros).

A vector–scalar–vector triplet (u, σ, v) ∈ C^m × R × C^n is a singular triplet if u ≠ 0, v ≠ 0, σ ≥ 0, and Bv = σu, B*u = σv.

B is perturbed to B̃ = B + ΔB. The same notation is adopted for B̃, except all symbols with tildes.

Facts:

1. [SS90] (Mirsky) ‖Σ − Σ̃‖_UI ≤ ‖ΔB‖_UI.

2. Let residuals r = Bṽ − µ̃ũ and s = B*ũ − µ̃ṽ, where ‖ṽ‖_2 = ‖ũ‖_2 = 1.
(a) [Sun98] The smallest error matrix ΔB (in the 2-norm) for which (ũ, µ̃, ṽ) is an exact singular triplet of B̃ = B + ΔB satisfies ‖ΔB‖_2 = ε, where ε = max{‖r‖_2, ‖s‖_2}.
(b) |µ − µ̃| ≤ ε for some singular value µ of B.
(c) Let µ be the closest singular value in SV_ext(B) to µ̃, let (u, µ, v) be the associated singular triplet with ‖u‖_2 = ‖v‖_2 = 1, and let η = min |µ̃ − σ| over all σ ∈ SV_ext(B) with σ ≠ µ. If η > 0, then [SS90]

    |µ − µ̃| ≤ ε²/η    and    √(sin² θ(ũ, u) + sin² θ(ṽ, v)) ≤ √(‖r‖_2² + ‖s‖_2²)/η.

3. [LL05] Let

    B = [B_1 F; E B_2] ∈ C^{m×n},    B̃ = [B_1 0; 0 B_2],

where B_1 ∈ C^{k×k}, and set η = min |µ − ν| over all µ ∈ SV(B_1) and ν ∈ SV_ext(B_2), and ε = max{‖E‖_2, ‖F‖_2}. Then

    max_j |σ_j − σ̃_j| ≤ 2ε²/(η + √(η² + 4ε²)).
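Mirsky's theorem (Fact 1) is easy to check numerically. The following NumPy sketch (an illustration only, not part of the original text) compares the movement of the singular values with ‖ΔB‖ in both the spectral and Frobenius norms.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 7, 5
    B = rng.standard_normal((m, n))
    dB = 1e-4 * rng.standard_normal((m, n))

    s = np.linalg.svd(B, compute_uv=False)          # singular values, descending
    st = np.linalg.svd(B + dB, compute_uv=False)

    # Fact 1 (Mirsky), specialized to the spectral and Frobenius norms.
    assert np.max(np.abs(s - st)) <= np.linalg.norm(dB, 2) + 1e-12
    assert np.linalg.norm(s - st) <= np.linalg.norm(dB, 'fro') + 1e-12
    print("Mirsky's bounds hold on this sample")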

4. [SS90] (Wedin) Let B, B̃ ∈ C^{m×n} (m ≥ n) have decompositions

    [U_1 U_2]* B [V_1 V_2] = [B_1 0; 0 B_2],    [Ũ_1 Ũ_2]* B̃ [Ṽ_1 Ṽ_2] = [B̃_1 0; 0 B̃_2],

where [U_1, U_2], [V_1, V_2], [Ũ_1, Ũ_2], and [Ṽ_1, Ṽ_2] are unitary and U_1, Ũ_1 ∈ C^{m×k}, V_1, Ṽ_1 ∈ C^{n×k}. Set

    R = BṼ_1 − Ũ_1B̃_1,    S = B*Ũ_1 − Ṽ_1B̃_1.

If SV(B̃_1) ∩ SV_ext(B_2) = ∅, then

    √(‖sin Θ(U_1, Ũ_1)‖_F² + ‖sin Θ(V_1, Ṽ_1)‖_F²) ≤ √(‖R‖_F² + ‖S‖_F²)/η,

where η = min |µ − ν| over all µ ∈ SV(B̃_1) and ν ∈ SV_ext(B_2).

Examples:

1. For a 2×2 matrix B whose SVD can be read off directly and a small perturbation B̃ of it, Fact 1 gives max_j |σ_j − σ̃_j| ≤ ‖ΔB‖_2, a bound that is both easy to evaluate and, for this kind of perturbation, quite sharp.

2. Let B be as in the previous example and take ṽ and ũ to be coordinate vectors together with an approximate singular value µ̃; then the residuals r = Bṽ − µ̃ũ and s = B*ũ − µ̃ṽ can be computed directly, and Fact 2 applies. Note that, without calculating SV(B), one may bound the η needed for Fact 2(c) from below, because by Fact 1 the singular values of B̃ approximate those of B with errors no bigger than ‖ΔB‖_2.

3. Let B and B̃ be as in Example 1. Fact 3 gives a much better bound on max_j |σ_j − σ̃_j| than Fact 1 does.

4. Let B and B̃ be as in Example 1 (note B's SVD there). Applying Fact 4 with k = 1 gives a bound similar to that obtained from Fact 2(c).

5. Since unitary transformations do not change singular values, Fact 3 applies to B, B̃ ∈ C^{m×n} having decompositions

    [U_1 U_2]* B [V_1 V_2] = [B_1 F; E B_2],    [U_1 U_2]* B̃ [V_1 V_2] = [B_1 0; 0 B_2],

where [U_1, U_2] and [V_1, V_2] are unitary and U_1 ∈ C^{m×k}, V_1 ∈ C^{n×k}.

15.3 Polar Decomposition

The reader is referred to the chapter of this handbook on the polar decomposition for its definition and further information.

Definitions:

B ∈ F^{m×n} (m ≥ n) is perturbed to B̃ = B + ΔB, and their polar decompositions are

    B = QH,    B̃ = Q̃H̃ = (Q + ΔQ)(H + ΔH),

where F = R or C, and ΔB is restricted to F^{m×n} for B ∈ F^{m×n}.
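The polar factors in the definition above can be obtained directly from the SVD: if B = UΣV* is a thin SVD, then Q = UV* and H = VΣV*. The short sketch below (an illustration only, in NumPy) computes both factors this way and checks the defining properties.

    import numpy as np

    def polar(B):
        # Thin SVD B = U diag(s) Vh gives Q = U Vh (orthonormal columns) and
        # H = Vh^* diag(s) Vh (Hermitian positive semidefinite), so that B = Q H.
        U, s, Vh = np.linalg.svd(B, full_matrices=False)
        return U @ Vh, Vh.conj().T @ np.diag(s) @ Vh

    rng = np.random.default_rng(2)
    B = rng.standard_normal((5, 3))
    Q, H = polar(B)
    assert np.allclose(Q @ H, B)                      # B = QH
    assert np.allclose(Q.T @ Q, np.eye(3))            # Q has orthonormal columns
    assert np.allclose(H, H.T) and np.all(np.linalg.eigvalsh(H) >= -1e-12)
    print("polar factors computed and verified")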

Denote the singular values of B and B̃ by σ_1 ≥ σ_2 ≥ ··· and σ̃_1 ≥ σ̃_2 ≥ ···, respectively. The condition numbers for the polar factors in the Frobenius norm are defined as

    cond_F(X) = lim_{δ→0} sup_{‖ΔB‖_F ≤ δ} ‖ΔX‖_F/δ    for X = H or Q.

B is multiplicatively perturbed to B̃ if B̃ = D_L* B D_R for some D_L ∈ F^{m×m} and D_R ∈ F^{n×n}.

B is said to be graded if it can be scaled as B = GS such that G is well-behaved (i.e., κ_2(G) is of modest magnitude), where S is a scaling matrix, often diagonal but not required to be so for the facts below. Interesting cases are when κ_2(G) ≪ κ_2(B).

Facts:

1. [CG00] The condition numbers cond_F(Q) and cond_F(H) are tabulated as follows, where κ_2(B) = σ_1/σ_n:

    Factor Q, m = n:  2/(σ_{n−1} + σ_n) for F = R,   1/σ_n for F = C.
    Factor Q, m > n:  1/σ_n for F = R,   1/σ_n for F = C.
    Factor H:  √(2(1 + κ_2(B)²))/(1 + κ_2(B)) for both F = R and F = C.

2. [Kit86] ‖ΔH‖_F ≤ √2 ‖ΔB‖_F.

3. [Li95] If m = n and rank(B) = n, then ‖ΔH‖_F ≤ [√(2(1 + κ_2(B)²))/(1 + κ_2(B))] ‖ΔB‖_F.

4. [Li95], [LS02] If rank(B) = n, then

    ‖ΔQ‖_F ≤ 2/(σ_n + σ̃_n) ‖ΔB‖_F,    ‖ΔQ‖_UI ≤ (2/(σ_n + σ̃_n) + 1/max{σ_n, σ̃_n}) ‖ΔB‖_UI.

5. [Mat93] If B ∈ R^{n×n}, rank(B) = n, and ‖ΔB‖_2 < σ_n, then ‖ΔQ‖_UI is bounded by ‖ΔB‖_UI/‖ΔB‖_{(2)} times a logarithmic function of ‖ΔB‖_{(2)}/(σ_n + σ̃_n), where ‖·‖_{(2)} is the Ky Fan 2-norm, i.e., the sum of the two largest singular values; see [Mat93] for the precise statement.

6. [LS02] If B ∈ R^{n×n}, rank(B) = n, and ‖ΔB‖_2 < σ_{n−1} + σ_n, then

    ‖ΔQ‖_F ≤ 4/(σ_{n−1} + σ_n + σ̃_{n−1} + σ̃_n) ‖ΔB‖_F.

7. [Li97] Let B and B̃ = D_L* B D_R have full column rank. Then

    ‖ΔQ‖_F ≤ ‖I − D_L^{−1}‖_F + ‖I − D_L‖_F + ‖I − D_R^{−1}‖_F + ‖I − D_R‖_F.

8. [Li97], [Li05] Let B = GS and B̃ = G̃S, assume that G and B have full column rank, and write ΔG = G̃ − G. If ‖(ΔG)G†‖_2 < 1, then

    ‖ΔQ‖_F ≤ γ ‖(ΔG)G†‖_F,

where γ depends only on ‖(ΔG)G†‖_2; [Li05] gives a companion bound in which ΔH is measured relative to the grading S. See [Li97] and [Li05] for the precise constants.

Examples:

1. Take both B and B̃ to have orthonormal columns to see that some of the inequalities above on ΔQ are attainable.

2. Let B be a small matrix with known polar decomposition B = QH, and let B̃ be obtained by rounding each entry of B to five significant decimal digits. Then B̃ = Q̃H̃ can be computed from B̃'s SVD ŨΣ̃Ṽ* as Q̃ = ŨṼ* and H̃ = ṼΣ̃Ṽ*, and the resulting ‖ΔB‖_F, ‖ΔQ‖_F, and ‖ΔH‖_F can be compared with the bounds above: Fact 2 gives a bound on ‖ΔH‖_F and Fact 6 a bound on ‖ΔQ‖_F.

3. [Li97] and [Li05] contain examples on the use of the inequalities in Facts 7 and 8.

15.4 Generalized Eigenvalue Problems

The reader is referred to the chapter of this handbook on generalized eigenvalue problems for more background.

Definitions:

Let A, B ∈ C^{m×n}. A matrix pencil is a family of matrices A − λB, parameterized by a (complex) number λ. The associated generalized eigenvalue problem is to find the nontrivial solutions of the equations

    Ax = λBx    and/or    y*A = λy*B,

where x ∈ C^n, y ∈ C^m, and λ ∈ C; (x, λ, y) is called a generalized eigentriplet.

A − λB is regular if m = n and det(A − λB) ≠ 0 for some λ ∈ C. In what follows, all pencils in question are assumed regular.

An eigenvalue λ is conveniently represented by a nonzero number pair, a so-called generalized eigenvalue ⟨α, β⟩, interpreted as λ = α/β; β = 0 corresponds to the eigenvalue infinity.
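The ⟨α, β⟩ representation is also what LAPACK-style solvers return. The sketch below is an illustration only; it assumes SciPy's scipy.linalg.eig with its homogeneous_eigvals option, and it uses a pencil with singular B, so that one eigenvalue is infinite and shows up as β = 0.

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 0.0]])       # singular, so A - lambda*B has an infinite eigenvalue

    w, V = eig(A, B, homogeneous_eigvals=True)   # w is a 2 x n array: row 0 = alpha, row 1 = beta
    for a_j, b_j in zip(w[0], w[1]):
        lam = a_j / b_j if b_j != 0 else np.inf  # lambda = alpha/beta; beta = 0 means infinity
        print("<alpha, beta> =", (a_j, b_j), " lambda =", lam)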

A generalized eigenpair of A − λB refers to (⟨α, β⟩, x) such that βAx = αBx, where x ≠ 0 and |α|² + |β|² > 0.

A generalized eigentriplet of A − λB refers to (y, ⟨α, β⟩, x) such that βAx = αBx and βy*A = αy*B, where x ≠ 0, y ≠ 0, and |α|² + |β|² > 0.

The quantity cond(⟨α, β⟩) = ‖x‖_2 ‖y‖_2 / √(|y*Ax|² + |y*Bx|²) is the individual condition number for the generalized eigenvalue ⟨α, β⟩, where (y, ⟨α, β⟩, x) is a generalized eigentriplet of A − λB.

A − λB is perturbed to à − λB̃ = (A + ΔA) − λ(B + ΔB).

Let σ(A, B) = {⟨α_1, β_1⟩, ⟨α_2, β_2⟩, ..., ⟨α_n, β_n⟩} be the set of the generalized eigenvalues of A − λB, and set Z = [A, B] ∈ C^{n×2n}.

A − λB is diagonalizable if it is equivalent to a diagonal pencil, i.e., there are nonsingular X, Y ∈ C^{n×n} such that Y*AX = Ω and Y*BX = Γ, where Ω = diag(α_1, α_2, ..., α_n) and Γ = diag(β_1, β_2, ..., β_n).

A − λB is a definite pencil if both A and B are Hermitian and

    γ(A, B) = min_{x ∈ C^n, ‖x‖_2 = 1} |x*Ax + i x*Bx| > 0.

The same notation is adopted for à − λB̃, except all symbols with tildes.

The chordal distance between two nonzero pairs ⟨α, β⟩ and ⟨α̃, β̃⟩ is

    χ(⟨α, β⟩, ⟨α̃, β̃⟩) = |βα̃ − αβ̃| / (√(|α|² + |β|²) √(|α̃|² + |β̃|²)).

Facts:

1. [SS90] Let (y, ⟨α, β⟩, x) be a generalized eigentriplet of A − λB. The perturbation [ΔA, ΔB] changes ⟨α, β⟩ = ⟨y*Ax, y*Bx⟩ to ⟨α̃, β̃⟩ = ⟨α + y*(ΔA)x, β + y*(ΔB)x⟩ + O(ε²), where ε = ‖[ΔA, ΔB]‖_2, and

    χ(⟨α, β⟩, ⟨α̃, β̃⟩) ≤ cond(⟨α, β⟩) ε + O(ε²).

2. [SS90], [Li88] If A − λB is diagonalizable, then

    max_i min_j χ(⟨α_i, β_i⟩, ⟨α̃_j, β̃_j⟩) ≤ κ_2(X) ‖sin Θ(Z, Z̃)‖_2.

3. [Li94] (Sun)

    ‖sin Θ(Z, Z̃)‖_UI ≤ ‖Z̃ − Z‖_UI / max{σ_min(Z), σ_min(Z̃)},

where σ_min(Z) is Z's smallest singular value.

4. The quantity γ(A, B) is the minimum distance from the numerical range W(A + iB) to the origin, for a definite pencil A − λB.

5. [SS90] Suppose A − λB is a definite pencil. If à and B̃ are Hermitian and ‖[ΔA, ΔB]‖_2 < γ(A, B), then à − λB̃ is also a definite pencil, and there exists a permutation τ of {1, 2, ..., n} such that

    max_{1≤j≤n} χ(⟨α_j, β_j⟩, ⟨α̃_{τ(j)}, β̃_{τ(j)}⟩) ≤ ‖[ΔA, ΔB]‖_2 / γ(A, B).

6. [SS90] A definite pencil A − λB is always diagonalizable, X*AX = Ω and X*BX = Γ, with real spectra, so Facts 7 and 10 apply.
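The chordal distance can be evaluated directly from its definition; the helper below (illustration only) also shows that it handles infinite eigenvalues (β = 0) on the same footing as finite ones and does not depend on how a pair ⟨α, β⟩ is scaled.

    import numpy as np

    def chordal(a, b, at, bt):
        # chi(<a,b>, <at,bt>) = |b*at - a*bt| / (sqrt(|a|^2 + |b|^2) * sqrt(|at|^2 + |bt|^2))
        return abs(b * at - a * bt) / (np.hypot(abs(a), abs(b)) * np.hypot(abs(at), abs(bt)))

    print(chordal(2, 1, 4, 2))   # 0.0        : <2,1> and <4,2> represent the same eigenvalue lambda = 2
    print(chordal(2, 1, 1, 0))   # about 0.447: distance from lambda = 2 to the infinite eigenvalue
    print(chordal(1, 0, 3, 0))   # 0.0        : two representations of the infinite eigenvalue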

7. [Li03] Suppose A − λB and à − λB̃ are diagonalizable with real spectra, i.e., Y*AX = Ω, Y*BX = Γ and Ỹ*ÃX̃ = Ω̃, Ỹ*B̃X̃ = Γ̃, and all α_j, β_j and all α̃_j, β̃_j are real. Then the following statements hold, where

    Dχ = diag(χ(⟨α_1, β_1⟩, ⟨α̃_{τ(1)}, β̃_{τ(1)}⟩), ..., χ(⟨α_n, β_n⟩, ⟨α̃_{τ(n)}, β̃_{τ(n)}⟩))

for some permutation τ of {1, 2, ..., n} (possibly depending on the norm being used). In all cases the constant factor π/2 can be replaced by 1 for the 2-norm and the Frobenius norm.
(a) ‖Dχ‖_UI ≤ (π/2) κ_2(X) κ_2(X̃) ‖sin Θ(Z, Z̃)‖_UI.
(b) If all α_j² + β_j² = α̃_j² + β̃_j² = 1 in their eigendecompositions, then

    ‖Dχ‖_UI ≤ (π/2) ‖X‖_2 ‖Y‖_2 ‖X̃‖_2 ‖Ỹ‖_2 ‖[ΔA, ΔB]‖_UI.

8. Let residuals r = β̃Ax̃ − α̃Bx̃ and s* = β̃ỹ*A − α̃ỹ*B, where ‖x̃‖_2 = ‖ỹ‖_2 = 1. The smallest error matrix [ΔA, ΔB] in the 2-norm for which (ỹ, ⟨α̃, β̃⟩, x̃) is an exact generalized eigentriplet of à − λB̃ satisfies ‖[ΔA, ΔB]‖_2 = ε, where ε = max{‖r‖_2, ‖s‖_2}, and

    χ(⟨α̃, β̃⟩, ⟨α, β⟩) ≤ cond(⟨α̃, β̃⟩) ε + O(ε²)    for some ⟨α, β⟩ ∈ σ(A, B).

9. [BDD00] Suppose A and B are Hermitian and B is positive definite, and let residual r = Ax̃ − µ̃Bx̃ with ‖x̃‖_2 = 1.
(a) For some eigenvalue µ of A − λB,

    |µ − µ̃| ≤ ‖r‖_{B^{−1}} / ‖x̃‖_B,    where ‖z‖_M = √(z*Mz).

(b) Let µ be the closest eigenvalue to µ̃ among all eigenvalues of A − λB and x its associated eigenvector with ‖x‖_2 = 1, and let η = min |µ̃ − ν| over all the other eigenvalues ν ≠ µ of A − λB. If η > 0, then

    |µ − µ̃| ≤ (1/η)(‖r‖_{B^{−1}} / ‖x̃‖_B)²,    sin θ(x, x̃) ≤ √(κ_2(B)) ‖r‖_{B^{−1}} / (η ‖x̃‖_B).

10. [Li94] Suppose A − λB and à − λB̃ are diagonalizable and have eigendecompositions

    [Y_1 Y_2]* A [X_1 X_2] = diag(Ω_1, Ω_2),    [Y_1 Y_2]* B [X_1 X_2] = diag(Γ_1, Γ_2),

and the same for à − λB̃ except all symbols with tildes, where X_1, Y_1 ∈ C^{n×k} and Ω_1, Γ_1 ∈ C^{k×k}. Suppose α_j² + β_j² = α̃_j² + β̃_j² = 1 for 1 ≤ j ≤ n in the eigendecompositions, and set η = min χ(⟨α, β⟩, ⟨α̃, β̃⟩) taken over all ⟨α, β⟩ ∈ σ(Ω_1, Γ_1) and ⟨α̃, β̃⟩ ∈ σ(Ω̃_2, Γ̃_2). If η > 0, then ‖sin Θ(X_1, X̃_1)‖_F admits a bound of order ‖Z̃ − Z‖_F/η, with a constant built from the norms of the eigenvector matrices; see [Li94] for the precise statement.
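Fact 9(a) is straightforward to verify numerically. The sketch below is an illustration only; it assumes SciPy's scipy.linalg.eigh for the Hermitian definite pencil and compares |µ − µ̃| with the residual bound ‖r‖_{B^{-1}}/‖x̃‖_B.

    import numpy as np
    from scipy.linalg import eigh, solve

    rng = np.random.default_rng(3)
    n = 6
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2              # Hermitian
    C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)    # Hermitian positive definite

    w, V = eigh(A, B)                                # eigenvalues of the pencil A - lambda*B
    xt = V[:, 2] + 1e-3 * rng.standard_normal(n)     # a deliberately roughed-up eigenvector
    xt /= np.linalg.norm(xt)
    mut = (xt @ A @ xt) / (xt @ B @ xt)              # Rayleigh quotient for the pencil

    r = A @ xt - mut * (B @ xt)
    bound = np.sqrt(r @ solve(B, r)) / np.sqrt(xt @ B @ xt)   # ||r||_{B^{-1}} / ||xt||_B
    assert np.min(np.abs(w - mut)) <= bound + 1e-12
    print("Fact 9(a):", np.min(np.abs(w - mut)), "<=", bound)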

15.5 Generalized Singular Value Problems

Definitions:

Let A ∈ C^{m×n} and B ∈ C^{ℓ×n}. A matrix pair {A, B} is an (m, ℓ, n)-Grassmann matrix pair if rank([A; B]) = n. In what follows, all matrix pairs are (m, ℓ, n)-Grassmann matrix pairs.

A pair ⟨α, β⟩ is a generalized singular value of {A, B} if

    det(β²A*A − α²B*B) = 0,    α, β ≥ 0,    ⟨α, β⟩ ≠ ⟨0, 0⟩,

i.e., ⟨α², β²⟩ = ⟨µ, ν⟩ for some generalized eigenvalue ⟨µ, ν⟩ of the matrix pencil A*A − λB*B.

Generalized Singular Value Decomposition (GSVD) of {A, B}:

    U*AX = Σ_A,    V*BX = Σ_B,

where U ∈ C^{m×m} and V ∈ C^{ℓ×ℓ} are unitary, X ∈ C^{n×n} is nonsingular, Σ_A = diag(α_1, α_2, ...) is leading diagonal (α_j starts in the top-left corner), Σ_B = diag(..., β_{n−1}, β_n) is trailing diagonal (β_j ends in the bottom-right corner), and α_j, β_j ≥ 0 with α_j² + β_j² = 1 for 1 ≤ j ≤ n. (Set some α_j = 0 and/or some β_j = 0 if necessary.)

{A, B} is perturbed to {Ã, B̃} = {A + ΔA, B + ΔB}.

Let SV(A, B) = {⟨α_1, β_1⟩, ⟨α_2, β_2⟩, ..., ⟨α_n, β_n⟩} be the set of the generalized singular values of {A, B}, and set Z = [A; B] ∈ C^{(m+ℓ)×n}. The same notation is adopted for {Ã, B̃}, except all symbols with tildes.

Facts:

1. If {A, B} is an (m, ℓ, n)-Grassmann matrix pair, then A*A − λB*B is a definite matrix pencil.

2. [Van76] The GSVD of an (m, ℓ, n)-Grassmann matrix pair {A, B} exists.

3. [Li93] There exist permutations τ and ω of {1, 2, ..., n} such that

    max_{1≤j≤n} χ(⟨α_j, β_j⟩, ⟨α̃_{τ(j)}, β̃_{τ(j)}⟩) ≤ ‖sin Θ(Z, Z̃)‖_2,
    [∑_{j=1}^n χ(⟨α_j, β_j⟩, ⟨α̃_{ω(j)}, β̃_{ω(j)}⟩)²]^{1/2} ≤ ‖sin Θ(Z, Z̃)‖_F.

4. [Li94] (Sun)

    ‖sin Θ(Z, Z̃)‖_UI ≤ ‖Z̃ − Z‖_UI / max{σ_min(Z), σ_min(Z̃)},

where σ_min(Z) is Z's smallest singular value.

5. [Pai84] If α_i² + β_i² = α̃_i² + β̃_i² = 1 for i = 1, ..., n, then there exists a permutation ϖ of {1, 2, ..., n} such that

    [∑_{j=1}^n ((α_j − α̃_{ϖ(j)})² + (β_j − β̃_{ϖ(j)})²)]^{1/2} ≤ min_{Q unitary} ‖Z_0 − Z̃_0 Q‖_F,

where Z_0 = Z(Z*Z)^{−1/2} and Z̃_0 = Z̃(Z̃*Z̃)^{−1/2}.
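By the definition above, the pairs ⟨α_j, β_j⟩ can be read off from the generalized eigenvalues of the Hermitian pencil A*A − λB*B. The sketch below is an illustration only: it assumes B*B is nonsingular so that SciPy's eigh applies, and it is not how a numerically careful GSVD code would proceed (forming A*A and B*B squares the condition number).

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(4)
    m, l, n = 5, 4, 3
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((l, n))     # with probability 1, rank([A; B]) = n and B^T B is invertible

    mu = eigh(A.T @ A, B.T @ B, eigvals_only=True)   # generalized eigenvalues mu = alpha^2 / beta^2
    beta = 1.0 / np.sqrt(1.0 + mu)
    alpha = np.sqrt(mu) * beta                       # normalized so alpha^2 + beta^2 = 1
    print("generalized singular value pairs <alpha, beta>:")
    print(np.column_stack([alpha, beta]))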

6. [Li93], [Sun83] Perturbation bounds on the generalized singular subspaces (those spanned by one or a few columns of U, V, and X in the GSVD) are also available, but they are quite complicated.

15.6 Relative Perturbation Theory for Eigenvalue Problems

Definitions:

Let the scalar α̃ be an approximation to α, and let p ≥ 1. Define relative distances between α and α̃ as follows. For |α| + |α̃| ≠ 0,

    d(α, α̃) = |α − α̃| / |α|    (classical measure),
    ϱ_p(α, α̃) = |α − α̃| / (|α|^p + |α̃|^p)^{1/p}    ([Li98]),
    ζ(α, α̃) = |α − α̃| / √(|αα̃|)    ([BD90], [DV92]),
    ς(α, α̃) = |ln(α̃/α)|    for αα̃ > 0    ([LM99], [Li99b]),

and d(0, 0) = ϱ_p(0, 0) = ζ(0, 0) = ς(0, 0) = 0.

A ∈ C^{n×n} is multiplicatively perturbed to à if à = D_L* A D_R for some D_L, D_R ∈ C^{n×n}.

Denote σ(A) = {λ_1, λ_2, ..., λ_n} and σ(Ã) = {λ̃_1, λ̃_2, ..., λ̃_n}.

A ∈ C^{n×n} is said to be graded if it can be scaled as A = S*HS such that H is well-behaved (i.e., κ_2(H) is of modest magnitude), where S is a scaling matrix, often diagonal but not required to be so for the facts below. Interesting cases are when κ_2(H) ≪ κ_2(A).

Facts:

1. [Bar00] ϱ_p(·, ·) is a metric on C for p ≥ 1.

2. Let A and à = D*AD ∈ C^{n×n} be Hermitian, where D is nonsingular, with the eigenvalues of both arranged in ascending order.
(a) [HJ85] (Ostrowski) There exist t_j satisfying λ_min(D*D) ≤ t_j ≤ λ_max(D*D) such that λ̃_j = t_j λ_j for j = 1, 2, ..., n, and thus

    max_{1≤j≤n} d(λ_j, λ̃_j) ≤ ‖I − D*D‖_2.

(b) [LM99], [Li98]

    ‖diag(ς(λ_1, λ̃_1), ..., ς(λ_n, λ̃_n))‖_UI ≤ ‖ln(D*D)‖_UI,
    ‖diag(ζ(λ_1, λ̃_1), ..., ζ(λ_n, λ̃_n))‖_UI ≤ ‖D* − D^{−1}‖_UI.

3. [Li98], [LM99] Let A = S*HS be a positive semidefinite Hermitian matrix, perturbed to à = S*(H + ΔH)S. Suppose H is positive definite and ‖H^{−1/2}(ΔH)H^{−1/2}‖_2 < 1, and set

    M = H^{1/2}SS*H^{1/2},    M̃ = D*MD,    where D = [I + H^{−1/2}(ΔH)H^{−1/2}]^{1/2} = D*.

Then σ(A) = σ(M) and σ(Ã) = σ(M̃), and the inequalities in Fact 2 above hold with the D here. Note that

    ‖D* − D^{−1}‖_UI ≤ ‖H^{−1/2}(ΔH)H^{−1/2}‖_UI / √(1 − ‖H^{−1/2}(ΔH)H^{−1/2}‖_2).

4. [BD90], [VS93] Suppose A and à are Hermitian, and let |A| = (A²)^{1/2} be the positive semidefinite square root of A². If there exists 0 ≤ δ < 1 such that |x*(ΔA)x| ≤ δ x*|A|x for all x ∈ C^n, then either λ_j = λ̃_j = 0 or

    1 − δ ≤ λ̃_j / λ_j ≤ 1 + δ.

5. [Li99a] Let Hermitian A and à = D*AD have decompositions

    [X_1 X_2]* A [X_1 X_2] = diag(A_1, A_2),    [X̃_1 X̃_2]* Ã [X̃_1 X̃_2] = diag(Ã_1, Ã_2),

where [X_1, X_2] and [X̃_1, X̃_2] are unitary and X_1, X̃_1 ∈ C^{n×k}. If η = min ϱ_2(µ, µ̃) > 0, the minimum being taken over all µ ∈ σ(A_1) and µ̃ ∈ σ(Ã_2), then

    ‖sin Θ(X_1, X̃_1)‖_F ≤ (‖(I − D^{−1})X_1‖_F + ‖(I − D*)X_1‖_F) / η.

6. [Li99a] Let A = S*HS be a positive semidefinite Hermitian matrix perturbed to à = S*(H + ΔH)S, with decompositions denoted as in Fact 5. Let D = [I + H^{−1/2}(ΔH)H^{−1/2}]^{1/2}. Assume H is positive definite and ‖H^{−1/2}(ΔH)H^{−1/2}‖_2 < 1. If η_ζ = min ζ(µ, µ̃) > 0, the minimum being taken over all µ ∈ σ(A_1) and µ̃ ∈ σ(Ã_2), then

    ‖sin Θ(X_1, X̃_1)‖_F ≤ ‖D* − D^{−1}‖_F / η_ζ.

Examples:

1. [DK90], [EI95] Let A be a real symmetric tridiagonal matrix with zero diagonal and off-diagonal entries b_1, b_2, ..., b_{n−1}. Suppose à is identical to A except for its off-diagonal entries, which change to β_1b_1, β_2b_2, ..., β_{n−1}b_{n−1}, where all β_i are real and supposedly close to 1. Then à = D*AD, where D = diag(d_1, d_2, ..., d_n) with d_1 = 1 and

    d_{2k} = (β_1β_3···β_{2k−1}) / (β_2β_4···β_{2k−2}),    d_{2k+1} = (β_2β_4···β_{2k}) / (β_1β_3···β_{2k−1}).

Let β = ∏_{j=1}^{n−1} max{β_j, 1/β_j}. Then β^{−1}I ≤ D ≤ βI, and Fact 2 and Fact 5 apply. Now if all 1 − ε ≤ β_j ≤ 1 + ε, then 1 ≤ β ≤ (1 − ε)^{−(n−1)}, so both β and β^{−1} differ from 1 by roughly (n − 1)ε at most.

2. Let A = S*HS be graded by a diagonal scaling S, with H positive definite and well conditioned (‖H^{−1}‖_2 of modest size).

Suppose that each entry A_ij of A is perturbed to A_ij(1 + δ_ij) with |δ_ij| ≤ ε. Then |(ΔH)_ij| ≤ ε|H_ij|, so ‖ΔH‖_2 is at most a small multiple of ε, and Fact 3 implies that every ζ(λ_j, λ̃_j) is likewise at most a modest multiple of ε: entrywise relative perturbations of a graded matrix cause only small relative changes in its eigenvalues, however tiny those eigenvalues may be.

15.7 Relative Perturbation Theory for Singular Value Problems

Definitions:

B ∈ C^{m×n} is multiplicatively perturbed to B̃ if B̃ = D_L* B D_R for some D_L ∈ C^{m×m} and D_R ∈ C^{n×n}.

Denote the singular values of B and B̃ as SV(B) = {σ_1, σ_2, ..., σ_{min{m,n}}} and SV(B̃) = {σ̃_1, σ̃_2, ..., σ̃_{min{m,n}}}.

B is said to be (highly) graded if it can be scaled as B = GS such that G is well-behaved (i.e., κ_2(G) is of modest magnitude), where S is a scaling matrix, often diagonal but not required to be so for the facts below. Interesting cases are when κ_2(G) ≪ κ_2(B).

Facts:

1. Let B, B̃ = D_L* B D_R ∈ C^{m×n}, where D_L and D_R are nonsingular.
(a) [EI95] For 1 ≤ j ≤ min{m, n},

    σ_j / (‖D_L^{−1}‖_2 ‖D_R^{−1}‖_2) ≤ σ̃_j ≤ σ_j ‖D_L‖_2 ‖D_R‖_2.

(b) [Li98], [LM99]

    ‖diag(ζ(σ_1, σ̃_1), ..., ζ(σ_n, σ̃_n))‖_UI ≤ ‖D_L* − D_L^{−1}‖_UI + ‖D_R* − D_R^{−1}‖_UI.

2. [Li99a] Let B, B̃ = D_L* B D_R ∈ C^{m×n} (m ≥ n) have decompositions

    [U_1 U_2]* B [V_1 V_2] = diag(B_1, B_2),    [Ũ_1 Ũ_2]* B̃ [Ṽ_1 Ṽ_2] = diag(B̃_1, B̃_2),

where [U_1, U_2], [V_1, V_2], [Ũ_1, Ũ_2], and [Ṽ_1, Ṽ_2] are unitary and U_1, Ũ_1 ∈ C^{m×k}, V_1, Ṽ_1 ∈ C^{n×k}. If SV(B_1) ∩ SV_ext(B̃_2) = ∅, then

    √(‖sin Θ(U_1, Ũ_1)‖_F² + ‖sin Θ(V_1, Ṽ_1)‖_F²)
        ≤ (1/η) [‖(I − D_L*)U_1‖_F² + ‖(I − D_L^{−1})U_1‖_F² + ‖(I − D_R*)V_1‖_F² + ‖(I − D_R^{−1})V_1‖_F²]^{1/2},

where η = min ϱ_2(µ, µ̃) over all µ ∈ SV(B_1) and µ̃ ∈ SV_ext(B̃_2).

3. [Li98], [Li99a], [LM99] Let B = GS and B̃ = G̃S be two m×n matrices, where rank(G) = n, and let ΔG = G̃ − G. Then B̃ = DB, where D = I + (ΔG)G†, and Fact 1 and Fact 2 apply with D_L = D* and D_R = I. Note that, provided ‖(ΔG)G†‖_2 < 1,

    ‖D* − D^{−1}‖_UI ≤ (1 + 1/(1 − ‖(ΔG)G†‖_2)) ‖(ΔG)G†‖_UI.
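Fact 1(a) is the reason multiplicative perturbations preserve even the tiniest singular values to high relative accuracy, and it is cheap to check. A NumPy sketch (illustration only):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 6, 4
    B = rng.standard_normal((m, n)) @ np.diag(10.0 ** -np.arange(n))   # strongly graded columns
    DL = np.eye(m) + 1e-3 * rng.standard_normal((m, m))
    DR = np.eye(n) + 1e-3 * rng.standard_normal((n, n))
    Bt = DL.T @ B @ DR                                                 # Bt = DL^* B DR

    s = np.linalg.svd(B, compute_uv=False)
    st = np.linalg.svd(Bt, compute_uv=False)
    lo = s / (np.linalg.norm(np.linalg.inv(DL), 2) * np.linalg.norm(np.linalg.inv(DR), 2))
    hi = s * np.linalg.norm(DL, 2) * np.linalg.norm(DR, 2)
    assert np.all(lo <= st) and np.all(st <= hi)
    print(np.column_stack([lo, st, hi]))   # each sigma~_j sits inside its relative bracket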

Examples:

1. [BD90], [DK90], [EI95] Let B be a real bidiagonal matrix with diagonal entries a_1, a_2, ..., a_n and off-diagonal (the one above the diagonal) entries b_1, b_2, ..., b_{n−1}. Suppose B̃ is the same as B except for its diagonal entries, which change to α_1a_1, α_2a_2, ..., α_na_n, and its off-diagonal entries, which change to β_1b_1, β_2b_2, ..., β_{n−1}b_{n−1}. Then B̃ = D_L* B D_R with diagonal

    D_L = diag(α_1, α_1α_2/β_1, α_1α_2α_3/(β_1β_2), ...),    D_R = diag(1, β_1/α_1, β_1β_2/(α_1α_2), ...).

Let α = ∏_{j=1}^n max{α_j, 1/α_j} and β = ∏_{j=1}^{n−1} max{β_j, 1/β_j}. Then

    ‖D_L‖_2 ‖D_R‖_2 ≤ αβ    and    ‖D_L^{−1}‖_2 ‖D_R^{−1}‖_2 ≤ αβ,

and Fact 1 and Fact 2 apply; in particular, (αβ)^{−1} ≤ σ̃_j/σ_j ≤ αβ. Now if all 1 − ε ≤ α_i, β_j ≤ 1 + ε, then αβ ≤ (1 − ε)^{−(2n−1)}, so every singular value is perturbed by a relative amount of roughly (2n − 1)ε at most.

2. Consider the block partitioned matrices

    B = [B_1 B_{12}; 0 B_2],    B̃ = [B_1 0; 0 B_2] = B [I −B_1^{−1}B_{12}; 0 I] = B D_R.

By Fact 1(b), ζ(σ_j, σ̃_j) ≤ ‖B_1^{−1}B_{12}‖_2 for every j. Interesting cases are when ‖B_1^{−1}B_{12}‖_2 is tiny enough to be treated as zero, so that SV(B̃) approximates SV(B) well. This situation occurs in computing the SVD of a bidiagonal matrix.

Author Note: Supported in part by the National Science Foundation.

References

[BDD00] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst (Eds.). Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000.
[BD90] J. Barlow and J. Demmel. Computing accurate eigensystems of scaled diagonally dominant matrices. SIAM J. Numer. Anal., 1990.
[Bar00] A. Barrlund. The p-relative distance is a metric. SIAM J. Matrix Anal. Appl., 2000.
[Bau85] H. Baumgärtel. Analytical Perturbation Theory for Matrices and Operators. Birkhäuser, Basel, 1985.
[Bha96] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics, Vol. 169. Springer, New York, 1996.
[BKL97] R. Bhatia, F. Kittaneh, and R.-C. Li. Some inequalities for commutators and an application to spectral variation II. Lin. Multilin. Alg., 1997.
[CG00] F. Chatelin and S. Gratton. On the condition numbers associated with the polar factorization of a matrix. Numer. Lin. Alg. Appl., 2000.
[DK70] C. Davis and W. Kahan. The rotation of eigenvectors by a perturbation III. SIAM J. Numer. Anal., 1970.
[DK90] J. Demmel and W. Kahan. Accurate singular values of bidiagonal matrices. SIAM J. Sci. Statist. Comput., 1990.
[DV92] J. Demmel and K. Veselić. Jacobi's method is more accurate than QR. SIAM J. Matrix Anal. Appl., 1992.

[EI95] S.C. Eisenstat and I.C.F. Ipsen. Relative perturbation techniques for singular value problems. SIAM J. Numer. Anal., 1995.
[HJ85] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[KPJ82] W. Kahan, B.N. Parlett, and E. Jiang. Residual bounds on approximate eigensystems of nonnormal matrices. SIAM J. Numer. Anal., 1982.
[Kat70] T. Kato. Perturbation Theory for Linear Operators, 2nd ed. Springer-Verlag, Berlin, 1970.
[Kit86] F. Kittaneh. Inequalities for the Schatten p-norm III. Commun. Math. Phys., 1986.
[LL05] Chi-Kwong Li and Ren-Cang Li. A note on eigenvalues of perturbed Hermitian matrices. Lin. Alg. Appl., 2005.
[LM99] Chi-Kwong Li and R. Mathias. The Lidskii–Mirsky–Wielandt theorem: additive and multiplicative versions. Numer. Math., 1999.
[Li88] Ren-Cang Li. A converse to the Bauer–Fike type theorem. Lin. Alg. Appl., 1988.
[Li93] Ren-Cang Li. Bounds on perturbations of generalized singular values and of associated subspaces. SIAM J. Matrix Anal. Appl., 1993.
[Li94] Ren-Cang Li. On perturbations of matrix pencils with real spectra. Math. Comp., 1994.
[Li95] Ren-Cang Li. New perturbation bounds for the unitary polar factor. SIAM J. Matrix Anal. Appl., 1995.
[Li97] Ren-Cang Li. Relative perturbation bounds for the unitary polar factor. BIT, 1997.
[Li98] Ren-Cang Li. Relative perturbation theory: I. Eigenvalue and singular value variations. SIAM J. Matrix Anal. Appl., 1998.
[Li99a] Ren-Cang Li. Relative perturbation theory: II. Eigenspace and singular subspace variations. SIAM J. Matrix Anal. Appl., 1999.
[Li99b] Ren-Cang Li. A bound on the solution to a structured Sylvester equation with an application to relative perturbation theory. SIAM J. Matrix Anal. Appl., 1999.
[Li03] Ren-Cang Li. On perturbations of matrix pencils with real spectra, a revisit. Math. Comp., 2003.
[Li05] Ren-Cang Li. Relative perturbation bounds for positive polar factors of graded matrices. SIAM J. Matrix Anal. Appl., 2005.
[LS02] W. Li and W. Sun. Perturbation bounds for unitary and subunitary polar factors. SIAM J. Matrix Anal. Appl., 2002.
[Mat93] R. Mathias. Perturbation bounds for the polar decomposition. SIAM J. Matrix Anal. Appl., 1993.
[Pai84] C.C. Paige. A note on a result of Sun Ji-Guang: sensitivity of the CS and GSV decompositions. SIAM J. Numer. Anal., 1984.
[Par98] B.N. Parlett. The Symmetric Eigenvalue Problem. SIAM, Philadelphia, 1998.
[SS90] G.W. Stewart and Ji-Guang Sun. Matrix Perturbation Theory. Academic Press, Boston, 1990.
[Sun83] Ji-Guang Sun. Perturbation analysis for the generalized singular value decomposition. SIAM J. Numer. Anal., 1983.
[Sun91] Ji-Guang Sun. Eigenvalues of Rayleigh quotient matrices. Numer. Math., 1991.
[Sun96] Ji-Guang Sun. On the variation of the spectrum of a normal matrix. Lin. Alg. Appl., 1996.
[Sun98] Ji-Guang Sun. Stability and accuracy: perturbation analysis of algebraic eigenproblems. Technical Report, Department of Computing Science, Umeå University, Sweden, 1998.
[Van76] C.F. Van Loan. Generalizing the singular value decomposition. SIAM J. Numer. Anal., 1976.
[VS93] Krešimir Veselić and Ivan Slapničar. Floating-point perturbations of Hermitian matrices. Lin. Alg. Appl., 1993.


More information

HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM

HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND BOR PLESTENJAK Abstract. We study harmonic and refined extraction methods for the multiparameter

More information

of dimension n 1 n 2, one defines the matrix determinants

of dimension n 1 n 2, one defines the matrix determinants HARMONIC RAYLEIGH RITZ FOR THE MULTIPARAMETER EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND BOR PLESTENJAK Abstract. We study harmonic and refined extraction methods for the multiparameter eigenvalue

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

arxiv: v2 [math.na] 21 Sep 2015

arxiv: v2 [math.na] 21 Sep 2015 Forward stable eigenvalue decomposition of rank-one modifications of diagonal matrices arxiv:1405.7537v2 [math.na] 21 Sep 2015 N. Jakovčević Stor a,1,, I. Slapničar a,1, J. L. Barlow b,2 a Faculty of Electrical

More information

1. Introduction. In this paper we consider the large and sparse eigenvalue problem. Ax = λx (1.1) T (λ)x = 0 (1.2)

1. Introduction. In this paper we consider the large and sparse eigenvalue problem. Ax = λx (1.1) T (λ)x = 0 (1.2) A NEW JUSTIFICATION OF THE JACOBI DAVIDSON METHOD FOR LARGE EIGENPROBLEMS HEINRICH VOSS Abstract. The Jacobi Davidson method is known to converge at least quadratically if the correction equation is solved

More information

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15]) LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information