Sensitivity analysis for the multivariate eigenvalue problem
Electronic Journal of Linear Algebra, Volume 26 (2013)

Sensitivity analysis for the multivariate eigenvalue problem

Xin-guo Liu, Wei-guo Wang (wgwang@ouc.edu.cn), Lian-hua Xu

Recommended Citation: Liu, Xin-guo; Wang, Wei-guo; and Xu, Lian-hua. (2013), "Sensitivity analysis for the multivariate eigenvalue problem", Electronic Journal of Linear Algebra, Volume 26.

This Article is brought to you for free and open access by Wyoming Scholars Repository. It has been accepted for inclusion in Electronic Journal of Linear Algebra by an authorized editor of Wyoming Scholars Repository. For more information, please contact scholcom@uwyo.edu.
SENSITIVITY ANALYSIS FOR THE MULTIVARIATE EIGENVALUE PROBLEM

XIN-GUO LIU, WEI-GUO WANG, AND LIAN-HUA XU

Abstract. This paper concerns the sensitivity analysis of the multivariate eigenvalue problem (MEP). The concept of a simple eigenvalue of a matrix is generalized to the MEP, and the first-order perturbation expansions of a simple multivariate eigenvalue and the corresponding multivariate eigenvector are presented. Explicit expressions for the condition numbers, perturbation upper bounds, and backward errors of multivariate eigenpairs are derived.

Key words. Multivariate eigenvalue problem, Simple eigenvalue, Condition number, Perturbation bound, Backward error.

AMS subject classifications. 65H10, 15A12, 65F15.

1. Introduction. Given a symmetric positive definite matrix A ∈ R^{n×n} and a set of positive integers P_m = {n_1, n_2, ..., n_m} with sum_{i=1}^{m} n_i = n, partition A into the block form

    A = [ A_{11}  A_{12}  ...  A_{1m} ;
          A_{21}  A_{22}  ...  A_{2m} ;
          ...
          A_{m1}  A_{m2}  ...  A_{mm} ],

where A_{ij} ∈ R^{n_i×n_j}. Let λ_1, λ_2, ..., λ_m be m real scalars and

    Λ := diag{λ_1 I_{n_1}, λ_2 I_{n_2}, ..., λ_m I_{n_m}},

Received by the editors on April 7, 2012. Accepted for publication on August 5, 2013. Handling Editor: Panayiotis Psarrakos.

School of Mathematical Sciences, Ocean University of China, Qingdao, 266100, P.R. China (liuxinguo@ouc.edu.cn). Supported in part by the National Natural Science Foundation of China and by Shandong Province Natural Science Foundation Grant Y2008A07.

School of Mathematical Sciences, Ocean University of China, Qingdao, 266100, P.R. China (wgwang@ouc.edu.cn). Supported in part by the Fundamental Research Funds for the Central Universities.

School of Mathematical Sciences, Ocean University of China, Qingdao, 266100, P.R. China (lotus_xu2006@126.com).

636
Sensitivity Analysis for the Multivariate Eigenvalue Problem 637

where I_{n_i} ∈ R^{n_i×n_i} is the identity matrix. Denote

    S := { x = [x_1^T x_2^T ... x_m^T]^T ∈ R^n : x_j ∈ R^{n_j}, ||x_j||_2 = 1, j = 1, 2, ..., m }.

The multivariate eigenvalue problem (MEP) is to find a pair (Λ, x) such that

(1.1)    Ax = Λx,  x ∈ S,

where Λ is called a multivariate eigenvalue, x a multivariate eigenvector, and (Λ, x) a multivariate eigenpair or a solution to the MEP (1.1). The MEP arises as a necessary condition for a solution of the maximal correlation problem (MCP):

(1.2)    max x^T Ax,  s.t. x ∈ S,

and finds applications in pattern analysis [10]. For more information about background material, see [1, 6, 13, 21].

Several special cases of the MEP are well understood. When m = 1 it is a symmetric eigenvalue problem for a positive definite matrix (see, e.g., [3]). When m = n, the MEP has exactly 2^n solutions, which can be formulated explicitly. Chu and Watterson [1] also discovered that there are precisely prod_{i=1}^{m} 2n_i solutions for a generic matrix A whose n eigenvalues are distinct. Horst [8] proposed an iterative method (named the power method in [1] and the Horst method in [9, 13]) for computing a solution of the MEP. Detailed convergence analyses of the Horst iteration are presented in [1] and [9]. Following its innate iterative structure, an SOR-style generalization (P-SOR) was proposed by Sun [13] as a natural improvement upon the Horst method, and its convergence was also established there. Recently, several results on the global maximizer of the maximal correlation problem have been presented in [5, 18, 19, 20, 21].

Although, as mentioned above, the MEP with m = 1 is a symmetric eigenvalue problem for a positive definite matrix, it must be pointed out that the generic case 1 < m < n differs essentially from the classical eigenvalue problem. For example, neither a characteristic polynomial nor a Schur-type decomposition can be readily employed.
It is known [3, 7] that both condition numbers and backward errors play important roles in matrix computations. However, to our knowledge, little attention has been paid to the sensitivity analysis of the MEP. Golub and Zha [4] present a perturbation analysis of the canonical correlations of matrix pairs.
This paper deals with the sensitivity analysis of the MEP. The rest of this paper is organized as follows. Section 2 establishes the concept of a simple multivariate eigenpair and presents first-order expansions; a necessary and sufficient condition for a multivariate eigenpair to be simple is examined. In Section 3, following Rice's idea [12], we introduce condition numbers for multivariate eigenvalues and eigenvectors and present explicit formulae for them. In Section 4, we derive perturbation bounds for a multivariate eigenpair by applying the Brouwer fixed-point theorem. Section 5 deals with the backward error of an approximate multivariate eigenpair. In Section 6, some simple numerical examples are shown. Finally, we give our concluding remarks in Section 7.

2. First-order expansions of a simple multivariate eigenpair. The MEP (1.1) can be equivalently written as the nonlinear system

(2.1)    Ax = Λx,  x_i^T x_i = 1,  x_i ∈ R^{n_i},  i = 1, 2, ..., m.

The Jacobian matrix at a multivariate eigenpair (Λ, x) is the (n+m)×(n+m) matrix

    J(Λ, x) := [ A − Λ    −B(x) ;
                 B(x)^T   0     ],

where

    B(x) := diag(x_1, x_2, ..., x_m) ∈ R^{n×m}.

First we generalize the concept of a simple root of a function (see, e.g., [11]) to the nonlinear system (2.1) as follows.

Definition 2.1. A multivariate eigenpair (Λ, x) of the MEP (1.1) with nonsingular J(Λ, x) is called a simple multivariate eigenpair.

Remark 2.2. It is easy to see that if m = 1, then (λ, x) is a simple eigenpair of A if and only if λ is a simple eigenvalue of A.

After a careful look at the proof of [1, Lemma 3.6], we know that the following result holds.

Theorem 2.3. Let A ∈ R^{n×n} be a symmetric positive definite matrix whose n eigenvalues are distinct. Then any multivariate eigenpair of the MEP (1.1) is simple.
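The objects above are straightforward to form numerically. The following NumPy sketch (an illustration, using the sign convention J = [[A − Λ, −B(x)], [B(x)^T, 0]]; the signs of the off-diagonal blocks do not affect nonsingularity) assembles B(x) and J(Λ, x):

```python
import numpy as np

def build_B(x, sizes):
    """B(x) = diag(x_1, ..., x_m) in R^{n x m}: column j holds the j-th
    block x_j of x and is zero elsewhere."""
    n, m = sum(sizes), len(sizes)
    B = np.zeros((n, m))
    pos = 0
    for j, nj in enumerate(sizes):
        B[pos:pos + nj, j] = x[pos:pos + nj]
        pos += nj
    return B

def jacobian(A, lam, x, sizes):
    """Jacobian J(Lambda, x) of the system Ax = Lambda x, x_i^T x_i = 1,
    with the sign convention J = [[A - Lambda, -B(x)], [B(x)^T, 0]]."""
    m = len(sizes)
    Lam = np.diag(np.repeat(lam, sizes))   # Lambda = diag(lam_i * I_{n_i})
    B = build_B(x, sizes)
    return np.block([[A - Lam, -B], [B.T, np.zeros((m, m))]])
```

For m = 1 this reduces to the familiar bordered matrix of the standard symmetric eigenvalue problem, and (λ, x) is a simple eigenpair exactly when this matrix is nonsingular.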
To simplify the notation, and without loss of generality, we assume that n_j > 1, j = 1, 2, ..., m. For x_j ∈ R^{n_j} with ||x_j||_2 = 1, there exists an orthogonal matrix Q_j = [x_j, X_j] such that

    Q_j^T x_j = [1, 0, ..., 0]^T,  j = 1, 2, ..., m.

(If n_j = 1, we just take Q_j = x_j.) Set

(2.2)    A_λ = G(x)^T A G(x),  G(x) = diag(X_1, ..., X_m),
(2.3)    Λ̄ = diag(λ_1 I_{n_1−1}, ..., λ_m I_{n_m−1}).

Theorem 2.4. Let (Λ, x) be a multivariate eigenpair of the MEP (1.1). Then (Λ, x) is simple if and only if the matrix A_λ − Λ̄ is nonsingular.

Proof. Let Q = diag(Q_1, ..., Q_m), and let P ∈ R^{n×n} be a permutation matrix such that

    P diag(e_1^{(1)}, ..., e_1^{(m)}) = [I_m; 0],

where e_1^{(j)} = [1, 0, ..., 0]^T ∈ R^{n_j}, j = 1, 2, ..., m. Then

(2.4)    [PQ^T 0; 0 I_m] J(Λ, x) [QP^T 0; 0 I_m] = J_pq,

where

(2.5)    J_pq = [ M_{11}   M_{12}  −I_m ;
                  M_{12}^T M_{22}   0   ;
                  I_m      0        0   ],
         [M_{11} M_{12}; M_{12}^T M_{22}] = PQ^T (A − Λ) QP^T,  M_{22} = A_λ − Λ̄.

From (2.4) and (2.5), we see that

    |det(J(Λ, x))| = |det(A_λ − Λ̄)|,

and our conclusion holds.

In the rest of this paper, we assume that (Λ, x) is a simple multivariate eigenpair of the MEP (1.1).
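Theorem 2.4 turns simplicity into a computable test: build complement bases X_j, form A_λ − Λ̄, and check that its smallest singular value is positive. A minimal sketch (the X_j are taken from a full QR factorization; any orthonormal complement gives the same singular values, consistent with the remark after Corollary 3.3 below):

```python
import numpy as np

def complement_basis(xj):
    """Orthonormal basis X_j of the orthogonal complement of the unit vector x_j."""
    Q, _ = np.linalg.qr(xj.reshape(-1, 1), mode='complete')
    return Q[:, 1:]          # first column of Q is +/- x_j

def simplicity_gap(A, lam, x, sizes):
    """sigma_min(A_lambda - Lambda_bar); positive iff (Lambda, x) is a
    simple multivariate eigenpair (Theorem 2.4)."""
    n, m = sum(sizes), len(sizes)
    # G(x) = diag(X_1, ..., X_m) in R^{n x (n-m)}
    G = np.zeros((n, n - m))
    pos, cpos = 0, 0
    for nj in sizes:
        G[pos:pos + nj, cpos:cpos + nj - 1] = complement_basis(x[pos:pos + nj])
        pos, cpos = pos + nj, cpos + nj - 1
    A_lam = G.T @ A @ G
    Lam_bar = np.diag(np.repeat(lam, [nj - 1 for nj in sizes]))
    return np.linalg.svd(A_lam - Lam_bar, compute_uv=False)[-1]
```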
Consider the perturbation of A,

    A(t) = A + tE,

where t is a real parameter and E ∈ R^{n×n} is a symmetric matrix. Partition A(t) = [A_{ij}(t)] conformally with A, where A_{ij}(t) ∈ R^{n_i×n_j} and sum_{j=1}^{m} n_j = n. We consider the associated MEP

    A(t) x(t) = Λ(t) x(t),

where Λ(t) = diag(λ_1(t) I_{n_1}, λ_2(t) I_{n_2}, ..., λ_m(t) I_{n_m}) and x(t) ∈ S. Set λ(t) = [λ_1(t), λ_2(t), ..., λ_m(t)]^T and λ = [λ_1, λ_2, ..., λ_m]^T.

First, applying the implicit function theorem, we prove the following result.

Theorem 2.5. Let E be a real symmetric matrix. Then there exist unique functions x(t) and λ(t) which are analytic in some neighborhood of t_0 = 0 such that

(2.6)    A(t) x(t) = Λ(t) x(t),  x(t) ∈ S,

with x(0) = x and Λ(0) = Λ.

Proof. Let

    F(y, µ, t) := A(t) y − Ω y,  G(y, t) := (1/2) [ (y_1^T y_1 − 1), ..., (y_m^T y_m − 1) ]^T,

where µ = [µ_1, ..., µ_m]^T, Ω = diag(µ_1 I_{n_1}, ..., µ_m I_{n_m}), and y = [y_1^T, ..., y_m^T]^T, y_j ∈ R^{n_j}. Then:
1. F(x, λ, 0) = 0 and G(x, 0) = 0.
2. F(y, µ, t) and G(y, t) are real analytic functions of the real variables y, µ, and t.
3. det [ ∂F/∂y  ∂F/∂µ ; ∂G/∂y  ∂G/∂µ ] evaluated at (y, µ, t) = (x, λ, 0) equals det(J(Λ, x)) ≠ 0.
Applying the implicit function theorem establishes our conclusion.

Theorem 2.5 implies that x(t) and λ(t) in (2.6) have expansions of the form

(2.7)    x(t) = x + ẋ(0) t + O(t²),  λ(t) = λ + λ̇(0) t + O(t²),

where ẋ(0) = dx(t)/dt at t = 0 and λ̇(0) = dλ(t)/dt at t = 0.

The following theorem gives expressions for the first-order derivatives ẋ(0) and λ̇(0).

Theorem 2.6. Assume that (Λ, x) is a simple multivariate eigenpair of the MEP (1.1). Let Q be an orthogonal matrix and P a permutation matrix such that (2.4) and (2.5) hold. Then

(2.8)    ẋ(0) = −QP^T [ 0 0 ; 0 M_{22}^{-1} ] PQ^T E x,
(2.9)    λ̇(0) = [ I_m  −Z_1 ] PQ^T E x,

where Z_1 = M_{12} M_{22}^{-1}, and the matrices M_{12} and M_{22} are defined in (2.5).

Proof. Differentiating the first equation in (2.6) with respect to t at t = 0 gives

(2.10)    Ex + A ẋ(0) = Λ ẋ(0) + [ λ̇_1(0) x_1 ; ... ; λ̇_m(0) x_m ] = Λ ẋ(0) + B(x) λ̇(0).

Also, differentiating x_i(t)^T x_i(t) = 1 at t = 0 gives

(2.11)    x_i^T ẋ_i(0) = 0,  i = 1, 2, ..., m.

Combining (2.10) with (2.11), we get

    J(Λ, x) [ ẋ(0) ; λ̇(0) ] = [ −Ex ; 0 ].

Since (Λ, x) is simple, J(Λ, x) is nonsingular. Thus, we get

(2.12)    [ ẋ(0) ; λ̇(0) ] = −J(Λ, x)^{-1} [ Ex ; 0 ].

From (2.4), we get

    J(Λ, x)^{-1} = [ QP^T 0 ; 0 I_m ] J_pq^{-1} [ PQ^T 0 ; 0 I_m ],
where

(2.13)    J_pq^{-1} = [ 0     0           I_m   ;
                        0     M_{22}^{-1} −Z_1^T ;
                        −I_m  Z_1         Z_2   ],
          Z_1 = M_{12} M_{22}^{-1},  Z_2 = M_{11} − M_{12} M_{22}^{-1} M_{12}^T.

Substituting this into (2.12) yields (2.8) and (2.9).

3. Condition numbers. Although there were earlier references dealing with conditioning, e.g., [17], the general framework of conditioning theory was introduced by Rice [12]. Note that the multivariate eigenvalue problem (1.1) is equivalent to the nonlinear system (2.1). Naturally, the condition numbers of a multivariate eigenvalue Λ and the corresponding eigenvector x of the MEP (1.1) are defined as follows.

Definition 3.1. Let (λ(t), x(t)) satisfy (2.6). Then the condition number of the multivariate eigenvalue Λ is defined by

(3.1)    κ(Λ) := lim_{t→0} sup_{||E||_2 ≤ ξ} { ||λ(t) − λ||_2 / (α_1 |t|) },

and the condition number of the corresponding eigenvector x is defined by

(3.2)    κ(x) := lim_{t→0} sup_{||E||_2 ≤ ξ} { ||x(t) − x||_2 / (α_2 |t|) },

where α_1, α_2, and ξ are all positive parameters which allow us some flexibility. For example: taking α_1 = α_2 = ξ = 1, the condition numbers defined by (3.1) and (3.2) are the absolute condition numbers; taking α_1 = ||λ||_2, α_2 = ||x||_2, ξ = ||A||_2, they are the relative condition numbers.

The following results give explicit expressions for κ(Λ) and κ(x).

Theorem 3.2.

(3.3)    κ(Λ) = (√m ξ / α_1) || [ I_m  −Z_1 ] ||_2,
(3.4)    κ(x) = (√m ξ / α_2) || M_{22}^{-1} ||_2,

where the matrices Z_1 and M_{22} are defined in (2.13) and (2.5), respectively.
Proof. From (2.7), (2.8), and (2.9), we see that

    κ(Λ) = sup_{||E||_2 ≤ ξ} { ||λ̇(0)||_2 / α_1 }
         = sup_{||E||_2 ≤ ξ} { || [ I_m  −Z_1 ] PQ^T E x ||_2 / α_1 }
         = max_{y ∈ R^n, ||y||_2 ≤ √m ξ} { || [ I_m  −Z_1 ] PQ^T y ||_2 / α_1 }
         = (√m ξ / α_1) || [ I_m  −Z_1 ] ||_2.

Similarly, we get

    κ(x) = sup_{||E||_2 ≤ ξ} { ||ẋ(0)||_2 / α_2 }
         = sup_{||E||_2 ≤ ξ} { || QP^T [ 0 0 ; 0 M_{22}^{-1} ] PQ^T E x ||_2 / α_2 }
         = (√m ξ / α_2) || M_{22}^{-1} ||_2.

It is worth pointing out that both κ(Λ) and κ(x) depend on M_{22}^{-1}. The following corollary indicates a particular case when the eigenvalue Λ is well conditioned.

Corollary 3.3.

(3.5)    κ(Λ) = (√m / α_1) ξ  ⟺  X_i^T A_{ij} x_j = 0 (i ≠ j) and X_i^T (A_{ii} − λ_i I_{n_i}) x_i = 0.

Proof. From (3.3), we have

    κ(Λ) = (√m / α_1) ξ  ⟺  Z_1 = 0  ⟺  M_{12} = 0 (see the second equality in (2.13))
         ⟺  X_i^T A_{ij} x_j = 0 for i ≠ j and X_i^T (A_{ii} − λ_i I_{n_i}) x_i = 0.

Although the matrix X_j is not unique, both κ(Λ) and κ(x) are independent of the choice of X_j. Several remarks on (3.3)–(3.5) are in order.

Remark 3.4. Corollary 3.3 involves the condition X_i^T (A_{ii} − λ_i I_{n_i}) x_i = 0. This means that (A_{ii} − λ_i I_{n_i}) x_i is orthogonal to all columns of X_i, implying it must point in the direction of x_i. Therefore, A_{ii} x_i = µ_i x_i for some µ_i.
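The expressions (3.3)–(3.4) are directly computable: with B(x) and G(x) as above, M_{12} = B(x)^T (A − Λ) G(x) and M_{22} = G(x)^T (A − Λ) G(x) = A_λ − Λ̄. A self-contained sketch evaluating the absolute condition numbers (α_1 = α_2 = ξ = 1 by default):

```python
import numpy as np

def mep_condition_numbers(A, lam, x, sizes, xi=1.0, a1=1.0, a2=1.0):
    """Condition numbers from Theorem 3.2:
    kappa(Lambda) = sqrt(m)*xi/a1 * ||[I_m, -Z_1]||_2, Z_1 = M_12 M_22^{-1},
    kappa(x)      = sqrt(m)*xi/a2 * ||M_22^{-1}||_2."""
    n, m = sum(sizes), len(sizes)
    Lam = np.diag(np.repeat(lam, sizes))
    B = np.zeros((n, m))
    G = np.zeros((n, n - m))
    pos, cpos = 0, 0
    for j, nj in enumerate(sizes):
        xj = x[pos:pos + nj]
        B[pos:pos + nj, j] = xj
        Q, _ = np.linalg.qr(xj.reshape(-1, 1), mode='complete')
        G[pos:pos + nj, cpos:cpos + nj - 1] = Q[:, 1:]   # complement of x_j
        pos, cpos = pos + nj, cpos + nj - 1
    M12 = B.T @ (A - Lam) @ G
    M22 = G.T @ (A - Lam) @ G            # = A_lambda - Lambda_bar
    Z1 = M12 @ np.linalg.inv(M22)
    k_lam = np.sqrt(m) * xi / a1 * np.linalg.norm(np.hstack([np.eye(m), -Z1]), 2)
    k_x = np.sqrt(m) * xi / a2 * np.linalg.norm(np.linalg.inv(M22), 2)
    return k_lam, k_x
```

For a block-diagonal A (case 3 after Proposition 3.5), M_{12} = 0, so κ(Λ) attains its minimal value √m.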
Proposition 3.5. If there exist orthogonal matrices Q_i ∈ R^{n_i×n_i} (i = 1, ..., m) such that the Q_i^T A_{ij} Q_j are diagonal for i, j = 1, ..., m, then for any eigenvalue Λ of the MEP (1.1), κ(Λ) = (√m / α_1) ξ.

Proof. The hypothesis that the Q_i^T A_{ij} Q_j are diagonal implies X_i^T A_{ij} x_j = 0 for i ≠ j and X_i^T (A_{ii} − λ_i I_{n_i}) x_i = 0. Thus, the desired result follows from Corollary 3.3.

Below we outline several cases where Proposition 3.5 is applicable.
1. m = 2, A_{11} = I_{n_1}, A_{22} = I_{n_2}. In this case, we can take as Q_1 and Q_2 the matrices whose columns are comprised of the left and right singular vectors of A_{12}.
2. m = n. In this case, we can take Q_i = (1).
3. A_{ij} = 0 (i ≠ j), i.e., A = diag(A_{11}, ..., A_{mm}) is a block-diagonal matrix. In this case, we can take the eigenvector matrix of A_{ii} as Q_i.

Remark 3.6. From (3.3)–(3.5), we see that both κ(Λ) and κ(x) depend on M_{22}^{-1} for general m and A. If ||M_{22}^{-1}||_2 is large, then x is ill conditioned, but Λ may be well conditioned (if, for example, M_{12} = 0) or ill conditioned. It is known [15, Theorem 3.6] that all eigenvalues are well conditioned for the standard symmetric eigenvalue problem.

Example 3.7. Consider the MEP (1.1) with

    A = [ A_{11}   A_{12} ;
          A_{12}^T A_{22} ],

where A_{11} = diag(a_{11}, a_{22}), A_{12} = [ α_1 0 0 ; 0 α_2 0 ], A_{22} = diag(a_{33}, a_{44}, a_{55}). Assume that A has n distinct eigenvalues. From [1], A has precisely 24 eigenpairs; one of them is

    Λ = diag((a_{11} + α_1) I_2, (a_{33} + α_1) I_3),  x = [1 0 1 0 0]^T.

Accordingly,

    A_λ − Λ̄ = [ a_{22} − a_{11} − α_1   α_2                    0 ;
                 α_2                    a_{44} − a_{33} − α_1   0 ;
                 0                      0                      a_{55} − a_{33} − α_1 ].

If a_{55} ≈ a_{33} + α_1, then x is ill conditioned. On the other hand, from Proposition 3.5, all multivariate eigenvalues of A are
well conditioned.

Example 3.8. Let A ∈ R^{4×4} be symmetric positive definite with n_1 = n_2 = 2 and 0 < ε ≪ 1, chosen so that A has four distinct eigenvalues, its (4,4) entry is 4 + ε, and

    Λ = diag(2, 2, 4, 4),  x = [1 0 1 0]^T

form a multivariate eigenpair (Λ, x) of A. A direct computation gives

    M_{22} = A_λ − Λ̄ with σ_min(M_{22}) = ε,  Z_1 = M_{12} M_{22}^{-1} with ||Z_1||_2 = 2/ε.

Thus, the absolute condition numbers are κ(x) = √2/ε and κ(Λ) = √2 √(1 + 4/ε²). This means that Λ and x are both ill conditioned.

Consider the perturbation of A, A(t) = A + tE. Let E ∈ R^{4×4} be a Householder transformation such that

    Ex = √2 QP^T [ 0 ; y^{(2)} ],  y^{(2)} ∈ R^2,  ||y^{(2)}||_2 = 1.

Then from (2.8), we have

    ẋ(0) = −√2 QP^T [ 0 ; M_{22}^{-1} y^{(2)} ] = QP^T h,

where, with y^{(2)} taken along the smallest singular direction of M_{22}, ||h||_2 = √2/ε. Thus, ||ẋ(0)||_2 = √2/ε, and x(t) = x + QP^T h t + O(t²).
From (2.9), we have

    λ̇(0) = [ I_2  −Z_1 ] PQ^T · √2 QP^T [ 0 ; y^{(2)} ] = −√2 Z_1 y^{(2)},

so that ||λ̇(0)||_2 = 2√2/ε when y^{(2)} is taken along the dominant direction of Z_1. Thus,

    λ(t) = λ + λ̇(0) t + O(t²),

and the eigenvalue also moves at a rate proportional to 1/ε.

4. Perturbation bounds. Let (Λ, x) be a simple multivariate eigenpair of the MEP (1.1). Suppose that A is perturbed to A + ΔA. Consider the following nonlinear system

(4.1)    (A + ΔA)(x + Δx) = (Λ + ΔΛ)(x + Δx),  x + Δx ∈ S,

where ΔΛ = diag(Δλ_1 I_{n_1}, ..., Δλ_m I_{n_m}) and Δλ = [Δλ_1, ..., Δλ_m]^T. Set

(4.2)    τ := ||J(Λ, x)^{-1}||_2.

Assume that ΔA satisfies

(4.3)    ||ΔA||_2 < min{ 1 / ((4√m τ + 2)τ),  λ_min(A) },

where λ_min(A) denotes the smallest eigenvalue of A. Applying the technique of [14], we prove the following result.

Theorem 4.1. Assume ΔA satisfies (4.3). Then A + ΔA has a multivariate eigenvalue Λ + ΔΛ = diag(µ_1 I_{n_1}, ..., µ_m I_{n_m}) and a corresponding eigenvector y ∈ S such that

(4.4)    || [ y − x ; µ − λ ] ||_2 ≤ (2√m τ / (1 − τ ||ΔA||_2)) ||ΔA||_2,

where λ = [λ_1, ..., λ_m]^T and µ = [µ_1, ..., µ_m]^T.
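The quantities in Theorem 4.1 are all computable: τ from (4.2), the admissibility condition (4.3), and the bound (4.4). A sketch (same sign convention for J as in Section 2; it returns None when (4.3) fails):

```python
import numpy as np

def mep_perturbation_bound(A, lam, x, sizes, dA_norm):
    """tau = ||J(Lambda,x)^{-1}||_2 and, when (4.3) holds, the bound (4.4):
    2*sqrt(m)*tau*||dA||_2 / (1 - tau*||dA||_2).  None if (4.3) is violated."""
    n, m = sum(sizes), len(sizes)
    Lam = np.diag(np.repeat(lam, sizes))
    B = np.zeros((n, m))
    pos = 0
    for j, nj in enumerate(sizes):
        B[pos:pos + nj, j] = x[pos:pos + nj]
        pos += nj
    J = np.block([[A - Lam, -B], [B.T, np.zeros((m, m))]])
    tau = np.linalg.norm(np.linalg.inv(J), 2)
    lam_min = np.linalg.eigvalsh(A).min()
    if dA_norm >= min(1.0 / ((4 * np.sqrt(m) * tau + 2) * tau), lam_min):
        return None                      # condition (4.3) violated
    return 2 * np.sqrt(m) * tau * dA_norm / (1 - tau * dA_norm)
```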
Proof. Reformulate the nonlinear system (4.1) as

(4.5)    J(Λ, x) [ Δx ; Δλ ] = [ −ΔA x ; 0 ] + [ −ΔA Δx + ΔΛ Δx ; −(1/2) (||Δx_i||_2²)_{i=1}^{m} ].

By (4.5), we get

    || [ Δx ; Δλ ] ||_2 ≤ τ { √m ||ΔA||_2 + ||ΔA||_2 ||Δx||_2 + ||ΔΛ||_2 ||Δx||_2 + (1/2) ||Δx||_2² }
                        ≤ τ { √m ||ΔA||_2 + ||ΔA||_2 || [Δx; Δλ] ||_2 + || [Δx; Δλ] ||_2² },

where the last inequality follows from ||Δλ||_2 ||Δx||_2 + (1/2)||Δx||_2² ≤ || [Δx; Δλ] ||_2². Hence,

(4.6)    || [ Δx ; Δλ ] ||_2 ≤ (τ / (1 − τ||ΔA||_2)) { √m ||ΔA||_2 + || [Δx; Δλ] ||_2² }.

Consider the following quadratic equation in ξ:

    ξ = (τ / (1 − τ||ΔA||_2)) ( √m ||ΔA||_2 + ξ² ),

which has two positive solutions under the condition (4.3). The smaller one, denoted by ξ_*, satisfies

(4.7)    ξ_* = 2√m τ ||ΔA||_2 / [ 1 − τ||ΔA||_2 + √((1 − τ||ΔA||_2)² − 4√m τ² ||ΔA||_2) ]
            ≤ 2√m τ ||ΔA||_2 / (1 − τ||ΔA||_2).

We now define a mapping M by

    M [ Δx ; Δλ ] = J(Λ, x)^{-1} ( [ −ΔA x ; 0 ] + [ −ΔA Δx + ΔΛ Δx ; −(1/2) (||Δx_i||_2²)_{i=1}^{m} ] )

and a set D by

(4.8)    D := { [ Δx ; Δλ ] ∈ R^{n+m} : || [ Δx ; Δλ ] ||_2 ≤ ξ_* }.
Obviously, D is a bounded, closed, convex subset of R^{n+m}. From (4.6) and (4.7), we see that M is a continuous mapping which maps D into D. By the Brouwer fixed-point theorem, the mapping M has a fixed point [Δx; Δλ] ∈ D, and then [x + Δx; λ + Δλ] is a solution of the nonlinear system (4.1). Note that ||ΔA||_2 < λ_min(A) implies the perturbed matrix A + ΔA is positive definite. From (4.5), (Λ + ΔΛ, x + Δx) is a multivariate eigenpair of A + ΔA, and the desired conclusion is established.

The inequality (4.4) reveals the relationship between the sensitivity of the multivariate eigenpair (Λ, x) and τ.

Let A_0 = diag(A_{11}, A_{22}, ..., A_{mm}), and let the spectral decomposition of A_{jj} be

    A_{jj} = U_j Λ_j U_j^T,  Λ_j = diag(λ̂_1^{(j)}, λ̂_2^{(j)}, ..., λ̂_{n_j}^{(j)}),  U_j = [ u_1^{(j)}, u_2^{(j)}, ..., u_{n_j}^{(j)} ].

Suppose A_0 has n distinct eigenvalues. By Chu and Watterson [1], the MEP A_0 y = Ω y, y ∈ S, has exactly prod_{j=1}^{m} 2n_j eigenpairs (Ω, y):

    Ω = diag(λ̂_{k_1}^{(1)} I_{n_1}, λ̂_{k_2}^{(2)} I_{n_2}, ..., λ̂_{k_m}^{(m)} I_{n_m}),  1 ≤ k_j ≤ n_j,  j = 1, 2, ..., m,
    y = [ ±(u_{k_1}^{(1)})^T, ±(u_{k_2}^{(2)})^T, ..., ±(u_{k_m}^{(m)})^T ]^T.

Some calculation gives

    τ^{(0)} = || J(Ω, y)^{-1} ||_2 = max{ 1, 1/α },  where α = min_{1 ≤ i ≤ m} min_{j ≠ k_i} | λ̂_{k_i}^{(i)} − λ̂_j^{(i)} |.

Let E = A − A_0. As a consequence of Theorem 4.1, we get the following result.

Theorem 4.2. Suppose A_0 = diag(A_{11}, A_{22}, ..., A_{mm}) has n distinct eigenvalues. Assume that E = A − A_0 satisfies

    ||E||_2 < min{ 1 / ((4√m τ^{(0)} + 2) τ^{(0)}),  λ_min(A_0) }.
Then the MEP (1.1) has exactly prod_{j=1}^{m} 2n_j solutions. Furthermore, for any multivariate eigenpair (Ω, y) of A_0, there exists a multivariate eigenpair (Λ, x) of A such that

    || [ y − x ; µ − λ ] ||_2 ≤ (2√m τ^{(0)} / (1 − τ^{(0)} ||E||_2)) ||E||_2,

where λ = [λ_1, λ_2, ..., λ_m]^T and µ = [µ_1, µ_2, ..., µ_m]^T.

We can also prove the following inclusion results concerning multivariate eigenvalues.

Theorem 4.3. Let (Λ, x) be a multivariate eigenpair of (1.1). Then

(4.9)    σ_min(λ_i I − A_{ii}) ≤ min{ ν_1^{(i)}, ν_2^{(i)} },  i = 1, 2, ..., m,

where

    ν_1^{(i)} = sum_{j ≠ i} ||A_{ij}||_2,
    ν_2^{(i)} = √(m−1) || [ A_{i1} ... A_{i,i−1}  A_{i,i+1} ... A_{im} ] ||_2.

In particular, if x is a global maximizer of the MCP, then

(4.10)    0 ≤ λ_i − λ_max(A_{ii}) ≤ min{ ν_1^{(i)}, ν_2^{(i)} },  i = 1, 2, ..., m.

Proof. Since Ax = Λx, we have that

    (λ_i I − A_{ii}) x_i = sum_{j ≠ i} A_{ij} x_j,  i = 1, 2, ..., m.

The inequalities (4.9) are obtained straightforwardly. If x is a global maximizer of the MCP (1.2), then the first inequality in (4.10) was already proved by Zhang and Chu [19]. Noting that A_{ii} is symmetric, the second inequality in (4.10) follows from (4.9) due to λ_i − λ_max(A_{ii}) = σ_min(λ_i I_{n_i} − A_{ii}).

5. Backward errors. In this section, we deal with the normwise backward errors of approximate multivariate eigenpairs of the MEP (1.1). Let (Λ̃, x̃) be an approximate multivariate eigenpair of (1.1), where

    Λ̃ = diag(λ̃_1 I_{n_1}, λ̃_2 I_{n_2}, ..., λ̃_m I_{n_m}),  x̃ = [ x̃_1^T, x̃_2^T, ..., x̃_m^T ]^T ∈ S.

Consider the following matrix equation in E_0:

(5.1)    (A + E_0) x̃ = Λ̃ x̃,
where E_0 ∈ R^{n×n} is a symmetric matrix. The normwise backward error of (Λ̃, x̃) is defined by

    η(Λ̃, x̃) := min{ ||E_0||_F : E_0 satisfies (5.1) }.

Note that (5.1) can be rewritten as E_0 x̃ = Λ̃ x̃ − A x̃. Applying a result of Sun ([14, Lemma 1.4]), we see that the following result holds.

Theorem 5.1. The normwise backward error η(Λ̃, x̃) is given by

    η(Λ̃, x̃) = sqrt( (2/m) ||r||_2² − ((x̃^T r)/m)² ),  where r = A x̃ − Λ̃ x̃.

Theorem 5.1 may be used to analyze the accuracy of the computed results of iterative methods for the MEP (1.1). For example, we consider the Horst algorithm summarized below.

Algorithm 5.2 (Horst algorithm [1]).
1  Given x^{(0)} ∈ S;
2  For k = 1, 2, ..., until convergence
3    For i = 1, 2, ..., m
4      Compute y_i^{(k)} = sum_{j=1}^{m} A_{ij} x_j^{(k)};
5      Compute µ_i^{(k)} = ||y_i^{(k)}||_2;
6      Compute x_i^{(k+1)} = y_i^{(k)} / µ_i^{(k)};
7    End
8  End

Let x^{(k)} = [ (x_1^{(k)})^T, ..., (x_m^{(k)})^T ]^T and Ω^{(k)} = diag(µ_1^{(k)} I_{n_1}, ..., µ_m^{(k)} I_{n_m}) ≻ 0. Then the algorithm can be formulated as

    Ω^{(k)} x^{(k+1)} = A x^{(k)},  k = 1, 2, ....

The next result shows that if ||x^{(k+1)} − x^{(k)}||_2 is small, then (Ω^{(k)}, x^{(k+1)}) is an approximate multivariate eigenpair of A.

Theorem 5.3. There exists a symmetric matrix E ∈ R^{n×n} such that

    (A + E) x^{(k+1)} = Ω^{(k)} x^{(k+1)},
where ||E||_F ≤ √(2/m) || A (x^{(k+1)} − x^{(k)}) ||_2.

Proof. Let

(5.2)    r = A x^{(k+1)} − Ω^{(k)} x^{(k+1)}.

By Theorem 5.1, there exists a symmetric matrix E ∈ R^{n×n} such that

    E x^{(k+1)} = −r,  where ||E||_F ≤ √(2/m) ||r||_2.

Recall that, by definition, A x^{(k)} = Ω^{(k)} x^{(k+1)}. Substituting it into (5.2), the desired conclusion is established.

From the definition of M_{22} in (2.5), we see that ||M_{22}^{-1}||_2 = 1/σ_min(A_λ − Λ̄). Theorem 3.2 shows that the sensitivity of the eigenvector x depends on σ_min(A_λ − Λ̄). From (2.13) and (3.3), loosely speaking, the sensitivity of the eigenvalue Λ is also dependent on σ_min(A_λ − Λ̄).

Let E(n) := { H ∈ R^{n×n} : H^T = H }. Suppose that (Λ, x) is a simple multivariate eigenpair of A, and define

    E := { H : H ∈ E(n), (Λ, x) is a multivariate eigenpair of A + H, but not simple }.

Below we prove an interesting property of σ_min(A_λ − Λ̄) in connection with E.

Theorem 5.4. If σ_min(A_λ − Λ̄) < λ_min(A), then

    min_{H ∈ E} ||H||_F = σ_min(A_λ − Λ̄),

where A_λ and Λ̄ are defined respectively by (2.2) and (2.3).

Proof. The Eckart–Young theorem [2] and Theorem 2.4 together imply that

    min_{H ∈ E} ||H||_F ≥ min_{H ∈ E} || G(x)^T H G(x) ||_F ≥ σ_min(A_λ − Λ̄),

where G(x) is defined in (2.2).
Let (A_λ − Λ̄) υ = λ_min υ, υ ∈ R^{n−m}, ||υ||_2 = 1, where λ_min denotes the eigenvalue of A_λ − Λ̄ satisfying |λ_min| = σ_min(A_λ − Λ̄). Now consider the matrix equation

(5.3)    G(x)^T N G(x) = −λ_min υ υ^T

with the unknown N ∈ E(n). Let u_j = X_j υ_j, u = [ u_1^T, u_2^T, ..., u_m^T ]^T, u_j ∈ R^{n_j}. Then a simple calculation shows that

    N = −λ_min u u^T = −λ_min G(x) υ υ^T G(x)^T

solves (5.3), and ||N||_F = σ_min(A_λ − Λ̄). The assumption σ_min(A_λ − Λ̄) < λ_min(A) implies that N ∈ E, and the desired result is obtained.

Finally, a relationship between τ_0 := 1/σ_min(A_λ − Λ̄) and τ = ||J(Λ, x)^{-1}||_2 is given by the next theorem.

Theorem 5.5.

    τ_0 ≤ τ ≤ max{ τ_0 (1 + ||M_{12}||_2),  1 + ||M_{11}||_2 + τ_0 (||M_{12}||_2 + ||M_{12}||_2²) }.

Proof. From (2.4) and (4.2), τ = ||J(Λ, x)^{-1}||_2 = ||J_pq^{-1}||_2. Note that

    M_{22}^{-1} = [ 0  I_{n−m}  0 ] J_pq^{-1} [ 0 ; I_{n−m} ; 0 ]  and  M_{22} = A_λ − Λ̄,

thus τ_0 ≤ τ. On the other hand, from (2.13),

    τ = ||J_pq^{-1}||_2 ≤ λ_max( [ 0 0 1 ; 0 τ_0 ||Z_1||_2 ; 1 ||Z_1||_2 ||Z_2||_2 ] )
      ≤ max{ τ_0 + ||Z_1||_2,  1 + ||Z_1||_2 + ||Z_2||_2 }
      ≤ max{ τ_0 (1 + ||M_{12}||_2),  1 + ||M_{11}||_2 + τ_0 (||M_{12}||_2 + ||M_{12}||_2²) }.

The first inequality is obtained from [16].

In summary, Theorems 3.2, 4.1, and 5.5 show that the sensitivity of the multivariate eigenpair (Λ, x) of A is dependent on τ_0.
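The Horst iteration of Algorithm 5.2 and the backward error of Theorem 5.1 combine into a practical stopping diagnostic: iterate until x^{(k+1)} ≈ x^{(k)}, then evaluate η. A sketch (η computed as sqrt(2||r||²/m − (x̃^T r/m)²) with r = A x̃ − Λ̃ x̃, using ||x̃||² = m):

```python
import numpy as np

def horst(A, sizes, x0, maxit=1000, tol=1e-14):
    """Horst iteration (Algorithm 5.2): y_i = sum_j A_ij x_j,
    mu_i = ||y_i||_2, x_i <- y_i / mu_i.  Returns (mu, x)."""
    x = np.array(x0, dtype=float)
    off = np.cumsum([0] + list(sizes))
    mu = np.zeros(len(sizes))
    for _ in range(maxit):
        y = A @ x                         # Omega^{(k)} x^{(k+1)} = A x^{(k)}
        xn = np.empty_like(x)
        for i in range(len(sizes)):
            yi = y[off[i]:off[i + 1]]
            mu[i] = np.linalg.norm(yi)
            xn[off[i]:off[i + 1]] = yi / mu[i]
        if np.linalg.norm(xn - x) < tol:
            return mu, xn
        x = xn
    return mu, x

def backward_error(A, lam, x, sizes):
    """eta from Theorem 5.1: sqrt(2||r||^2/m - (x^T r / m)^2), r = Ax - Lambda x."""
    m = len(sizes)
    r = A @ x - np.repeat(lam, sizes) * x
    return np.sqrt(max(2 * np.dot(r, r) / m - (np.dot(x, r) / m) ** 2, 0.0))
```

By Theorem 5.3, the backward error of the pair returned on convergence is bounded by √(2/m) ||A(x^{(k+1)} − x^{(k)})||_2, so η is of the order of the stopping tolerance.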
6. Numerical examples. In this section, we present test results on the condition numbers and the backward errors of multivariate eigenpairs of A. All computations are carried out in MATLAB. In our experiments, the computed solutions are obtained by the Horst method.

Example 6.1 ([1]). The matrix A is taken from [1], with m = 2 and P_2 = {2, 3}. The computed multivariate eigenvalue λ and the corresponding eigenvector x are obtained by the Horst method.

Example 6.2 ([1]). The matrix A is also taken from [1], with m = 2 and P_2 = {2, 3}; the computed multivariate eigenpair (λ, x) is obtained in the same way.

Example 6.3 ([5]). The matrix A is taken from [5], with m = 3; the computed multivariate eigenpair (λ, x) is obtained in the same way.
Table 6.1 reports the condition numbers κ(Λ), κ(x) and the backward errors η(Λ̃, x̃) of the computed multivariate eigenpairs for Examples 6.1–6.3.

7. Conclusion and discussion. Sensitivity analysis for the multivariate eigenvalue problem has been performed. The first-order perturbation expansions of a simple multivariate eigenvalue and the corresponding eigenvector are presented; the expressions are easy to compute. We also derive explicit expressions for the condition numbers, first-order perturbation upper bounds, and backward errors of multivariate eigenpairs.

Acknowledgment. We wish to thank the two anonymous referees for their constructive and detailed comments that enabled us to substantially improve our presentation.

REFERENCES

[1] M.T. Chu and J.L. Watterson. On a multivariate eigenvalue problem, Part I: Algebraic theory and a power method. SIAM J. Sci. Comput., 14:1089–1106, 1993.
[2] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211–218, 1936.
[3] G.H. Golub and C.F. Van Loan. Matrix Computations, third edition. Johns Hopkins University Press, Baltimore and London, 1996.
[4] G.H. Golub and H.Y. Zha. Perturbation analysis of the canonical correlations of matrix pairs. Linear Algebra Appl., 210:3–28, 1994.
[5] M. Hanafi and J.M.F. Ten Berge. Global optimality of the successive Maxbet algorithm. Psychometrika, 68(1):97–103, 2003.
[6] M. Hanafi and H.A.L. Kiers. Analysis of K sets of data, with differential emphasis on agreement between and within sets. Comput. Statist. Data Anal., 51:1491–1508, 2006.
[7] N.J. Higham. Accuracy and Stability of Numerical Algorithms, second edition. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.
[8] P. Horst. Relations among m sets of measures. Psychometrika, 26:129–149, 1961.
[9] D.-K. Hu.
The convergence property of an algorithm for the generalized eigenvalue and eigenvector of a positive definite matrix. In China–Japan Symposium on Statistics, Beijing, China, 1984.
[10] Y.-O. Li, T. Adali, W. Wang, and V.D. Calhoun. Joint blind source separation by multiset canonical correlation analysis. IEEE Trans. Signal Process., 57, 2009.
[11] A.M. Ostrowski. Solution of Equations in Euclidean and Banach Spaces. Academic Press, New York, 1973.
[12] J.R. Rice. A theory of condition. SIAM J. Numer. Anal., 3:287–310, 1966.
[13] J.-G. Sun. An algorithm for the solution of multiparameter eigenvalue problems (in Chinese). Math. Numer. Sin., 8:137–149, 1986.
[14] J.-G. Sun. Backward perturbation analysis of certain characteristic subspaces. Numer. Math., 65:357–382, 1993.
[15] J.-G. Sun. Matrix Perturbation Analysis, second edition (in Chinese). Science Press, Beijing, 2001.
[16] X.-F. Wang, W.-G. Wang, and X.-G. Liu. A note on bounds for eigenvalues of matrix polynomials (in Chinese). Math. Numer. Sin., 31:143–152, 2009.
[17] J.H. Wilkinson. Rounding Errors in Algebraic Processes. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1963.
[18] L.-H. Zhang. Riemannian Newton method for the multivariate eigenvalue problem. SIAM J. Matrix Anal. Appl., 31:2972–2996, 2010.
[19] L.-H. Zhang and M.-T. Chu. Computing absolute maximum correlation. IMA J. Numer. Anal., 32:163–184, 2012.
[20] L.-H. Zhang and L.-Z. Liao. An alternating variable method for the maximal correlation problem. J. Global Optim., 54:199–218, 2012.
[21] L.-H. Zhang, L.-Z. Liao, and L.-M. Sun. Towards the global solution of the maximal correlation problem. J. Global Optim., 49:91–107, 2011.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More informationDistance bounds for prescribed multiple eigenvalues of matrix polynomials
Distance bounds for prescribed multiple eigenvalues of matrix polynomials Panayiotis J Psarrakos February 5, Abstract In this paper, motivated by a problem posed by Wilkinson, we study the coefficient
More informationSubset selection for matrices
Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,
More informationA new parallel polynomial division by a separable polynomial via hermite interpolation with applications, pp
Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 54 2012 A new parallel polynomial division by a separable polynomial via hermite interpolation with applications, pp 770-781 Aristides
More informationWHEN MODIFIED GRAM-SCHMIDT GENERATES A WELL-CONDITIONED SET OF VECTORS
IMA Journal of Numerical Analysis (2002) 22, 1-8 WHEN MODIFIED GRAM-SCHMIDT GENERATES A WELL-CONDITIONED SET OF VECTORS L. Giraud and J. Langou Cerfacs, 42 Avenue Gaspard Coriolis, 31057 Toulouse Cedex
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationThe symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation
Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009 Article 23 2009 The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation Qing-feng Xiao qfxiao@hnu.cn
More informationA Smoothing Newton Method for Solving Absolute Value Equations
A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,
More informationarxiv: v1 [math.ra] 11 Aug 2014
Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091
More informationGeneralization of Gracia's Results
Electronic Journal of Linear Algebra Volume 30 Volume 30 (015) Article 16 015 Generalization of Gracia's Results Jun Liao Hubei Institute, jliao@hubueducn Heguo Liu Hubei University, ghliu@hubueducn Yulei
More informationThe spectrum of the edge corona of two graphs
Electronic Journal of Linear Algebra Volume Volume (1) Article 4 1 The spectrum of the edge corona of two graphs Yaoping Hou yphou@hunnu.edu.cn Wai-Chee Shiu Follow this and additional works at: http://repository.uwyo.edu/ela
More informationA Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems
A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based
More informationOn condition numbers for the canonical generalized polar decompostion of real matrices
Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 57 2013 On condition numbers for the canonical generalized polar decompostion of real matrices Ze-Jia Xie xiezejia2012@gmail.com
More informationAn Even Order Symmetric B Tensor is Positive Definite
An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or
More informationThe Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix
The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of
More informationBOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION
K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex
More informationZ-eigenvalue methods for a global polynomial optimization problem
Math. Program., Ser. A (2009) 118:301 316 DOI 10.1007/s10107-007-0193-6 FULL LENGTH PAPER Z-eigenvalue methods for a global polynomial optimization problem Liqun Qi Fei Wang Yiju Wang Received: 6 June
More informationInterior points of the completely positive cone
Electronic Journal of Linear Algebra Volume 17 Volume 17 (2008) Article 5 2008 Interior points of the completely positive cone Mirjam Duer duer@mathematik.tu-darmstadt.de Georg Still Follow this and additional
More informationOn EP elements, normal elements and partial isometries in rings with involution
Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 39 2012 On EP elements, normal elements and partial isometries in rings with involution Weixing Chen wxchen5888@163.com Follow this
More informationYimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract
Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory
More informationThe goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T
1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.
More informationELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More informationOn the simplest expression of the perturbed Moore Penrose metric generalized inverse
Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated
More informationCAAM 454/554: Stationary Iterative Methods
CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are
More informationThe symmetric linear matrix equation
Electronic Journal of Linear Algebra Volume 9 Volume 9 (00) Article 8 00 The symmetric linear matrix equation Andre CM Ran Martine CB Reurings mcreurin@csvunl Follow this and additional works at: http://repositoryuwyoedu/ela
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationA new upper bound for the eigenvalues of the continuous algebraic Riccati equation
Electronic Journal of Linear Algebra Volume 0 Volume 0 (00) Article 3 00 A new upper bound for the eigenvalues of the continuous algebraic Riccati equation Jianzhou Liu liujz@xtu.edu.cn Juan Zhang Yu Liu
More informationStrictly nonnegative tensors and nonnegative tensor partition
SCIENCE CHINA Mathematics. ARTICLES. January 2012 Vol. 55 No. 1: 1 XX doi: Strictly nonnegative tensors and nonnegative tensor partition Hu Shenglong 1, Huang Zheng-Hai 2,3, & Qi Liqun 4 1 Department of
More informationOn the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination
On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices
More informationEIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4
EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue
More informationThe skew-symmetric orthogonal solutions of the matrix equation AX = B
Linear Algebra and its Applications 402 (2005) 303 318 www.elsevier.com/locate/laa The skew-symmetric orthogonal solutions of the matrix equation AX = B Chunjun Meng, Xiyan Hu, Lei Zhang College of Mathematics
More informationPerturbations of functions of diagonalizable matrices
Electronic Journal of Linear Algebra Volume 20 Volume 20 (2010) Article 22 2010 Perturbations of functions of diagonalizable matrices Michael I. Gil gilmi@bezeqint.net Follow this and additional works
More informationDense LU factorization and its error analysis
Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,
More informationNote on deleting a vertex and weak interlacing of the Laplacian spectrum
Electronic Journal of Linear Algebra Volume 16 Article 6 2007 Note on deleting a vertex and weak interlacing of the Laplacian spectrum Zvi Lotker zvilo@cse.bgu.ac.il Follow this and additional works at:
More information2 Computing complex square roots of a real matrix
On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b
More informationRETRACTED On construction of a complex finite Jacobi matrix from two spectra
Electronic Journal of Linear Algebra Volume 26 Volume 26 (203) Article 8 203 On construction of a complex finite Jacobi matrix from two spectra Gusein Sh. Guseinov guseinov@ati.edu.tr Follow this and additional
More informationLecture 2: Computing functions of dense matrices
Lecture 2: Computing functions of dense matrices Paola Boito and Federico Poloni Università di Pisa Pisa - Hokkaido - Roma2 Summer School Pisa, August 27 - September 8, 2018 Introduction In this lecture
More informationMATH 5640: Functions of Diagonalizable Matrices
MATH 5640: Functions of Diagonalizable Matrices Hung Phan, UMass Lowell November 27, 208 Spectral theorem for diagonalizable matrices Definition Let V = X Y Every v V is uniquely decomposed as u = x +
More informationNon-trivial solutions to certain matrix equations
Electronic Journal of Linear Algebra Volume 9 Volume 9 (00) Article 4 00 Non-trivial solutions to certain matrix equations Aihua Li ali@loyno.edu Duane Randall Follow this and additional works at: http://repository.uwyo.edu/ela
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationAN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES
AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem
More informationOn the Solution of Constrained and Weighted Linear Least Squares Problems
International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science
More informationc 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March
SIAM REVIEW. c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp. 93 97, March 1995 008 A UNIFIED PROOF FOR THE CONVERGENCE OF JACOBI AND GAUSS-SEIDEL METHODS * ROBERTO BAGNARA Abstract.
More informationOn the reduction of matrix polynomials to Hessenberg form
Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu
More informationNote on the Jordan form of an irreducible eventually nonnegative matrix
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 19 2015 Note on the Jordan form of an irreducible eventually nonnegative matrix Leslie Hogben Iowa State University, hogben@aimath.org
More informationOn Projection of a Positive Definite Matrix on a Cone of Nonnegative Definite Toeplitz Matrices
Electronic Journal of Linear Algebra Volume 33 Volume 33: Special Issue for the International Conference on Matrix Analysis and its Applications, MAT TRIAD 2017 Article 8 2018 On Proection of a Positive
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationyou expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form
Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationOn rank one perturbations of Hamiltonian system with periodic coefficients
On rank one perturbations of Hamiltonian system with periodic coefficients MOUHAMADOU DOSSO Université FHB de Cocody-Abidjan UFR Maths-Info., BP 58 Abidjan, CÔTE D IVOIRE mouhamadou.dosso@univ-fhb.edu.ci
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationInverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems
Journal of Informatics Mathematical Sciences Volume 1 (2009), Numbers 2 & 3, pp. 91 97 RGN Publications (Invited paper) Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems
More informationOn nonnegative realization of partitioned spectra
Electronic Journal of Linear Algebra Volume Volume (0) Article 5 0 On nonnegative realization of partitioned spectra Ricardo L. Soto Oscar Rojo Cristina B. Manzaneda Follow this and additional works at:
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationFirst, we review some important facts on the location of eigenvalues of matrices.
BLOCK NORMAL MATRICES AND GERSHGORIN-TYPE DISCS JAKUB KIERZKOWSKI AND ALICJA SMOKTUNOWICZ Abstract The block analogues of the theorems on inclusion regions for the eigenvalues of normal matrices are given
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More informationPotentially nilpotent tridiagonal sign patterns of order 4
Electronic Journal of Linear Algebra Volume 31 Volume 31: (2016) Article 50 2016 Potentially nilpotent tridiagonal sign patterns of order 4 Yubin Gao North University of China, ybgao@nuc.edu.cn Yanling
More informationThe Lanczos and conjugate gradient algorithms
The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization
More informationOn the Local Convergence of an Iterative Approach for Inverse Singular Value Problems
On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory
More informationA Multi-Step Hybrid Method for Multi-Input Partial Quadratic Eigenvalue Assignment with Time Delay
A Multi-Step Hybrid Method for Multi-Input Partial Quadratic Eigenvalue Assignment with Time Delay Zheng-Jian Bai Mei-Xiang Chen Jin-Ku Yang April 14, 2012 Abstract A hybrid method was given by Ram, Mottershead,
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationRefined Inertia of Matrix Patterns
Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 24 2017 Refined Inertia of Matrix Patterns Kevin N. Vander Meulen Redeemer University College, kvanderm@redeemer.ca Jonathan Earl
More informationOff-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems
Off-diagonal perturbation, first-order approximation and quadratic residual bounds for matrix eigenvalue problems Yuji Nakatsukasa Abstract When a symmetric block diagonal matrix [ A 1 A2 ] undergoes an
More informationGeneralized left and right Weyl spectra of upper triangular operator matrices
Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017 Article 3 2017 Generalized left and right Weyl spectra of upper triangular operator matrices Guojun ai 3695946@163.com Dragana S. Cvetkovic-Ilic
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationOn the Eigenstructure of Hermitian Toeplitz Matrices with Prescribed Eigenpairs
The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp 298 305 On the Eigenstructure of Hermitian
More informationBicyclic digraphs with extremal skew energy
Electronic Journal of Linear Algebra Volume 3 Volume 3 (01) Article 01 Bicyclic digraphs with extremal skew energy Xiaoling Shen Yoaping Hou yphou@hunnu.edu.cn Chongyan Zhang Follow this and additional
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.
More informationOn matrix equations X ± A X 2 A = I
Linear Algebra and its Applications 326 21 27 44 www.elsevier.com/locate/laa On matrix equations X ± A X 2 A = I I.G. Ivanov,V.I.Hasanov,B.V.Minchev Faculty of Mathematics and Informatics, Shoumen University,
More informationResearch Article Minor Prime Factorization for n-d Polynomial Matrices over Arbitrary Coefficient Field
Complexity, Article ID 6235649, 9 pages https://doi.org/10.1155/2018/6235649 Research Article Minor Prime Factorization for n-d Polynomial Matrices over Arbitrary Coefficient Field Jinwang Liu, Dongmei
More informationThe semi-convergence of GSI method for singular saddle point problems
Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method
More informationCHUN-HUA GUO. Key words. matrix equations, minimal nonnegative solution, Markov chains, cyclic reduction, iterative methods, convergence rate
CONVERGENCE ANALYSIS OF THE LATOUCHE-RAMASWAMI ALGORITHM FOR NULL RECURRENT QUASI-BIRTH-DEATH PROCESSES CHUN-HUA GUO Abstract The minimal nonnegative solution G of the matrix equation G = A 0 + A 1 G +
More informationA norm inequality for pairs of commuting positive semidefinite matrices
Electronic Journal of Linear Algebra Volume 30 Volume 30 (205) Article 5 205 A norm inequality for pairs of commuting positive semidefinite matrices Koenraad MR Audenaert Royal Holloway University of London,
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More information