Electronic Journal of Linear Algebra, ISSN 1081-3810. Volume 22, pp. 521-538, May 2011.

THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE

WEI-WEI XU, LI-XIA CAI, AND WEN LI

Abstract. In this paper, we obtain optimal perturbation bounds for the weighted Moore-Penrose inverse under the weighted unitarily invariant norm, the weighted Q-norm and the weighted F-norm, and thereby extend some recent results.

Key words. Weighted Moore-Penrose inverse, Weighted unitarily invariant norm, Weighted Q-norm, Weighted F-norm.

AMS subject classifications. 15A09, 15A18, 15A24.

1. Introduction. Let $\mathbb{C}^{m\times n}$ be the set of complex $m\times n$ matrices and $\mathbb{C}_r^{m\times n}$ the subset consisting of all matrices in $\mathbb{C}^{m\times n}$ of rank $r$. Let $A\in\mathbb{C}^{m\times n}$. We denote by $\|A\|$, $\|A\|_2$, $\|A\|_Q$ and $\|A\|_F$ a unitarily invariant norm, the spectral norm, a Q-norm and the F-norm of $A$, respectively. The conjugate transpose and the Moore-Penrose generalized inverse of a matrix $A$ are denoted by $A^*$ and $A^\dagger$, respectively.

Weighted problems, such as the weighted generalized inverse problem and the weighted least squares problem, have drawn more and more attention; see, e.g., [2, 4, 8, 12]. A generalization of the Moore-Penrose inverse is the weighted Moore-Penrose inverse of an arbitrary matrix, which has many applications in numerical computation, statistics, prediction theory, control system analysis and curve fitting; see, e.g., [1, 9, 14]. There have been many numerical methods for the computation of the weighted Moore-Penrose inverse; see, e.g., [6, 7, 10, 11]. It is an interesting problem to determine how the weighted Moore-Penrose inverse is transformed under perturbation. Answers to this problem have applications in numerical computation, prediction theory and curve fitting. Therefore, it is of significance to estimate the optimal perturbation bounds of the weighted Moore-Penrose inverse.

Received by the editors on October 7, 2010. Accepted for publication on May 4, 2011. Handling Editor: Miroslav Fiedler.

The weighted unitarily invariant norm
Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190, P.R. China (xww@lsec.cc.ac.cn). School of Mathematical Sciences, South China Normal University, Guangzhou 510631, China (clx104@163.com). School of Mathematical Sciences, South China Normal University, Guangzhou 510631, China (liwen@scnu.edu.cn). The work was supported in part by the Research Fund for the Doctoral Program of Higher Education of China.
is a more general norm, and in terms of this norm the bounds for the weighted Moore-Penrose inverse can be characterized by weighted singular values ((M,N) singular values). Recently, much effort has been devoted to estimating perturbation bounds of the Moore-Penrose inverse; see, e.g., [5, 9, 13, 14]. In [13], Wedin presented perturbation bounds for the Moore-Penrose inverse under a general unitarily invariant norm, the spectral norm and the Frobenius norm, respectively. Meng and Zheng in [5] obtained the optimal perturbation bounds for the Moore-Penrose inverse under the Frobenius norm. Cai et al. in [3] obtained additive and multiplicative perturbation bounds for the Moore-Penrose inverse under the unitarily invariant norm and the Q-norm, which improve the corresponding results in [13]. In this paper, we focus our attention on optimal perturbation bounds for the weighted Moore-Penrose inverse under the weighted unitarily invariant norm, the weighted Q-norm and the weighted F-norm, and thereby extend the corresponding results in [3] and [5].

We first introduce some basic definitions.

Definition 1.1. [3] A unitarily invariant norm is called a Q-norm, denoted by $\|\cdot\|_Q$, if there exists another unitarily invariant norm $\|\cdot\|'$ such that $\|Y\|_Q = \|Y^*Y\|'^{1/2}$. Note that the F-norm and the 2-norm are Q-norms.

Definition 1.2. [15] Let $A\in\mathbb{C}^{m\times n}$, and let $M$ and $N$ be given Hermitian positive definite matrices of orders $m$ and $n$, respectively, called the weight matrices. There is a unique matrix $X\in\mathbb{C}^{n\times m}$ satisfying the equalities

$AXA = A, \qquad XAX = X, \qquad (MAX)^* = MAX, \qquad (NXA)^* = NXA.$

The matrix $X$ is called the weighted Moore-Penrose inverse of $A$ and is denoted by $X = A^\dagger_{MN}$.

Definition 1.3. [15] Let $A\in\mathbb{C}^{m\times n}$.
Then the norms

$\|A\|_{(MN)} = \|M^{1/2}AN^{-1/2}\|, \quad \|A\|_{F(MN)} = \|M^{1/2}AN^{-1/2}\|_F, \quad \|A\|_{Q(MN)} = \|M^{1/2}AN^{-1/2}\|_Q, \quad \|A\|_{MN} = \|M^{1/2}AN^{-1/2}\|_2$

are called the weighted unitarily invariant norm, the weighted F-norm, the weighted Q-norm and the weighted spectral norm of $A$, respectively.

Definition 1.4. [15] Let $A\in\mathbb{C}_r^{m\times n}$. The (M,N) weighted singular value decomposition (MN-SVD) of $A$ is expressed as follows:

(1.1) $A = U\begin{pmatrix}\Sigma_1 & 0\\ 0 & 0\end{pmatrix}V^* = U_1\Sigma_1V_1^*,$

where $U = (U_1,U_2)\in\mathbb{C}^{m\times m}$ and $V = (V_1,V_2)\in\mathbb{C}^{n\times n}$ satisfy $U^*MU = I_m$ and $V^*N^{-1}V = I_n$, $\Sigma_1 = \mathrm{diag}(\sigma_1,\dots,\sigma_r)$, $\sigma_i = \sqrt{\lambda_i}$, and $\lambda_1\ge\cdots\ge\lambda_r>0$ are the nonzero eigenvalues of $N^{-1}A^*MA$. Then $\sigma_1,\dots,\sigma_r>0$ are called the nonzero (M,N) weighted singular values of $A$.

The rest of this paper is organized as follows. In Section 2, we give some lemmas which are useful in deducing our main results. In Sections 3 and 4, we consider the additive and multiplicative perturbations of the weighted Moore-Penrose inverse. Some new bounds for additive and multiplicative perturbations under the norms $\|\cdot\|_{(MN)}$, $\|\cdot\|_{Q(MN)}$ and $\|\cdot\|_{F(MN)}$ are presented, which extend the corresponding ones in [5] and [3]. In Section 5, we give some numerical examples to illustrate the optimality of the given bounds under the weighted Q-norm and F-norm, respectively. Finally, in Section 6, we give concluding remarks.

2. Preliminaries. In this section, we give some lemmas which are useful in deducing our main results.

Lemma 2.1. [12] Let $A$ have the MN-SVD (1.1). Then
(1) $A^\dagger_{MN} = N^{-1}V_1\Sigma_1^{-1}U_1^*M$;
(2) $\|A^\dagger_{MN}\|_{NM} = \sigma_r^{-1}$.

Lemma 2.2. [3] Let $B$ have the block form

$B = \begin{pmatrix}B_{11} & B_{12}\\ B_{21} & B_{22}\end{pmatrix}.$

Then $\|B\|_Q^2 \le \|B_{11}\|_Q^2 + \|B_{12}\|_Q^2 + \|B_{21}\|_Q^2 + \|B_{22}\|_Q^2$.

Lemma 2.3. [3] Let $B_1$ and $B_2$ be two Hermitian matrices and let $P$ be a complex matrix. Suppose that there are two disjoint intervals separated by a gap of width at least $\eta$, where one interval contains the spectrum of $B_1$ and the other contains that of $B_2$. If $\eta > 0$, then the matrix equation $B_1X - XB_2 = P$ has a unique solution $X$, and moreover, $\|X\| \le \|P\|/\eta$.
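The definitions above and Lemma 2.1 are easy to experiment with numerically. The following Python/NumPy sketch is ours (not part of the paper); it computes $A^\dagger_{MN}$ through the well-known reduction $A^\dagger_{MN} = N^{-1/2}\,(M^{1/2}AN^{-1/2})^\dagger\,M^{1/2}$, checks the four equations of Definition 1.2, and checks the identity $\|A^\dagger_{MN}\|_{NM} = 1/\sigma_r$ of Lemma 2.1. The helper names are hypothetical.

```python
import numpy as np

def hpd_sqrt(P):
    """Square root of a Hermitian positive definite matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(P)
    return (Q * np.sqrt(w)) @ Q.conj().T

def weighted_pinv(A, M, N):
    """Weighted Moore-Penrose inverse A^+_{MN} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}."""
    Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
    Nh_inv = np.linalg.inv(Nh)
    # rcond cuts off the tiny singular values of the exactly rank-deficient product
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv, rcond=1e-10) @ Mh

rng = np.random.default_rng(0)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))    # rank r
R1 = rng.standard_normal((m, m)); M = R1 @ R1.T + m * np.eye(m)  # HPD weights
R2 = rng.standard_normal((n, n)); N = R2 @ R2.T + n * np.eye(n)

X = weighted_pinv(A, M, N)
# The four defining equations of Definition 1.2:
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose(M @ A @ X, (M @ A @ X).conj().T)
assert np.allclose(N @ X @ A, (N @ X @ A).conj().T)

# Lemma 2.1(2): ||A^+_{MN}||_{NM} = 1/sigma_r, with sigma_r the smallest
# nonzero (M,N) weighted singular value, i.e. singular value of M^{1/2} A N^{-1/2}.
Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
svals = np.linalg.svd(Mh @ A @ np.linalg.inv(Nh), compute_uv=False)
norm_NM = np.linalg.norm(Nh @ X @ np.linalg.inv(Mh), 2)
assert np.isclose(norm_NM, 1.0 / svals[r - 1])
```

The reduction works because $M^{1/2}AN^{-1/2}$ has an ordinary SVD $P\Sigma Q^*$, and then $U = M^{-1/2}P$, $V = N^{1/2}Q$ satisfy $U^*MU = I_m$, $V^*N^{-1}V = I_n$, so they form an MN-SVD of $A$ in the sense of Definition 1.4.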
Lemma 2.4. [3] Let $W\in\mathbb{C}^{n\times n}$ be a unitary matrix with the block form

$W = \begin{pmatrix}W_{11} & W_{12}\\ W_{21} & W_{22}\end{pmatrix},$

where $W_{11}\in\mathbb{C}^{r\times r}$, $W_{22}\in\mathbb{C}^{(n-r)\times(n-r)}$, $r < n$. Then $\|W_{12}\| = \|W_{21}\|$ for any unitarily invariant norm.

3. Additive perturbation bounds. In this section, we present optimal additive perturbation bounds for the weighted Moore-Penrose inverse under the weighted unitarily invariant norm, the weighted Q-norm and the weighted F-norm, respectively.

Theorem 3.1. Let $A\in\mathbb{C}_r^{m\times n}$ and $B = A+E\in\mathbb{C}_s^{m\times n}$. Then

(3.1) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \big(\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM} + \max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\}\big)\|E\|_{(MN)}.$

Proof. Let $A$ and $B$ have the following (M,N) weighted singular value decompositions:

(3.2) $A = U\begin{pmatrix}\Sigma_1 & 0\\ 0 & 0\end{pmatrix}V^* = U_1\Sigma_1V_1^*, \qquad B = \tilde U\begin{pmatrix}\tilde\Sigma_1 & 0\\ 0 & 0\end{pmatrix}\tilde V^* = \tilde U_1\tilde\Sigma_1\tilde V_1^*,$

where $U = (U_1,U_2)$, $\tilde U = (\tilde U_1,\tilde U_2)\in\mathbb{C}^{m\times m}$ and $V = (V_1,V_2)$, $\tilde V = (\tilde V_1,\tilde V_2)\in\mathbb{C}^{n\times n}$ satisfy $U^*MU = I_m$, $\tilde U^*M\tilde U = I_m$, $V^*N^{-1}V = I_n$ and $\tilde V^*N^{-1}\tilde V = I_n$, and $\Sigma_1 = \mathrm{diag}(\sigma_1,\dots,\sigma_r)$, $\tilde\Sigma_1 = \mathrm{diag}(\tilde\sigma_1,\dots,\tilde\sigma_s)$ with $\sigma_1\ge\cdots\ge\sigma_r>0$ and $\tilde\sigma_1\ge\cdots\ge\tilde\sigma_s>0$. By (3.2) we have

(3.3) $E = B - A = \tilde U_1\tilde\Sigma_1\tilde V_1^* - U_1\Sigma_1V_1^*.$

By the MN-SVDs (3.2) of $A$ and $B$ we know that $M^{1/2}U$, $M^{1/2}\tilde U$, $N^{-1/2}V$ and $N^{-1/2}\tilde V$ are unitary matrices. Hence from (3.3) one may deduce that

(3.4) $\tilde\Sigma_1\tilde V_1^*N^{-1}V_1 - \tilde U_1^*MU_1\Sigma_1 = \tilde U_1^*MEN^{-1}V_1,$

(3.5) $U_2^*M\tilde U_1\tilde\Sigma_1 = U_2^*MEN^{-1}\tilde V_1,$

(3.6) $\Sigma_1V_1^*N^{-1}\tilde V_2 = -U_1^*MEN^{-1}\tilde V_2.$

It follows from (3.4) that

(3.7) $\tilde V_1^*N^{-1}V_1\Sigma_1^{-1} - \tilde\Sigma_1^{-1}\tilde U_1^*MU_1 = \tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_1\Sigma_1^{-1}.$
By Lemma 2.1 we obtain $A^\dagger_{MN} = N^{-1}V_1\Sigma_1^{-1}U_1^*M$ and $B^\dagger_{MN} = N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\tilde U_1^*M$. Then, by Definition 1.3, we have

(3.8)
$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} = \|N^{1/2}(N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\tilde U_1^*M - N^{-1}V_1\Sigma_1^{-1}U_1^*M)M^{-1/2}\|$
$= \|\tilde V^*N^{-1}(\tilde V_1\tilde\Sigma_1^{-1}\tilde U_1^*M - V_1\Sigma_1^{-1}U_1^*M)(U_1,U_2)\|$
$= \left\|\begin{pmatrix}\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1} & \tilde\Sigma_1^{-1}\tilde U_1^*MU_2\\ -\tilde V_2^*N^{-1}V_1\Sigma_1^{-1} & 0\end{pmatrix}\right\|,$

from which one may deduce that

(3.9) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \|\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1}\| + \left\|\begin{pmatrix}0 & \tilde\Sigma_1^{-1}\tilde U_1^*MU_2\\ \tilde V_2^*N^{-1}V_1\Sigma_1^{-1} & 0\end{pmatrix}\right\|.$

By (3.7) we have

(3.10) $\|\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1}\| \le \frac{1}{\sigma_r\tilde\sigma_s}\,\|M^{1/2}EN^{-1/2}\|.$

By (3.5) and (3.6) we have

$\tilde\Sigma_1^{-1}\tilde U_1^*MU_2 = \tilde\Sigma_1^{-2}(U_2^*MEN^{-1}\tilde V_1)^*, \qquad \tilde V_2^*N^{-1}V_1\Sigma_1^{-1} = -(U_1^*MEN^{-1}\tilde V_2)^*\Sigma_1^{-2}.$

Thus

(3.11) $\left\|\begin{pmatrix}0 & \tilde\Sigma_1^{-1}\tilde U_1^*MU_2\\ \tilde V_2^*N^{-1}V_1\Sigma_1^{-1} & 0\end{pmatrix}\right\| \le \max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_s^2}\Big\}\left\|\begin{pmatrix}0 & (U_2^*MEN^{-1}\tilde V_1)^*\\ (U_1^*MEN^{-1}\tilde V_2)^* & 0\end{pmatrix}\right\|.$

Notice that

(3.12) $\left\|\begin{pmatrix}0 & (U_2^*MEN^{-1}\tilde V_1)^*\\ (U_1^*MEN^{-1}\tilde V_2)^* & 0\end{pmatrix}\right\| \le \left\|\begin{pmatrix}\tilde V_1^*\\ \tilde V_2^*\end{pmatrix}N^{-1}E^*M(U_1,U_2)\right\| = \|\tilde V^*N^{-1}E^*MU\|.$
Since $M^{1/2}U$ and $N^{-1/2}\tilde V$ are unitary matrices, it follows from (3.11) and (3.12) that

$\left\|\begin{pmatrix}0 & \tilde\Sigma_1^{-1}\tilde U_1^*MU_2\\ \tilde V_2^*N^{-1}V_1\Sigma_1^{-1} & 0\end{pmatrix}\right\| \le \max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_s^2}\Big\}\|N^{-1/2}E^*M^{1/2}\| = \max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_s^2}\Big\}\|M^{1/2}EN^{-1/2}\| = \max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_s^2}\Big\}\|E\|_{(MN)},$

which together with (3.9), (3.10) and Lemma 2.1 deduces (3.1).

Remark 3.2. If we take $M = N = I$, then Theorem 3.1 reduces to

(3.13) $\|B^\dagger - A^\dagger\| \le \big(\|A^\dagger\|_2\|B^\dagger\|_2 + \max\{\|A^\dagger\|_2^2, \|B^\dagger\|_2^2\}\big)\|E\|,$

which is the result of Theorem 3.1 in [3].

For the $Q_{(NM)}$-norm we provide the following bound.

Theorem 3.3. Let $A\in\mathbb{C}_r^{m\times n}$ and $B = A+E\in\mathbb{C}_s^{m\times n}$. Then

(3.14) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} \le \sqrt{\|A^\dagger_{MN}\|_{NM}^4 + \|B^\dagger_{MN}\|_{NM}^4 + \|A^\dagger_{MN}\|_{NM}^2\|B^\dagger_{MN}\|_{NM}^2}\;\|E\|_{Q(MN)}.$

Proof. The bound (3.14) follows immediately from Lemma 2.2, (3.4)-(3.6) and (3.8).

Remark 3.4. If we take $M = N = I$, then Theorem 3.3 reduces to

$\|B^\dagger - A^\dagger\|_Q \le \sqrt{\|A^\dagger\|_2^4 + \|B^\dagger\|_2^4 + \|A^\dagger\|_2^2\|B^\dagger\|_2^2}\;\|E\|_Q,$

which is the result of Theorem 3.2 in [3].

Theorem 3.5. Let $A\in\mathbb{C}_r^{m\times n}$ and $B = A+E\in\mathbb{C}_s^{m\times n}$. Then

(3.15) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} \le \max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\}\,\|E\|_{F(MN)}.$

Proof. It follows from (3.3) that

(3.16) $U_1^*M\tilde U_1\tilde\Sigma_1 - \Sigma_1V_1^*N^{-1}\tilde V_1 = U_1^*MEN^{-1}\tilde V_1,$

(3.17) $\tilde U_2^*MU_1\Sigma_1 = -\tilde U_2^*MEN^{-1}V_1, \qquad U_2^*M\tilde U_1\tilde\Sigma_1 = U_2^*MEN^{-1}\tilde V_1,$

(3.18) $\tilde\Sigma_1\tilde V_1^*N^{-1}V_2 = \tilde U_1^*MEN^{-1}V_2, \qquad \Sigma_1V_1^*N^{-1}\tilde V_2 = -U_1^*MEN^{-1}\tilde V_2.$
By (3.16), we have

(3.19) $\Sigma_1^{-1}U_1^*M\tilde U_1 - V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} = \Sigma_1^{-1}U_1^*MEN^{-1}\tilde V_1\tilde\Sigma_1^{-1}.$

It is easy to see that

(3.20)
$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} = \|N^{1/2}(N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\tilde U_1^*M - N^{-1}V_1\Sigma_1^{-1}U_1^*M)M^{-1/2}\|_F$
$= \|V^*N^{-1}(\tilde V_1\tilde\Sigma_1^{-1}\tilde U_1^*M - V_1\Sigma_1^{-1}U_1^*M)(\tilde U_1,\tilde U_2)\|_F$
$= \left\|\begin{pmatrix}V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} - \Sigma_1^{-1}U_1^*M\tilde U_1 & -\Sigma_1^{-1}U_1^*M\tilde U_2\\ V_2^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} & 0\end{pmatrix}\right\|_F.$

It follows from (3.8) and (3.20) that

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)}^2 = \|\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1}\|_F^2 + \|\tilde\Sigma_1^{-1}\tilde U_1^*MU_2\|_F^2 + \|\tilde V_2^*N^{-1}V_1\Sigma_1^{-1}\|_F^2 + \|V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} - \Sigma_1^{-1}U_1^*M\tilde U_1\|_F^2 + \|\Sigma_1^{-1}U_1^*M\tilde U_2\|_F^2 + \|V_2^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_F^2,$

which together with Lemma 2.2, (3.4)-(3.6) and (3.16)-(3.19) yields

(3.21)
$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)}^2 = \|\tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_1\Sigma_1^{-1}\|_F^2 + \|\tilde\Sigma_1^{-2}\tilde V_1^*N^{-1}E^*MU_2\|_F^2 + \|\tilde V_2^*N^{-1}E^*MU_1\Sigma_1^{-2}\|_F^2 + \|\Sigma_1^{-1}U_1^*MEN^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_F^2 + \|\Sigma_1^{-2}V_1^*N^{-1}E^*M\tilde U_2\|_F^2 + \|V_2^*N^{-1}E^*M\tilde U_1\tilde\Sigma_1^{-2}\|_F^2$
$\le \frac{1}{\sigma_r^2\tilde\sigma_s^2}\big(\|\tilde U_1^*MEN^{-1}V_1\|_F^2 + \|U_1^*MEN^{-1}\tilde V_1\|_F^2\big) + \max\Big\{\frac{1}{\sigma_r^4}, \frac{1}{\tilde\sigma_s^4}\Big\}\big(\|\tilde V_1^*N^{-1}E^*MU_2\|_F^2 + \|\tilde V_2^*N^{-1}E^*MU_1\|_F^2 + \|V_1^*N^{-1}E^*M\tilde U_2\|_F^2 + \|V_2^*N^{-1}E^*M\tilde U_1\|_F^2\big)$
$\le 2\max\Big\{\frac{1}{\sigma_r^4}, \frac{1}{\tilde\sigma_s^4}\Big\}\|M^{1/2}EN^{-1/2}\|_F^2 = 2\max\{\|A^\dagger_{MN}\|_{NM}^4, \|B^\dagger_{MN}\|_{NM}^4\}\|E\|_{F(MN)}^2.$

Therefore,

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} \le \max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\}\|E\|_{F(MN)},$

which implies (3.15) holds.
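Theorem 3.5 can be exercised numerically. The following sketch is ours (not part of the paper): the weighted norms come directly from Definition 1.3, and the hypothetical helper `weighted_pinv` uses the standard reduction $A^\dagger_{MN} = N^{-1/2}(M^{1/2}AN^{-1/2})^\dagger M^{1/2}$.

```python
import numpy as np

def hpd_sqrt(P):
    # Square root of a Hermitian positive definite matrix.
    w, Q = np.linalg.eigh(P)
    return (Q * np.sqrt(w)) @ Q.conj().T

def weighted_pinv(A, M, N):
    Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

rng = np.random.default_rng(1)
m, n = 6, 4
A = rng.standard_normal((m, n))
E = 1e-3 * rng.standard_normal((m, n))
B = A + E
R1 = rng.standard_normal((m, m)); M = R1 @ R1.T + m * np.eye(m)  # HPD weights
R2 = rng.standard_normal((n, n)); N = R2 @ R2.T + n * np.eye(n)
Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
Mh_inv, Nh_inv = np.linalg.inv(Mh), np.linalg.inv(Nh)

# Weighted norms of Definition 1.3 (F- and spectral versions).
fro_MN  = lambda X: np.linalg.norm(Mh @ X @ Nh_inv, 'fro')  # ||X||_{F(MN)}
fro_NM  = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 'fro')  # ||X||_{F(NM)}
spec_NM = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 2)      # ||X||_{NM}

Ap, Bp = weighted_pinv(A, M, N), weighted_pinv(B, M, N)
lhs = fro_NM(Bp - Ap)
rhs = max(spec_NM(Ap) ** 2, spec_NM(Bp) ** 2) * fro_MN(E)   # bound (3.15)
assert lhs <= rhs
```

This is only a sanity check of the inequality on one random instance, not a proof of optimality; optimality is illustrated in Section 5.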
Remark 3.6. If we take $M = N = I$, then Theorem 3.5 reduces to

$\|B^\dagger - A^\dagger\|_F \le \max\{\|A^\dagger\|_2^2, \|B^\dagger\|_2^2\}\|E\|_F,$

which is the result of Theorem 2.1 in [5].

Now we consider the case $\mathrm{rank}(A) = \mathrm{rank}(B)$, i.e., $A, B\in\mathbb{C}_r^{m\times n}$.

Theorem 3.7. Let $A, B = A+E\in\mathbb{C}_r^{m\times n}$. Then

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \big[\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM} + (\|A^\dagger_{MN}\|_{NM} + \|B^\dagger_{MN}\|_{NM})\min\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\big]\|E\|_{(MN)}.$

Proof. Let

$D_1 = \begin{pmatrix}\tilde\Sigma_1 & 0\\ 0 & \tilde\sigma_r I\end{pmatrix}, \qquad D_2 = \begin{pmatrix}\Sigma_1 & 0\\ 0 & \sigma_r I\end{pmatrix},$

and

$X = \begin{pmatrix}\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1} & \tilde\Sigma_1^{-1}\tilde U_1^*MU_2\\ -\tilde V_2^*N^{-1}V_1\Sigma_1^{-1} & 0\end{pmatrix}.$

Then, using (3.4),

(3.22) $XD_2 + D_1X = \begin{pmatrix}-\tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_1 - \tilde U_1^*MEN^{-1}V_1\Sigma_1^{-1} & (I + \sigma_r\tilde\Sigma_1^{-1})\tilde U_1^*MU_2\\ -\tilde V_2^*N^{-1}V_1(I + \tilde\sigma_r\Sigma_1^{-1}) & 0\end{pmatrix}.$

From (3.4) it is easy to see that

(3.23) $\|\tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_1\| \le \frac{1}{\tilde\sigma_r}\|M^{1/2}EN^{-1/2}\| = \frac{1}{\tilde\sigma_r}\|E\|_{(MN)},$

(3.24) $\|\tilde U_1^*MEN^{-1}V_1\Sigma_1^{-1}\| \le \frac{1}{\sigma_r}\|M^{1/2}EN^{-1/2}\| = \frac{1}{\sigma_r}\|E\|_{(MN)}.$

Since $\tilde U^*MU$ is unitary, by Lemma 2.4 we have

$\|\tilde U_1^*MU_2\| = \|\tilde U_2^*MU_1\|.$

By (3.3) and (3.5), one may deduce that

$\tilde U_2^*MU_1 = -\tilde U_2^*MEN^{-1}V_1\Sigma_1^{-1}$
and

$\tilde U_1^*MU_2 = \tilde\Sigma_1^{-1}(U_2^*MEN^{-1}\tilde V_1)^*.$

This in turn implies that

$\|\tilde U_2^*MU_1\| \le \frac{1}{\sigma_r}\|E\|_{(MN)} \quad\text{and}\quad \|\tilde U_1^*MU_2\| \le \frac{1}{\tilde\sigma_r}\|E\|_{(MN)},$

respectively. Then

$\|\tilde U_1^*MU_2\| \le \frac{1}{\max\{\sigma_r, \tilde\sigma_r\}}\|E\|_{(MN)},$

and thus

$\|\tilde\Sigma_1^{-1}\tilde U_1^*MU_2\| \le \frac{1}{\tilde\sigma_r}\|\tilde U_1^*MU_2\| \le \frac{1}{\tilde\sigma_r\max\{\sigma_r, \tilde\sigma_r\}}\|E\|_{(MN)}.$

By an analogous argument, we have

$\|\tilde V_2^*N^{-1}V_1\| \le \frac{1}{\max\{\sigma_r, \tilde\sigma_r\}}\|E\|_{(MN)}$

and

$\|\tilde V_2^*N^{-1}V_1\Sigma_1^{-1}\| \le \frac{1}{\sigma_r}\|\tilde V_2^*N^{-1}V_1\| \le \frac{1}{\sigma_r\max\{\sigma_r, \tilde\sigma_r\}}\|E\|_{(MN)},$

which together with (3.8), (3.22)-(3.24) and Lemma 2.3 give the desired result.

Remark 3.8. If we take $M = N = I$, then Theorem 3.7 reduces to

(3.25) $\|B^\dagger - A^\dagger\| \le \big[\|A^\dagger\|_2\|B^\dagger\|_2 + (\|A^\dagger\|_2 + \|B^\dagger\|_2)\min\{\|A^\dagger\|_2, \|B^\dagger\|_2\}\big]\|E\|,$

which is the result of Theorem 3.3 in [3].

For the weighted F-norm we have:

Theorem 3.9. Let $A, B = A+E\in\mathbb{C}_r^{m\times n}$. Then

(3.26) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} \le \|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM}\|E\|_{F(MN)}.$

Proof. Since by (1.1) the matrices $U^*M\tilde U$, $\tilde V^*N^{-1}V$ and $V^*N^{-1}\tilde V$ are unitary, by Lemma 2.4 we have

(3.27) $\|\tilde U_1^*MU_2\|_F = \|\tilde U_2^*MU_1\|_F, \qquad \|U_2^*M\tilde U_1\|_F = \|U_1^*M\tilde U_2\|_F,$
(3.28) $\|\tilde V_1^*N^{-1}V_2\|_F = \|\tilde V_2^*N^{-1}V_1\|_F, \qquad \|V_2^*N^{-1}\tilde V_1\|_F = \|V_1^*N^{-1}\tilde V_2\|_F.$

It follows from (3.7), (3.8) and (3.27), (3.28) that

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)}^2 \le \|\tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_1\Sigma_1^{-1}\|_F^2 + \|\Sigma_1^{-1}U_1^*MEN^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_F^2 + \frac{1}{\tilde\sigma_r^2}\big(\|\tilde U_1^*MU_2\|_F^2 + \|V_2^*N^{-1}\tilde V_1\|_F^2\big) + \frac{1}{\sigma_r^2}\big(\|\tilde V_2^*N^{-1}V_1\|_F^2 + \|U_1^*M\tilde U_2\|_F^2\big)$
$\le \frac{1}{\sigma_r^2\tilde\sigma_r^2}\big(\|\tilde U_1^*MEN^{-1}V_1\|_F^2 + \|U_1^*MEN^{-1}\tilde V_1\|_F^2\big) + \frac{1}{\tilde\sigma_r^2}\big(\|\tilde U_2^*MU_1\|_F^2 + \|V_1^*N^{-1}\tilde V_2\|_F^2\big) + \frac{1}{\sigma_r^2}\big(\|\tilde V_1^*N^{-1}V_2\|_F^2 + \|U_2^*M\tilde U_1\|_F^2\big)$
$= \frac{1}{\sigma_r^2\tilde\sigma_r^2}\big(\|\tilde U_1^*MEN^{-1}V_1\|_F^2 + \|U_1^*MEN^{-1}\tilde V_1\|_F^2\big) + \frac{1}{\tilde\sigma_r^2}\big(\|\tilde U_2^*MEN^{-1}V_1\Sigma_1^{-1}\|_F^2 + \|\Sigma_1^{-1}U_1^*MEN^{-1}\tilde V_2\|_F^2\big) + \frac{1}{\sigma_r^2}\big(\|\tilde\Sigma_1^{-1}\tilde U_1^*MEN^{-1}V_2\|_F^2 + \|U_2^*MEN^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_F^2\big)$
$\le \frac{2}{\sigma_r^2\tilde\sigma_r^2}\|M^{1/2}EN^{-1/2}\|_F^2 = 2\|A^\dagger_{MN}\|_{NM}^2\|B^\dagger_{MN}\|_{NM}^2\|E\|_{F(MN)}^2,$

which implies (3.26) holds.

Remark 3.10. Since

$\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM} \le \max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\},$

the bound (3.26) is sharper than the one in (3.15).

For the weighted Q-norm we have:

Theorem 3.11. Let $A, B = A+E\in\mathbb{C}_r^{m\times n}$. Then

(3.29) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} \le \sqrt{3}\,\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM}\|E\|_{Q(MN)}.$

Proof. By (3.20) we have

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} = \left\|\begin{pmatrix}V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} - \Sigma_1^{-1}U_1^*M\tilde U_1 & -\Sigma_1^{-1}U_1^*M\tilde U_2\\ V_2^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} & 0\end{pmatrix}\right\|_Q.$
It follows from (3.8), (3.27), (3.28) and Lemma 2.2 that

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)}^2 \le \|\tilde\Sigma_1^{-1}\tilde U_1^*MU_1 - \tilde V_1^*N^{-1}V_1\Sigma_1^{-1}\|_Q^2 + \|\tilde\Sigma_1^{-1}\tilde U_1^*MU_2\|_Q^2 + \|\tilde V_2^*N^{-1}V_1\Sigma_1^{-1}\|_Q^2 + \|V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} - \Sigma_1^{-1}U_1^*M\tilde U_1\|_Q^2 + \|\Sigma_1^{-1}U_1^*M\tilde U_2\|_Q^2 + \|V_2^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_Q^2,$

which together with (3.4)-(3.7) and (3.16)-(3.19) gives that

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)}^2 \le \frac{1}{\sigma_r^2\tilde\sigma_r^2}\big(\|\tilde U_1^*MEN^{-1}V_1\|_Q^2 + \|\tilde U_2^*MEN^{-1}V_1\|_Q^2 + \|\tilde U_1^*MEN^{-1}V_2\|_Q^2 + \|U_1^*MEN^{-1}\tilde V_1\|_Q^2 + \|U_1^*MEN^{-1}\tilde V_2\|_Q^2 + \|U_2^*MEN^{-1}\tilde V_1\|_Q^2\big) \le 6\|A^\dagger_{MN}\|_{NM}^2\|B^\dagger_{MN}\|_{NM}^2\|E\|_{Q(MN)}^2,$

which implies that the inequality (3.29) holds.

Remark 3.12. If we take $M = N = I$, then Theorem 3.11 reduces to

$\|B^\dagger - A^\dagger\|_Q \le \sqrt{3}\,\|A^\dagger\|_2\|B^\dagger\|_2\|E\|_Q,$

which is the result of Theorem 3.4 in [3]. It is easy to see that

$\sqrt{3}\,\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM} \le \sqrt{\|A^\dagger_{MN}\|_{NM}^4 + \|B^\dagger_{MN}\|_{NM}^4 + \|A^\dagger_{MN}\|_{NM}^2\|B^\dagger_{MN}\|_{NM}^2},$

which implies that the bound in (3.29) is sharper than the one in (3.14).
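Under the equal-rank assumption, the product bound of Theorem 3.9 can be compared numerically with the max bound of Theorem 3.5. The sketch below is ours (the helper names are hypothetical); with a small perturbation of a full-rank matrix, both inequalities hold and the chain shows which is sharper.

```python
import numpy as np

def hpd_sqrt(P):
    # Square root of a Hermitian positive definite matrix.
    w, Q = np.linalg.eigh(P)
    return (Q * np.sqrt(w)) @ Q.conj().T

def weighted_pinv(A, M, N):
    Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))             # full column rank
B = A + 1e-4 * rng.standard_normal((m, n))  # same rank for a small perturbation
E = B - A
R1 = rng.standard_normal((m, m)); M = R1 @ R1.T + m * np.eye(m)
R2 = rng.standard_normal((n, n)); N = R2 @ R2.T + n * np.eye(n)
Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
Mh_inv, Nh_inv = np.linalg.inv(Mh), np.linalg.inv(Nh)

fro_MN  = lambda X: np.linalg.norm(Mh @ X @ Nh_inv, 'fro')
fro_NM  = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 'fro')
spec_NM = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 2)

Ap, Bp = weighted_pinv(A, M, N), weighted_pinv(B, M, N)
lhs      = fro_NM(Bp - Ap)
bound_39 = spec_NM(Ap) * spec_NM(Bp) * fro_MN(E)                # Theorem 3.9
bound_35 = max(spec_NM(Ap) ** 2, spec_NM(Bp) ** 2) * fro_MN(E)  # Theorem 3.5
# (3.26) holds, and ab <= max(a^2, b^2) makes it the sharper of the two.
assert lhs <= bound_39 <= bound_35
```

The final chain is exactly Remark 3.10 in numerical form: the product of the two weighted spectral norms never exceeds the larger square.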
4. Multiplicative perturbation bounds. In this section, we present optimal multiplicative perturbation bounds for the weighted Moore-Penrose inverse. Let $B$ be a multiplicative perturbation of $A$, i.e., $B = D_1AD_2$, where $D_1$ and $D_2$ are $m\times m$ and $n\times n$ nonsingular matrices; then $\mathrm{rank}(A) = \mathrm{rank}(B)$.

Theorem 4.1. Let $A\in\mathbb{C}^{m\times n}$ and $B = D_1AD_2$, where $D_1$ and $D_2$ are respectively $m\times m$ and $n\times n$ nonsingular matrices. Then

(4.1) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\Psi(D_1,D_2),$

where

$\Psi(D_1,D_2) = \|I_m - D_1\|_{(MM)} + \|I_m - D_1^{-1}\|_{(MM)} + \|I_n - D_2\|_{(NN)} + \|I_n - D_2^{-1}\|_{(NN)}.$

Proof. Let $A$ and $B$ have the MN-SVDs (3.2). Clearly we have

(4.2) $B - A = B(I_n - D_2^{-1}) + (D_1 - I_m)A = (I_m - D_1^{-1})B + A(D_2 - I_n).$

It follows from (3.3) and (4.2) that

(4.3) $\tilde U_1\tilde\Sigma_1\tilde V_1^* - U_1\Sigma_1V_1^* = \tilde U_1\tilde\Sigma_1\tilde V_1^*(I_n - D_2^{-1}) + (D_1 - I_m)U_1\Sigma_1V_1^* = (I_m - D_1^{-1})\tilde U_1\tilde\Sigma_1\tilde V_1^* + U_1\Sigma_1V_1^*(D_2 - I_n).$

By (4.3) we obtain

$\tilde\Sigma_1\tilde V_1^*N^{-1}V_1 - \tilde U_1^*MU_1\Sigma_1 = \tilde\Sigma_1\tilde V_1^*(I_n - D_2^{-1})N^{-1}V_1 + \tilde U_1^*M(D_1 - I_m)U_1\Sigma_1,$
$U_1^*M\tilde U_1\tilde\Sigma_1 - \Sigma_1V_1^*N^{-1}\tilde V_1 = U_1^*M(I_m - D_1^{-1})\tilde U_1\tilde\Sigma_1 + \Sigma_1V_1^*(D_2 - I_n)N^{-1}\tilde V_1,$
$U_2^*M\tilde U_1\tilde\Sigma_1 = U_2^*M(I_m - D_1^{-1})\tilde U_1\tilde\Sigma_1, \qquad \tilde U_2^*MU_1\Sigma_1 = \tilde U_2^*M(I_m - D_1)U_1\Sigma_1,$
$\tilde\Sigma_1\tilde V_1^*N^{-1}V_2 = \tilde\Sigma_1\tilde V_1^*(I_n - D_2^{-1})N^{-1}V_2, \qquad \Sigma_1V_1^*N^{-1}\tilde V_2 = \Sigma_1V_1^*(I_n - D_2)N^{-1}\tilde V_2,$

from which one may deduce the following identities:
(4.4) $\tilde V_1^*N^{-1}V_1\Sigma_1^{-1} - \tilde\Sigma_1^{-1}\tilde U_1^*MU_1 = \tilde V_1^*(I_n - D_2^{-1})N^{-1}V_1\Sigma_1^{-1} + \tilde\Sigma_1^{-1}\tilde U_1^*M(D_1 - I_m)U_1,$

(4.5) $\tilde\Sigma_1^{-1}\tilde U_1^*MU_2 = \tilde\Sigma_1^{-1}\tilde U_1^*(I_m - D_1^{-*})MU_2,$

(4.6) $\tilde V_2^*N^{-1}V_1\Sigma_1^{-1} = \tilde V_2^*N^{-1}(I_n - D_2^*)V_1\Sigma_1^{-1},$

(4.7) $\Sigma_1^{-1}U_1^*M\tilde U_1 - V_1^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} = \Sigma_1^{-1}U_1^*M(I_m - D_1^{-1})\tilde U_1 + V_1^*(D_2 - I_n)N^{-1}\tilde V_1\tilde\Sigma_1^{-1},$

(4.8) $\Sigma_1^{-1}U_1^*M\tilde U_2 = \Sigma_1^{-1}U_1^*(I_m - D_1^*)M\tilde U_2,$

(4.9) $V_2^*N^{-1}\tilde V_1\tilde\Sigma_1^{-1} = V_2^*N^{-1}(I_n - D_2^{-*})\tilde V_1\tilde\Sigma_1^{-1}.$

By (3.8) and (4.4)-(4.6) we obtain

(4.10)
$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \|\tilde V_1^*(I_n - D_2^{-1})N^{-1}V_1\Sigma_1^{-1}\| + \|\tilde\Sigma_1^{-1}\tilde U_1^*M(D_1 - I_m)U_1\| + \|\tilde\Sigma_1^{-1}\tilde U_1^*(I_m - D_1^{-*})MU_2\| + \|\tilde V_2^*N^{-1}(I_n - D_2^*)V_1\Sigma_1^{-1}\|$
$\le \frac{1}{\sigma_r}\|N^{1/2}(I_n - D_2^{-1})N^{-1/2}\| + \frac{1}{\tilde\sigma_r}\|M^{1/2}(I_m - D_1)M^{-1/2}\| + \frac{1}{\tilde\sigma_r}\|M^{-1/2}(I_m - D_1^{-*})M^{1/2}\| + \frac{1}{\sigma_r}\|N^{-1/2}(I_n - D_2^*)N^{1/2}\|,$

and since a unitarily invariant norm is invariant under taking adjoints, the last two summands equal $\tilde\sigma_r^{-1}\|I_m - D_1^{-1}\|_{(MM)}$ and $\sigma_r^{-1}\|I_n - D_2\|_{(NN)}$, respectively. Hence

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{(NM)} \le \max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\Psi(D_1,D_2).$

This implies (4.1) holds.

Remark 4.2. If we take $M = N = I$, then Theorem 4.1 reduces to

$\|B^\dagger - A^\dagger\| \le \max\{\|A^\dagger\|_2, \|B^\dagger\|_2\}\,\Psi(D_1,D_2),$
where

$\Psi(D_1,D_2) = \|I_m - D_1\| + \|I_m - D_1^{-1}\| + \|I_n - D_2\| + \|I_n - D_2^{-1}\|,$

which is the result of Theorem 4.1 in [3].

For the $Q_{(NM)}$-norm we obtain the following bound:

Theorem 4.3. Let $A\in\mathbb{C}^{m\times n}$ and $B = D_1AD_2$, where $D_1$ and $D_2$ are $m\times m$ and $n\times n$ nonsingular matrices, respectively. Then

(4.11) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} \le \sqrt{\tfrac{3}{2}}\,\max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\Phi(D_1,D_2),$

where

$\Phi(D_1,D_2) = \big[\|I_m - D_1\|_{Q(MM)}^2 + \|I_m - D_1^{-1}\|_{Q(MM)}^2 + \|I_n - D_2\|_{Q(NN)}^2 + \|I_n - D_2^{-1}\|_{Q(NN)}^2\big]^{1/2}.$

Proof. By (3.8), (4.4)-(4.9) and Lemma 2.2 we can derive

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)}^2 \le \|\tilde V_1^*(I_n - D_2^{-1})N^{-1}V_1\Sigma_1^{-1} + \tilde\Sigma_1^{-1}\tilde U_1^*M(D_1 - I_m)U_1\|_Q^2 + \|\tilde\Sigma_1^{-1}\tilde U_1^*(I_m - D_1^{-*})MU_2\|_Q^2 + \|\tilde V_2^*N^{-1}(I_n - D_2^*)V_1\Sigma_1^{-1}\|_Q^2$
$\quad + \|\Sigma_1^{-1}U_1^*M(I_m - D_1^{-1})\tilde U_1 + V_1^*(D_2 - I_n)N^{-1}\tilde V_1\tilde\Sigma_1^{-1}\|_Q^2 + \|\Sigma_1^{-1}U_1^*(I_m - D_1^*)M\tilde U_2\|_Q^2 + \|V_2^*N^{-1}(I_n - D_2^{-*})\tilde V_1\tilde\Sigma_1^{-1}\|_Q^2.$

Using $(a+b)^2 \le 2(a^2 + b^2)$ on the two mixed terms and estimating each factor as in the proof of Theorem 4.1, we obtain
(4.12)
$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)}^2 \le 3\max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_r^2}\Big\}\big[\|I_m - D_1\|_{Q(MM)}^2 + \|I_m - D_1^{-1}\|_{Q(MM)}^2 + \|I_n - D_2\|_{Q(NN)}^2 + \|I_n - D_2^{-1}\|_{Q(NN)}^2\big]$
$= 3\max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\}\,\Phi(D_1,D_2)^2.$

Hence,

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} \le \sqrt{\tfrac{3}{2}}\,\max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\Phi(D_1,D_2),$

which derives the desired result.

Remark 4.4. If we take $M = N = I$, then Theorem 4.3 reduces to

$\|B^\dagger - A^\dagger\|_Q \le \sqrt{\tfrac{3}{2}}\,\max\{\|A^\dagger\|_2, \|B^\dagger\|_2\}\,\Phi(D_1,D_2),$

where

$\Phi(D_1,D_2) = \big[\|I_m - D_1\|_Q^2 + \|I_m - D_1^{-1}\|_Q^2 + \|I_n - D_2\|_Q^2 + \|I_n - D_2^{-1}\|_Q^2\big]^{1/2},$

which is the result of Theorem 4.2 in [3].

For the $F_{(NM)}$-norm we obtain the following bound:

Theorem 4.5. Let $A\in\mathbb{C}^{m\times n}$ and $B = D_1AD_2$, where $D_1$ and $D_2$ are respectively $m\times m$ and $n\times n$ nonsingular matrices. Then

(4.13) $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} \le \max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\varphi(D_1,D_2),$

where

$\varphi(D_1,D_2) = \big[\|I_m - D_1\|_{F(MM)}^2 + \|I_m - D_1^{-1}\|_{F(MM)}^2 + \|I_n - D_2\|_{F(NN)}^2 + \|I_n - D_2^{-1}\|_{F(NN)}^2\big]^{1/2}.$
Proof. It follows from (4.4)-(4.9) and Lemma 1.2 of [5] that

$2\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)}^2 \le 2\max\Big\{\frac{1}{\sigma_r^2}, \frac{1}{\tilde\sigma_r^2}\Big\}\big(\|I_m - D_1\|_{F(MM)}^2 + \|I_m - D_1^{-1}\|_{F(MM)}^2 + \|I_n - D_2\|_{F(NN)}^2 + \|I_n - D_2^{-1}\|_{F(NN)}^2\big) = 2\max\{\|A^\dagger_{MN}\|_{NM}^2, \|B^\dagger_{MN}\|_{NM}^2\}\,\varphi(D_1,D_2)^2,$

which implies that (4.13) holds.

Remark 4.6. If we take $M = N = I$, then Theorem 4.5 reduces to

$\|B^\dagger - A^\dagger\|_F \le \max\{\|A^\dagger\|_2, \|B^\dagger\|_2\}\,\varphi(D_1,D_2),$

where

$\varphi(D_1,D_2) = \big[\|I_m - D_1\|_F^2 + \|I_m - D_1^{-1}\|_F^2 + \|I_n - D_2\|_F^2 + \|I_n - D_2^{-1}\|_F^2\big]^{1/2},$

which is the result of Theorem 3.1 in [5].

Remark 4.7. We note that a multiplicative perturbation can also be viewed as an additive one. The example in Remark 4.1 of [3] and Example 3 in [5] show that the multiplicative bounds (4.11) and (4.13) are better than the additive bounds (3.29) and (3.26), respectively.

5. Numerical examples. In this section, we give some simple numerical examples to illustrate the optimality of the given bounds under the weighted Q-norm and F-norm, respectively. Examples 1 and 2 in [5] show the optimality of the additive perturbation bounds (3.15) and (3.26), respectively. The example in Remark 3.3 of [3] shows the approximate optimality of the perturbation bound (3.14). The following examples illustrate that the bounds (3.29), (4.11) and (4.13) are approximately optimal.

Example 5.1. Consider matrices $A$ and $B = A + E$,
together with weight matrices $M$ and $N$. We take the Ky Fan 2-2 norm as the Q-norm (for details about the Ky Fan norms, please see [3]). Then the computed values of $\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)}$ and of $\sqrt{3}\,\|A^\dagger_{MN}\|_{NM}\|B^\dagger_{MN}\|_{NM}\|E\|_{Q(MN)}$ agree in all displayed digits, which implies the bound in (3.29) is approximately optimal.

Example 5.2. Let $D_1 = I$, with $A$, $D_2$ and the weight matrices $M$, $N$ chosen accordingly. We take the Ky Fan 2-2 norm as the Q-norm. Then

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{Q(NM)} = 2\times10^{-7}, \qquad \sqrt{\tfrac{3}{2}}\,\max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\Phi(D_1,D_2) \doteq 2\times10^{-7},$

which implies the bound (4.11) is approximately optimal.

Example 5.3. Let $D_1 = I$, $M = \mathrm{diag}(1,2)$, $N = \mathrm{diag}(1,2)$, and let $A$ and $D_2$ be $2\times2$ matrices chosen accordingly. Then

$\|B^\dagger_{MN} - A^\dagger_{MN}\|_{F(NM)} = 10^{-9}, \qquad \max\{\|A^\dagger_{MN}\|_{NM}, \|B^\dagger_{MN}\|_{NM}\}\,\varphi(D_1,D_2) \doteq 10^{-9},$

which implies the bound (4.13) is approximately optimal.
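Beyond the hand-picked examples, the multiplicative F-norm bound of Theorem 4.5 is easy to probe on random data. The sketch below is ours; it assumes the form of $\varphi(D_1,D_2)$ as stated in Theorem 4.5, with $\|X\|_{F(MM)} = \|M^{1/2}XM^{-1/2}\|_F$ and $\|X\|_{F(NN)} = \|N^{1/2}XN^{-1/2}\|_F$, and helper names that are hypothetical.

```python
import numpy as np

def hpd_sqrt(P):
    # Square root of a Hermitian positive definite matrix.
    w, Q = np.linalg.eigh(P)
    return (Q * np.sqrt(w)) @ Q.conj().T

def weighted_pinv(A, M, N):
    Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

rng = np.random.default_rng(3)
m, n = 4, 3
A = rng.standard_normal((m, n))
D1 = np.eye(m) + 1e-2 * rng.standard_normal((m, m))  # nonsingular, close to I
D2 = np.eye(n) + 1e-2 * rng.standard_normal((n, n))
B = D1 @ A @ D2                                      # multiplicative perturbation
R1 = rng.standard_normal((m, m)); M = R1 @ R1.T + m * np.eye(m)
R2 = rng.standard_normal((n, n)); N = R2 @ R2.T + n * np.eye(n)
Mh, Nh = hpd_sqrt(M), hpd_sqrt(N)
Mh_inv, Nh_inv = np.linalg.inv(Mh), np.linalg.inv(Nh)

fro_MM  = lambda X: np.linalg.norm(Mh @ X @ Mh_inv, 'fro')  # ||X||_{F(MM)}
fro_NN  = lambda X: np.linalg.norm(Nh @ X @ Nh_inv, 'fro')  # ||X||_{F(NN)}
fro_NM  = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 'fro')  # ||X||_{F(NM)}
spec_NM = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 2)      # ||X||_{NM}

Ap, Bp = weighted_pinv(A, M, N), weighted_pinv(B, M, N)
Im, In = np.eye(m), np.eye(n)
phi = np.sqrt(fro_MM(Im - D1) ** 2 + fro_MM(Im - np.linalg.inv(D1)) ** 2
              + fro_NN(In - D2) ** 2 + fro_NN(In - np.linalg.inv(D2)) ** 2)
lhs = fro_NM(Bp - Ap)
rhs = max(spec_NM(Ap), spec_NM(Bp)) * phi   # bound of Theorem 4.5
assert lhs <= rhs
```

With $D_1$ and $D_2$ close to the identity, both sides are small of the same order, which is the regime in which the examples above display near-optimality.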
6. Concluding remarks. Problems involving the weighted Moore-Penrose inverse arise in many fields, e.g., weighted generalized inverse problems and the weighted least squares problem. An interesting one is the following: when the original matrix $A$ is perturbed to $B = A + E$, how does the weighted Moore-Penrose inverse change? In this paper we have considered this problem, and by using the (M,N) weighted singular value decomposition (MN-SVD) some optimal bounds for additive and multiplicative perturbations under the norms $\|\cdot\|_{(MN)}$, $\|\cdot\|_{F(MN)}$ and $\|\cdot\|_{Q(MN)}$ have been presented. Our results extend the corresponding ones in [3] and [5].

Acknowledgment. The authors would like to thank the referee for his/her kind suggestions.

REFERENCES

[1] R.B. Bapat, S.K. Jain, and S. Pati. Weighted Moore-Penrose inverse of a Boolean matrix. Linear Algebra Appl., 255, 1997.
[2] A. Ben-Israel and T.N.E. Greville. Generalized Inverses: Theory and Applications, 2nd edition. Springer, New York, 2003.
[3] L.X. Cai, W.W. Xu, and W. Li. Additive and multiplicative perturbation bounds for the Moore-Penrose inverse. Linear Algebra Appl., 434, 2011.
[4] V. Mehrmann. A symplectic orthogonal method for single input or single output discrete time optimal linear quadratic control problems. SIAM J. Matrix Anal. Appl., 9:221-247, 1988.
[5] L.S. Meng and B. Zheng. The optimal perturbation bounds of the Moore-Penrose inverse under the Frobenius norm. Linear Algebra Appl., 432, 2010.
[6] M.D. Petković and P.S. Stanimirović. Symbolic computation of the Moore-Penrose inverse using a partitioning method. Int. J. Comput. Math., 82, 2005.
[7] M.D. Petković, P.S. Stanimirović, and M.B. Tasić. Effective partitioning method for computing weighted Moore-Penrose inverse. Comput. Math. Appl., 55, 2008.
[8] C.R. Rao and S.K. Mitra. Generalized Inverses of Matrices and its Applications. Wiley, New York, 1971.
[9] C.R. Rao and M.B. Rao.
Matrix Algebra and its Applications to Statistics and Econometrics. World Scientific, Hong Kong, 1998.
[10] P.S. Stanimirović and M.B. Tasić. Partitioning method for rational and polynomial matrices. Appl. Math. Comput., 155:137-163, 2004.
[11] M.B. Tasić, P.S. Stanimirović, and M.D. Petković. Symbolic computation of weighted Moore-Penrose inverse using partitioning method. Appl. Math. Comput., 189:615-640, 2007.
[12] G.R. Wang, Y.M. Wei, and S.Z. Qiao. Generalized Inverses: Theory and Computations. Science Press, Beijing, 2004.
[13] P.Å. Wedin. Perturbation theory for pseudo-inverses. Nordisk Tidskr. Informationsbehandling (BIT), 13:217-232, 1973.
[14] Y.M. Wei and H.B. Wu. Expression for the perturbation of the weighted Moore-Penrose inverse. Comput. Math. Appl., 39:13-18, 2000.
[15] H. Yang and H.Y. Li. Weighted polar decomposition and WGL partial ordering of rectangular complex matrices. SIAM J. Matrix Anal. Appl., 30, 2008.
HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For
More informationSome inequalities for unitarily invariant norms of matrices
Wang et al Journal of Inequalities and Applications 011, 011:10 http://wwwjournalofinequalitiesandapplicationscom/content/011/1/10 RESEARCH Open Access Some inequalities for unitarily invariant norms of
More informationThe Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D
International Journal of Algebra, Vol. 5, 2011, no. 30, 1489-1504 The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D D. Krishnaswamy Department of Mathematics Annamalai University
More informationOn V-orthogonal projectors associated with a semi-norm
On V-orthogonal projectors associated with a semi-norm Short Title: V-orthogonal projectors Yongge Tian a, Yoshio Takane b a School of Economics, Shanghai University of Finance and Economics, Shanghai
More informationarxiv: v4 [math.oc] 26 May 2009
Characterization of the oblique projector U(VU) V with application to constrained least squares Aleš Černý Cass Business School, City University London arxiv:0809.4500v4 [math.oc] 26 May 2009 Abstract
More informationMaximizing the numerical radii of matrices by permuting their entries
Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and
More informationa Λ q 1. Introduction
International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE
More informationSingular Value Inequalities for Real and Imaginary Parts of Matrices
Filomat 3:1 16, 63 69 DOI 1.98/FIL16163C Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Singular Value Inequalities for Real Imaginary
More informationOn condition numbers for the canonical generalized polar decompostion of real matrices
Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 57 2013 On condition numbers for the canonical generalized polar decompostion of real matrices Ze-Jia Xie xiezejia2012@gmail.com
More informationA Note on Simple Nonzero Finite Generalized Singular Values
A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero
More informationGeneralized Principal Pivot Transform
Generalized Principal Pivot Transform M. Rajesh Kannan and R. B. Bapat Indian Statistical Institute New Delhi, 110016, India Abstract The generalized principal pivot transform is a generalization of the
More informationOn Sums of Conjugate Secondary Range k-hermitian Matrices
Thai Journal of Mathematics Volume 10 (2012) Number 1 : 195 202 www.math.science.cmu.ac.th/thaijournal Online ISSN 1686-0209 On Sums of Conjugate Secondary Range k-hermitian Matrices S. Krishnamoorthy,
More informationBlock companion matrices, discrete-time block diagonal stability and polynomial matrices
Block companion matrices, discrete-time block diagonal stability and polynomial matrices Harald K. Wimmer Mathematisches Institut Universität Würzburg D-97074 Würzburg Germany October 25, 2008 Abstract
More informationDragan S. Djordjević. 1. Introduction
UNIFIED APPROACH TO THE REVERSE ORDER RULE FOR GENERALIZED INVERSES Dragan S Djordjević Abstract In this paper we consider the reverse order rule of the form (AB) (2) KL = B(2) TS A(2) MN for outer generalized
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal
More informationWorkshop on Generalized Inverse and its Applications. Invited Speakers Program Abstract
Workshop on Generalized Inverse and its Applications Invited Speakers Program Abstract Southeast University, Nanjing November 2-4, 2012 Invited Speakers: Changjiang Bu, College of Science, Harbin Engineering
More informationRecurrent Neural Network Approach to Computation of Gener. Inverses
Recurrent Neural Network Approach to Computation of Generalized Inverses May 31, 2016 Introduction The problem of generalized inverses computation is closely related with the following Penrose equations:
More informationThe Drazin inverses of products and differences of orthogonal projections
J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,
More informationSome results on the reverse order law in rings with involution
Some results on the reverse order law in rings with involution Dijana Mosić and Dragan S. Djordjević Abstract We investigate some necessary and sufficient conditions for the hybrid reverse order law (ab)
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationResearch Article Some Results on Characterizations of Matrix Partial Orderings
Applied Mathematics, Article ID 408457, 6 pages http://dx.doi.org/10.1155/2014/408457 Research Article Some Results on Characterizations of Matrix Partial Orderings Hongxing Wang and Jin Xu Department
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 7, Issue 1, Article 34, 2006 MATRIX EQUALITIES AND INEQUALITIES INVOLVING KHATRI-RAO AND TRACY-SINGH SUMS ZEYAD AL
More informationAPPLICATIONS OF THE HYPER-POWER METHOD FOR COMPUTING MATRIX PRODUCTS
Univ. Beograd. Publ. Eletrotehn. Fa. Ser. Mat. 15 (2004), 13 25. Available electronically at http: //matematia.etf.bg.ac.yu APPLICATIONS OF THE HYPER-POWER METHOD FOR COMPUTING MATRIX PRODUCTS Predrag
More informationStructured Condition Numbers of Symmetric Algebraic Riccati Equations
Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationOperators with Compatible Ranges
Filomat : (7), 579 585 https://doiorg/98/fil7579d Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacrs/filomat Operators with Compatible Ranges
More informationAnalytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications
Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,
More informationWeaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms
DOI: 10.1515/auom-2017-0004 An. Şt. Univ. Ovidius Constanţa Vol. 25(1),2017, 49 60 Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms Doina Carp, Ioana Pomparău,
More informationNew insights into best linear unbiased estimation and the optimality of least-squares
Journal of Multivariate Analysis 97 (2006) 575 585 www.elsevier.com/locate/jmva New insights into best linear unbiased estimation and the optimality of least-squares Mario Faliva, Maria Grazia Zoia Istituto
More informationAN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES
AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem
More informationOn group inverse of singular Toeplitz matrices
Linear Algebra and its Applications 399 (2005) 109 123 wwwelseviercom/locate/laa On group inverse of singular Toeplitz matrices Yimin Wei a,, Huaian Diao b,1 a Department of Mathematics, Fudan Universit,
More informationResearch Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations
Applied Mathematics Volume 01, Article ID 398085, 14 pages doi:10.1155/01/398085 Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations Chang-Zhou Dong, 1 Qing-Wen Wang,
More informationMINIMAL NORMAL AND COMMUTING COMPLETIONS
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND
More informationSingular Value Decompsition
Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost
More informationarxiv: v3 [math.ra] 22 Aug 2014
arxiv:1407.0331v3 [math.ra] 22 Aug 2014 Positivity of Partitioned Hermitian Matrices with Unitarily Invariant Norms Abstract Chi-Kwong Li a, Fuzhen Zhang b a Department of Mathematics, College of William
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH
More informationNotes on Eigenvalues, Singular Values and QR
Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square
More informationMoore Penrose inverses and commuting elements of C -algebras
Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationThe generalized Schur complement in group inverses and (k + 1)-potent matrices
The generalized Schur complement in group inverses and (k + 1)-potent matrices Julio Benítez Néstor Thome Abstract In this paper, two facts related to the generalized Schur complement are studied. The
More informationMore on generalized inverses of partitioned matrices with Banachiewicz-Schur forms
More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationThe restricted weighted generalized inverse of a matrix
Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 77 2011 The restricted weighted generalized inverse of a matrix Vasilios N. Katsikis Dimitrios Pappas Follow this and additional
More informationWeak Monotonicity of Interval Matrices
Electronic Journal of Linear Algebra Volume 25 Volume 25 (2012) Article 9 2012 Weak Monotonicity of Interval Matrices Agarwal N. Sushama K. Premakumari K. C. Sivakumar kcskumar@iitm.ac.in Follow this and
More informationThe Singular Value Decomposition
The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition
More informationIN1993, Klein and Randić [1] introduced a distance function
IAENG International Journal of Applied Mathematics 4:3 IJAM_4_3_0 Some esults of esistance Distance irchhoff Index Based on -Graph Qun Liu Abstract he resistance distance between any two vertices of a
More informationInequalities Involving Khatri-Rao Products of Positive Semi-definite Matrices
Applied Mathematics E-Notes, (00), 117-14 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Inequalities Involving Khatri-Rao Products of Positive Semi-definite Matrices
More informationOn the Perturbation of the Q-factor of the QR Factorization
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. ; :1 6 [Version: /9/18 v1.] On the Perturbation of the Q-factor of the QR Factorization X.-W. Chang McGill University, School of Comptuer
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationarxiv: v1 [math.fa] 31 Jan 2013
Perturbation analysis for Moore-Penrose inverse of closed operators on Hilbert spaces arxiv:1301.7484v1 [math.fa] 31 Jan 013 FAPENG DU School of Mathematical and Physical Sciences, Xuzhou Institute of
More informationWavelets and Linear Algebra
Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza
More informationof a Two-Operator Product 1
Applied Mathematical Sciences, Vol. 7, 2013, no. 130, 6465-6474 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2013.39501 Reverse Order Law for {1, 3}-Inverse of a Two-Operator Product 1 XUE
More informationOn Euclidean distance matrices
On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto
More informationSPECIAL FORMS OF GENERALIZED INVERSES OF ROW BLOCK MATRICES YONGGE TIAN
Electronic Journal of Linear lgebra ISSN 1081-3810 publication of the International Linear lgebra Society EL SPECIL FORMS OF GENERLIZED INVERSES OF ROW BLOCK MTRICES YONGGE TIN bstract. Given a row block
More informationNonsingularity and group invertibility of linear combinations of two k-potent matrices
Nonsingularity and group invertibility of linear combinations of two k-potent matrices Julio Benítez a Xiaoji Liu b Tongping Zhu c a Departamento de Matemática Aplicada, Instituto de Matemática Multidisciplinar,
More informationPredrag S. Stanimirović and Dragan S. Djordjević
FULL-RANK AND DETERMINANTAL REPRESENTATION OF THE DRAZIN INVERSE Predrag S. Stanimirović and Dragan S. Djordjević Abstract. In this article we introduce a full-rank representation of the Drazin inverse
More informationDETERMINANTAL DIVISOR RANK OF AN INTEGRAL MATRIX
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 8 14 c 2008 Institute for Scientific Computing and Information DETERMINANTAL DIVISOR RANK OF AN INTEGRAL MATRIX RAVI
More informationPERTURBATION ANAYSIS FOR THE MATRIX EQUATION X = I A X 1 A + B X 1 B. Hosoo Lee
Korean J. Math. 22 (214), No. 1, pp. 123 131 http://dx.doi.org/1.11568/jm.214.22.1.123 PERTRBATION ANAYSIS FOR THE MATRIX EQATION X = I A X 1 A + B X 1 B Hosoo Lee Abstract. The purpose of this paper is
More informationUNIT 6: The singular value decomposition.
UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T
More informationarxiv: v1 [math.ra] 28 Jan 2016
The Moore-Penrose inverse in rings with involution arxiv:1601.07685v1 [math.ra] 28 Jan 2016 Sanzhang Xu and Jianlong Chen Department of Mathematics, Southeast University, Nanjing 210096, China Abstract:
More informationSome results on matrix partial orderings and reverse order law
Electronic Journal of Linear Algebra Volume 20 Volume 20 2010 Article 19 2010 Some results on matrix partial orderings and reverse order law Julio Benitez jbenitez@mat.upv.es Xiaoji Liu Jin Zhong Follow
More informationOptimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications
Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,
More informationThe semi-convergence of GSI method for singular saddle point problems
Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method
More informationA MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS
A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed
More informationThe reflexive re-nonnegative definite solution to a quaternion matrix equation
Electronic Journal of Linear Algebra Volume 17 Volume 17 28 Article 8 28 The reflexive re-nonnegative definite solution to a quaternion matrix equation Qing-Wen Wang wqw858@yahoo.com.cn Fei Zhang Follow
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More information