On matrix equations X ± A*X^{-2}A = I
Linear Algebra and its Applications

On matrix equations X ± A*X^{-2}A = I

I.G. Ivanov, V.I. Hasanov, B.V. Minchev
Faculty of Mathematics and Informatics, Shoumen University, Shoumen 9712, Bulgaria

Received 7 March 1999; accepted 11 September 2000
Submitted by F. Uhlig

Abstract

The two matrix equations X + A*X^{-2}A = I and X - A*X^{-2}A = I are studied. We construct iterative methods for obtaining positive definite solutions of these equations. Sufficient conditions for the existence of two different solutions of the equation X + A*X^{-2}A = I are derived. Sufficient conditions for the existence of positive definite solutions of the equation X - A*X^{-2}A = I are given. Numerical experiments are discussed. © 2001 Elsevier Science Inc. All rights reserved.

AMS classification: 65F10

Keywords: Matrix equation; Positive definite solution; Iterative method

1. Introduction

We consider the two matrix equations

    X + A*X^{-2}A = I                                                        (1)

and

    X - A*X^{-2}A = I,                                                       (2)

where I is the n x n unit matrix and A is an n x n invertible matrix. Several authors [2-5] have studied the matrix equation X ± A*X^{-1}A = I and have obtained theoretical properties of its solutions. Nonlinear matrix equations of type (1) and (2) arise in dynamic programming, stochastic filtering, control theory and statistics [4].

E-mail addresses: i.gantchev@fmi.shu-bg.net (I.G. Ivanov), vejdi@dir.bg (V.I. Hasanov).
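Candidate solutions of Eqs. (1) and (2) can be validated numerically by checking their residuals. The paper's experiments were done in MATLAB; the following is a small NumPy sketch with our own naming, using scalar multiples of the identity (not matrices from the paper) as sanity checks:

```python
import numpy as np

def residual_plus(X, A):
    """Spectral-norm residual of Eq. (1): ||X + A* X^{-2} A - I||."""
    Xi = np.linalg.inv(X)
    return np.linalg.norm(X + A.conj().T @ Xi @ Xi @ A - np.eye(X.shape[0]), 2)

def residual_minus(X, A):
    """Spectral-norm residual of Eq. (2): ||X - A* X^{-2} A - I||."""
    Xi = np.linalg.inv(X)
    return np.linalg.norm(X - A.conj().T @ Xi @ Xi @ A - np.eye(X.shape[0]), 2)

# Scalar sanity checks: with A = a*I, Eq. (1) reduces to x + a^2/x^2 = 1,
# i.e. a^2 = x^2 (1 - x), so x = 1/2 pairs with a^2 = 1/8.  Eq. (2) reduces
# to a^2 = x^2 (x - 1), so x = 2 pairs with a^2 = 4.
A1 = (1.0 / 8.0) ** 0.5 * np.eye(3)
r1 = residual_plus(0.5 * np.eye(3), A1)
A2 = 2.0 * np.eye(3)
r2 = residual_minus(2.0 * np.eye(3), A2)
```

Both residuals vanish up to rounding, confirming the scalar reductions used repeatedly below.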
In many applications one must solve a system of linear equations [1]

    Mx = f,

where the positive definite matrix M arises from a finite difference approximation to an elliptic partial differential operator. As an example, let

    M = [ I   A ]
        [ A*  I ].

Solving this linear system leads to computing a solution of one of the equations X + A*X^{-1}A = I, X + A*X^{-2}A = I, or X - A*X^{-2}A = I. This approach was considered in [6]. The system Mx = f can also be solved by Woodbury's formula [7].

In this paper we show that there are two positive definite solutions of Eq. (1). We study the existence question and how to find a Hermitian positive definite solution X of (2). We propose two iterative methods which converge to a positive definite solution of (2). When the norm of A is small enough, the considered iteration methods converge. The rate of convergence of these methods depends on two parameters. Numerical examples are discussed and some results of the experiments are given.

We start with some notation used throughout this paper. We write ||A|| for the spectral norm of the matrix A, i.e., ||A|| = max_i sqrt(lambda_i), where the lambda_i are the eigenvalues of AA*. Let the matrices P and Q be Hermitian. The notation P > Q (P >= Q) means that P - Q is positive definite (positive semidefinite). The assumption P >= Q > 0 implies Q^{-1} >= P^{-1} and ||P|| >= ||Q||.

2. The matrix equation X + A*X^{-2}A = I

In this section we generalize theorems which were proved in [6]. We shall assume that ||A|| <= 2/sqrt(27). In [6] two iterative methods converging to a positive definite solution were investigated. We consider the matrix sequence

    X_0 = gamma*I,  X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2},  k = 0, 1, 2, ...   (3)

For this iterative method the following was proved [6, Theorem 2].

Theorem 1. If there exist numbers alpha, beta with 0 < alpha < beta < 1 for which the inequalities

    alpha^2(1 - alpha) I < AA* < beta^2(1 - beta) I

are satisfied, then Eq. (1) has a positive definite solution.

We can sharpen this theorem slightly. Consider the scalar function phi(x) = x^2(1 - x). We have

    max_{[0,1]} phi(x) = phi(2/3) = 4/27.
This function is monotonically increasing on (0, 2/3) and monotonically decreasing on (2/3, 1). We obtain the following.

Theorem 2. If there exist numbers alpha, beta with 0 < alpha <= beta <= 2/3 for which the inequalities

    alpha^2(1 - alpha) I <= AA* <= beta^2(1 - beta) I

are satisfied, then Eq. (1) has a positive definite solution X such that alpha*I <= X <= beta*I.

We now prove the following new theorem.

Theorem 3. Let alpha and beta be the solutions of the scalar equations

    alpha^2(1 - alpha) = min_i lambda_i(AA*)  and  beta^2(1 - beta) = max_i lambda_i(AA*),

respectively, and assume 0 < alpha <= beta <= 2/3. Consider {X_k} defined by (3). Then:
(i) If gamma in [0, alpha], then {X_k} is monotonically increasing and converges to a positive definite solution X_alpha.
(ii) If gamma in [beta, 2/3], then {X_k} is monotonically decreasing and converges to a positive definite solution X_beta.
(iii) If gamma in (alpha, beta) and beta^2/(2 alpha (1 - beta)) < 1, then {X_k} converges to a positive definite solution X_gamma.

Proof. Since the function phi(x) = x^2(1 - x) is monotonically increasing on [0, 2/3], we have 0 < alpha <= beta <= 2/3 and the inequalities

    alpha^2(1 - alpha) I <= AA* <= beta^2(1 - beta) I

are satisfied.

(i) Assume gamma in [0, alpha]. Hence X_0 = gamma*I <= beta*I. We have

    X_1 = [A(I - gamma*I)^{-1}A*]^{1/2} >= [alpha^2(1 - alpha)/(1 - gamma)]^{1/2} I >= gamma*I = X_0,

since gamma^2(1 - gamma) <= alpha^2(1 - alpha), and

    X_1 = [A(I - gamma*I)^{-1}A*]^{1/2} <= [beta^2(1 - beta)/(1 - gamma)]^{1/2} I <= beta*I.

Thus X_0 <= X_1 <= beta*I. Assume that X_{k-1} <= X_k <= beta*I. For X_{k+1} we compute

    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} >= [A(I - X_{k-1})^{-1}A*]^{1/2} = X_k,
    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} <= [A(I - beta*I)^{-1}A*]^{1/2} <= [beta^2(1 - beta)/(1 - beta)]^{1/2} I = beta*I.

Hence {X_k} is monotonically increasing and bounded from above by the matrix beta*I. Consequently the sequence {X_k} converges to a positive definite solution X_alpha.
(ii) Assume gamma in [beta, 2/3]. Hence X_0 = gamma*I >= alpha*I. We have

    X_1 = [A(I - gamma*I)^{-1}A*]^{1/2} <= [beta^2(1 - beta)/(1 - gamma)]^{1/2} I <= gamma*I,
    X_1 = [A(I - gamma*I)^{-1}A*]^{1/2} >= [alpha^2(1 - alpha)/(1 - gamma)]^{1/2} I >= alpha*I.

Thus X_0 >= X_1 >= alpha*I. Assume that X_{k-1} >= X_k >= alpha*I. For X_{k+1} we compute

    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} <= [A(I - X_{k-1})^{-1}A*]^{1/2} = X_k,
    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} >= [A(I - alpha*I)^{-1}A*]^{1/2} >= [alpha^2(1 - alpha)/(1 - alpha)]^{1/2} I = alpha*I.

Hence {X_k} is monotonically decreasing and bounded from below by the matrix alpha*I. Consequently the sequence {X_k} converges to a positive definite solution X_beta.

(iii) Assume gamma in (alpha, beta). Hence alpha*I < X_0 = gamma*I < beta*I. We prove that {X_k} is a Cauchy sequence. Assume that alpha*I < X_k < beta*I. For X_{k+1} we compute

    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} < [(1/(1 - beta)) AA*]^{1/2} < [beta^2(1 - beta)/(1 - beta)]^{1/2} I = beta*I,
    X_{k+1} = [A(I - X_k)^{-1}A*]^{1/2} > [(1/(1 - alpha)) AA*]^{1/2} > [alpha^2(1 - alpha)/(1 - alpha)]^{1/2} I = alpha*I.

Hence alpha*I < X_k < beta*I for k = 0, 1, ... Let us consider the norm ||X_{k+p} - X_k||. We denote

    P = [A(I - X_{k+p-1})^{-1}A*]^{1/2} = X_{k+p}  and  Q = [A(I - X_{k-1})^{-1}A*]^{1/2} = X_k

and use the equality P(P - Q) + (P - Q)Q = P^2 - Q^2. Obviously, Y = P - Q is a solution of the linear matrix equation

    PY + YQ = P^2 - Q^2.

According to [8, Theorem 8.5.2] we have

    Y = ∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt.                                  (4)
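The integral representation of the solution of the Lyapunov-type equation PY + YQ = C for positive definite P, Q, and the norm bound ||Y|| <= ||C||/(lambda_min(P) + lambda_min(Q)) that it yields, can be sanity-checked numerically. A NumPy sketch with randomly generated matrices, solving the linear equation through the Kronecker (vec) formulation rather than the integral (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd(rng, n, shift=0.5):
    """Random symmetric positive definite matrix: G G^T + shift*I."""
    G = rng.standard_normal((n, n))
    return G @ G.T + shift * np.eye(n)

P = random_spd(rng, n)
Q = random_spd(rng, n)
C = rng.standard_normal((n, n))
C = C + C.T  # symmetric right-hand side, as in the proof (C = P^2 - Q^2 there)

# Solve P Y + Y Q = C via vec: (I (x) P + Q^T (x) I) vec(Y) = vec(C).
K = np.kron(np.eye(n), P) + np.kron(Q.T, np.eye(n))
Y = np.linalg.solve(K, C.flatten(order="F")).reshape((n, n), order="F")

res = np.linalg.norm(P @ Y + Y @ Q - C, 2)
# Bound from ||e^{-Pt}|| = e^{-lambda_min(P) t} for symmetric positive definite P:
bound = np.linalg.norm(C, 2) / (np.linalg.eigvalsh(P).min() + np.linalg.eigvalsh(Q).min())
```

The bound follows by taking norms inside the integral, exactly as in the estimates below.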
Thus

    ||X_{k+p} - X_k|| = ||∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt|| <= ||P^2 - Q^2|| ∫_0^∞ ||e^{-Pt}|| ||e^{-Qt}|| dt.

Since alpha*I < X_k for k = 0, 1, ..., and P = X_{k+p}, Q = X_k, we have P > alpha*I and Q > alpha*I. Hence

    ||X_{k+p} - X_k|| < ∫_0^∞ e^{-2 alpha t} dt ||P^2 - Q^2||
                      = (1/(2 alpha)) ||A[(I - X_{k+p-1})^{-1} - (I - X_{k-1})^{-1}]A*||
                      = (1/(2 alpha)) ||A(I - X_{k+p-1})^{-1}(X_{k+p-1} - X_{k-1})(I - X_{k-1})^{-1}A*||
                      <= (1/(2 alpha)) ||A||^2 ||(I - X_{k+p-1})^{-1}|| ||(I - X_{k-1})^{-1}|| ||X_{k+p-1} - X_{k-1}||
                      < (1/(2 alpha)) beta^2(1 - beta) (1/(1 - beta))^2 ||X_{k+p-1} - X_{k-1}||
                      = (beta^2/(2 alpha (1 - beta))) ||X_{k+p-1} - X_{k-1}||.

Consequently

    ||X_{k+p} - X_k|| <= q^k ||X_p - X_0||,  where  q = beta^2/(2 alpha (1 - beta)) < 1.

Since

    ||X_p - X_0|| <= ||X_p - X_{p-1}|| + ||X_{p-1} - X_{p-2}|| + ... + ||X_1 - X_0||
                  <= (q^{p-1} + ... + q + 1) ||X_1 - X_0|| < (1/(1 - q)) ||X_1 - X_0||,

we have

    ||X_{k+p} - X_k|| < (q^k/(1 - q)) ||X_1 - X_0||.
The sequence {X_k} thus forms a Cauchy sequence in the Banach space C^{n x n}, where the X_k are n x n positive definite matrices. Hence this sequence has a positive definite limit, which is a solution of (1).

Theorem 4. Let alpha_1 and beta_1 be real numbers for which the inequalities
(i) alpha_1^2(1 - alpha_1) I <= AA* <= beta_1^2(1 - beta_1) I,
(ii) beta_1^2/(2 alpha_1 (1 - beta_1)) < 1
are satisfied. For each two gamma_1, gamma_2 with alpha_1 <= gamma_1 <= gamma_2 <= beta_1, the recurrence equation (3) defines two matrix sequences {X_k'} and {X_k''} with initial points X_0' = gamma_1 I and X_0'' = gamma_2 I. These sequences converge to the same limit X_gamma, which is a positive definite solution of (1).

Proof. We have alpha_1 I <= X_k' <= beta_1 I and alpha_1 I <= X_k'' <= beta_1 I. We put

    P = [A(I - X_{k-1}')^{-1}A*]^{1/2} = X_k'  and  Q = [A(I - X_{k-1}'')^{-1}A*]^{1/2} = X_k'',

and for ||X_k' - X_k''|| we obtain

    ||X_k' - X_k''|| = ||P - Q|| = ||∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt||
                     <= ∫_0^∞ e^{-2 alpha_1 t} dt ||A[(I - X_{k-1}')^{-1} - (I - X_{k-1}'')^{-1}]A*||
                     <= (1/(2 alpha_1)) ||A||^2 ||(I - X_{k-1}')^{-1}|| ||(I - X_{k-1}'')^{-1}|| ||X_{k-1}' - X_{k-1}''||
                     <= (beta_1^2/(2 alpha_1 (1 - beta_1))) ||X_{k-1}' - X_{k-1}''||.

Hence

    ||X_k' - X_k''|| <= q^k ||X_0' - X_0''||,  q = beta_1^2/(2 alpha_1 (1 - beta_1)) < 1.

Consequently the sequences {X_k'} and {X_k''} have a common limit.
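In the scalar case (n = 1, A = aI), iteration (3) reduces to x_{k+1} = sqrt(a^2/(1 - x_k)), and the behaviour described in Theorems 3 and 4, monotone convergence from the ends of the interval and a common limit for interior starting values, is easy to observe. A pure-Python sketch, using the norm value a = 0.292 quoted in Example 1 (the paper's matrix itself is not reproduced; the function names are ours):

```python
import math

def g3(x, a):
    """Scalar analogue of iteration (3): x -> sqrt(a^2 / (1 - x))."""
    return math.sqrt(a * a / (1.0 - x))

def solve_eq1_small(a, x0, n=200):
    """Iterate g3 n times from x0; converges to the smaller solution of (1)."""
    x = x0
    for _ in range(n):
        x = g3(x, a)
    return x

a = 0.292                                # ||A|| from Example 1
x_up   = solve_eq1_small(a, 0.0)         # increasing start, Theorem 3(i)
x_down = solve_eq1_small(a, 2.0 / 3.0)   # decreasing start, Theorem 3(ii)
x_mid  = solve_eq1_small(a, 0.5)         # interior start, Theorem 3(iii)
```

All three starting values give the same limit, which satisfies the scalar form x + a^2/x^2 = 1 of Eq. (1); the first step moves upward from 0 and downward from 2/3, as the theorem predicts.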
Next consider the matrix sequence

    X_0 = alpha*I,  X_{s+1} = I - A*X_s^{-2}A,  s = 0, 1, 2, ...             (5)

Theorem 5. Suppose there are numbers eta, gamma for which the conditions
(i) 2/3 < gamma <= eta <= 1,
(ii) eta^2(1 - eta) I <= A*A <= gamma^2(1 - gamma) I
are satisfied. Then {X_s} defined by (5) for alpha in [gamma, eta] converges at least linearly to a positive definite solution X of (1), and gamma*I <= X <= eta*I.

Proof. Since X_0 = alpha*I we have gamma*I <= X_0 <= eta*I. Suppose gamma*I <= X_s <= eta*I. Then

    (1/eta^2) I <= X_s^{-2} <= (1/gamma^2) I  and  (1/eta^2) A*A <= A*X_s^{-2}A <= (1/gamma^2) A*A.

According to condition (ii) we obtain (1 - eta) I <= (1/eta^2) A*A and (1/gamma^2) A*A <= (1 - gamma) I. Hence

    gamma*I <= X_{s+1} = I - A*X_s^{-2}A <= eta*I.                           (6)

We have

    X_{s+p} - X_s = A*(X_{s-1}^{-2} - X_{s+p-1}^{-2})A
                  = A*X_{s+p-1}^{-2}(X_{s+p-1}^2 - X_{s-1}^2)X_{s-1}^{-2}A
                  = A*X_{s+p-1}^{-2}[X_{s+p-1}(X_{s+p-1} - X_{s-1}) + (X_{s+p-1} - X_{s-1})X_{s-1}]X_{s-1}^{-2}A
                  = A*X_{s+p-1}^{-1}(X_{s+p-1} - X_{s-1})X_{s-1}^{-2}A + A*X_{s+p-1}^{-2}(X_{s+p-1} - X_{s-1})X_{s-1}^{-1}A.

Then

    ||X_{s+p} - X_s|| <= (||A*X_{s+p-1}^{-1}|| ||X_{s-1}^{-2}A|| + ||A*X_{s+p-1}^{-2}|| ||X_{s-1}^{-1}A||) ||X_{s+p-1} - X_{s-1}||.

But for all k we have X_k >= gamma*I, and thus ||X_k^{-1}|| <= 1/gamma and ||X_k^{-2}|| <= 1/gamma^2. Using (ii) we have ||A||^2 = rho(A*A) <= gamma^2(1 - gamma). Therefore

    ||A*X_k^{-1}|| <= ||A||/gamma <= sqrt(1 - gamma)  and
    ||A*X_k^{-2}|| <= ||A||/gamma^2 <= sqrt(1 - gamma)/gamma.

Hence for every integer p > 0 and arbitrary s we have

    ||X_{s+p} - X_s|| <= (2(1 - gamma)/gamma) ||X_{s+p-1} - X_{s-1}||.

We also have q = 2(1 - gamma)/gamma < 1, since gamma > 2/3. We obtain

    ||X_{s+p} - X_s|| <= q^s ||X_p - X_0||.

Since ||X_p - X_0|| < (1/(1 - q)) ||X_1 - X_0||, we have

    ||X_{s+p} - X_s|| <= (q^s/(1 - q)) ||X_1 - X_0||.

The matrices {X_s} form a Cauchy sequence in the Banach space C^{n x n}, where the X_s are n x n positive definite matrices. Hence this sequence has a limit X. Using (6) we obtain gamma*I <= X <= eta*I. We have

    ||X_{s+1} - X|| <= (||A*X_s^{-1}|| ||X^{-2}A|| + ||A*X_s^{-2}|| ||X^{-1}A||) ||X_s - X||,

and similarly

    ||X_{s+1} - X|| <= (2(1 - gamma)/gamma) ||X_s - X||.

Remark 1. Condition (ii) of Theorem 5 implies that ||A|| < 2/sqrt(27).

Theorem 6. Suppose the iterative method (5) converges to a positive definite solution X. Then for any eps > 0,

    ||X_{s+1} - X|| < (2 ||X^{-1}A*X^{-2}A|| + eps) ||X_s - X||

for all s large enough.

Proof. Since X_s -> X, for any eps > 0 there is an integer s_0 so that for s > s_0,

    ||X_s^{-1}A*X^{-2}A|| < ||X^{-1}A*X^{-2}A|| + eps/2
and

    ||X^{-1}A*X_s^{-2}A|| < ||X^{-1}A*X^{-2}A|| + eps/2.

Then

    ||X_{s+1} - X|| <= (||X_s^{-1}A*X^{-2}A|| + ||X^{-1}A*X_s^{-2}A||) ||X_s - X|| < (2 ||X^{-1}A*X^{-2}A|| + eps) ||X_s - X||.

Theorem 7. If ||A|| < 2/sqrt(27), then ||X^{-1}A*X^{-2}A|| < 1/2, where X is the limit of sequence (5).

Proof. From the condition ||A|| < 2/sqrt(27) we obtain that sequence (5) converges to X with (2/3) I < X <= I. Then ||X^{-1}|| < 3/2 and ||X^{-2}|| < 9/4. We obtain

    ||X^{-1}A*X^{-2}A|| <= ||X^{-1}|| ||X^{-2}|| ||A||^2 < (3/2)(9/4)(4/27) = 1/2.

Remark 2. If ||A|| < 2/sqrt(27), then algorithms (3) and (5) converge to two different positive definite solutions X̃ and X̂, with X̂ > X̃ (see Example 1).

3. Solutions of the matrix equation X - A*X^{-2}A = I

We now describe an iterative method which computes a positive definite solution of Eq. (2). Consider the following sequence of matrices:

    X_0 = alpha*I,  X_{s+1} = [A(X_s - I)^{-1}A*]^{1/2},  s = 0, 1, 2, ...   (7)

We have the following.

Theorem 8. If A is nonsingular and for some real alpha > 1 we have
(i) AA* < alpha^2(alpha - 1) I,
(ii) (1/(alpha - 1) - 1/alpha^2) AA* > I,
(iii) (||A||^2/2) sqrt((alpha - 1)/mu) (sqrt(alpha - 1)/(sqrt(mu) - sqrt(alpha - 1)))^2 < 1,
where mu is the smallest eigenvalue of the matrix AA*, then Eq. (2) has a positive definite solution.
Proof. We consider sequence (7). According to condition (i) we have

    X_1 = [A(X_0 - I)^{-1}A*]^{1/2} = [AA*/(alpha - 1)]^{1/2} < alpha*I = X_0.

Hence X_1 < X_0. Applying condition (ii) we obtain

    X_1^2 = AA*/(alpha - 1) > I + (1/alpha^2) AA* > I,

so I < X_1 < X_0. Moreover, we have (X_1 - I)^{-1} > (X_0 - I)^{-1}, and

    X_2 = [A(X_1 - I)^{-1}A*]^{1/2} > [A(X_0 - I)^{-1}A*]^{1/2} = X_1.

Consequently X_1 < X_2. Using condition (ii) we have X_1 - I > (1/alpha^2) AA*, and

    X_2 = [A(X_1 - I)^{-1}A*]^{1/2} < alpha*I = X_0.

Hence I < X_1 < X_2 < X_0. Analogously one can prove that

    I < X_1 < X_{2s+1} < X_{2s+3} < ... < X_{2k+2} < X_{2k} < X_0 = alpha*I

for all positive integers s, k. Consequently the subsequences {X_{2k}} and {X_{2s+1}} converge to positive definite matrices. These sequences have a common limit. Indeed, we have

    X_{2k} - X_{2k+1} = [A(X_{2k-1} - I)^{-1}A*]^{1/2} - [A(X_{2k} - I)^{-1}A*]^{1/2}.

We denote P = [A(X_{2k-1} - I)^{-1}A*]^{1/2} and Q = [A(X_{2k} - I)^{-1}A*]^{1/2} and use the equality P(P - Q) + (P - Q)Q = P^2 - Q^2. Since X_{2k-1} < X_{2k+1} < X_{2k} for each k = 1, 2, ..., the matrix Y = P - Q is a positive definite solution of the matrix equation PY + YQ = P^2 - Q^2. According to [8, Theorem 8.5.2] we have

    Y = ∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt.                                  (8)

Since P and Q are positive definite matrices, integral (8) exists and e^{-Pt}(P^2 - Q^2)e^{-Qt} -> 0 as t -> ∞. Then
    ||X_{2k} - X_{2k+1}|| = ||∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt|| <= ||P^2 - Q^2|| ∫_0^∞ ||e^{-Pt}|| ||e^{-Qt}|| dt.

We have X_1 < X_s for the matrices of sequence (7). Hence X_1 - I < X_s - I and (X_s - I)^{-1} < (X_1 - I)^{-1}. Since X_1 = [AA*/(alpha - 1)]^{1/2} >= sqrt(mu/(alpha - 1)) I, we have

    ||(X_1 - I)^{-1}|| = sqrt(alpha - 1)/(sqrt(mu) - sqrt(alpha - 1)).

Consequently

    ||(X_s - I)^{-1}|| < sqrt(alpha - 1)/(sqrt(mu) - sqrt(alpha - 1)).

Furthermore,

    X_{s+1} = [A(X_s - I)^{-1}A*]^{1/2} > X_1 >= sqrt(mu/(alpha - 1)) I.

Thus P, Q > sqrt(mu/(alpha - 1)) I, and we obtain

    ||X_{2k} - X_{2k+1}|| <= ||P^2 - Q^2|| ∫_0^∞ e^{-2 sqrt(mu/(alpha-1)) t} dt
                          = (1/2) sqrt((alpha - 1)/mu) ||A[(X_{2k-1} - I)^{-1} - (X_{2k} - I)^{-1}]A*||
                          = (1/2) sqrt((alpha - 1)/mu) ||A(X_{2k} - I)^{-1}(X_{2k} - X_{2k-1})(X_{2k-1} - I)^{-1}A*||
                          <= (1/2) sqrt((alpha - 1)/mu) (sqrt(alpha - 1)/(sqrt(mu) - sqrt(alpha - 1)))^2 ||A||^2 ||X_{2k} - X_{2k-1}||.

Condition (iii) now implies

    q = (||A||^2/2) sqrt((alpha - 1)/mu) (sqrt(alpha - 1)/(sqrt(mu) - sqrt(alpha - 1)))^2 < 1.

Consequently the subsequences {X_{2k}} and {X_{2s+1}} are convergent and have a common positive definite limit, which is a solution of the matrix equation (2).
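In the scalar case iteration (7) reduces to x_{s+1} = sqrt(a^2/(x_s - 1)), and the interleaved monotone subsequences appear clearly. A pure-Python sketch with illustrative values a = 3 and x_0 = 2.4, chosen to show the oscillation in the alpha > 2 regime of Theorem 9 below; these values are ours and are not claimed to satisfy every hypothesis of the theorems:

```python
import math

def g7(x, a):
    """Scalar analogue of iteration (7): x -> sqrt(a^2 / (x - 1))."""
    return math.sqrt(a * a / (x - 1.0))

a, x0 = 3.0, 2.4        # illustrative values; the fixed point solves x^2 (x - 1) = a^2
xs = [x0]
for _ in range(300):
    xs.append(g7(xs[-1], a))
x_lim = xs[-1]          # approximately 2.472
```

The even-indexed iterates increase and the odd-indexed iterates decrease toward the common limit, which satisfies the scalar form x - a^2/x^2 = 1 of Eq. (2).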
Similarly we can prove the following.

Theorem 9. If there is a real alpha > 2 for which
(i) AA* > alpha^2(alpha - 1) I,
(ii) (1/(alpha - 1) - 1/alpha^2) AA* < I,
(iii) ||A||^2/(2 alpha (alpha - 1)^2) < 1,
then Eq. (2) has a positive definite solution.

Proof. We consider sequence (7). According to condition (i) we have

    X_1 = [A(X_0 - I)^{-1}A*]^{1/2} = [AA*/(alpha - 1)]^{1/2} > alpha*I = X_0.

Hence X_1 > X_0. Moreover, we have (X_1 - I)^{-1} < (X_0 - I)^{-1}, and

    X_2 = [A(X_1 - I)^{-1}A*]^{1/2} < [A(X_0 - I)^{-1}A*]^{1/2} = X_1.

Consequently X_2 < X_1. Using condition (ii) we have X_1 - I < (1/alpha^2) AA*, and

    X_2 = [A(X_1 - I)^{-1}A*]^{1/2} > alpha*I = X_0.

Hence X_0 < X_2 < X_1. Analogously one can prove that

    alpha*I = X_0 < X_{2s} < X_{2s+2} < ... < X_{2k+3} < X_{2k+1} < X_1

for all positive integers s, k. Consequently the subsequences {X_{2k}} and {X_{2s+1}} converge to positive definite matrices. These sequences have a common limit. Indeed, we have

    X_{2k+1} - X_{2k} = [A(X_{2k} - I)^{-1}A*]^{1/2} - [A(X_{2k-1} - I)^{-1}A*]^{1/2}.

We denote P = [A(X_{2k} - I)^{-1}A*]^{1/2} and Q = [A(X_{2k-1} - I)^{-1}A*]^{1/2} and use the equality P(P - Q) + (P - Q)Q = P^2 - Q^2. Since X_{2k} < X_{2k+1} < X_{2k-1} for each k = 1, 2, ..., the matrix Y = P - Q is a positive definite solution of the matrix equation
    PY + YQ = P^2 - Q^2.

According to [8, Theorem 8.5.2] we have

    Y = ∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt.                                  (9)

Since P and Q are positive definite matrices, integral (9) exists and e^{-Pt}(P^2 - Q^2)e^{-Qt} -> 0 as t -> ∞. Then

    ||X_{2k+1} - X_{2k}|| = ||∫_0^∞ e^{-Pt}(P^2 - Q^2)e^{-Qt} dt|| <= ||P^2 - Q^2|| ∫_0^∞ ||e^{-Pt}|| ||e^{-Qt}|| dt.

We have alpha*I = X_0 < X_s for the matrices of (7). Hence X_0 - I < X_s - I and (X_s - I)^{-1} < (X_0 - I)^{-1}. We have ||(X_0 - I)^{-1}|| = 1/(alpha - 1). Consequently

    ||(X_s - I)^{-1}|| < 1/(alpha - 1),  P = X_{2k+1} > alpha*I,  Q = X_{2k} > alpha*I.

Thus we obtain

    ||X_{2k+1} - X_{2k}|| <= ||P^2 - Q^2|| ∫_0^∞ e^{-2 alpha t} dt
                          = (1/(2 alpha)) ||A[(X_{2k} - I)^{-1} - (X_{2k-1} - I)^{-1}]A*||
                          = (1/(2 alpha)) ||A(X_{2k-1} - I)^{-1}(X_{2k-1} - X_{2k})(X_{2k} - I)^{-1}A*||
                          <= (||A||^2/(2 alpha (alpha - 1)^2)) ||X_{2k-1} - X_{2k}||.

Condition (iii) now implies

    q = ||A||^2/(2 alpha (alpha - 1)^2) < 1.

Consequently the subsequences {X_{2k}} and {X_{2s+1}} are convergent and have a common positive definite limit, which is a solution of the matrix equation (2).

Consider the following matrix sequence:

    X_0 = eta*I,  X_{s+1} = I + A*X_s^{-2}A,  s = 0, 1, 2, ...               (10)

Theorem 10. Suppose there are numbers alpha, beta for which the conditions
(i) 1 <= beta < alpha < beta/2 + 1,
(ii) alpha^2(beta - 1) I <= A*A <= beta^2(alpha - 1) I
are satisfied. Then {X_s} defined by (10) for eta in [beta, alpha] converges with at least linear rate to a positive definite solution X of (2), and beta*I <= X <= alpha*I.

Proof. Since X_0 = eta*I we have beta*I <= X_0 <= alpha*I. Suppose beta*I <= X_s <= alpha*I. Then

    (1/alpha^2) A*A <= A*X_s^{-2}A <= (1/beta^2) A*A.

According to condition (ii) we obtain (beta - 1) I <= (1/alpha^2) A*A and (1/beta^2) A*A <= (alpha - 1) I. Hence

    beta*I <= X_{s+1} = I + A*X_s^{-2}A <= alpha*I.                          (11)

We have

    X_{s+p} - X_s = A*(X_{s+p-1}^{-2} - X_{s-1}^{-2})A
                  = A*X_{s-1}^{-2}(X_{s-1}^2 - X_{s+p-1}^2)X_{s+p-1}^{-2}A
                  = A*X_{s-1}^{-2}[X_{s-1}(X_{s-1} - X_{s+p-1}) + (X_{s-1} - X_{s+p-1})X_{s+p-1}]X_{s+p-1}^{-2}A
                  = A*X_{s-1}^{-1}(X_{s-1} - X_{s+p-1})X_{s+p-1}^{-2}A + A*X_{s-1}^{-2}(X_{s-1} - X_{s+p-1})X_{s+p-1}^{-1}A.

Then

    ||X_{s+p} - X_s|| <= (||A*X_{s-1}^{-1}|| ||X_{s+p-1}^{-2}A|| + ||A*X_{s-1}^{-2}|| ||X_{s+p-1}^{-1}A||) ||X_{s-1} - X_{s+p-1}||.

But for all k we have ||X_k^{-1}|| <= 1/beta and ||X_k^{-2}|| <= 1/beta^2. Using (ii) we have ||A|| <= beta sqrt(alpha - 1). Therefore

    ||A*X_k^{-1}|| <= sqrt(alpha - 1)  and  ||A*X_k^{-2}|| <= sqrt(alpha - 1)/beta.

Hence for every integer p > 0 and arbitrary s we have

    ||X_{s+p} - X_s|| <= (2(alpha - 1)/beta) ||X_{s+p-1} - X_{s-1}||.

Using (i), alpha < beta/2 + 1, we have

    q = 2(alpha - 1)/beta < 1.
We obtain

    ||X_{s+p} - X_s|| <= q^s ||X_p - X_0||.

Since ||X_p - X_0|| < (1/(1 - q)) ||X_1 - X_0||, we have

    ||X_{s+p} - X_s|| < (q^s/(1 - q)) ||X_1 - X_0||.

The matrix sequence {X_s} is a Cauchy sequence in the Banach space C^{n x n}, where the X_s are n x n positive definite matrices. Hence this sequence has a limit X. From (11) we obtain beta*I <= X <= alpha*I. We have

    ||X_{s+1} - X|| <= (||A*X^{-1}|| ||X_s^{-2}A|| + ||A*X^{-2}|| ||X_s^{-1}A||) ||X_s - X||,

and similarly

    ||X_{s+1} - X|| <= (2(alpha - 1)/beta) ||X_s - X||.

Theorem 11. Suppose the iterative method (10) converges to a positive definite solution X. Then for any eps > 0,

    ||X_{s+1} - X|| < (2 ||X^{-1}A*X^{-2}A|| + eps) ||X_s - X||

for all s large enough. The proof is similar to that of Theorem 6.

4. Numerical experiments

We carried out numerical experiments to compute a positive definite solution of Eqs. (1) and (2). The solution was computed for different matrices A and different values of n. Computations were done on a PENTIUM 200 MHz computer. All programs were written in MATLAB. We denote

    eps_1(Z) = ||Z + A*Z^{-2}A - I||  and  eps_2(Z) = ||Z - A*Z^{-2}A - I||.
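The experiments of this section can be rehearsed with a short NumPy sketch implementing eps_1, eps_2 and the iterations (5) and (10). The random matrices below are only scaled to the norms 0.292 and 0.587 quoted in Examples 1 and 4; the paper's actual test matrices are not reproduced, and all names are ours:

```python
import numpy as np

def eps1(Z, A):
    """eps_1(Z) = ||Z + A* Z^{-2} A - I|| in the spectral norm."""
    Zi = np.linalg.inv(Z)
    return np.linalg.norm(Z + A.conj().T @ Zi @ Zi @ A - np.eye(Z.shape[0]), 2)

def eps2(Z, A):
    """eps_2(Z) = ||Z - A* Z^{-2} A - I|| in the spectral norm."""
    Zi = np.linalg.inv(Z)
    return np.linalg.norm(Z - A.conj().T @ Zi @ Zi @ A - np.eye(Z.shape[0]), 2)

def method5(A, alpha=1.0, tol=1e-8, maxit=500):
    """Iteration (5) for Eq. (1): X_{s+1} = I - A* X_s^{-2} A, X_0 = alpha*I."""
    n = A.shape[0]
    X, k = alpha * np.eye(n), 0
    while eps1(X, A) >= tol and k < maxit:
        Zi = np.linalg.inv(X)
        X = np.eye(n) - A.conj().T @ Zi @ Zi @ A
        k += 1
    return X, k

def method10(A, eta=1.0, tol=1e-8, maxit=500):
    """Iteration (10) for Eq. (2): X_{s+1} = I + A* X_s^{-2} A, X_0 = eta*I."""
    n = A.shape[0]
    X, k = eta * np.eye(n), 0
    while eps2(X, A) >= tol and k < maxit:
        Zi = np.linalg.inv(X)
        X = np.eye(n) + A.conj().T @ Zi @ Zi @ A
        k += 1
    return X, k

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A1 = 0.292 * B / np.linalg.norm(B, 2)   # ||A1|| = 0.292, as in Example 1
A2 = 0.587 * B / np.linalg.norm(B, 2)   # ||A2|| = 0.587, as in Example 4

X1, k1 = method5(A1)
X2, k2 = method10(A2)
```

With these norms both iterations satisfy the relevant contraction conditions (Theorems 5 and 10) and stop well within the iteration budget.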
We tested our iteration processes for solving the equation X + A*X^{-2}A = I on n x n matrices, with the stopping criterion eps_1(Z) < 10^{-8}.

Example 1. Consider Eq. (1) with a matrix A whose spectral norm is ||A|| = 0.292. Consider the iterative method (3). If X_0 = alpha*I and alpha = 0, then 15 iterations are required for computing X̃; in this case we obtain a monotonically increasing sequence which converges to the solution X̃. Consider the case X_0 = beta*I. If beta = 2/3, then 16 iterations are required for computing X̃. If beta = 0.368, then 12 iterations are required; this beta satisfies the conditions of Theorem 2, and we obtain a monotonically decreasing sequence which converges to the solution X̃. There is thus a value of beta (beta = 0.368) for which the iterative method (3) with X_0 = beta*I is faster than the iterative method (3) with X_0 = alpha*I. Here beta is a solution of the scalar equation ||A||^2 = beta^2(1 - beta).

Consider the iterative method (5). We choose the initial value X_0 = alpha*I with 2/3 < alpha <= 1 (see Theorem 5). In this case the matrix sequence converges to the solution X̂. If alpha = 1, then 12 iterations are required for computing X̂. If alpha = (2/3 + 1)/2, then 11 iterations are required. If alpha = 0.892, then nine iterations are required; here we choose alpha as a solution of the scalar equation ||A||^2 = alpha^2(1 - alpha) (see Theorem 5).

Example 2. Consider Eq. (1) with a matrix A for which the conditions of Theorems 2 and 5 are not satisfied (||A|| > 2/sqrt(27)). The iterative method (5) is nevertheless convergent, and we use it for computing X̂. If eta = 2/3, then 14 iterations are required for computing X̂. If eta = 1, then 13 iterations are required. If eta = (2/3 + 1)/2, then 13 iterations are required.

We also tested our iteration processes for solving the equation X - A*X^{-2}A = I, with the stopping criterion eps_2(Z) < 10^{-8}.

Example 3. We take A = U Ã U^{-1}, where U is a randomly generated matrix and Ã is a prescribed n x n diagonal matrix whose entries are of order n.
Table 1 reports the results for the iterative method (7): the dimension n, the norm ||A||, the parameter alpha, and the iteration count k_{X_alpha}.

The matrix A satisfies the conditions of Theorem 9. Let k_{X_alpha} be the smallest number s for which eps_2(X_s) < 10^{-8} for the iterative method (7). The results are given in Table 1.

Example 4. Consider Eq. (2) with a matrix A for which ||A|| = 0.587. Consider the iterative method (10). The matrix A satisfies the conditions of Theorem 10. We choose beta = 1. If alpha = 1.345, then six iterations are required for computing X.

5. Conclusion

In this paper we considered special nonlinear matrix equations. We introduced iteration algorithms by which positive definite solutions of these equations can be calculated. It was proved in [4, Theorem 13] that if ||A|| < 1/2, then the equation X + A*X^{-1}A = I has a positive definite solution. From Theorems 2 and 5 we can see that if

    ||A||^2 < max_{[0,1]} phi(x) = phi(2/3) = 4/27,

where phi(x) = x^2(1 - x), then the equation X + A*X^{-2}A = I has a positive definite solution. There are matrices A (see Examples 3 and 4) for which ||A|| > 1/2 and ||A||^2 > 4/27 and for which we can still compute a positive definite solution of the equation X - A*X^{-2}A = I.
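The solvability threshold ||A||^2 < 4/27 and the two solution branches of Eq. (1) mentioned in Remark 2 can be illustrated in the scalar case A = aI, where iteration (3) becomes x -> sqrt(a^2/(1 - x)) and iteration (5) becomes x -> 1 - a^2/x^2 (a pure-Python sketch with our own naming):

```python
def phi(x):
    """phi(x) = x^2 (1 - x); Eq. (1) with A = a*I reduces to phi(x) = a^2."""
    return x * x * (1.0 - x)

def branch_small(a, n=300):
    """Scalar form of iteration (3): converges to the smaller solution."""
    x = 0.0
    for _ in range(n):
        x = (a * a / (1.0 - x)) ** 0.5
    return x

def branch_large(a, n=300):
    """Scalar form of iteration (5): converges to the larger solution."""
    x = 1.0
    for _ in range(n):
        x = 1.0 - a * a / (x * x)
    return x

a = 0.292                      # a^2 < 4/27, so both branches exist
x_small, x_large = branch_small(a), branch_large(a)
```

Both limits satisfy phi(x) = a^2, one on the increasing branch of phi and one on the decreasing branch, with x_small < x_large, matching Remark 2.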
References

[1] B.L. Buzbee, G.H. Golub, C.W. Nielson, On direct methods for solving Poisson's equations, SIAM J. Numer. Anal. 7 (1970) 627-656.
[2] C.-H. Guo, P. Lancaster, Iterative solution of two matrix equations, Math. Comp. 68 (1999) 1589-1603.
[3] J.C. Engwerda, A.C.M. Ran, A.L. Rijkeboer, Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation X + A*X^{-1}A = Q, Linear Algebra Appl. 186 (1993) 255-275.
[4] J.C. Engwerda, On the existence of a positive definite solution of the matrix equation X + A^T X^{-1} A = I, Linear Algebra Appl. 194 (1993) 91-108.
[5] A. Ferrante, B.C. Levy, Hermitian solutions of the equation X = Q + N X^{-1} N*, Linear Algebra Appl. 247 (1996) 359-373.
[6] I.G. Ivanov, S.M. El-Sayed, Properties of positive definite solutions of the equation X + A*X^{-2}A = I, Linear Algebra Appl. 279 (1998) 303-316.
[7] A.S. Householder, The Theory of Matrices in Numerical Analysis, Blaisdell, New York, 1964.
[8] P. Lancaster, Theory of Matrices, Academic Press, New York, 1969.
Journal of Informatics Mathematical Sciences Volume 1 (2009), Numbers 2 & 3, pp. 91 97 RGN Publications (Invited paper) Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems
More informationSuppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.
Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of
More informationWeaker hypotheses for the general projection algorithm with corrections
DOI: 10.1515/auom-2015-0043 An. Şt. Univ. Ovidius Constanţa Vol. 23(3),2015, 9 16 Weaker hypotheses for the general projection algorithm with corrections Alexru Bobe, Aurelian Nicola, Constantin Popa Abstract
More informationLaplacian spectral radius of trees with given maximum degree
Available online at www.sciencedirect.com Linear Algebra and its Applications 429 (2008) 1962 1969 www.elsevier.com/locate/laa Laplacian spectral radius of trees with given maximum degree Aimei Yu a,,1,
More informationMathematical Optimisation, Chpt 2: Linear Equations and inequalities
Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationThe minimum rank of matrices and the equivalence class graph
Linear Algebra and its Applications 427 (2007) 161 170 wwwelseviercom/locate/laa The minimum rank of matrices and the equivalence class graph Rosário Fernandes, Cecília Perdigão Departamento de Matemática,
More informationWe first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix
BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity
More informationAn algorithm for symmetric generalized inverse eigenvalue problems
Linear Algebra and its Applications 296 (999) 79±98 www.elsevier.com/locate/laa An algorithm for symmetric generalized inverse eigenvalue problems Hua Dai Department of Mathematics, Nanjing University
More informationACCURATE SOLUTION ESTIMATE AND ASYMPTOTIC BEHAVIOR OF NONLINEAR DISCRETE SYSTEM
Sutra: International Journal of Mathematical Science Education c Technomathematics Research Foundation Vol. 1, No. 1,9-15, 2008 ACCURATE SOLUTION ESTIMATE AND ASYMPTOTIC BEHAVIOR OF NONLINEAR DISCRETE
More informationMath 489AB Exercises for Chapter 2 Fall Section 2.3
Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationRecursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix
journal of optimization theory and applications: Vol. 127, No. 3, pp. 639 663, December 2005 ( 2005) DOI: 10.1007/s10957-005-7508-7 Recursive Determination of the Generalized Moore Penrose M-Inverse of
More informationOn the Ulam stability of mixed type mappings on restricted domains
J. Math. Anal. Appl. 276 (2002 747 762 www.elsevier.com/locate/jmaa On the Ulam stability of mixed type mappings on restricted domains John Michael Rassias Pedagogical Department, E.E., National and Capodistrian
More informationPart 1a: Inner product, Orthogonality, Vector/Matrix norm
Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,
More informationEIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4
EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue
More informationOn the Solution of Constrained and Weighted Linear Least Squares Problems
International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science
More informationPerron Frobenius Theory
Perron Frobenius Theory Oskar Perron Georg Frobenius (1880 1975) (1849 1917) Stefan Güttel Perron Frobenius Theory 1 / 10 Positive and Nonnegative Matrices Let A, B R m n. A B if a ij b ij i, j, A > B
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE7C (Spring 018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee7c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee7c@berkeley.edu October
More informationInner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n
Inner products and Norms Inner product of 2 vectors Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y 2 + + x n y n in R n Notation: (x, y) or y T x For complex vectors (x, y) = x 1 ȳ 1 + x 2
More informationThe semi-convergence of GSI method for singular saddle point problems
Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method
More informationInequalities in Hilbert Spaces
Inequalities in Hilbert Spaces Jan Wigestrand Master of Science in Mathematics Submission date: March 8 Supervisor: Eugenia Malinnikova, MATH Norwegian University of Science and Technology Department of
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 7, Issue 1, Article 34, 2006 MATRIX EQUALITIES AND INEQUALITIES INVOLVING KHATRI-RAO AND TRACY-SINGH SUMS ZEYAD AL
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 36 (01 1960 1968 Contents lists available at SciVerse ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa The sum of orthogonal
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 430 (2009) 579 586 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Low rank perturbation
More informationModified Gauss Seidel type methods and Jacobi type methods for Z-matrices
Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,
More informationNotes on Some Methods for Solving Linear Systems
Notes on Some Methods for Solving Linear Systems Dianne P. O Leary, 1983 and 1999 and 2007 September 25, 2007 When the matrix A is symmetric and positive definite, we have a whole new class of algorithms
More informationChapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s
Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter
More informationON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume, Number, Pages S -9939(XX- ON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS N. S. HOANG AND A. G. RAMM (Communicated
More informationMatrices with convolutions of binomial functions and Krawtchouk matrices
Available online at wwwsciencedirectcom Linear Algebra and its Applications 429 (2008 50 56 wwwelseviercom/locate/laa Matrices with convolutions of binomial functions and Krawtchouk matrices orman C Severo
More informationMoore Penrose inverses and commuting elements of C -algebras
Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We
More informationA NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES
Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University
More informationImpulsive stabilization of two kinds of second-order linear delay differential equations
J. Math. Anal. Appl. 91 (004) 70 81 www.elsevier.com/locate/jmaa Impulsive stabilization of two kinds of second-order linear delay differential equations Xiang Li a, and Peixuan Weng b,1 a Department of
More informationAn Improved Parameter Regula Falsi Method for Enclosing a Zero of a Function
Applied Mathematical Sciences, Vol 6, 0, no 8, 347-36 An Improved Parameter Regula Falsi Method for Enclosing a Zero of a Function Norhaliza Abu Bakar, Mansor Monsi, Nasruddin assan Department of Mathematics,
More informationMultiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group
More informationIterative Algorithm for Computing the Eigenvalues
Iterative Algorithm for Computing the Eigenvalues LILJANA FERBAR Faculty of Economics University of Ljubljana Kardeljeva pl. 17, 1000 Ljubljana SLOVENIA Abstract: - We consider the eigenvalue problem Hx
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationA Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems
A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based
More informationOn the Ritz values of normal matrices
On the Ritz values of normal matrices Zvonimir Bujanović Faculty of Science Department of Mathematics University of Zagreb June 13, 2011 ApplMath11 7th Conference on Applied Mathematics and Scientific
More informationA Continuation Approach to a Quadratic Matrix Equation
A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September
More informationSQUARE-ROOT FAMILIES FOR THE SIMULTANEOUS APPROXIMATION OF POLYNOMIAL MULTIPLE ZEROS
Novi Sad J. Math. Vol. 35, No. 1, 2005, 59-70 SQUARE-ROOT FAMILIES FOR THE SIMULTANEOUS APPROXIMATION OF POLYNOMIAL MULTIPLE ZEROS Lidija Z. Rančić 1, Miodrag S. Petković 2 Abstract. One-parameter families
More informationAn iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C
Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation
More informationON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER *
Journal of Computational Mathematics Vol.xx, No.x, 2x, 6. http://www.global-sci.org/jcm doi:?? ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER * Davod
More informationInterpolating the arithmetic geometric mean inequality and its operator version
Linear Algebra and its Applications 413 (006) 355 363 www.elsevier.com/locate/laa Interpolating the arithmetic geometric mean inequality and its operator version Rajendra Bhatia Indian Statistical Institute,
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 435 (2011) 1029 1033 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Subgraphs and the Laplacian
More informationRECURSIVE ESTIMATION AND KALMAN FILTERING
Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv
More informationCONVERGENCE OF MULTISPLITTING METHOD FOR A SYMMETRIC POSITIVE DEFINITE MATRIX
J. Appl. Math. & Computing Vol. 182005 No. 1-2 pp. 59-72 CONVERGENCE OF MULTISPLITTING METHOD FOR A SYMMETRIC POSITIVE DEFINITE MATRIX JAE HEON YUN SEYOUNG OH AND EUN HEUI KIM Abstract. We study convergence
More informationNP-hardness of the stable matrix in unit interval family problem in discrete time
Systems & Control Letters 42 21 261 265 www.elsevier.com/locate/sysconle NP-hardness of the stable matrix in unit interval family problem in discrete time Alejandra Mercado, K.J. Ray Liu Electrical and
More informationAnalytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications
Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,
More information