Improved Newton's method with exact line searches to solve quadratic matrix equation


Journal of Computational and Applied Mathematics 222 (2008) 645-654
www.elsevier.com/locate/cam

Improved Newton's method with exact line searches to solve quadratic matrix equation

Jian-hui Long, Xi-yan Hu, Lei Zhang
College of Mathematics and Econometrics, Hunan University, Changsha, China

Received 10 April 2007; received in revised form 15 September 2007

Abstract

In this paper, we study the matrix equation $AX^2 + BX + C = 0$, where $A$, $B$ and $C$ are square matrices. We give two improved algorithms which are better than Newton's method with exact line searches for calculating the solution. Some numerical examples are reported to illustrate our algorithms.
© 2007 Elsevier B.V. All rights reserved.

MSC: 65F30; 65H10; 39B42

Keywords: Quadratic matrix equation; Solvent; Newton's method; Exact line search; Samanskii technique

1. Introduction

In this paper, our interest is the simplest quadratic matrix equation

$$AX^2 + BX + C = 0, \qquad (1.1)$$

where $A$, $B$ and $C$ are given square matrices. Eq. (1.1) often occurs in applications, for example in quasi-birth-death processes, the quadratic eigenvalue problem (QEP) and pseudospectra for quadratic eigenvalue problems; see [5,8,12,14] for details. Let

$$F(X) = AX^2 + BX + C, \qquad (1.2)$$

where $A$, $B$ and $C$ are square matrices. A matrix $S$ is called a solvent of Eq. (1.1) if $F(S) = 0$ [4]; a solvent $S$ is simple if the Fréchet derivative $F'(S)$ is regular. A dominant solvent is a solvent whose eigenvalues strictly dominate the eigenvalues of all other solvents. The papers [10,13] gave two linearly convergent algorithms for computing a dominant solvent of Eq. (1.1), but these algorithms have the drawbacks that it is difficult to check in advance whether a dominant solvent exists, and that the convergence can be extremely slow.

This research was supported by the National Natural Science Foundation of China and the Doctorate Foundation of the Ministry of Education of China.
Corresponding author. E-mail address: jianhuipaper@yahoo.com.cn (J.-h. Long).

Davis applied Newton's method to Eq. (1.1) and gave supporting theory and implementation details in [1,2]. Kratz and Stickel investigated Newton's method for matrix polynomials of general degree in [4]. Higham and Kim improved the global convergence properties of Newton's method with exact line searches and gave a complete characterization of the solutions in terms of the generalized Schur decomposition in [5]. Various numerical solution techniques are described and compared in [6]. From the above papers, we know that Newton's method is very attractive for solving Eq. (1.1): it has very nice numerical behavior, such as quadratic convergence and good numerical stability, but it has the weakness that each iteration is rather expensive. We know also that the modified Newton's method is cheaper than Newton's method at each iteration [4], while it is only linearly convergent.

Consider the quadratic matrix equation $AX^2 + BX + C = 0$ with $A, B, C \in \mathbb{R}^{n \times n}$, where $A = I_n$ and $B$, $C$ are tridiagonal matrices arising from a damped mass spring system (see Example 4.3). Tables 1.1 and 1.2 list the numerical results with starting iterate $X_0 = 10^4 I_n$ for Newton's method and Newton's method with exact line searches, respectively, where error $= \|F(X_i)\|_F$, $\|\cdot\|_F$ denotes the Frobenius norm, and $t$ is a real step size scalar in the Newton direction.

Table 1.1. Newton's method: error $\|F(X_i)\|_F$ at each iteration $i$.

Table 1.2. Newton's method with exact line searches: step size $t$ and error $\|F(X_i)\|_F$ at each iteration $i$.

It can be seen from Tables 1.1 and 1.2 that when error $> 10^{-1}$, $\|F(X_i)\|_F$ decreases faster with Newton's method with exact line searches than with Newton's method, while when error $< 10^{-1}$ Newton's method with exact line searches has the same convergence rate as Newton's method, with $t$ sufficiently near or equal to 1.

The paper has two main contributions. First, we combine Newton's method with exact line searches and Newton's method in order to reduce the cost of computation. Second, we combine Newton's method with exact line searches and Newton's method with the Samanskii technique in order to obtain faster convergence than Newton's method with exact line searches.

The paper is organized as follows. In Section 2, we introduce some notation and lemmas. In Section 3, two algorithms are presented and analyzed. In order to illustrate our results, several numerical examples are reported in Section 4. At last, some conclusions are given in Section 5.

2. Some notation and lemmas

For matrices $A, B \in \mathbb{C}^{n \times n}$ the Kronecker product is the matrix $A \otimes B \in \mathbb{C}^{n^2 \times n^2}$ given by

$$A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{pmatrix}, \qquad \text{where } A = (a_{ij}),$$

and the vec function of the matrix $A$ is the column $\mathrm{vec}(A) = (a_1^T, \dots, a_n^T)^T \in \mathbb{C}^{n^2}$, where $a_1, \dots, a_n \in \mathbb{C}^n$ are the columns of $A$.

Definition 2.1 ([3,4,14]). Let $G(X) = 0$ be a nonlinear matrix equation, where $G : \mathbb{C}^{n \times n} \to \mathbb{C}^{n \times n}$. If

$$G(X + E) - G(X) = G'(X)[E] + G_1(E),$$

where $G_1 : \mathbb{C}^{n \times n} \to \mathbb{C}^{n \times n}$, $G'$ is a bounded linear operator and $\|G_1(E)\| / \|E\| \to 0$ as $\|E\| \to 0$, then the function $G$ is said to be Fréchet differentiable at $X$ with Fréchet derivative $G'(X)[E]$ in the direction $E$.

From Eq. (1.1), we obtain

$$F(X + E) = F(X) + AEX + (AX + B)E + AE^2. \qquad (2.1)$$

Definition 2.1 implies that the Fréchet derivative of Eq. (1.1) at $X$ in the direction $E$ is given by

$$F'(X)[E] = AEX + (AX + B)E. \qquad (2.2)$$

We call $F'(X)$ regular if the mapping $F'(X)$ is injective.

Algorithm 2.1 (Newton's Method)
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \qquad (2.3)$$
step 3: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.

Lemma 2.1 ([14]). Suppose that $S \in \mathbb{C}^{n \times n}$ is a simple solvent of (1.1), i.e. $F(S) = 0$ and $F'(S)$ is regular. Then, if the starting matrix $X_0$ is sufficiently near $S$, the sequence $\{X_k\}$ produced by Algorithm 2.1 converges quadratically to $S$. More precisely, if $\|X_0 - S\| = \varepsilon_0 < \varepsilon = \min\{c_0/\|A\|, \delta\}$, where

$$c_0 = \inf\{\|F'(X)[H]\| : \|H\| = 1, \|X - S\| \le \delta\} > 0$$

for sufficiently small $\delta \in (0, 1]$, then we have:
(i) $\lim_{k \to \infty} X_k = S$;
(ii) $\|X_k - S\| \le \varepsilon_0 q^k$ with $q = \varepsilon_0 \|A\| / c_0 < 1$, for $k = 0, 1, 2, \dots$;
(iii) $\|X_{k+1} - S\| \le (\|A\| / c_0) \|X_k - S\|^2$ for $k = 0, 1, 2, \dots$

Remark 2.1. Algorithm 2.1 has a quadratic convergence rate, but it has a weakness: solving the generalized Sylvester equation defining $E_k$ takes at least $56n^3$ flops, so each iteration is rather expensive.
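To make the iteration concrete, the following is a minimal NumPy sketch of Algorithm 2.1 (not from the paper; the names newton_qme, tol and max_iter are illustrative). The correction equation (2.3) is solved in its Kronecker form $(X_k^T \otimes A + I \otimes (AX_k + B))\,\mathrm{vec}(E_k) = -\mathrm{vec}(F(X_k))$, which costs $O(n^6)$ flops and is intended only for small test problems; the $O(n^3)$ structured solver behind the flop count in Remark 2.1 is described at the end of this section.

```python
import numpy as np

def F(A, B, C, X):
    # Residual F(X) = A X^2 + B X + C, Eq. (1.2).
    return A @ X @ X + B @ X + C

def newton_qme(A, B, C, X0, tol=1e-12, max_iter=50):
    # Algorithm 2.1: X_{k+1} = X_k + E_k, where E_k solves the generalized
    # Sylvester equation A E_k X_k + (A X_k + B) E_k = -F(X_k), Eq. (2.3).
    n = X0.shape[0]
    X = X0.copy()
    for _ in range(max_iter):
        R = F(A, B, C, X)
        if np.linalg.norm(R, "fro") <= tol:
            break
        # Kronecker form: (X^T kron A + I kron (A X + B)) vec(E) = -vec(R);
        # vec stacks columns, hence order="F".
        K = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)
        e = np.linalg.solve(K, -R.reshape(-1, order="F"))
        X = X + e.reshape((n, n), order="F")
    return X
```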

Algorithm 2.2 (Modified Newton's Method)
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_0 + (AX_0 + B)E_k = -F(X_k). \qquad (2.4)$$
step 3: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.

Lemma 2.2 ([4]). Suppose that $X_0$ is an $n \times n$ matrix, and
(i) $F'(X_0)$ is regular, i.e. $c_0 = \inf\{\|F'(X_0)[H]\| : \|H\| = 1\} > 0$;
(ii) $\|F(X_0)\| = \varepsilon_0 \le \varepsilon = \min\{c_0^2/(4\|A\|),\, c_0/2\}$.
Then there exists $S \in \mathbb{C}^{n \times n}$ such that $F(S) = 0$ and $\|S - X_0\| \le 2\varepsilon_0/c_0$. More precisely, the sequence $\{X_k\}$ given by Algorithm 2.2 satisfies:
(i) $\lim_{k \to \infty} X_k = S$ exists with $F(S) = 0$;
(ii) $\|X_k - S\| \le q \|X_{k-1} - S\| \le (2\varepsilon_0/c_0)\, q^k$;
(iii) $\|X_{k+1} - X_k\| \le q \|X_k - X_{k-1}\|$ for $k = 1, 2, \dots$, where $q = 4\varepsilon_0\|A\|/c_0^2 < 1$.

Remark 2.2. Algorithm 2.2 is only linearly convergent, which may mean very slow convergence, but at each iterative step only a linear system with the same coefficient matrix has to be solved.

Algorithm 2.3 (Newton's Method with Samanskii Technique [11])
step 0: Given $X_0$, $m$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Let $X_{k,0} = X_k$, $i = 1$.
step 3: If $i > m$, go to step 6.
step 4: Solve for $E_{k,i-1}$ in the generalized Sylvester equation
$$AE_{k,i-1}X_k + (AX_k + B)E_{k,i-1} = -F(X_{k,i-1}). \qquad (2.5)$$
step 5: Update $X_{k,i} = X_{k,i-1} + E_{k,i-1}$, $i = i + 1$, and go to step 3.
step 6: Update $X_{k+1} = X_{k,m}$, $k = k + 1$, and go to step 1.

Remark 2.3. If $m = 1$, then Algorithm 2.3 is Newton's method. In this paper, we only consider the case $m = 2$. In the following we investigate the properties of Algorithm 2.3; a small computational sketch is given first.
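A sketch of Algorithm 2.3 under the same small-scale Kronecker-form simplification (the names are again illustrative, and the residual function F from the previous sketch is assumed available). The point of the Samanskii technique is visible in the code: the Fréchet derivative, and hence the factorization of the coefficient matrix, is frozen at $X_k$ and reused for all $m$ inner corrections.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_samanskii_qme(A, B, C, X0, m=2, tol=1e-12, max_iter=50):
    # Algorithm 2.3: m inner corrections per outer step, all using the
    # coefficient A E X_k + (A X_k + B) E frozen at X_k (m = 2 in this paper).
    n = X0.shape[0]
    X = X0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(F(A, B, C, X), "fro") <= tol:
            break
        K = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)
        lu = lu_factor(K)              # factor once per outer iteration
        Y = X
        for _ in range(m):             # inner Samanskii corrections, Eq. (2.5)
            r = -F(A, B, C, Y).reshape(-1, order="F")
            Y = Y + lu_solve(lu, r).reshape((n, n), order="F")
        X = Y
    return X
```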

Theorem 2.1. Suppose that $S \in \mathbb{C}^{n \times n}$ is a simple solvent of (1.1), i.e. $F(S) = 0$ and $F'(S)$ is regular. Then, if the starting matrix $X_0$ is sufficiently near $S$, the sequence $\{X_k\}$ produced by Algorithm 2.3 converges cubically to $S$. More precisely, if $\|X_0 - S\| = \varepsilon_0 < \varepsilon = \min\{c_0/(2\|A\|), \delta\}$, where $c_0 = \inf\{\|F'(X)[H]\| : \|H\| = 1, \|X - S\| \le \delta\} > 0$ for sufficiently small $\delta \in (0, 1]$, then we have:
(i) $\|X_{k+1} - S\| \le \eta \|X_k - S\|^3$ with $\eta = \frac{\|A\|^2}{c_0^2}\left(2 + \frac{\|A\|\varepsilon_0}{c_0}\right)$, $k = 1, 2, \dots$;
(ii) $\|X_k - S\| \le \varepsilon_0 q^k$, $q = \eta \varepsilon_0^2 < 1$, $k = 1, 2, \dots$;
(iii) $\lim_{k \to \infty} X_k = S$.

Proof. From Lemma 2.1, we obtain that $X_{k,1}$ is well defined by

$$X_{k,1} = X_k + E, \qquad AEX_k + (AX_k + B)E = -F(X_k), \qquad (2.6)$$

for $X_k \in V = \{X : \|X - S\| \le \varepsilon_0 < \varepsilon = \min\{\delta, c_0/(2\|A\|)\}\}$, and $\|X_{k,1} - S\| \le (\|A\|/c_0)\|X_k - S\|^2$. Hence, $X_{k+1}$ is well defined by

$$X_{k+1} = X_{k,1} + H, \qquad AHX_k + (AX_k + B)H = -F(X_{k,1}), \qquad (2.7)$$

for $X_{k,1} \in \{X : \|X - S\| < \varepsilon_1\} \subseteq V$, where $\varepsilon_1 = (\|A\|/c_0)\varepsilon_0^2$. By Lemma 2.1 and the properties of the Kronecker product and the vec function, we obtain

$$\begin{aligned}
\|X_{k+1} - S\| &= \|X_{k,1} + H - S\| = \|\mathrm{vec}(X_{k,1} - S) + \mathrm{vec}(H)\| \\
&= \|\mathrm{vec}(X_{k,1} - S) - \widehat{F}'(X_k)^{-1}\mathrm{vec}(F(X_{k,1}))\| \\
&\le \frac{1}{c_0}\|\mathrm{vec}(F'(X_k)[X_{k,1} - S]) - \mathrm{vec}(F(X_{k,1}))\| \\
&= \frac{1}{c_0}\|F'(X_k)[X_{k,1} - S] - F(X_{k,1}) + F(S)\| \\
&\le \frac{\|A\|}{c_0}\|X_{k,1} - S\|\left(2\|X_k - S\| + \|X_{k,1} - S\|\right) \\
&\le \frac{\|A\|^2}{c_0^2}\|X_k - S\|^2\left(2\|X_k - S\| + \frac{\|A\|}{c_0}\|X_k - S\|^2\right) \\
&\le \frac{\|A\|^2}{c_0^2}\left(2 + \frac{\|A\|}{c_0}\varepsilon_0\right)\|X_k - S\|^3 = \eta\|X_k - S\|^3,
\end{aligned}$$

where $\widehat{F}'(X_k) = X_k^T \otimes A + I \otimes (AX_k + B)$. So assertion (i) is proved. Now we obtain assertion (ii) by induction. Since $\|X_0 - S\| = \varepsilon_0 \le \varepsilon$ is given, we assume that $\|X_k - S\| \le \varepsilon_0 q^k$ holds. While $q = \eta\varepsilon_0^2 < 1$ is obvious, thus $\|X_k - S\| \le \varepsilon_0$. So we have

$$\|X_{k+1} - S\| \le \eta\|X_k - S\|^3 \le \eta\varepsilon_0^2\|X_k - S\| = q\|X_k - S\| \le \varepsilon_0 q^{k+1};$$

hence, assertion (ii) holds by the induction principle. It is obvious that (ii) implies (iii).

Algorithms 2.1 and 2.3 are now compared in terms of computational cost. We know that the heart of the two algorithms is the solution of the generalized Sylvester equation

$$AEX + (AX + B)E = -F(X). \qquad (2.8)$$

To solve (2.8) we can adapt methods described in [5,7] as follows.
(1) Compute the Hessenberg-triangular decomposition [9] of $A$ and $AX + B$:
$$W^*AZ = T, \qquad W^*(AX + B)Z = H, \qquad 15n^3 \text{ flops}, \qquad (2.9)$$
where $W$ and $Z$ are unitary, $T$ is upper triangular and $H$ is upper Hessenberg.
(2) Compute the Schur decomposition [9] of $X$:
$$U^*XU = R, \qquad 25n^3 \text{ flops}, \qquad (2.10)$$
where $U$ is unitary and $R$ is upper triangular.
(3) Form
$$\widetilde{F} = -W^*F(X)U, \qquad 4n^3 \text{ flops}. \qquad (2.11)$$
(4) Transform from $Y$ to $E$:
$$E = ZYU^*, \qquad 4n^3 \text{ flops}. \qquad (2.12)$$
(5) Solve the upper Hessenberg systems
$$(H + r_{kk}T)y_k = \widetilde{f}_k - \sum_{i=1}^{k-1} r_{ik}Ty_i, \qquad k = 1, \dots, n, \qquad 4n^3 \text{ flops}. \qquad (2.13)$$

Thus, the computational cost of obtaining $X_{k+1}$ from $X_k$ is about $56n^3$ flops for Algorithm 2.1. For Algorithm 2.3, in order to obtain $X_{k+1}$ we need to solve two generalized Sylvester equations with the same coefficient matrices, hence the cost is about $70n^3$ flops. Algorithm 2.3 costs only $14n^3$ flops more than Algorithm 2.1 at each iteration, but Algorithm 2.3 converges cubically to the solvent $S$.
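The five transformation steps can be mirrored with SciPy's standard factorizations. The sketch below (the function name solve_gen_sylvester and all variable names are illustrative) uses the complex QZ decomposition, so both reduced factors are upper triangular rather than a Hessenberg-triangular pair; the column-by-column substitution is the same idea as in (2.13), here with upper-triangular systems $(r_{kk}T_A + T_B)y_k = \tilde{f}_k - T_A\sum_{i<k} r_{ik}y_i$.

```python
import numpy as np
from scipy.linalg import qz, schur, solve_triangular

def solve_gen_sylvester(A, M, X, RHS):
    # Solve A E X + M E = RHS in O(n^3) flops, where M = A X + B for the
    # Newton correction (then RHS = -F(X)).
    TA, TB, W, Z = qz(A, M, output="complex")          # A = W TA Z^H, M = W TB Z^H
    R, U = schur(X.astype(complex), output="complex")  # X = U R U^H
    Ft = W.conj().T @ RHS.astype(complex) @ U          # transformed right-hand side
    n = X.shape[0]
    Y = np.zeros((n, n), dtype=complex)
    for k in range(n):
        # (r_kk TA + TB) y_k = f_k - TA (sum_{i<k} r_ik y_i); both triangular
        # factors are upper triangular, so each column costs O(n^2).
        rhs_k = Ft[:, k] - TA @ (Y[:, :k] @ R[:k, k])
        Y[:, k] = solve_triangular(R[k, k] * TA + TB, rhs_k)
    return Z @ Y @ U.conj().T                          # back-transform E = Z Y U^H
```

If $A$, $M$, $X$ and the right-hand side are real, the exact solution $E$ is real, and the imaginary part of the computed result is at roundoff level and can be discarded.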

Remark 2.4. If the starting matrix $X_0$ is sufficiently near the solvent of Eq. (1.1), then Newton's method with the Samanskii technique converges faster than Newton's method. Compared with Newton's method, its computational procedure only adds one inner loop, and when $m = 1$ it contains the procedure of Newton's method.

3. Two new algorithms

Algorithm 3.1 (Newton's Method with Exact Line Searches)
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \qquad (3.1)$$
step 3: Find $t$ by exact line searches such that
$$\|F(X_k + tE_k)\|_F = \min_{s \in [0,2]} \|F(X_k + sE_k)\|_F. \qquad (3.2)$$
step 4: Update $X_{k+1} = X_k + tE_k$, $k = k + 1$, and go to step 1.

From [5], we know that Algorithm 3.1 has global convergence properties and a quadratic convergence rate, but at each iterative step $t$, namely a multiple of the Newton step, has to be calculated. Observing Table 1.2, we are led to the question whether $t$ approaches or equals 1 when the iterate $X_k$ is near the solvent $S$. The answer is yes, under a mild assumption, as we now show.

Recall that Newton's method defines $E$ by

$$AEX + (AX + B)E = -F(X) \qquad (3.3)$$

and Newton's method with exact line searches defines $t$ by

$$\|F(X_k + tE_k)\|_F = \min_{s \in [0,2]} \|F(X_k + sE_k)\|_F. \qquad (3.4)$$

From (3.3), we have

$$F(X + sE) = (1 - s)F(X) + s^2AE^2. \qquad (3.5)$$
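Identity (3.5) makes the exact line search inexpensive: with $G = AE^2$, the merit function $p(s) = \|(1-s)F(X) + s^2G\|_F^2$ is a quartic polynomial in $s$, so the minimizer over $[0, 2]$ is found among the real roots of the cubic $p'(s)$ and the interval endpoints, which is how the search is organized in [5]. A minimal sketch, with illustrative names:

```python
import numpy as np

def exact_line_search(Fx, G):
    # Minimize p(s) = ||(1-s) Fx + s^2 G||_F^2 over s in [0, 2],
    # where Fx = F(X_k) and G = A E_k^2, using identity (3.5).
    inner = lambda P, Q: float(np.real(np.vdot(P, Q)))  # Frobenius inner product
    a, b, c = inner(Fx, Fx), inner(Fx, G), inner(G, G)
    # p(s) = (1-s)^2 a + 2 s^2 (1-s) b + s^4 c, so
    # p'(s) = 4c s^3 - 6b s^2 + (4b + 2a) s - 2a.
    roots = np.roots([4 * c, -6 * b, 4 * b + 2 * a, -2 * a])
    cands = [0.0, 2.0] + [r.real for r in roots
                          if abs(r.imag) < 1e-12 and 0.0 < r.real < 2.0]
    p = lambda s: (1 - s) ** 2 * a + 2 * s ** 2 * (1 - s) * b + s ** 4 * c
    return min(cands, key=p)
```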

Assume that $X_j$ is within a region of quadratic convergence to $S$, and let $X_{j+1} = X_j + E_j$ and $\widetilde{X}_{j+1} = X_j + tE_j$ be the Newton update and the update with exact line searches, respectively. Defining $\Delta_j = S - X_j$, we have, by Lemma 2.1,

$$\|\Delta_{j+1}\| = O(\|\Delta_j\|^2), \qquad (3.6)$$

and using (3.5) we have

$$\|(1 - t)F(X_j) + t^2AE_j^2\| = \|F(X_j + tE_j)\| \le \|F(X_j + E_j)\| = \|F(S - \Delta_{j+1})\| = O(\|\Delta_j\|^2). \qquad (3.7)$$

Thus $E_j = -\Delta_{j+1} + \Delta_j$, which means that $\|E_j\| = O(\|\Delta_j\|)$, and therefore

$$F(X_j) = F(S - \Delta_j) = -D_S(\Delta_j) + O(\|\Delta_j\|^2), \qquad \text{where } D_S(\Delta_j) = A\Delta_jS + (AS + B)\Delta_j.$$

Hence, as long as the Fréchet derivative is nonsingular at $S$, (3.7) implies that $1 - t = O(\|\Delta_j\|)$. So as long as $X_j$ is sufficiently near the solvent $S$, that is $\|\Delta_j\| \to 0$, then $t \to 1$. In practice, it is difficult to know the solvent $S$ of Eq. (1.1) in advance; hence, for convenience, we use the test $\|F(X_i)\| \le \varepsilon_0$ in place of $\|\Delta_j\| \to 0$. From the above analysis, we have the following algorithm.

Algorithm 3.2 (No. 1)
step 0: Given $X_0$, $\varepsilon_0$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \qquad (3.8)$$
step 3: If error$_k < \varepsilon_0$, go to step 6.
step 4: Find $t$ by exact line searches such that
$$\|F(X_k + tE_k)\|_F = \min_{s \in [0,2]} \|F(X_k + sE_k)\|_F. \qquad (3.9)$$
step 5: Update $X_{k+1} = X_k + tE_k$, $k = k + 1$, and go to step 1.
step 6: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.

Clearly, Algorithm 3.2 not only keeps the global convergence properties and the quadratic convergence rate, but also does not need to compute $t$ when error$_k < \varepsilon_0$. A sketch of this hybrid iteration is given below.
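This sketch of Algorithm 3.2 is assembled from the pieces above (solve_gen_sylvester from Section 2 and exact_line_search from step 4 are assumed available; all names are illustrative):

```python
import numpy as np

def hybrid_newton_qme(A, B, C, X0, eps0=0.1, tol=1e-12, max_iter=100):
    # Algorithm 3.2: exact line searches while ||F(X_k)||_F >= eps0 (steps 4-5),
    # plain Newton steps once the iterate is near the solvent (step 6).
    X = X0.copy()
    for _ in range(max_iter):
        R = A @ X @ X + B @ X + C                     # F(X_k)
        err = np.linalg.norm(R, "fro")
        if err <= tol:
            break
        E = solve_gen_sylvester(A, A @ X + B, X, -R)  # Newton direction, Eq. (3.8)
        if err < eps0:
            X = X + E                                 # step 6: take t = 1
        else:
            t = exact_line_search(R, A @ E @ E)       # steps 4-5
            X = X + t * E
    return X
```

Algorithm 3.3 below differs only once error$_k < \varepsilon_0$: instead of a single Newton step, it takes a second correction with the same coefficient matrix frozen at $X_k$, as in the Samanskii technique.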

From Theorem 2.1, we know that Newton's method with the Samanskii technique ($m = 2$) converges cubically to the solvent $S$, as long as the starting matrix $X_0$ satisfies the condition $\|X_0 - S\| = \varepsilon_0 < \varepsilon = \min\{c_0/(2\|A\|), \delta\}$, where $c_0 = \inf\{\|F'(X)[H]\| : \|H\| = 1, \|X - S\| \le \delta\} > 0$ for sufficiently small $\delta \in (0, 1]$. Hence, by Algorithms 3.1 and 2.3, we get the following algorithm.

Algorithm 3.3 (No. 2)
step 0: Given $X_0$, $\varepsilon_0$ and $\varepsilon$, set $k = 0$.
step 1: If error$_k = \|F(X_k)\|_F \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \qquad (3.10)$$
step 3: If error$_k < \varepsilon_0$, go to step 6.
step 4: Find $t$ by exact line searches such that
$$\|F(X_k + tE_k)\|_F = \min_{s \in [0,2]} \|F(X_k + sE_k)\|_F. \qquad (3.11)$$
step 5: Update $X_{k+1} = X_k + tE_k$, $k = k + 1$, and go to step 1.
step 6: Update $X_{k,1} = X_k + E_k$.
step 7: Solve for $E_{k,1}$ in the generalized Sylvester equation
$$AE_{k,1}X_k + (AX_k + B)E_{k,1} = -F(X_{k,1}). \qquad (3.12)$$
step 8: Update $X_{k+1} = X_{k,1} + E_{k,1}$, $k = k + 1$, and go to step 1.

We know that Newton's method with exact line searches needs $61n^3$ flops per iteration, while Algorithm 3.3 needs $70n^3$ flops per iteration once error $< \varepsilon_0$. Hence Algorithm 3.3 costs only $9n^3$ flops more than Algorithm 3.1 per iterative step once error $< \varepsilon_0$, and $9n^3$ flops may be ignored compared with $61n^3$ flops. It is obvious that Algorithm 3.3 keeps the global convergence properties of Algorithm 3.1 but, by Remark 2.4, has a faster convergence rate than Algorithm 3.1.

4. Numerical results

In this section, we use several numerical examples to illustrate our results. The following notation will be used in this section: $s = i$, $s_1 = i$ and $s_2 = i$ denote the $i$th iteration of Newton's method, Newton's method with exact line searches, and Newton's method with the Samanskii technique, respectively; $X_i$ is the $i$th iterate and error$_i = \|F(X_i)\|_F$; $X$ denotes the computed solution once $\|F(X)\|_F \le \varepsilon$, and error $= \|F(X)\|_F$; CPU denotes the computation time.

Example 4.1. Consider the example mentioned in [14]: $AX^2 + BX + C = 0$ with $A = I_n$, $B = I_n$, $C = -(H^2 + H)$, where $H = (h_{ij})$, $h_{ij} = 1/(i + j - 1)$. Tables 4.1-4.4 list the numerical results with $\varepsilon_0 = 0.1$, starting the iteration with $X_0 = 100I_n$.

Table 4.1
Algorithm 2.1
  n = 20                              n = 50
  s = 1    error_1  = 1.1291e+...     s = 1    error_1  = 1.7853e+...
  s = 5    error_5  = ...             s = 5    error_5  = ...
  s = 8    error_8  = ...             s = 8    error_8  = ...
  s = 9    error_9  = ...             s = 9    error_9  = ...
  s = 10   error_10 = 1.5401e-004     s = 10   error_10 = 2.5560e-004
  s = 11   error_11 = 5.7274e-009     s = 11   error_11 = 9.5571e-009
  s = 12   error_12 = 1.4282e-015     s = 12   error_12 = 2.2820e-015

Table 4.2
Algorithm 3.1
  n = 20                                     n = 50
  s_1 = 1  t = ...  error_1 = ...            s_1 = 1  t = ...  error_1 = ...
  s_1 = 2  t = ...  error_2 = ...            s_1 = 2  t = ...  error_2 = ...
  s_1 = 3  t = ...  error_3 = ...            s_1 = 3  t = ...  error_3 = ...
  s_1 = 4  t = ...  error_4 = 7.4418e-005    s_1 = 4  t = ...  error_4 = 6.3845e-005
  s_1 = 5  t = ...  error_5 = 7.4663e-010    s_1 = 5  t = ...  error_5 = 5.1195e-010
  s_1 = 6  t = ...  error_6 = 2.2122e-015    s_1 = 6  t = ...  error_6 = 2.2820e-015

Observing Tables 4.1-4.4, we see that Algorithm 2.1 needs the most flops and the most iterations of all the algorithms. Algorithm 3.1 is better than Algorithm 2.1, since it needs fewer iterations than Algorithm 2.1, but it needs to compute $t$ at each iteration step. Algorithm 3.2 keeps the merits of Algorithm 3.1 and does not need to compute $t$ when error $< \varepsilon_0$. Algorithm 3.3 needs the fewest iterations of all.

Table 4.3
Algorithm 3.2
  n = 20                                     n = 50
  s_1 = 1  t = ...  error_1 = ...            s_1 = 1  t = ...  error_1 = ...
  s_1 = 2  t = ...  error_2 = ...            s_1 = 2  t = ...  error_2 = ...
  s_1 = 3  t = ...  error_3 = ...            s_1 = 3  t = ...  error_3 = ...
  s = 1    error_4 = 2.2614e-004             s = 1    error_4 = 5.3830e-004
  s = 2    error_5 = 1.2829e-008             s = 2    error_5 = 3.6341e-008
  s = 3    error_6 = 2.2281e-015             s = 3    error_6 = 2.2953e-015

Table 4.4
Algorithm 3.3
  n = 20                                     n = 50
  s_1 = 1  t = ...  error_1 = ...            s_1 = 1  t = ...  error_1 = ...
  s_1 = 2  t = ...  error_2 = ...            s_1 = 2  t = ...  error_2 = ...
  s_1 = 3  t = ...  error_3 = ...            s_1 = 3  t = ...  error_3 = ...
  s_2 = 1  error_4 = 3.3543e-006             s_2 = 1  error_4 = 8.6442e-006
  s_2 = 2  error_5 = 1.7511e-015             s_2 = 2  error_5 = 2.7540e-015

Example 4.2. Consider the example mentioned in [5], $AX^2 + BX + C = 0$, with the matrices $A$, $B$ and $C$ given there. We apply Algorithms 2.1 and 3.1-3.3 for $X_0 = 10^j\,\mathrm{i}I$, $\mathrm{i} = \sqrt{-1}$, $j = 0, 5, 10$, respectively. The results are given in Table 4.5.

Table 4.5 ($\varepsilon_0 = 10^{-1}$): iteration counts ($s$; $s_1$; $s_1$ and $s$; $s_1$ and $s_2$) and final errors of Algorithms 2.1, 3.1, 3.2 and 3.3 for $X_0 = \mathrm{i}I$, $10^5\mathrm{i}I$ and $10^{10}\mathrm{i}I$.

Example 4.3. Consider the equation $AX^2 + BX + C = 0$ with $A, B, C \in \mathbb{R}^{n \times n}$, where $A = I_n$ and $B$, $C$ are tridiagonal matrices; it comes from a damped mass spring system. We take $n = 50, 100, 150$ and solve the equation by Algorithms 3.1-3.3, respectively. Tables 4.6-4.8 list the numerical results, with $\varepsilon_0 = 10^{-1}$ or $10$, starting the iteration with $X_0 = 10^5 I_n$.

Table 4.6 ($n = 50$): iteration counts $s_1$, $s_2$, $s$, final error and CPU time for each algorithm.

Table 4.7 ($n = 100$): iteration counts $s_1$, $s_2$, $s$, final error and CPU time for each algorithm.

Table 4.8 ($n = 150$): iteration counts $s_1$, $s_2$, $s$, final error and CPU time for each algorithm.

It can be seen from Tables 4.6-4.8 that Algorithms 3.2 and 3.3 are better than Algorithms 2.1 and 3.1.
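For reference, a small driver in the spirit of Example 4.1, assuming the reconstruction $C = -(H^2 + H)$ (so that $S = H$ is a solvent by construction) and reusing hybrid_newton_qme from Section 3:

```python
import numpy as np

n = 20
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)                    # h_ij = 1/(i+j-1) with 1-based indices
A, B, C = np.eye(n), np.eye(n), -(H @ H + H)
X = hybrid_newton_qme(A, B, C, X0=100 * np.eye(n))
print(np.linalg.norm(A @ X @ X + B @ X + C, "fro"))  # final residual ||F(X)||_F
print(np.linalg.norm(X - H, "fro"))  # distance to H, if the iterate converged to it
```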

5. Conclusions

Higham and Kim [5] obtained better theoretical and numerical results for solving the quadratic matrix equation by Newton's method with exact line searches than by Newton's method. In this paper we analyzed their algorithms and found that when the iterate is near the solvent $S$ with $F(S) = 0$, it is not necessary to use exact line searches. Hence our Algorithm 3.2 needs less computation than Algorithm 3.1. Our Algorithm 3.3 is the first to apply the Samanskii technique [11], which is often used to construct high-order algorithms for solving nonlinear equations, to the quadratic matrix equation, and it obtains better theoretical and numerical results.

Acknowledgements

The authors thank the anonymous referees for their helpful comments, which improved the paper.

References

[1] G.J. Davis, Numerical solution of a quadratic matrix equation, SIAM J. Sci. Statist. Comput. 2 (1981).
[2] G.J. Davis, Algorithm 598: An algorithm to compute solvents of the matrix equation AX^2 + BX + C = 0, ACM Trans. Math. Software 9 (1983).
[3] E. Stickel, On the Fréchet derivative of matrix functions, Linear Algebra Appl. 91 (1987).
[4] W. Kratz, E. Stickel, Numerical solution of matrix polynomial equations by Newton's method, IMA J. Numer. Anal. 7 (1987).
[5] N.J. Higham, H.-M. Kim, Solving a quadratic matrix equation by Newton's method with exact line searches, SIAM J. Matrix Anal. Appl. 23 (2001).
[6] N.J. Higham, H.-M. Kim, Numerical analysis of a quadratic matrix equation, IMA J. Numer. Anal. 20 (2000).
[7] J.D. Gardiner, A.J. Laub, J.J. Amato, C.B. Moler, Solution of the Sylvester matrix equation AXB^T + CXD^T = E, ACM Trans. Math. Software 18 (1992).
[8] P. Lancaster, Lambda-Matrices and Vibrating Systems, Pergamon Press, Oxford, UK, 1966.
[9] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, 1996.
[10] J.E. Dennis Jr., J.F. Traub, R.P. Weber, Algorithms for solvents of matrix polynomials, SIAM J. Numer. Anal. 15 (1978).
[11] V. Samanskii, On a modification of the Newton method, Ukrain. Mat. Zh. 19 (1967) (in Russian).
[12] F. Tisseur, K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev. 43 (2001).
[13] I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.
[14] H.-M. Kim, Numerical methods for solving a quadratic matrix equation, Ph.D. Thesis, University of Manchester, 2000.
