
Journal of Computational and Applied Mathematics 222 (2008) 645-654
www.elsevier.com/locate/cam

Improved Newton's method with exact line searches to solve quadratic matrix equation

Jian-hui Long, Xi-yan Hu, Lei Zhang
College of Mathematics and Econometrics, Hunan University, Changsha 410082, China

Received 10 April 2007; received in revised form 15 September 2007

This research was supported by the National Natural Science Foundation of China 10571047 and the Doctorate Foundation of the Ministry of Education of China 20060532014. Corresponding author: J.-h. Long, e-mail address jianhuipaper@yahoo.com.cn.
0377-0427/$ - see front matter. © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2007.12.018

Abstract

In this paper, we study the matrix equation $AX^2 + BX + C = 0$, where $A$, $B$ and $C$ are square matrices. We give two improved algorithms which are better than Newton's method with exact line searches for calculating the solution. Some numerical examples are reported to illustrate our algorithms.
© 2007 Elsevier B.V. All rights reserved.

MSC: 65F30; 65H10; 39B42

Keywords: Quadratic matrix equation; Solvent; Newton's method; Exact line search; Samanskii technique

1. Introduction

In this paper, our interest is the simplest quadratic matrix equation
$$AX^2 + BX + C = 0, \tag{1.1}$$
where $A$, $B$ and $C$ are given square matrices. Eq. (1.1) often occurs in many applications, for example the quasi-birth-death processes, the quadratic eigenvalue problem (QEP) and pseudo-spectra for quadratic eigenvalue problems; see [5,8,12,14] for details. Let
$$F(X) = AX^2 + BX + C, \tag{1.2}$$
where $A$, $B$ and $C$ are square matrices. A matrix $S$ is called a solvent of Eq. (1.1) if $F(S) = 0$ [4]; it is called a simple solvent when the Fréchet derivative $F'(S)$ is regular. A dominant solvent is a solvent whose eigenvalues strictly dominate the eigenvalues of all other solvents. The papers [10,13] gave two linearly convergent algorithms for computing a dominant solvent of Eq. (1.1), but these algorithms have the drawbacks that it is difficult to check

in advance whether a dominant solvent exists, and that the convergence can be extremely slow. Davis applied Newton's method to Eq. (1.1) and gave supporting theory and implementation details in [1,2]. Kratz and Stickel investigated Newton's method for matrix polynomials of general degree in [4]. Higham and Kim improved the global convergence properties of Newton's method with exact line searches and gave a complete characterization of the solutions in terms of the generalized Schur decomposition in [5]. Various numerical solution techniques are described and compared in [6]. From the above papers, we know that Newton's method is very attractive for solving Eq. (1.1): it has very nice numerical behaviour, such as quadratic convergence and good numerical stability, but it has the weakness that each iteration is rather expensive. We also know that the modified Newton's method is cheaper than Newton's method at each iteration [4], while it is only linearly convergent.

Consider the quadratic matrix equation $AX^2 + BX + C = 0$ with $A, B, C \in \mathbb{R}^{n\times n}$, $n = 120$,
$$A = I_n,\qquad
B = \begin{pmatrix} 20 & -10 & & & \\ -10 & 30 & -10 & & \\ & \ddots & \ddots & \ddots & \\ & & -10 & 30 & -10 \\ & & & -10 & 20 \end{pmatrix},\qquad
C = \begin{pmatrix} 15 & -5 & & & \\ -5 & 15 & -5 & & \\ & \ddots & \ddots & \ddots & \\ & & -5 & 15 & -5 \\ & & & -5 & 15 \end{pmatrix}.$$
Tables 1.1 and 1.2 list the numerical results with starting matrix $X_0 = 10^4 I_n$ for Newton's method and for Newton's method with exact line searches, respectively, where error $= \|F(X_i)\|_F$, $\|\cdot\|_F$ denotes the Frobenius norm, and $t$ is the real step size scalar in the Newton direction.

Table 1.1
Newton's method

Iteration i   Error
1             2.7394e+010
3             1.7121e+009
5             1.0701e+008
7             6.6871e+006
9             4.1706e+005
11            2.5220e+004
13            1.1082e+003
15            20.6248
16            0.7567
17            0.0017
18            1.1049e-008
19            5.3417e-014

Table 1.2
Newton's method with exact line searches

Iteration i   t        Error
1             1.9997   2.2820e+003
2             0.8375   427.5598
3             1.1719   36.4079
4             1.0222   1.0573
5             1.0018   0.0015
6             1.0000   5.0189e-009
7             1.0000   3.6296e-014

It can be seen from Tables 1.1 and 1.2 that when error $> 10^{-1}$, $\|F(X_i)\|_F$ decreases faster with Newton's method with exact line searches than with Newton's method, while when error $< 10^{-1}$ Newton's method with exact line searches has the same convergence rate as Newton's method and $t$ is sufficiently near or equal to 1.

The paper has two main contributions. First, we combine Newton's method with exact line searches and Newton's method in order to reduce the cost of computation. Second, we combine Newton's method with exact line searches and Newton's method with the Samanskii technique in order to obtain faster convergence than Newton's method with exact line searches.

The paper is organized as follows. In Section 2, we introduce some notations and lemmas. In Section 3, two algorithms are presented and analyzed. In order to illustrate our results, several numerical examples are reported in Section 4. Finally, some conclusions are given in Section 5.
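For concreteness, here is a minimal NumPy sketch of this test problem and of the error measure $\|F(X_i)\|_F$. The tridiagonal form of $B$ and $C$ (with negative off-diagonal entries) is reconstructed from the display above, and the function names are illustrative rather than taken from the paper.

```python
import numpy as np

def mass_spring_coefficients(n):
    """Damped mass-spring test problem of Section 1: A = I, B and C tridiagonal."""
    A = np.eye(n)
    B = 30.0 * np.eye(n) - 10.0 * (np.eye(n, k=1) + np.eye(n, k=-1))
    B[0, 0] = B[-1, -1] = 20.0
    C = 15.0 * np.eye(n) - 5.0 * (np.eye(n, k=1) + np.eye(n, k=-1))
    return A, B, C

def residual_norm(A, B, C, X):
    """error = ||F(X)||_F with F(X) = A X^2 + B X + C."""
    return np.linalg.norm(A @ X @ X + B @ X + C, 'fro')

# starting matrix used for Tables 1.1 and 1.2 (n = 120, X0 = 1e4 * I)
A, B, C = mass_spring_coefficients(120)
X0 = 1e4 * np.eye(120)
print(residual_norm(A, B, C, X0))
```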

2. Some notations and lemmas

For matrices $A, B \in \mathbb{C}^{n\times n}$ the Kronecker product is the matrix $A \otimes B \in \mathbb{C}^{n^2\times n^2}$ given by
$$A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{pmatrix}, \qquad A = (a_{ij}),$$
and the vec function of the matrix $A$ is the column vector $\operatorname{vec}(A) = (a_1^{\mathrm T}, \ldots, a_n^{\mathrm T})^{\mathrm T} \in \mathbb{C}^{n^2}$, where $a_1, \ldots, a_n \in \mathbb{C}^n$ are the columns of $A$.

Definition 2.1 ([3,4,14]). Let $G(X) = 0$ be a nonlinear matrix equation, where $G : \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$. If
$$G(X + E) - G(X) = G'(X)[E] + G_1(E),$$
where $G_1 : \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$, $G'(X)$ is a bounded linear operator and $\|G_1(E)\|/\|E\| \to 0$ as $\|E\| \to 0$, then the function $G$ is said to be Fréchet differentiable at $X$ with Fréchet derivative $G'(X)[E]$ in the direction $E$.

From Eq. (1.1), we obtain
$$F(X + E) = F(X) + AEX + (AX + B)E + AE^2. \tag{2.1}$$
Definition 2.1 implies that the Fréchet derivative of Eq. (1.1) at $X$ in the direction $E$ is given by
$$F'(X)[E] = AEX + (AX + B)E. \tag{2.2}$$
We call $F'(X)$ regular if the mapping $F'(X)$ is injective.

Algorithm 2.1 (Newton's Method).
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: Compute $\mathrm{error}_k = \|F(X_k)\|_F$; if $\mathrm{error}_k \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \tag{2.3}$$
step 3: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.

Lemma 2.1 ([14]). Suppose that $S \in \mathbb{C}^{n\times n}$ is a simple solvent of (1.1), i.e. $F(S) = 0$ and $F'(S)$ is regular. Then, if the starting matrix $X_0$ is sufficiently near $S$, the sequence $\{X_k\}$ produced by Algorithm 2.1 converges quadratically to $S$. More precisely, if $\|X_0 - S\| = \varepsilon_0 < \varepsilon = \min\{c_0/\|A\|, \delta\}$, where
$$c_0 = \inf\{\|F'(X)[H]\| : \|H\| = 1,\ \|X - S\| \le \delta\} > 0$$
for sufficiently small $\delta \in (0, 1]$, then we have:
(i) $\lim_{k\to\infty} X_k = S$;
(ii) $\|X_k - S\| \le \varepsilon_0 q^k$ with $q = \varepsilon_0\|A\|/c_0 < 1$, for $k = 0, 1, 2, \ldots$;
(iii) $\|X_{k+1} - S\| \le (\|A\|/c_0)\|X_k - S\|^2$ for $k = 0, 1, 2, \ldots$.

Remark 2.1. Algorithm 2.1 has a quadratic convergence rate, but it also has a weakness: solving the generalized Sylvester equation that defines $E_k$ takes at least $56n^3$ flops, so each iteration is rather expensive.
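The following is a minimal sketch of Algorithm 2.1. Purely for clarity, the correction $E_k$ is obtained here from the Kronecker form $\operatorname{vec}(F'(X)[E]) = (X^{\mathrm T}\otimes A + I\otimes(AX+B))\operatorname{vec}(E)$ implied by the definitions above; this costs $O(n^6)$ and is meant only to illustrate the iteration, not the efficient Sylvester solver discussed at the end of this section. The function name is illustrative.

```python
import numpy as np

def newton_qme(A, B, C, X0, tol=1e-12, maxit=50):
    """Algorithm 2.1 (Newton's method) for A X^2 + B X + C = 0.

    The Newton correction solves A E X + (A X + B) E = -F(X); here this
    generalized Sylvester equation is solved via its Kronecker form
    (X^T kron A + I kron (A X + B)) vec(E) = -vec(F(X)),
    with column-stacking vec.  Illustrative only (O(n^6) per step).
    """
    n = X0.shape[0]
    X = X0.copy()
    for k in range(maxit):
        F = A @ X @ X + B @ X + C
        if np.linalg.norm(F, 'fro') <= tol:
            break
        J = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)   # F'(X) as an n^2-by-n^2 matrix
        vecE = np.linalg.solve(J, -F.flatten(order='F'))
        X = X + vecE.reshape((n, n), order='F')
    return X
```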

Algorithm 2.2 (Modified Newton's Method).
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: Compute $\mathrm{error}_k = \|F(X_k)\|_F$; if $\mathrm{error}_k \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_0 + (AX_0 + B)E_k = -F(X_k). \tag{2.4}$$
step 3: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.

Lemma 2.2 ([4]). Suppose that $X_0$ is an $n\times n$ matrix, and
(i) $F'(X_0)$ is regular, i.e. $c_0 = \inf\{\|F'(X_0)[H]\| : \|H\| = 1\} > 0$;
(ii) $\|F(X_0)\| = \varepsilon_0 \le \varepsilon = \min\{c_0^2/(4\|A\|),\ c_0/2\}$.
Then there exists $S \in \mathbb{C}^{n\times n}$ such that $F(S) = 0$ and $\|S - X_0\| \le 2\varepsilon_0/c_0$. More precisely, the sequence $\{X_k\}$ given by Algorithm 2.2 satisfies:
(i) $\lim_{k\to\infty} X_k = S$ exists with $F(S) = 0$;
(ii) $\|X_k - S\| \le q\|X_{k-1} - S\| \le (2\varepsilon_0/c_0)\,q^k$;
(iii) $\|X_{k+1} - X_k\| \le q\|X_k - X_{k-1}\|$ for $k = 1, 2, \ldots$, where $q = 4\varepsilon_0\|A\|/c_0^2 < 1$.

Remark 2.2. Algorithm 2.2 is only linearly convergent, which means possibly very slow convergence, but at each iterative step only a linear system with the same coefficient matrix has to be solved.

Algorithm 2.3 (Newton's Method with Samanskii Technique [11]).
step 0: Given $X_0$, $m$ and $\varepsilon$, set $k = 0$.
step 1: Compute $\mathrm{error}_k = \|F(X_k)\|_F$; if $\mathrm{error}_k \le \varepsilon$, stop.
step 2: Let $X_{k,0} = X_k$, $i = 1$.
step 3: If $i > m$, go to step 6.
step 4: Solve for $E_{k,i-1}$ in the generalized Sylvester equation
$$AE_{k,i-1}X_k + (AX_k + B)E_{k,i-1} = -F(X_{k,i-1}). \tag{2.5}$$
step 5: Update $X_{k,i} = X_{k,i-1} + E_{k,i-1}$, $i = i + 1$, and go to step 3.
step 6: Update $X_{k+1} = X_{k,m}$, $k = k + 1$, and go to step 1.

Remark 2.3. If $m = 1$, then Algorithm 2.3 is Newton's method. In this paper, we only consider the case $m = 2$. In what follows we investigate the properties of Algorithm 2.3; a schematic sketch of one outer step is given below.
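A schematic sketch of one outer step of Algorithm 2.3 with $m = 2$: the Fréchet derivative is frozen at $X_k$ (here represented by its Kronecker-form matrix, factorized once by LU) and reused for both corrections. The efficient variant would instead reuse the decompositions described at the end of this section; the helper name is illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def samanskii_step(A, B, C, X):
    """One outer step of Algorithm 2.3 with m = 2 (Newton step + one chord correction).

    The matrix of F'(X_k) is factorized once and reused for both inner solves,
    which is what makes the extra correction cheap.
    """
    n = X.shape[0]
    J = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)   # frozen derivative F'(X_k)
    lu = lu_factor(J)

    def correction(Y):
        F = A @ Y @ Y + B @ Y + C
        vecE = lu_solve(lu, -F.flatten(order='F'))
        return vecE.reshape((n, n), order='F')

    X1 = X + correction(X)      # ordinary Newton step (i = 1)
    return X1 + correction(X1)  # Samanskii correction with the same derivative (i = 2)
```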

Theorem 2.1. Suppose that $S \in \mathbb{C}^{n\times n}$ is a simple solvent of (1.1), i.e. $F(S) = 0$ and $F'(S)$ is regular. Then, if the starting matrix $X_0$ is sufficiently near $S$, the sequence $\{X_k\}$ produced by Algorithm 2.3 (with $m = 2$) converges cubically to $S$. More precisely, if $\|X_0 - S\| = \varepsilon_0 < \varepsilon = \min\{c_0/(2\|A\|), \delta\}$, where
$$c_0 = \inf\{\|F'(X)[H]\| : \|H\| = 1,\ \|X - S\| \le \delta\} > 0$$
for sufficiently small $\delta \in (0, 1]$, then we have:
(i) $\|X_{k+1} - S\| \le \eta\|X_k - S\|^3$ with $\eta = (\|A\|^2/c_0^2)\,(2 + \|A\|\varepsilon_0/c_0)$, $k = 0, 1, 2, \ldots$;
(ii) $\|X_k - S\| \le \varepsilon_0 q^k$, $q = \eta\varepsilon_0^2 < 1$, $k = 1, 2, \ldots$;
(iii) $\lim_{k\to\infty} X_k = S$.

Proof. From Lemma 2.1, we obtain that $X_{k,1}$ is well defined by
$$X_{k,1} = X_k + E, \qquad AEX_k + (AX_k + B)E = -F(X_k), \tag{2.6}$$
for $X_k \in V = \{X : \|X - S\| \le \varepsilon_0 < \varepsilon = \min\{c_0/(2\|A\|), \delta\}\}$, and $\|X_{k,1} - S\| \le (\|A\|/c_0)\|X_k - S\|^2$. Hence $X_{k+1}$ is well defined by
$$X_{k+1} = X_{k,1} + H, \qquad AHX_k + (AX_k + B)H = -F(X_{k,1}), \tag{2.7}$$
with $X_{k,1} \in \{X : \|X - S\| < \varepsilon_1\} \subseteq V$, where $\varepsilon_1 = (\|A\|/c_0)\varepsilon_0^2$. By Lemma 2.1 and the properties of the Kronecker product and the vec function, we obtain
$$\begin{aligned}
\|X_{k+1} - S\| &= \|X_{k,1} + H - S\| = \|\operatorname{vec}(X_{k,1} - S) + \operatorname{vec} H\| \\
&= \|\operatorname{vec}(X_{k,1} - S) - F'(X_k)^{-1}\operatorname{vec} F(X_{k,1})\| \\
&\le \frac{1}{c_0}\,\|\operatorname{vec} F'(X_k)[X_{k,1} - S] - \operatorname{vec} F(X_{k,1})\| \\
&= \frac{1}{c_0}\,\|F'(X_k)[X_{k,1} - S] - F(X_{k,1}) + F(S)\| \\
&\le \frac{\|A\|}{c_0}\,\|X_{k,1} - S\|\,\bigl(2\|X_k - S\| + \|X_{k,1} - S\|\bigr) \\
&\le \frac{\|A\|^2}{c_0^2}\,\|X_k - S\|^2\left(2\|X_k - S\| + \frac{\|A\|}{c_0}\|X_k - S\|^2\right) \\
&\le \frac{\|A\|^2}{c_0^2}\left(2 + \frac{\|A\|}{c_0}\varepsilon_0\right)\|X_k - S\|^3 = \eta\|X_k - S\|^3,
\end{aligned}$$
where $F'(X_k) = X_k^{\mathrm T} \otimes A + I \otimes (AX_k + B)$. So assertion (i) is proved. We now obtain assertion (ii) by induction. Since $\|X_0 - S\| = \varepsilon_0 \le \varepsilon$ is given, assume that $\|X_k - S\| \le \varepsilon_0 q^k$ holds. Since $q = \varepsilon_0^2\eta < 1$ is obvious, we have $\|X_k - S\| \le \varepsilon_0$, and therefore
$$\|X_{k+1} - S\| \le \eta\|X_k - S\|^3 \le \eta\varepsilon_0^2\|X_k - S\| = q\|X_k - S\| \le \varepsilon_0 q^{k+1};$$
hence assertion (ii) holds by the induction principle. It is obvious that (ii) implies (iii). □

Algorithms 2.1 and 2.3 are now compared in terms of computational cost. The heart of both algorithms is the solution of the generalized Sylvester equation
$$AEX + (AX + B)E = -F(X). \tag{2.8}$$
To solve (2.8) we can adapt the methods described in [5,7] as follows.
(1) Compute the Hessenberg-triangular decomposition [9] of $A$ and $AX + B$:
$$W^*AZ = T, \qquad W^*(AX + B)Z = H, \qquad 15n^3 \text{ flops}, \tag{2.9}$$
where $W$ and $Z$ are unitary, $T$ is upper triangular and $H$ is upper Hessenberg.
(2) Compute the Schur decomposition [9] of $X$:
$$U^*XU = R, \qquad 25n^3 \text{ flops}, \tag{2.10}$$
where $U$ is unitary and $R$ is upper triangular.
(3) Form the transformed right-hand side
$$\widetilde F = -W^*F(X)U, \qquad 4n^3 \text{ flops}. \tag{2.11}$$
(4) Recover $E$ from $Y$:
$$E = ZYU^*, \qquad 4n^3 \text{ flops}. \tag{2.12}$$
(5) Solve the upper Hessenberg systems
$$(H + r_{kk}T)\,y_k = f_k - \sum_{i=1}^{k-1} r_{ik}\,T y_i, \qquad 4n^3 \text{ flops}, \tag{2.13}$$
where $f_k$ and $y_k$ denote the $k$th columns of $\widetilde F$ and $Y$.

Thus the computational cost of obtaining $X_{k+1}$ from $X_k$ is about $56n^3$ flops for Algorithm 2.1. For Algorithm 2.3, in order to obtain $X_{k+1}$ we need to solve two generalized Sylvester equations with the same coefficient matrices, so the cost is about $70n^3$ flops. Algorithm 2.3 therefore costs only $14n^3$ flops more than Algorithm 2.1 per iteration, but it converges cubically to the solvent $S$.

Remark 2.4. If the starting matrix $X_0$ is sufficiently near the solvent of Eq. (1.1), then Newton's method with the Samanskii technique converges faster than Newton's method. Its computational procedure only adds an inner loop to Newton's method, and for $m = 1$ it reduces to Newton's method.
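A sketch of steps (1)-(5) in NumPy/SciPy follows. One simplification relative to the paper: scipy.linalg.qz computes the full complex generalized Schur form, so $T$ and $H$ below are both upper triangular instead of triangular/Hessenberg; the column-by-column solve (2.13) is otherwise the same. The function name is illustrative.

```python
import numpy as np
from scipy.linalg import qz, schur

def newton_correction(A, B, C, X):
    """Solve A E X + (A X + B) E = -F(X) by the transformations (2.9)-(2.13).

    Uses the complex generalized Schur (QZ) form of (A, A X + B) in place of the
    Hessenberg-triangular reduction, so T and H are both upper triangular.
    """
    n = X.shape[0]
    F = A @ X @ X + B @ X + C
    # (1) W^H A Z = T,  W^H (A X + B) Z = H
    T, H, W, Z = qz(A, A @ X + B, output='complex')
    # (2) Schur form of X:  U^H X U = R, R upper triangular
    R, U = schur(X, output='complex')
    # (3) transformed right-hand side
    Ft = -W.conj().T @ F @ U
    # (5) solve (H + r_kk T) y_k = f_k - sum_{i<k} r_ik T y_i, column by column
    Y = np.zeros((n, n), dtype=complex)
    for k in range(n):
        rhs = Ft[:, k] - T @ (Y[:, :k] @ R[:k, k])
        Y[:, k] = np.linalg.solve(H + R[k, k] * T, rhs)
    # (4) transform back: E = Z Y U^H (imaginary part negligible for real data)
    return Z @ Y @ U.conj().T
```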

3. Two new algorithms

Algorithm 3.1 (Newton's Method with Exact Line Searches).
step 0: Given $X_0$ and $\varepsilon$, set $k = 0$.
step 1: Compute $\mathrm{error}_k = \|F(X_k)\|_F$; if $\mathrm{error}_k \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \tag{3.1}$$
step 3: Find $t$ by exact line searches such that
$$\|F(X_k + tE_k)\|_F = \min_{s\in[0,2]}\|F(X_k + sE_k)\|_F. \tag{3.2}$$
step 4: Update $X_{k+1} = X_k + tE_k$, $k = k + 1$, and go to step 1.

From [5], we know that Algorithm 3.1 has global convergence properties and a quadratic convergence rate, but at each iterative step $t$, namely a multiple of the Newton step, has to be calculated. Observing Table 1.2, the question arises whether $t$ approaches or equals 1 when the iterate $X_k$ is near the solvent $S$. The answer is yes, under a mild assumption, as we now show. Recall that Newton's method defines $E$ by
$$AEX + (AX + B)E = -F(X), \tag{3.3}$$
and Newton's method with exact line searches defines $t$ by
$$\|F(X_k + tE_k)\|_F = \min_{s\in[0,2]}\|F(X_k + sE_k)\|_F. \tag{3.4}$$
From (3.3), we have
$$F(X + sE) = (1 - s)F(X) + s^2AE^2. \tag{3.5}$$
Assume that $X_j$ is within a region of quadratic convergence to $S$, and let $X_{j+1} = X_j + E_j$ be the Newton update and $X_j + tE_j$ the update with exact line searches. Defining $\Delta_j = S - X_j$, we have, by Lemma 2.1,
$$\Delta_{j+1} = O(\|\Delta_j\|^2), \tag{3.6}$$
and using (3.5) we have
$$\|(1 - t)F(X_j) + t^2AE_j^2\| = \|F(X_j + tE_j)\| \le \|F(X_j + E_j)\| = \|F(S - \Delta_{j+1})\| = O(\|\Delta_j\|^2). \tag{3.7}$$
Thus $E_j = \Delta_j - \Delta_{j+1}$, which means that $\|E_j\| = O(\|\Delta_j\|)$, and therefore, expanding $F$ about $S$ as in (2.1),
$$F(X_j) = F(S - \Delta_j) = -D_S(\Delta_j) + O(\|\Delta_j\|^2), \qquad\text{where } D_S(\Delta_j) = A\Delta_jS + (AS + B)\Delta_j.$$
Hence, as long as the Fréchet derivative is nonsingular at $S$, (3.7) implies that $1 - t = O(\|\Delta_j\|)$. So as long as $X_j$ is sufficiently near the solvent $S$, that is $\|\Delta_j\| \to 0$, then $t \to 1$. In practice, it is difficult to know the solvent $S$ of Eq. (1.1) in advance; hence we use the condition $\|F(X_i)\| \le \varepsilon_0$ in place of $\|\Delta_j\| \to 0$ for convenience. Based on the above analysis, we have the following algorithm.

Algorithm 3.2 (No. 1).
step 0: Given $X_0$, $\varepsilon_0$ and $\varepsilon$, set $k = 0$.
step 1: Compute $\mathrm{error}_k = \|F(X_k)\|_F$; if $\mathrm{error}_k \le \varepsilon$, stop.
step 2: Solve for $E_k$ in the generalized Sylvester equation
$$AE_kX_k + (AX_k + B)E_k = -F(X_k). \tag{3.8}$$
step 3: If $\mathrm{error}_k < \varepsilon_0$, go to step 6.
step 4: Find $t$ by exact line searches such that
$$\|F(X_k + tE_k)\|_F = \min_{s\in[0,2]}\|F(X_k + sE_k)\|_F. \tag{3.9}$$
step 5: Update $X_{k+1} = X_k + tE_k$, $k = k + 1$, and go to step 1.
step 6: Update $X_{k+1} = X_k + E_k$, $k = k + 1$, and go to step 1.
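Equation (3.5) is what makes the exact line search cheap: $g(s) = \|F(X_k + sE_k)\|_F^2 = \|(1-s)F(X_k) + s^2AE_k^2\|_F^2$ is a quartic polynomial in $s$, so its minimizer on $[0, 2]$ can be found from the real roots of the cubic $g'(s)$. A minimal sketch (the helper name and arguments are illustrative):

```python
import numpy as np

def exact_step_length(F, G):
    """Minimize g(s) = ||(1-s) F + s^2 G||_F^2 over s in [0, 2]  (cf. (3.2) and (3.5)).

    F = F(X_k), G = A E_k^2.  g is a quartic in s, so the candidate minimizers
    are the interval endpoints and the real roots of the cubic g'(s).
    """
    a = np.real(np.vdot(F, F))     # ||F||_F^2
    b = np.real(np.vdot(F, G))     # <F, G>
    c = np.real(np.vdot(G, G))     # ||G||_F^2
    g = lambda s: (1 - s) ** 2 * a + 2 * (1 - s) * s ** 2 * b + s ** 4 * c
    # g'(s) = 4c s^3 - 6b s^2 + (2a + 4b) s - 2a
    candidates = [0.0, 2.0]
    for r in np.roots([4 * c, -6 * b, 2 * a + 4 * b, -2 * a]):
        if abs(r.imag) < 1e-12 and 0.0 <= r.real <= 2.0:
            candidates.append(float(r.real))
    return min(candidates, key=g)
```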

and using (35) we have J-h Long et al / Journal of Computational and Applied Mathematics 222 (2008) 645 654 651 (1 t)f(x j ) + t 2 AE 2 j = F(X j + t E j ) F(X j + E j ) = F(S j+1 ) = O( j 2 ) (37) Thus E j = j+1 + j, which means that E j = O( j ) and, Therefore (33) says F(X j ) = F(S j ) = D S ( j ) + O( j 2 ), where D S ( j ) = A j S + (AS + B) j Hence, as long as the Frechet derivative is nonsingular at S, (37) implies that 1 t = O( j ) So as long as X j is sufficient near the solvent S, that is j 0, then t 1 In practice, it is difficult to know in advance the solvent S of Eq (11), hence, we use F(X i ) ε 0 to replace j 0 for convenience Upon all above analysis, we have the following algorithm Algorithm 32 (No 1) step 0: Given X 0, ε 0 and ε, set k = 0 step 2: Solve for E k in generalized Sylvester equation AE k X k + (AX k + B)E k = F(X k ) (38) step 3: If error k < ε 0, go to step 6 step 4: Find t by exact line searches such that F(X k + t E k ) F = min s [0,2] F(X k + s E k ) F (39) step 5: Update X k+1 = X k + t E k, k = k + 1, and go to step 1 step 6: Update X k+1 = X k + E k, k = k + 1, and go to step 1, where error k = F(X k ) F Clearly Algorithm 32 not only keeps the global convergence properties and the quadratic convergence rate, but also need not to compute t when error k < ε 0 From Theorem 21, we know that Newton s method with Samskii technique (m = 2) converges cubically to the solvent S, so long as the starting matrix X 0 satisfying the conditions X 0 S = ε 0 < ε = min{ 2 A, δ}, where = inf{ F (X)[H] : H = 1, X S δ} > 0 for sufficient small δ (0, 1] Hence, by Algorithms 31 and 23, we get the following algorithm Algorithm 33 (No 2) step 0: Given X 0, ε 0 and ε, set k = 0 step 2: Solve for E k in generalized Sylvester equation AE k X k + (AX k + B)E k = F(X k ) (310) step 3: If error k < ε 0, go to step 6 step 4: Find t by exact line searches such that F(X k + t E k ) F = min s [0,2] F(X k + s E k ) F (311) step 5: Update X k+1 = X k + t E k, k = k + 1, and go to step 1 step 6: Update X k,1 = X k + E k

We know that Newton's method with exact line searches needs $61n^3$ flops per iteration, while Algorithm 3.3 needs $70n^3$ flops per iteration once $\mathrm{error} < \varepsilon_0$. Hence Algorithm 3.3 costs only $9n^3$ flops more than Algorithm 3.1 per iterative step when $\mathrm{error} < \varepsilon_0$, and $9n^3$ flops is negligible compared with $61n^3$ flops. It is obvious that Algorithm 3.3 keeps the global convergence properties of Algorithm 3.1 but, by Remark 2.4, has a faster convergence rate than Algorithm 3.1.

4. Numerical results

In this section, we use several numerical examples to illustrate our results. The following notations will be used: $s = i$, $s_1 = i$ and $s_2 = i$ denote the $i$th iteration of Newton's method, Newton's method with exact line searches, and Newton's method with the Samanskii technique, respectively; $X_i$ is the $i$th iterate and $\mathrm{error}_i = \|F(X_i)\|_F$; $X$ denotes the computed solution once $\|F(X)\|_F \le \varepsilon$, and $\mathrm{error} = \|F(X)\|_F$; CPU denotes the computation time.

Example 4.1. Consider the example mentioned in [14]: $AX^2 + BX + C = 0$ with $A = I_n$, $B = I_n$, $C = -(H^2 + H)$, where $H = (h_{ij})$, $h_{ij} = 1/(i + j - 1)$. Tables 4.1-4.4 list the numerical results with $\varepsilon_0 = 0.1$, $\varepsilon = 10^{-11}$, starting the iteration with $X_0 = 100I_n$.

Observing Tables 4.1-4.4, we see that Algorithm 2.1 needs more flops and more iterations than the other algorithms. Algorithm 3.1 is better than Algorithm 2.1, since it needs fewer iterations, but it has to compute $t$ at each iteration step. Algorithm 3.2 keeps the merits of Algorithm 3.1 and does not need to compute $t$ when $\mathrm{error} < \varepsilon_0$. Algorithm 3.3 needs the fewest iterations of all.

Table 4.1
Algorithm 2.1

n = 20                          n = 50
s = 1   error_1 = 1.1291e+004   s = 1   error_1 = 1.7853e+004
s = 5   error_5 = 43.3420       s = 5   error_5 = 68.8583
s = 8   error_8 = 0.3885        s = 8   error_8 = 0.6346
s = 9   error_9 = 0.0258        s = 9   error_9 = 0.0424
s = 10  error_10 = 1.5401e-004  s = 10  error_10 = 2.5560e-004
s = 11  error_11 = 5.7274e-009  s = 11  error_11 = 9.5571e-009
s = 12  error_12 = 1.4282e-015  s = 12  error_12 = 2.2820e-015

Table 4.2
Algorithm 3.1

n = 20                                      n = 50
s_1 = 1  t = 1.9849  error_1 = 5.3244       s_1 = 1  t = 1.9872  error_1 = 6.3133
s_1 = 2  t = 0.5109  error_2 = 0.7510       s_1 = 2  t = 0.4331  error_2 = 0.7949
s_1 = 3  t = 1.1099  error_3 = 0.0330       s_1 = 3  t = 0.9954  error_3 = 0.0690
s_1 = 4  t = 1.0066  error_4 = 7.4418e-005  s_1 = 4  t = 1.0079  error_4 = 6.3845e-005
s_1 = 5  t = 1.0000  error_5 = 7.4663e-010  s_1 = 5  t = 1.0000  error_5 = 5.1195e-010
s_1 = 6  t = 1.0000  error_6 = 2.2122e-015  s_1 = 6  t = 1.0000  error_6 = 2.2820e-015

Table 4.3
Algorithm 3.2

n = 20                                      n = 50
s_1 = 1  t = 1.9849  error_1 = 5.3244       s_1 = 1  t = 1.9872  error_1 = 6.3133
s_1 = 2  t = 0.5109  error_2 = 0.7510       s_1 = 2  t = 0.4331  error_2 = 0.7949
s_1 = 3  t = 1.1099  error_3 = 0.0330       s_1 = 3  t = 0.9954  error_3 = 0.0690
s = 1    error_4 = 2.2614e-004              s = 1    error_4 = 5.3830e-004
s = 2    error_5 = 1.2829e-008              s = 2    error_5 = 3.6341e-008
s = 3    error_6 = 2.2281e-015              s = 3    error_6 = 2.2953e-015

Table 4.4
Algorithm 3.3

n = 20                                      n = 50
s_1 = 1  t = 1.9849  error_1 = 5.3244       s_1 = 1  t = 1.9872  error_1 = 6.3133
s_1 = 2  t = 0.5109  error_2 = 0.7510       s_1 = 2  t = 0.4331  error_2 = 0.7949
s_1 = 3  t = 1.1099  error_3 = 0.0330       s_1 = 3  t = 0.9954  error_3 = 0.0690
s_2 = 1  error_4 = 3.3543e-006              s_2 = 1  error_4 = 8.6442e-006
s_2 = 2  error_5 = 1.7511e-015              s_2 = 2  error_5 = 2.7540e-015
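For reference, here is a minimal construction of the Example 4.1 data. The sign of $C$ is reconstructed as $-(H^2 + H)$, under which $S = H$ is an exact solvent of $AX^2 + BX + C = 0$ with $A = B = I$; this reconstruction and the variable names are assumptions made for illustration only.

```python
import numpy as np

n = 20
I, J = np.indices((n, n))
H = 1.0 / (I + J + 1)            # h_ij = 1/(i + j - 1) with 1-based indices
A = np.eye(n)
B = np.eye(n)
C = -(H @ H + H)                 # with this sign, S = H satisfies A S^2 + B S + C = 0
X0 = 100.0 * np.eye(n)           # starting matrix used in Tables 4.1-4.4
print(np.linalg.norm(A @ H @ H + B @ H + C, 'fro'))   # ~ 0: H is a solvent
```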

Example 4.2. Consider the example mentioned in [5]: $AX^2 + BX + C = 0$ with
$$A = \begin{pmatrix} 17.6 & 1.28 & 2.89 \\ 1.28 & 0.84 & 0.413 \\ 2.89 & 0.413 & 0.725 \end{pmatrix},\qquad
B = \begin{pmatrix} 7.66 & 2.45 & 2.1 \\ 0.23 & 1.04 & 0.223 \\ 0.6 & 0.756 & 0.658 \end{pmatrix},\qquad
C = \begin{pmatrix} 121 & 18.9 & 15.9 \\ 0 & 2.7 & 0.145 \\ 11.9 & 3.64 & 15.5 \end{pmatrix}.$$
We apply Algorithms 2.1 and 3.1-3.3 with $X_0 = 10^j\,\mathrm{i}\,I$, $\mathrm{i} = \sqrt{-1}$, $j = 0, 5, 10$, respectively. The results are given in Table 4.5.

Table 4.5
$\varepsilon_0 = 10^{-1}$

              Algorithm 2.1     Algorithm 3.1     Algorithm 3.2           Algorithm 3.3
X_0           s    Error        s_1  Error        s_1  s   Error          s_1  s_2  Error
i I           8    1.4879e-14   6    5.5123e-14   4    2   3.2015e-13     5    1    3.1657e-14
10^5 i I      20   2.2907e-14   6    3.9046e-14   4    2   1.5796e-14     5    1    4.4408e-14
10^10 i I     37   4.8876e-14   7    2.8032e-14   5    2   7.3893e-14     5    2    1.8589e-14

Example 4.3. Consider the equation $AX^2 + BX + C = 0$ with $A, B, C \in \mathbb{R}^{n\times n}$,
$$A = I_n,\qquad
B = \begin{pmatrix} 20 & -10 & & & \\ -10 & 30 & -10 & & \\ & \ddots & \ddots & \ddots & \\ & & -10 & 30 & -10 \\ & & & -10 & 20 \end{pmatrix},\qquad
C = \begin{pmatrix} 15 & -5 & & & \\ -5 & 15 & -5 & & \\ & \ddots & \ddots & \ddots & \\ & & -5 & 15 & -5 \\ & & & -5 & 15 \end{pmatrix}.$$
It comes from a damped mass-spring system. We take $n = 50, 100, 150$ and solve the equation by Algorithms 2.1 and 3.1-3.3, respectively. Tables 4.6-4.8 list the numerical results, with $\varepsilon_0 = 10^{-1}$ or $10$, $\varepsilon = 10^{-12}$, starting the iteration with $X_0 = 10^5 I_n$. It can be seen from Tables 4.6-4.8 that Algorithms 3.2 and 3.3 are better than Algorithms 2.1 and 3.1.

Table 4.6
n = 50

Algorithm   s_1   s_2   s    Error         CPU
2.1         0     0     19   2.9565e-014   0.365935
3.1         7     0     0    2.2575e-014   0.171445
3.2         4     0     3    2.0640e-014   0.158533   (eps_0 = 10)
3.3         5     1     0    4.2407e-014   0.148978   (eps_0 = 0.1)

Table 4.7
n = 100

Algorithm   s_1   s_2   s    Error         CPU
2.1         0     0     19   4.9027e-014   3.482728
3.1         7     0     0    2.8602e-014   1.705694
3.2         4     0     3    2.1056e-014   1.683122   (eps_0 = 10)
3.3         5     1     0    6.2580e-014   1.509677   (eps_0 = 0.1)

Table 4.8
n = 150

Algorithm   s_1   s_2   s    Error         CPU
2.1         0     0     19   5.8728e-014   10.466781
3.1         7     0     0    3.4548e-014   4.754391
3.2         4     0     3    2.8255e-014   4.683742   (eps_0 = 10)
3.3         5     1     0    7.5513e-014   4.397326   (eps_0 = 0.1)

5. Conclusions

Higham and Kim [5] obtained better theoretical and numerical results for solving the quadratic matrix equation by Newton's method with exact line searches than by Newton's method. In this paper we analyzed their algorithms and showed that when the iterate is near a solvent $S$ with $F(S) = 0$, it is not necessary to use exact line searches. Hence our Algorithm 3.2 needs less computation than Algorithm 3.1. Our Algorithm 3.3 is the first to apply the Samanskii technique [11], which is often used to construct high-order algorithms for nonlinear equations, to the quadratic matrix equation, and it obtains better theoretical and numerical results.

Acknowledgements

The authors thank the anonymous referees for their helpful comments which improved the paper.

References

[1] G.J. Davis, Numerical solution of a quadratic matrix equation, SIAM J. Sci. Stat. Comput. 2 (1981) 164-175.
[2] G.J. Davis, Algorithm 598: An algorithm to compute solvents of the matrix equation AX^2 + BX + C = 0, ACM Trans. Math. Software 9 (1983) 246-254.
[3] E. Stickel, On the Fréchet derivative of matrix functions, Linear Algebra Appl. 91 (1987) 83-88.
[4] W. Kratz, E. Stickel, Numerical solution of matrix polynomial equations by Newton's method, IMA J. Numer. Anal. 7 (1987) 355-369.
[5] N.J. Higham, H.-M. Kim, Solving a quadratic matrix equation by Newton's method with exact line searches, SIAM J. Matrix Anal. Appl. 23 (2001) 303-316.
[6] N.J. Higham, H.-M. Kim, Numerical analysis of a quadratic matrix equation, IMA J. Numer. Anal. 20 (2000) 499-519.
[7] J.D. Gardiner, A.J. Laub, J.J. Amato, C.B. Moler, Solution of the Sylvester matrix equation AXB^T + CXD^T = E, ACM Trans. Math. Software 18 (1992) 223-231.
[8] P. Lancaster, Lambda-Matrices and Vibrating Systems, Pergamon Press, Oxford, UK, 1966.
[9] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, 1996.
[10] J.E. Dennis Jr., J.F. Traub, R.P. Weber, Algorithms for solvents of matrix polynomials, SIAM J. Numer. Anal. 15 (1978) 523-533.
[11] V. Samanskii, On a modification of the Newton method, Ukrain. Mat. Z. 19 (1967) 133-138 (in Russian).
[12] F. Tisseur, K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev. 43 (2001) 235-286.
[13] I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, New York, 1982.
[14] H.-M. Kim, Numerical methods for solving a quadratic matrix equation, Ph.D. Thesis, University of Manchester, 2000.