Opuscula Math. 33, no. 2 (2013), 337–344
http://dx.doi.org/10.7494/opmath.2013.33.2.337

A NEW METHOD FOR SOLVING ILL-CONDITIONED LINEAR SYSTEMS

Fazlollah Soleymani

Communicated by Zdzisław Jackiewicz

Abstract. An accurate numerical method is established for matrix inversion. It is shown theoretically that the scheme possesses seventh-order convergence. The method is then applied to solving linear systems of equations. The accuracy of the contributed iterative method is illustrated by solving numerical examples whose coefficient matrices are ill-conditioned. All computations are performed on a PC using several programs written in Mathematica 7.

Keywords: matrix inversion, linear systems, Hilbert matrix, ill-conditioned, approximate inverse.

Mathematics Subject Classification: 15A09, 65F10.

1. INTRODUCTION

It is widely known that the solutions of linear systems of equations are sensitive to round-off error. For some linear systems, a small change in one of the entries of the coefficient matrix or of the right-hand side vector causes a large change in the solution vector. When the solution is highly sensitive to the values of the coefficient matrix $A$ or the right-hand side vector $b$, the equations are said to be ill-conditioned, and we cannot easily rely on the solutions coming out of such a system. Ill-conditioned systems pose particular problems when the coefficients are estimated from experimental results [6].

For a system whose coefficient matrix has condition number $\kappa(A) = \|A\|\,\|A^{-1}\|$, one can expect a loss of roughly $\log_{10}\kappa(A)$ decimal places in the accuracy of the solution. In other words, solving very ill-conditioned linear systems by classical methods in standard precision is usually impractical. Krylov subspace methods (without preconditioners) [8], and likewise the iterative methods of the AOR family, require a remarkable amount of time to find a reliable solution. Note that the Accelerated Over-Relaxation (AOR) method, which includes two free parameters, is an extension of the Successive Over-Relaxation (SOR) method.
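To make the digit-loss estimate concrete, here is a short illustrative Mathematica session (a sketch added for this edition, not part of the original experiments). It compares a machine-precision solve of the $10 \times 10$ Hilbert system used later in Section 3 against the exact rational solution; the observed loss of about thirteen digits matches $\log_{10}\kappa(A)$.

  (* Digit loss when solving an ill-conditioned system in double precision *)
  n = 10;
  H = HilbertMatrix[n];                  (* exact rational entries *)
  b = ConstantArray[10, n];
  xExact = LinearSolve[H, b];            (* exact rational solution *)
  xMachine = LinearSolve[N[H], N[b]];    (* double precision; Mathematica
                                            warns that H is badly conditioned *)
  kappa = N[Norm[H, Infinity] Norm[Inverse[H], Infinity]];
  Print["kappa(H) = ", kappa];           (* about 3.5*10^13, cf. Table 1 *)
  Print["relative error = ",
    Norm[xMachine - N[xExact]]/Norm[N[xExact]]]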

On the other hand, one approach is to construct iterative methods of a high order of convergence that find approximate inverses of the ill-conditioned matrices using high-precision floating point arithmetic. Several methods of various orders have been proposed for approximating (rectangular or square) matrix inverses, such as those based on minimum residual iterations [1] and the Hotelling-Bodewig algorithm [9]. The Hotelling-Bodewig algorithm is defined as

$V_{n+1} = V_n(2I - AV_n), \quad n = 0, 1, \ldots,$

where $I$ is the identity matrix. In 2011, Li et al. [4] presented the following third-order method

$V_{n+1} = V_n(3I - AV_n(3I - AV_n)), \quad n = 0, 1, \ldots,$  (1.1)

and also proposed another cubically convergent method for finding $A^{-1}$:

$V_{n+1} = \left[I + \frac{1}{4}(I - V_nA)(3I - V_nA)^2\right] V_n, \quad n = 0, 1, \ldots.$  (1.2)

Notice that the iterative method (1.1) can also be found in [3]. As another example, Krishnamurthy and Sen suggested the following sixth-order iterative method [3, 10] for the same purpose:

$V_{n+1} = V_n(2I - AV_n)(3I - AV_n(3I - AV_n))(I - AV_n(I - AV_n)),$  (1.3)

where $n = 0, 1, \ldots$. In what follows, we first present a proof for the method (1.3).

Theorem 1.1. Let $A = [a_{ij}]_{N \times N}$ be a nonsingular matrix. If the initial approximation $V_0$ satisfies $\|I - AV_0\| < 1$, then the iterative method (1.3) converges to $A^{-1}$ with sixth order.

Proof. Let $\|I - AV_0\| < 1$, and set $E_0 = I - AV_0$ and, in general, $E_n = I - AV_n$. Then for (1.3) we have

$E_{n+1} = I - AV_{n+1} = I - A(V_n(2I - AV_n)(3I - AV_n(3I - AV_n))(I - AV_n(I - AV_n))) = I - AV_n(2I - AV_n)(3I - 3AV_n + (AV_n)^2)(I - AV_n + (AV_n)^2) = (I - AV_n)^6 = (E_n)^6.$  (1.4)

Moreover, since $\|E_0\| < 1$, relation (1.4) gives

$\|E_{n+1}\| \le \|E_n\|^6 \le \|E_{n-1}\|^{6^2} \le \ldots \le \|E_0\|^{6^{n+1}},$  (1.5)

and the right-hand side of (1.5) tends to zero as $n \to \infty$. That is, $I - AV_n \to 0$ as $n \to \infty$, and thus for (1.3) we attain

$V_n \to A^{-1}$ as $n \to \infty$.  (1.6)

Now we show the sixth order of convergence. To do this, denote by $e_n = V_n - A^{-1}$ the error matrix of the iterative procedure (1.3). We have

$A^{-1} + e_{n+1} = V_{n+1} = V_n(2I - AV_n)(3I - AV_n(3I - AV_n))(I - AV_n(I - AV_n)) = (e_n + A^{-1})(2I - A(e_n + A^{-1}))(3I - A(e_n + A^{-1})(3I - A(e_n + A^{-1})))(I - A(e_n + A^{-1})(I - A(e_n + A^{-1}))) = -(e_n + A^{-1})(Ae_n - I)(I + (Ae_n)^2 + (Ae_n)^4) = (e_n + A^{-1})(I - Ae_n + (Ae_n)^2 - (Ae_n)^3 + (Ae_n)^4 - (Ae_n)^5) = A^{-1} - A^5(e_n)^6,$

which yields $e_{n+1} = -A^5(e_n)^6$, and consequently

$\|e_{n+1}\| \le \|A\|^5 \|e_n\|^6.$  (1.7)

Thus the iteration (1.3) converges locally to $A^{-1}$ with sixth order. This concludes the proof.
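For reference, the Hotelling-Bodewig step and the sixth-order step (1.3) translate directly into Mathematica, the language used for the experiments in Section 3. This is a minimal illustrative sketch; the function names hbStep and ksStep are ours.

  (* One Hotelling-Bodewig (Schulz) step: quadratic convergence *)
  hbStep[A_, V_] := V.(2 IdentityMatrix[Length[A]] - A.V)

  (* One Krishnamurthy-Sen step, method (1.3): sixth-order convergence *)
  ksStep[A_, V_] := Module[{Id = IdentityMatrix[Length[A]], AV = A.V},
    V.(2 Id - AV).(3 Id - AV.(3 Id - AV)).(Id - AV.(Id - AV))]

Since the product $AV_n$ is computed once and reused, one ksStep call costs six matrix-matrix multiplications.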

For further reading, we refer the reader to [5, 7]. In this article, in order to work with very ill-conditioned matrices, we propose a method for finding $A^{-1}$ iteratively, and we also study its theoretical convergence. The rest of the paper is organized as follows. The main contribution of this article is given in Section 2. The method is then examined numerically in Section 3. Finally, concluding remarks are presented in Section 4.

2. A HIGH-ORDER METHOD

This section contains a new high-order algorithm for finding $A^{-1}$ numerically. In order to deal with very ill-conditioned linear systems, or to find robust approximate inverses of the coefficient matrices, we suggest the following iterative method:

$V_{n+1} = \frac{1}{4} V_n \bigl(32I + AV_n(-113I + AV_n(231I + AV_n(-301I + AV_n(259I + AV_n W_n))))\bigr),$  (2.1)

where

$W_n = -147I + AV_n(53I + AV_n(-11I + AV_n)),$

for any $n = 0, 1, \ldots$, wherein $I$ is the identity matrix, and the sequence of iterates $\{V_n\}_{n=0}^{\infty}$ converges to $A^{-1}$ for a good initial guess. Such a guess, $V_0$, will be given in Section 3.

Theorem 2.1. Let $A = [a_{ij}]_{N \times N}$ be a nonsingular matrix. If the initial approximation $V_0$ satisfies

$\|I - AV_0\| < 1,$  (2.2)

then the iterative method (2.1) converges to $A^{-1}$ with seventh order.

Proof. Let $\|I - AV_0\| < 1$, $E_0 = I - AV_0$, and $E_n = I - AV_n$. For (2.1) we have

$E_{n+1} = I - AV_{n+1} = I - A\left[\frac{1}{4} V_n(32I + AV_n(-113I + AV_n(231I + AV_n(-301I + AV_n(259I + AV_n(-147I + AV_n(53I + AV_n(-11I + AV_n))))))))\right] = I - \frac{1}{4} AV_n\bigl(32I - 113AV_n + 231(AV_n)^2 - 301(AV_n)^3 + 259(AV_n)^4 - 147(AV_n)^5 + 53(AV_n)^6 - 11(AV_n)^7 + (AV_n)^8\bigr) = -\frac{1}{4}(-2I + AV_n)^2(-I + AV_n)^7 = \frac{1}{4}(I + (I - AV_n))^2(I - AV_n)^7 = \frac{1}{4}(I + 2E_n + (E_n)^2)(E_n)^7 = \frac{1}{4}(E_n^7 + 2E_n^8 + E_n^9).$

Thus, we obtain

$\|E_{n+1}\| = \frac{1}{4}\|E_n^7 + 2E_n^8 + E_n^9\| \le \frac{1}{4}\bigl(\|E_n\|^7 + 2\|E_n\|^8 + \|E_n\|^9\bigr).$  (2.3)

Moreover, since $\|E_0\| < 1$, and hence $\|E_1\| \le \|E_0\|^7 < 1$, we get (using $\|E_n\| < 1$)

$\|E_{n+1}\| \le \|E_n\|^7 \le \|E_{n-1}\|^{7^2} \le \ldots \le \|E_0\|^{7^{n+1}} < 1,$  (2.4)

where the right-hand side of (2.4) tends to zero as $n \to \infty$. That is, $I - AV_n \to 0$ as $n \to \infty$, and thus for (2.1) we attain

$V_n \to A^{-1}$ as $n \to \infty$.  (2.5)

Now we show the seventh order of convergence. To do this, denote by $e_n = V_n - A^{-1}$ the error matrix of the iterative procedure (2.1). We have

$I - AV_{n+1} = \frac{1}{4}\bigl[(I - AV_n)^7 + 2(I - AV_n)^8 + (I - AV_n)^9\bigr].$  (2.6)

We can now easily obtain

$A(A^{-1} - V_{n+1}) = \frac{1}{4}\bigl[A^7(A^{-1} - V_n)^7 + 2A^8(A^{-1} - V_n)^8 + A^9(A^{-1} - V_n)^9\bigr]$  (2.7)

and

$A(V_{n+1} - A^{-1}) = \frac{1}{4}\bigl[A^7(V_n - A^{-1})^7 - 2A^8(V_n - A^{-1})^8 + A^9(V_n - A^{-1})^9\bigr].$  (2.8)

Simplifying (2.8) results in

$e_{n+1} = \frac{1}{4}\bigl[A^6(e_n)^7 - 2A^7(e_n)^8 + A^8(e_n)^9\bigr],$  (2.9)

which, by taking the norm of both sides, yields

$\|e_{n+1}\| \le \frac{1}{4}\bigl[\|A\|^6 \|e_n\|^7 + 2\|A\|^7 \|e_n\|^8 + \|A\|^8 \|e_n\|^9\bigr],$  (2.10)

and consequently

$\|e_{n+1}\| \le \frac{1}{4}\bigl[\|A\|^6 + 2\|A\|^7 \|e_n\| + \|A\|^8 \|e_n\|^2\bigr] \|e_n\|^7.$  (2.11)

Thus, the iteration (2.1) converges locally to $A^{-1}$ with at least seventh order, based on the error inequality (2.11). This concludes the proof.
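In nested (Horner-like) form, one step of (2.1) costs nine matrix-matrix products. The following Mathematica sketch (the name newStep is ours) implements one step and then checks the seventh order empirically; the choice of a random diagonally dominant test matrix, the 256-digit working precision, and the use of the initial guess (3.1) from Section 3 are our assumptions for the demonstration. The printed ratios $\|E_{n+1}\|/\|E_n\|^7$ should stay bounded by roughly $1/4$, consistent with (2.3).

  (* One step of the proposed seventh-order method (2.1) *)
  newStep[A_, V_] := Module[{Id = IdentityMatrix[Length[A]], AV = A.V, W},
    W = -147 Id + AV.(53 Id + AV.(-11 Id + AV));      (* W_n *)
    (1/4) V.(32 Id + AV.(-113 Id + AV.(231 Id +
        AV.(-301 Id + AV.(259 Id + AV.W)))))]

  (* Empirical order check, away from ill-conditioning *)
  n = 10;
  A = SetAccuracy[RandomReal[{0, 1}, {n, n}] + n IdentityMatrix[n], 256];
  V = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);    (* guess (3.1) *)
  prev = Norm[IdentityMatrix[n] - A.V, Infinity];
  Do[
    V = newStep[A, V];
    curr = Norm[IdentityMatrix[n] - A.V, Infinity];
    Print["||E|| = ", N[curr, 3], ",  ||E_new||/||E_old||^7 = ",
      N[curr/prev^7, 3]];
    prev = curr,
    {3}]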

3. COMPUTATIONAL TESTS

In this section, experiments are presented to demonstrate the capability of the suggested method. For solving a square real linear system $Ax = b$, with $A \in \mathbb{R}^{N \times N}$, we propose the following efficient algorithm: compute $V_{n+1}$ by (2.1) and set

$x_{n+1} = V_{n+1} b, \quad n = 0, 1, \ldots.$

The programming package Mathematica 7 [11] has been used in this section. For numerical comparisons, we have used the Hotelling-Bodewig algorithm (denoted HBA), the method (1.1), which is also known as Chebyshev's method, the sixth-order method (1.3) of Krishnamurthy and Sen, and the new algorithm (2.1). We use 256-digit floating point arithmetic in our calculations, via the command SetAccuracy[expr, 256], to keep the effect of round-off error at a minimum. We take into account the very efficient way of producing $V_0$ given by Codevico et al. in [2]:

$V_0 = \frac{A^T}{\|A\|_1 \|A\|_\infty}.$  (3.1)

This choice easily provides guesses close enough to $A^{-1}$ to preserve the convergence order. Another possibility is

$V_0 = \frac{A^T}{\mathrm{trace}(AA^T)}.$  (3.2)

Test Problem. Consider the linear system $Hx = b$, wherein $H$ is the Hilbert matrix $H_{N \times N} = [h_{ij}]$ with $h_{ij} = \frac{1}{i+j-1}$. The right-hand side vector is taken as $b = (10, 10, \ldots, 10)^T$ throughout this paper. We use Hilbert matrices of various orders as test problems in this section.
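Putting the pieces together, the following sketch solves the $10 \times 10$ Hilbert test problem with the method (2.1) in 256-digit arithmetic, starting from the initial guess (3.1). It reuses the newStep function sketched above; the stopping tolerance of $10^{-200}$ on the inverse residual is our assumption, not a prescription from the paper.

  (* Solve Hx = b for the 10 x 10 Hilbert matrix via method (2.1) *)
  n = 10;
  H = SetAccuracy[HilbertMatrix[n], 256];       (* 256-digit entries *)
  b = SetAccuracy[ConstantArray[10, n], 256];
  Id = IdentityMatrix[n];
  V = Transpose[H]/(Norm[H, 1] Norm[H, Infinity]);    (* guess (3.1) *)
  k = 0;
  While[Norm[Id - H.V, Infinity] > 10^-200 && k < 200,
    V = newStep[H, V]; k++];
  x = V.b;                                      (* x_{n+1} = V_{n+1} b *)
  Print[k, " iterations, residual norm = ",
    N[Norm[b - H.x, Infinity], 3]]

The iteration count produced this way is comparable to the IN column for method (2.1) in Table 2.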

Table 1. The condition number of different Hilbert matrices

  Order of Hilbert matrix    10            15            20
  Condition number           3.5 x 10^13   1.5 x 10^21   6.2 x 10^28

Table 1 simply shows that $H$ becomes increasingly ill-conditioned as the dimension of the matrix grows, so that the system cannot be solved easily. Note that the condition numbers in Table 1 were obtained using the command N[LinearAlgebra`MatrixConditionNumber[HilbertMatrix[n]], 5] in Mathematica. We expect high-order iterative methods to find robust approximations of the Hilbert inverses in fewer iterations. Table 2 gives the number of iterations for the different methods in order to reveal the efficiency of the proposed iteration. In Table 2, IN and RN stand for the iteration number and the residual norm of the solution of the linear system, respectively. Note that in this test problem we used an initial value constructed by (3.1). Based on Table 2, there is a clear reduction in computational steps for the proposed method (2.1) in contrast to the other existing well-known methods of various orders in the literature.

Table 2. Results of comparisons for the Test Problem

  Methods                        HBA            (1.1)          (1.3)           (2.1)
  Order of convergence           2              3              6               7

  Hilbert matrix dimension 10
  IN                             96             61             38              33
  RN                             1.4 x 10^-53   1.4 x 10^-82   2.8 x 10^-230   1.8 x 10^-250

  Hilbert matrix dimension 15
  IN                             146            93             57              50
  RN                             2.2 x 10^-41   5.5 x 10^-94   1.3 x 10^-90    5.7 x 10^-250

  Hilbert matrix dimension 20
  IN                             197            124            76              66
  RN                             7.9 x 10^-54   1.1 x 10^-42   4.8 x 10^-41    3.6 x 10^-79
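Since Hilbert matrices have exact rational entries, the values in Table 1 can also be reproduced in exact arithmetic and rounded afterwards. A sketch follows; we assume the $\infty$-norm condition number here, which reproduces the tabulated values.

  (* Reproduce Table 1: infinity-norm condition numbers, computed exactly *)
  Do[
    H = HilbertMatrix[m];
    Print["N = ", m, ":  kappa = ",
      N[Norm[H, Infinity] Norm[Inverse[H], Infinity], 5]],
    {m, {10, 15, 20}}]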

4. SUMMARY

Iterative methods are often effective, especially for large-scale systems with sparsity; methods of Hotelling-Bodewig type suit well-conditioned linear systems in double precision arithmetic, and ill-conditioned ones when high-precision floating point arithmetic is used. The Hotelling-Bodewig algorithm is simple to describe and to analyze, and is numerically stable for nonsingular input matrices. This was the idea behind developing iterative methods of this type for the solution of linear systems. In this article, we have developed an iterative method for finding matrix inverses; such high-order iterative methods are efficient for very ill-conditioned linear systems. We have shown that the suggested method (2.1) reaches seventh order of convergence. Moreover, the efficiency of the new scheme was illustrated numerically in Section 3.

Acknowledgments

Special thanks go to the anonymous referee for some valuable suggestions, which have resulted in the improvement of this work.

REFERENCES

[1] E. Chow, Y. Saad, Approximate inverse preconditioners via sparse-sparse iterations, SIAM J. Sci. Comput. 19 (1998), 995–1023.
[2] G. Codevico, V.Y. Pan, M. Van Barel, Newton-like iteration based on a cubic polynomial for structured matrices, Numer. Algorithms 36 (2004), 365–380.
[3] E.V. Krishnamurthy, S.K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, 1986.
[4] H.-B. Li, T.-Z. Huang, Y. Zhang, V.-P. Liu, T.-V. Gu, Chebyshev-type methods and preconditioning techniques, Appl. Math. Comput. 218 (2011), 260–270.
[5] W. Li, Z. Li, A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix, Appl. Math. Comput. 215 (2010), 3433–3442.
[6] C.-S. Liu, W. Yeih, S.N. Atluri, On solving the ill-conditioned system Ax = b: general-purpose conditioners obtained from the boundary-collocation solution of the Laplace equation, using Trefftz expansions with multiple length scales, CMES Comput. Model. Eng. Sci. 44 (2009), 281–311.
[7] K. Moriya, T. Nodera, A new scheme of computing the approximate inverse preconditioner for the reduced linear systems, J. Comput. Appl. Math. 199 (2007), 345–352.
[8] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, 2003.
[9] G. Schulz, Iterative Berechnung der reziproken Matrix, Z. Angew. Math. Mech. 13 (1933), 57–59.
[10] S.K. Sen, S.S. Prabhu, Optimal iterative schemes for computing Moore-Penrose matrix inverse, Int. J. Systems Sci. 8 (1976), 748–753.
[11] S. Wolfram, The Mathematica Book, 5th ed., Wolfram Media, 2003.

Fazlollah Soleymani
fazlollah.soleymani@gmail.com

Islamic Azad University, Zahedan Branch
Department of Mathematics
Zahedan, Iran

Received: March 13, 2012.
Revised: June 1, 2012.
Accepted: June 14, 2012.