A Note on the Pin-Pointing Solution of Ill-Conditioned Linear System of Equations

Davod Khojasteh Salkuyeh (1) and Mohsen Hasani (2)

(1,2) Department of Mathematics, University of Mohaghegh Ardabili, P. O. Box 179, Ardabil, Iran
E-mail: (1) khojaste@uma.ac.ir
E-mail: (2) hasani.mo@gmail.com

Abstract

Volokh and Vilnay in [Appl. Math. Lett. 13 (2000) 119-124] proposed a method for computing an accurate solution of nearly singular linear systems of equations. The method proceeds in two stages: the first stage uses the truncated singular value decomposition of the initial coefficient matrix, and the second stage applies the Gaussian elimination procedure to a well-conditioned reduced system of linear equations. In this note, we propose a multistage approach based on Volokh and Vilnay's method. Some numerical results are given to show the efficiency of the proposed method.

AMS Subject Classification: 65F22.

Keywords: linear system, ill-conditioned, SVD, accuracy, nullspace, multistage.

1. Introduction

Consider the linear system of equations

$$Ax = b, \qquad (1)$$

where the matrix $A \in \mathbb{R}^{m \times m}$ is rather ill-conditioned and $x, b \in \mathbb{R}^m$. There are various direct and iterative methods for solving such systems. Unfortunately, direct solvers such as the Gaussian elimination method may produce an inaccurate solution of (1). Several remedies have been proposed in the literature. One approach is to use a scaling strategy, but the scaling of the equations and unknowns must proceed on a problem-by-problem basis; hence, general scaling strategies are unreliable [1, 6]. Another approach is iterative refinement of the solution obtained by a direct solver, which has its own drawbacks and limitations [7]. A third approach is to treat the problem as the least squares (LS) problem

$$\min_{x \in \mathbb{R}^m} \|Ax - b\|_2.$$

There are several techniques for solving this problem. One of the best-known uses the singular value decomposition (SVD). Let the SVD of the matrix $A$ take the form

$$A = V \Sigma U^T = \sum_{i=1}^{m} \sigma_i v_i u_i^T, \qquad (2)$$

where the columns of $U$ and $V$ are the right and left singular vectors, respectively, and the diagonal $m \times m$ matrix $\Sigma$ contains the $m$ singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_m > 0$ of $A$. By (2), the solution of (1) takes the form

$$x = \sum_{i=1}^{m} \sigma_i^{-1} u_i (v_i^T b). \qquad (3)$$

The SVD solution of (1) may be inaccurate because of the small singular values that appear for badly conditioned matrices. This is an inherent property of badly conditioned matrices, and overcoming it is difficult. In [6], Volokh and Vilnay proposed an interesting method for computing an accurate solution of nearly singular linear systems of equations. Their method is based on the truncated SVD of $A$ and proceeds in two stages: the first stage uses the truncated SVD of the initial coefficient matrix, and the second stage applies the Gaussian elimination procedure to a well-conditioned reduced system of linear equations. In this note we propose a multistage method based on Volokh and Vilnay's method.

This paper is organized as follows. In Section 2, we give a brief description of Volokh and Vilnay's method and propose our method. Section 3 is devoted to numerical experiments.

2. Volokh and Vilnay's method and the new idea

Given $\epsilon > 0$, let

$$\epsilon > \sigma_{n+1} \geq \sigma_{n+2} \geq \cdots \geq \sigma_m \qquad (4)$$

be the dangerous small singular values of $A$. Neglecting these singular values, the SVD solution (3) takes the form

$$x_1 = \sum_{i=1}^{n} \sigma_i^{-1} u_i (v_i^T b). \qquad (5)$$

The parameter $\epsilon$ defines the version of the truncated SVD (TSVD). The TSVD is widely used for the regularization of ill-conditioned linear systems. However, for badly ill-conditioned matrices the TSVD solution may not be accurate enough.
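As an illustration, here is a minimal MATLAB sketch of the SVD solution (3) and the TSVD solution (5). The test matrix, right-hand side, and threshold are illustrative choices, not prescribed by the method; the outputs of svd() are named so that $A = V \Sigma U^T$ matches the convention of (2).

    % Sketch: SVD solution (3) and TSVD solution (5) of A*x = b.
    A = hilb(14);                    % illustrative ill-conditioned test matrix
    b = A * ones(14, 1);             % right-hand side with known exact solution
    [V, S, U] = svd(A);              % A = V*S*U', as in Eq. (2)
    sigma = diag(S);                 % singular values in descending order

    x_svd = U * ((V' * b) ./ sigma); % full SVD solution, Eq. (3)

    epsilon = 1e-8;                  % illustrative truncation parameter
    n = sum(sigma >= epsilon);       % keep sigma_1, ..., sigma_n; see (4)
    x_tsvd = U(:, 1:n) * ((V(:, 1:n)' * b) ./ sigma(1:n));  % Eq. (5)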

Volokh and Vilnay [6] proposed a method for pin-pointing this solution. Hereafter, we denote their method by TSPP (Two-Stage Pin-Pointing). Let

$$V_1 = [v_1, v_2, \ldots, v_n] \quad \text{and} \quad U_1 = [u_1, u_2, \ldots, u_n]$$

be the left and right singular matrices of the TSVD, and let

$$\tilde{V}_2 = [\tilde{v}_{n+1}, \ldots, \tilde{v}_m] \perp V_1 \quad \text{and} \quad \tilde{U}_2 = [\tilde{u}_{n+1}, \ldots, \tilde{u}_m] \perp U_1$$

be their orthogonal complements. The columns of $\tilde{V}_2$ and $\tilde{U}_2$ span the nullspaces of $V_1^T$ and $U_1^T$, respectively [5]. In the TSPP method, Eq. (1) is written as

$$\left( [V_1, \tilde{V}_2]^T A [U_1, \tilde{U}_2] \right) [U_1, \tilde{U}_2]^T x = [V_1, \tilde{V}_2]^T b. \qquad (6)$$

Letting

$$\tilde{A} = [V_1, \tilde{V}_2]^T A [U_1, \tilde{U}_2], \quad x = [U_1, \tilde{U}_2] z, \quad C_1 = \tilde{V}_2^T A \tilde{U}_2, \quad b_1 = V_1^T b, \quad b_2 = \tilde{V}_2^T b,$$

Eq. (6) takes the form

$$\tilde{A} z = \begin{pmatrix} \operatorname{diag}(\sigma_1, \ldots, \sigma_n) & 0 \\ 0 & C_1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}. \qquad (7)$$

After computing the vectors $z_1$ and $z_2$, the solution of Eq. (1) is recovered via $x = U_1 z_1 + \tilde{U}_2 z_2$. The system (7) decouples, i.e., the systems

$$\operatorname{diag}(\sigma_1, \ldots, \sigma_n) z_1 = b_1, \qquad (8)$$

$$C_1 z_2 = b_2, \qquad (9)$$

are solved independently. The first system is solved trivially, and the second may be solved by the Gaussian elimination procedure. It can be seen that the singular values of $C_1$ are $\sigma_{n+1} \geq \sigma_{n+2} \geq \cdots \geq \sigma_m$. Therefore

$$\kappa_2(A) = \|A\|_2 \|A^{-1}\|_2 = \sigma_1 \sigma_m^{-1}, \qquad \kappa_2(C_1) = \|C_1\|_2 \|C_1^{-1}\|_2 = \sigma_{n+1} \sigma_m^{-1},$$

and hence

$$\frac{\kappa_2(C_1)}{\kappa_2(A)} = \frac{\sigma_{n+1}}{\sigma_1} < \epsilon \sigma_1^{-1} \ll 1.$$

By the above discussion, the TSPP($\sigma_1, \ldots, \sigma_n$) algorithm can be summarized as follows.

Algorithm 1. TSPP($\sigma_1, \ldots, \sigma_n$) algorithm
1. Compute the TSVD: $\sigma_1, \ldots, \sigma_n$, $U_1$, $V_1$.
2. Compute $\tilde{U}_2 = \operatorname{null}(U_1^T)$ and $\tilde{V}_2 = \operatorname{null}(V_1^T)$.
3. Solve Eqs. (8) and (9) and compute the solution $x = U_1 z_1 + \tilde{U}_2 z_2$ of (1).
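A compact MATLAB sketch of Algorithm 1 follows; the function name tspp and its interface are ours, chosen for illustration. The backslash solve of (9) corresponds to Gaussian elimination with partial pivoting.

    function x = tspp(A, b, epsilon)
    % TSPP sketch (Algorithm 1) for A*x = b with truncation parameter epsilon.
    [V, S, U] = svd(A);              % A = V*S*U', as in Eq. (2)
    sigma = diag(S);
    n = sum(sigma >= epsilon);       % sigma_(n+1), ..., sigma_m are "dangerous"
    V1 = V(:, 1:n);  U1 = U(:, 1:n);
    Vt2 = null(V1');                 % orthogonal complement of V1, step 2
    Ut2 = null(U1');                 % orthogonal complement of U1, step 2
    z1 = (V1' * b) ./ sigma(1:n);    % diagonal system, Eq. (8)
    C1 = Vt2' * A * Ut2;             % singular values sigma_(n+1), ..., sigma_m
    z2 = C1 \ (Vt2' * b);            % Eq. (9), solved by Gaussian elimination
    x = U1 * z1 + Ut2 * z2;          % solution of Eq. (1), step 3
    end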

Consider the Hilbert matrix $H = (1/(i+j-1))$ of dimension 14. This matrix is very ill-conditioned, with 2-norm condition number about $4 \times 10^{17}$. The singular values of this matrix computed by the svd() function of MATLAB satisfy

$$\sigma_1 \approx 1.83 > \cdots > \sigma_8 \approx 3.507 \times 10^{-8} > \sigma_9 \approx 1.004 \times 10^{-9} > \cdots > \sigma_{14} \approx 4.49 \times 10^{-18}.$$

Therefore, if in Volokh and Vilnay's method we choose $\epsilon$ = 1e-8, then the condition number of the system (9) would be $\sigma_9 / \sigma_{14} \approx 2 \times 10^{8}$. This shows that the coefficient matrix of system (9) is still ill-conditioned. In general, if the parameter $\epsilon$ is chosen small, then the singular values used in the coefficient matrix of Eq. (8) may be inaccurate. On the other hand, when $\epsilon$ is chosen large, the condition number of the coefficient matrix of Eq. (9) becomes large. To overcome this problem, we propose a multistage method based on Volokh and Vilnay's method as follows. Let $\epsilon_1 > \epsilon_2 > \cdots > \epsilon_{r-1}$ divide the singular values of $A$ into $r$ subsets:

$$\sigma_{n_0} = \sigma_1 \geq \cdots \geq \sigma_{n_1} > \epsilon_1 > \sigma_{n_1+1} \geq \cdots \geq \sigma_{n_2} > \epsilon_2 > \sigma_{n_2+1} \geq \cdots \geq \sigma_{n_3} > \cdots > \epsilon_{r-1} > \sigma_{n_{r-1}+1} \geq \cdots \geq \sigma_{n_r} = \sigma_m.$$

In the multistage method, TSPP($\sigma_1, \ldots, \sigma_{n_1}$) is applied for solving Eq. (1). In this case, a linear system with coefficient matrix $\operatorname{diag}(\sigma_1, \ldots, \sigma_{n_1})$ and a linear system with coefficient matrix $C_1$ must be solved. The first one is solved easily; to solve the second one, we proceed as follows. The singular values of the coefficient matrix of the second system are $\sigma_{n_1+1} \geq \cdots \geq \sigma_m$, so we can apply TSPP($\sigma_{n_1+1}, \ldots, \sigma_{n_2}$) to it. Continuing in this manner, in $r-1$ stages we obtain the solution of Eq. (1). The coefficient matrices of all of the linear systems arising in the multistage method are diagonal except the last one. The 2-norm condition number of the last linear system is

$$\kappa_2(C_{r-1}) = \|C_{r-1}\|_2 \|C_{r-1}^{-1}\|_2 = \frac{\sigma_{n_{r-1}+1}}{\sigma_m},$$

where $C_{r-1}$ is its coefficient matrix. Therefore

$$\frac{\kappa_2(C_{r-1})}{\kappa_2(A)} = \frac{\sigma_{n_{r-1}+1}}{\sigma_1} < \epsilon_{r-1} \sigma_1^{-1} \ll 1.$$

Note that in Volokh and Vilnay's method we have

$$\frac{\kappa_2(C_1)}{\kappa_2(A)} = \frac{\sigma_{n_1+1}}{\sigma_1} \ \left( \geq \frac{\kappa_2(C_{r-1})}{\kappa_2(A)} \right). \qquad (10)$$

Relation (10) shows that the coefficient matrices arising in the new approach are better conditioned than those of Volokh and Vilnay's method.
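The multistage idea can be sketched in a few lines of MATLAB as a recursive routine; the function name tspp_multi and its interface are ours, chosen for illustration. Each stage solves its diagonal system directly and passes the deflated block on to the next stage, so only the final system requires Gaussian elimination.

    function x = tspp_multi(A, b, eps_list)
    % Multistage pin-pointing sketch: eps_list = [eps_1, ..., eps_(r-1)]
    % with eps_1 > eps_2 > ... > eps_(r-1), as in the partition above.
    [V, S, U] = svd(A);
    sigma = diag(S);
    n = sum(sigma > eps_list(1));    % singular values above this stage's threshold
    V1 = V(:, 1:n);  U1 = U(:, 1:n);
    Vt2 = null(V1');  Ut2 = null(U1');
    z1 = (V1' * b) ./ sigma(1:n);    % diagonal system of this stage
    C = Vt2' * A * Ut2;              % deflated block, singular values sigma_(n+1..m)
    b2 = Vt2' * b;
    if numel(eps_list) > 1
        z2 = tspp_multi(C, b2, eps_list(2:end));   % recurse on the next stage
    else
        z2 = C \ b2;                 % last stage: Gaussian elimination
    end
    x = U1 * z1 + Ut2 * z2;
    end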

3. Numerical examples

In this section, we give some numerical experiments to show the effectiveness of the proposed method. All of the numerical experiments were carried out with MATLAB codes in double precision (format long). The nullspace of a matrix was computed with the null() function of MATLAB, and Gaussian elimination was applied to solve the linear systems of equations with non-diagonal coefficient matrices appearing in the methods. We use the two test matrices used for the numerical tests in [6].

The first test matrix is the Hilbert matrix $H$ of dimension 14. We apply Volokh and Vilnay's method (VVM) with $\epsilon$ = 1e-9 and the proposed method (PM) with $\epsilon_1$ = 1e-8 and $\epsilon_2$ = 1e-13 to solve $Hx = b$, where $b = H(1, 1, \ldots, 1)^T$ is the right-hand side. Numerical results are given in Table 1. As we see, the numerical results of the proposed method are better than those of Volokh and Vilnay's method: the relative error in the solution computed by VVM is 2.551, whereas for PM it is 0.004. Numerical experiments show that $\epsilon$ = 1e-9 is the best choice among $\epsilon$ = 1e-k, k = 8, 9, ..., 14, for Volokh and Vilnay's method.

In the second example, we consider the following 14 x 14 matrix:

F = [ 1/9  1/12 1/15 1/3  1/7  1/10 1/13 1/16 1/5  1/8  1/11 1/14 1/17 1/6
      1/13 1/16 1/19 1/8  1/11 1/14 1/17 1/20 1/9  1/12 1/15 1/18 1/21 1/10
      1/17 1/20 1/23 1/12 1/15 1/18 1/21 1/24 1/13 1/16 1/19 1/22 1/25 1/14
      1/7  1/10 1/13 1    1/5  1/8  1/11 1/14 1/3  1/6  1/9  1/12 1/15 1/4
      1/11 1/14 1/17 1/6  1/9  1/12 1/15 1/18 1/7  1/10 1/13 1/16 1/19 1/8
      1/15 1/18 1/21 1/10 1/13 1/16 1/19 1/22 1/11 1/14 1/17 1/20 1/23 1/12
      1/19 1/22 1/25 1/14 1/17 1/20 1/23 1/26 1/15 1/18 1/21 1/24 1/27 1/16
      1/22 1/25 1/28 1/17 1/20 1/23 1/26 1/29 1/18 1/21 1/24 1/27 1/30 1/19
      1/10 1/13 1/16 1/5  1/8  1/11 1/14 1/17 1/6  1/9  1/12 1/15 1/18 1/7
      1/14 1/17 1/20 1/9  1/12 1/15 1/18 1/21 1/10 1/13 1/16 1/19 1/22 1/11
      1/18 1/21 1/24 1/13 1/16 1/19 1/22 1/25 1/14 1/17 1/20 1/23 1/26 1/15
      1/8  1/11 1/14 1/2  1/6  1/9  1/12 1/15 1/4  1/7  1/10 1/13 1/16 1/5
      1/21 1/24 1/27 1/16 1/19 1/22 1/25 1/28 1/17 1/20 1/23 1/26 1/29 1/18
      1/16 1/19 1/22 1/11 1/14 1/17 1/20 1/23 1/12 1/15 1/18 1/21 1/24 1/13 ]

This is another ill-conditioned matrix, with 2-norm condition number about $10^{18}$. All of the settings are as before. We apply Volokh and Vilnay's method (VVM) with $\epsilon$ = 1e-11 and the proposed method (PM) with $\epsilon_1$ = 1e-8 and $\epsilon_2$ = 1e-13. The numerical results are given in Table 2. The relative error in the solution computed by VVM is 2.448, whereas for PM it is 0.007. Numerical experiments show that $\epsilon$ = 1e-11 is the best choice among $\epsilon$ = 1e-k, k = 8, 9, ..., 14, for Volokh and Vilnay's method.
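For concreteness, the first experiment can be reproduced along the following lines with the illustrative sketches tspp and tspp_multi from Section 2; the second experiment is analogous, with F, $\epsilon$ = 1e-11, and the same $\epsilon_1$, $\epsilon_2$.

    % Sketch of the driver for the first test problem.
    H = hilb(14);
    xe = ones(14, 1);                         % exact solution
    b = H * xe;                               % right-hand side b = H*(1,...,1)'
    x_vvm = tspp(H, b, 1e-9);                 % Volokh and Vilnay's method
    x_pm  = tspp_multi(H, b, [1e-8, 1e-13]);  % proposed multistage method
    rel_vvm = norm(x_vvm - xe) / norm(xe);    % relative error of VVM, reported above as 2.551
    rel_pm  = norm(x_pm  - xe) / norm(xe);    % relative error of PM, reported above as 0.004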

Table 1: Numerical results for the Hilbert matrix of dimension n = 14.

  VVM with ε = 1e-9                             PM with ε_1 = 1e-8 and ε_2 = 1e-13
  Computed Solution    Absolute Error           Computed Solution   Absolute Error
  1.00000010675783     1.06757833684412e-007    1.00000005652109    5.65210893643808e-008
  0.999985095706563    1.49042934373123e-005    0.999999769922515   2.30077484508762e-007
  1.0005064497271      0.000506449727099012     0.999957432240955   4.25677590445428e-005
  0.992737204892941    0.00726279510705941      1.00054006152174    0.000540061521738799
  1.05370865144561     0.0537086514456144       0.997529484315274   0.00247051568472623
  0.780843360678186    0.219156639321814        1.00513709301642    0.00513709301641563
  1.45838877350587     0.458388773505872        0.994950151062818   0.00504984893718197
  0.847232018909457    0.152767981090543        1.00323021917093    0.00323021917092858
  -0.692613383916612   1.69261338391661         0.996948557124937   0.00305144287506276
  5.68702514885676     4.68702514885676         0.998170691963699   0.00182930803630088
  -5.26474177941459    6.26474177941459         1.00979225986743    0.00979225986742605
  5.76697826377195     4.76697826377195         0.99334689709916    0.00665310290083965
  -0.9797870527379     1.9797870527379          0.998649407767298   0.0013505922327024
  1.34973726089356     0.349737260893559        1.00174824263405    0.00174824263405382

Table 2: Numerical results for the second test matrix.

  VVM with ε = 1e-11                            PM with ε_1 = 1e-8 and ε_2 = 1e-13
  Computed Solution    Absolute Error           Computed Solution   Absolute Error
  1.4085137481626      0.408513748162596        1.00336639676309    0.00336639676309214
  1.99251821242259     0.992518212422587        1.01007743512265    0.0100774351226496
  -3.67105129181578    4.67105129181578         1.0026818209199     0.00268182091990177
  0.999999999994483    5.51736434317718e-012    1.0000000002954     2.95403923544768e-010
  1.01407224270867     0.0140722427086655       1.00081639602976    0.000816396029760691
  0.0739428959603102   0.92605710403969         1.00737379083455    0.00737379083454504
  -3.27748488407278    4.27748488407278         1.00266715030278    0.00266715030277576
  2.91883795352396     1.91883795352396         0.991568701483989   0.00843129851601054
  1.00002607590343     2.60759034251823e-005    0.999998906714366   1.09328563357991e-006
  0.899051396844755    0.100948603155245        0.996721712245392   0.0032782877546077
  1.878865777701       0.878865777701           0.981272890732537   0.0187271092674632
  7.09672161502306     6.09672161502306         0.999255007099719   0.000744992900280517
  0.66698121245165     0.33301878754835         1.00425807757395    0.00425807757395158
  0.999004990432837    0.000995009567163141     0.999942237335184   5.77626648161633e-005

Acknowledgments

The authors would like to thank the referee for his or her helpful comments.

References

[1] G.H. Golub and C. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1996.
[2] R. Kress, Numerical Analysis, Springer-Verlag, New York, 1998.
[3] R.S. Martin, G. Peters and J.H. Wilkinson, Symmetric decompositions of a positive definite matrix, Numer. Math. 7 (1965) 362-383.
[4] R.S. Martin, G. Peters and J.H. Wilkinson, Iterative refinement of the solution of a positive definite system of equations, in: F.L. Bauer (Ed.), Linear Algebra, Handbook for Automatic Computation, Volume II, Springer-Verlag, Berlin, 1971.
[5] C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2004.
[6] K.Y. Volokh and O. Vilnay, Pin-pointing solution of ill-conditioned square systems of linear equations, Appl. Math. Lett. 13 (2000) 119-124.
[7] X. Wu, R. Shao and Y. Zhu, New iterative improvement of a solution for an ill-conditioned system of linear equations based on a linear dynamic system, Comput. Math. Appl. 44 (2002) 1109-1116.