A Note on the Pin-Pointing Solution of Ill-Conditioned Linear System of Equations
Davod Khojasteh Salkuyeh (1) and Mohsen Hasani (2)

(1, 2) Department of Mathematics, University of Mohaghegh Ardabili, P. O. Box 179, Ardabil, Iran
(1) khojaste@uma.ac.ir, (2) hasani.mo@gmail.com

Abstract

Volokh and Vilnay in [Appl. Math. Lett. 13 (2000)] proposed a method for computing an accurate solution of nearly singular linear systems of equations. The method proceeds in two stages: it uses the truncated singular value decomposition of the initial coefficient matrix at the first stage and the Gaussian elimination procedure for a well-conditioned reduced system of linear equations at the second stage. In this note, we propose a multistage approach based on the method of Volokh and Vilnay. Some numerical results are given to show the efficiency of the proposed method.

AMS Subject Classification: 65F22.

Keywords: linear system, ill-conditioned, SVD, accuracy, nullspace, multistage.

1. Introduction

Consider the linear system of equations

Ax = b,  (1)

where the matrix A \in \mathbb{R}^{m \times m} is rather ill-conditioned and x, b \in \mathbb{R}^m. There are various direct and iterative methods to solve such systems. Unfortunately, direct solvers such as the Gaussian elimination method may lead to an inaccurate solution of (1). Several methods have been proposed in the literature to overcome this problem. One approach is to use a scaling strategy, but scaling of the equations and unknowns must proceed on a problem-by-problem basis; hence general scaling strategies are unreliable [1, 6]. Another approach is to apply iterative refinement to the solution obtained by a direct solver. This method has its own drawbacks and limitations [7]. Yet another approach is to consider the problem as a least squares (LS) problem

\min_{x \in \mathbb{R}^m} \|Ax - b\|_2.
There are several techniques to solve this problem. One well-known method uses the singular value decomposition (SVD). Let the SVD of the matrix A take the form

A = V \Sigma U^T = \sum_{i=1}^{m} \sigma_i v_i u_i^T,  (2)

where the columns of U and V are the right and left singular vectors, respectively, and the diagonal m \times m matrix \Sigma contains the m singular values \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_m > 0 of A. By using (2), the solution of (1) takes the form

x = \sum_{i=1}^{m} \sigma_i^{-1} u_i (v_i^T b).  (3)

The SVD solution of (1) may be inaccurate because small singular values appear for badly conditioned matrices. This is an inherent property of badly conditioned matrices, and overcoming it is difficult. In [6], Volokh and Vilnay proposed an interesting method for computing an accurate solution of nearly singular linear systems of equations. Their method is based on the truncated SVD of A and proceeds in two stages: it uses the truncated SVD of the initial coefficient matrix at the first stage and the Gaussian elimination procedure for a well-conditioned reduced system of linear equations at the second stage. In this note we propose a multistage method based on the method of Volokh and Vilnay.

This paper is organized as follows. In Section 2, we give a brief description of the method of Volokh and Vilnay and propose our method. Section 3 is devoted to numerical experiments.

2. The Volokh and Vilnay method and the new idea

Given \epsilon > 0, let

\epsilon > \sigma_{n+1} \ge \sigma_{n+2} \ge \cdots \ge \sigma_m  (4)

be the dangerous small singular values of A. By neglecting these singular values, the SVD solution (3) takes the form

x_1 = \sum_{i=1}^{n} \sigma_i^{-1} u_i (v_i^T b).  (5)

The parameter \epsilon defines the version of the truncated SVD (TSVD). The TSVD is widely used for the regularization of ill-conditioned linear systems. However, for badly ill-conditioned matrices the TSVD solution may not be accurate enough. Volokh and Vilnay in [6] proposed a method for pin-pointing this solution. Hereafter, we denote their method by TSPP (Two-Stage Pin-Pointing).
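The TSVD solution (5) is easy to sketch in code. The following is a minimal NumPy illustration rather than the paper's MATLAB code; note that `numpy.linalg.svd` returns A = W diag(s) Zᵀ, so the paper's left singular vectors v_i are the columns of W and its right singular vectors u_i are the columns of Zt.T (the naming of U and V in Eq. (2) is swapped relative to NumPy's).

```python
import numpy as np

def tsvd_solve(A, b, eps):
    """TSVD solution x1 of Eq. (5): discard singular values below eps.

    numpy.linalg.svd returns A = W @ diag(s) @ Zt, so the paper's left
    vectors v_i are the columns of W and its right vectors u_i are the
    columns of Zt.T.
    """
    W, s, Zt = np.linalg.svd(A)
    n = int(np.sum(s >= eps))        # keep sigma_1 >= ... >= sigma_n >= eps
    # x1 = sum_{i=1}^{n} sigma_i^{-1} * u_i * (v_i^T b)
    return Zt[:n].T @ ((W[:, :n].T @ b) / s[:n])

# usage: with no truncation this reduces to the exact solve
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x = tsvd_solve(A, b, 1e-12)
```

With eps chosen above the dangerous singular values of (4), the same function returns the regularized solution x_1 instead.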
Let

V_1 = [v_1, v_2, \ldots, v_n] and U_1 = [u_1, u_2, \ldots, u_n]

be the left and right singular matrices of the TSVD, and let

\tilde V_2 = [\tilde v_{n+1}, \ldots, \tilde v_m] and \tilde U_2 = [\tilde u_{n+1}, \ldots, \tilde u_m]

be their orthogonal complements. The columns of \tilde V_2 and \tilde U_2 span the nullspaces of V_1^T and U_1^T, respectively [5]. In the TSPP method, Eq. (1) is written as

\left( [V_1, \tilde V_2]^T A [U_1, \tilde U_2] \right) [U_1, \tilde U_2]^T x = [V_1, \tilde V_2]^T b.  (6)

Letting

\tilde A = [V_1, \tilde V_2]^T A [U_1, \tilde U_2], \quad x = [U_1, \tilde U_2] z, \quad C_1 = \tilde V_2^T A \tilde U_2, \quad b_1 = V_1^T b, \quad b_2 = \tilde V_2^T b,

Eq. (6) takes the form

\tilde A z = \begin{pmatrix} \mathrm{diag}(\sigma_1, \ldots, \sigma_n) & 0 \\ 0 & C_1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.  (7)

After computing the vectors z_1 and z_2, the solution of Eq. (1) is computed via x = U_1 z_1 + \tilde U_2 z_2. The latter system is solved independently for z_1 and z_2, i.e., the systems

\mathrm{diag}(\sigma_1, \ldots, \sigma_n) z_1 = b_1,  (8)

C_1 z_2 = b_2,  (9)

are solved independently. The first system of equations is solved easily and the second one may be solved by the Gaussian elimination procedure. It can be seen that the singular values of C_1 are \sigma_{n+1} \ge \sigma_{n+2} \ge \cdots \ge \sigma_m. Therefore we have

\kappa_2(A) = \|A\|_2 \|A^{-1}\|_2 = \sigma_1 \sigma_m^{-1}, \quad \kappa_2(C_1) = \|C_1\|_2 \|C_1^{-1}\|_2 = \sigma_{n+1} \sigma_m^{-1},

and hence

\frac{\kappa_2(C_1)}{\kappa_2(A)} = \frac{\sigma_{n+1}}{\sigma_1} < \frac{\epsilon}{\sigma_1}.

By the above discussion the TSPP(\sigma_1, \ldots, \sigma_n) algorithm can be summarized as follows.

Algorithm 1. TSPP(\sigma_1, \ldots, \sigma_n) algorithm
1. Compute the TSVD: \sigma_1, \ldots, \sigma_n, U_1, V_1.
2. Compute \tilde U_2 = null(U_1^T) and \tilde V_2 = null(V_1^T).
3. Solve Eqs. (8) and (9) and compute the solution x = U_1 z_1 + \tilde U_2 z_2 of (1).
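Algorithm 1 can be sketched as follows. This is a hedged NumPy illustration, not the authors' MATLAB code: the complements Ũ₂ and Ṽ₂ are taken from the trailing singular vectors of the full SVD, which span the same nullspaces of U₁ᵀ and V₁ᵀ that MATLAB's null() returns in step 2.

```python
import numpy as np

def tspp_solve(A, b, eps):
    """Sketch of Algorithm 1 (TSPP).

    The complements V2~, U2~ are taken from the trailing singular
    vectors of the full SVD; they span the nullspaces of V1^T and U1^T
    as required in step 2 of the algorithm.
    """
    W, s, Zt = np.linalg.svd(A)          # paper's V = W, paper's U = Zt.T
    n = int(np.sum(s >= eps))
    V1, U1 = W[:, :n], Zt[:n].T          # leading singular vectors
    V2, U2 = W[:, n:], Zt[n:].T          # orthogonal complements
    z1 = (V1.T @ b) / s[:n]              # diagonal system (8)
    C1 = V2.T @ A @ U2                   # reduced, well-conditioned matrix
    z2 = np.linalg.solve(C1, V2.T @ b)   # system (9), Gaussian elimination
    return U1 @ z1 + U2 @ z2             # x = U1 z1 + U2~ z2

# usage on a small ill-conditioned example: the 6x6 Hilbert matrix
H = np.array([[1.0 / (i + j + 1) for j in range(6)] for i in range(6)])
x = tspp_solve(H, H @ np.ones(6), 1e-3)
```

Since the transformed matrix in (7) is exactly block diagonal, the splitting introduces no approximation; only the conditioning of the stage-two solve changes.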
Consider the Hilbert matrix H = (1/(i+j-1)) of dimension 14. This is a very ill-conditioned matrix. The singular values of this matrix computed by the svd() function of MATLAB satisfy

\sigma_1 > \cdots > \sigma_7 \approx 10^{-8} > \sigma_8 \approx 10^{-9} > \cdots > \sigma_{14} \approx 10^{-18}.

Therefore, if in the Volokh and Vilnay method we choose \epsilon = 1e-8, then the condition number of the system (9) would be \sigma_8 / \sigma_{14} \approx 10^9. This shows that the coefficient matrix of system (9) is still ill-conditioned.

In general, if the parameter \epsilon is chosen small, then the singular values used in the coefficient matrix of Eq. (8) may be inaccurate. On the other hand, when \epsilon is chosen large, the condition number of the coefficient matrix of Eq. (9) becomes large. To overcome this problem we propose a multistage method based on the Volokh and Vilnay method as follows. Let \epsilon_1 > \epsilon_2 > \cdots > \epsilon_{r-1} divide the singular values of A into r subsets:

\sigma_1 \ge \cdots \ge \sigma_{n_1} > \epsilon_1 > \sigma_{n_1+1} \ge \cdots \ge \sigma_{n_2} > \epsilon_2 > \sigma_{n_2+1} \ge \cdots \ge \sigma_{n_3} \ge \cdots > \epsilon_{r-1} > \sigma_{n_{r-1}+1} \ge \cdots \ge \sigma_{n_r} = \sigma_m.

In the multistage method, TSPP(\sigma_1, \ldots, \sigma_{n_1}) is applied for solving Eq. (1). In this case a linear system with coefficient matrix \mathrm{diag}(\sigma_1, \ldots, \sigma_{n_1}) and a linear system with coefficient matrix C_1 must be solved. The first one is solved easily, and to solve the second one we can proceed as follows. The singular values of the coefficient matrix of the second system are \sigma_{n_1+1} \ge \cdots \ge \sigma_m, hence we can apply TSPP(\sigma_{n_1+1}, \ldots, \sigma_{n_2}) to solve it. Continuing in this manner, in r-1 stages we obtain the solution of Eq. (1). The coefficient matrices of all of the linear systems arising in the multistage method are diagonal except the last one. The 2-norm condition number of the last linear system is

\kappa_2(C_{r-1}) = \|C_{r-1}\|_2 \|C_{r-1}^{-1}\|_2 = \frac{\sigma_{n_{r-1}+1}}{\sigma_m},

where C_{r-1} is its coefficient matrix. Therefore we have

\frac{\kappa_2(C_{r-1})}{\kappa_2(A)} = \frac{\sigma_{n_{r-1}+1}}{\sigma_1} < \frac{\epsilon_{r-1}}{\sigma_1}.

Note that in the Volokh and Vilnay method we have
\frac{\kappa_2(C_1)}{\kappa_2(A)} = \frac{\sigma_{n_1+1}}{\sigma_1} \quad \left( \ge \frac{\kappa_2(C_{r-1})}{\kappa_2(A)} \right).  (10)

Relation (10) shows that the coefficient matrices arising in the new approach are better-conditioned than that of the Volokh and Vilnay method.

3. Numerical examples

In this section, we give some numerical experiments to show the effectiveness of the proposed method. All of the numerical experiments were carried out in MATLAB in double precision (format long). The nullspace of a matrix was computed with MATLAB's null() function, and Gaussian elimination was applied to solve the linear systems of equations with non-diagonal coefficient matrices appearing in the methods. We use the two test matrices used for the numerical tests in [6]. The first test matrix is the Hilbert matrix H of dimension 14. We apply the Volokh and Vilnay method (VVM) with \epsilon = 1e-9 and the proposed method (PM) with \epsilon_1 = 1e-8 and \epsilon_2 = 1e-13 to solve Hx = b, where b = H(1, 1, \ldots, 1)^T is the right-hand side. Numerical results are given in Table 1. As we see, the results of the proposed method are better than those of the Volokh and Vilnay method: the relative error in the solution computed by the Volokh and Vilnay method is larger than that of the proposed method. Numerical experiments show that \epsilon = 1e-9 is the best choice among \epsilon = 1e-k, k = 8, 9, \ldots, 14, for the Volokh and Vilnay method.
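The multistage method of Section 2 can be sketched by applying the TSPP splitting recursively to the reduced system (9). As before, this is a NumPy illustration under the complement-from-full-SVD assumption, not the authors' MATLAB code.

```python
import numpy as np

def multistage_solve(A, b, eps_list):
    """Sketch of the proposed multistage method.

    Applies the TSPP splitting with threshold eps_list[0], then recurses
    on the reduced system (9) with the remaining thresholds; the final
    reduced system is solved directly (numpy.linalg.solve, i.e. Gaussian
    elimination with partial pivoting).
    """
    W, s, Zt = np.linalg.svd(A)              # paper's V = W, U = Zt.T
    n = int(np.sum(s >= eps_list[0]))
    V1, U1 = W[:, :n], Zt[:n].T
    V2, U2 = W[:, n:], Zt[n:].T
    z1 = (V1.T @ b) / s[:n]                  # diagonal stage, Eq. (8)
    C1, b2 = V2.T @ A @ U2, V2.T @ b         # reduced system, Eq. (9)
    if len(eps_list) > 1:
        z2 = multistage_solve(C1, b2, eps_list[1:])   # next stage
    else:
        z2 = np.linalg.solve(C1, b2)         # last, well-conditioned solve
    return U1 @ z1 + U2 @ z2

# usage: 6x6 Hilbert matrix split into r = 3 subsets by two thresholds
H = np.array([[1.0 / (i + j + 1) for j in range(6)] for i in range(6)])
x = multistage_solve(H, H @ np.ones(6), [1e-2, 1e-5])
```

With r - 1 thresholds, all but the final solve involve diagonal matrices, matching the condition-number bound \kappa_2(C_{r-1}) / \kappa_2(A) < \epsilon_{r-1} / \sigma_1 above.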
In the second example we consider the second test matrix F of [6], a 14 x 14 matrix whose entries are unit fractions 1/k (the full matrix display is given in [6]). This is another very ill-conditioned matrix. All of the assumptions are as before. We apply the Volokh and Vilnay method (VVM) with \epsilon = 1e-11 and the proposed method (PM) with \epsilon_1 = 1e-8 and \epsilon_2 = 1e-13. The numerical results are given in Table 2. Again, the relative error in the solution computed by the Volokh and Vilnay method is larger than that of the proposed method. Numerical experiments show that \epsilon = 1e-11 is the best choice among \epsilon = 1e-k, k = 8, 9, \ldots, 14, for the Volokh and Vilnay method.

Acknowledgments

The authors would like to thank the referee for his or her helpful comments.
Table 1: Numerical results for the Hilbert matrix of dimension n = 14: computed solutions and absolute errors for VVM with \epsilon = 1e-9 and for PM with \epsilon_1 = 1e-8 and \epsilon_2 = 1e-13.

Table 2: Numerical results for the second test matrix: computed solutions and absolute errors for VVM with \epsilon = 1e-11 and for PM with \epsilon_1 = 1e-8 and \epsilon_2 = 1e-13.
References

[1] G.H. Golub and C. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore.
[2] R. Kress, Numerical Analysis, Springer-Verlag, New York.
[3] R.S. Martin, G. Peters and J.H. Wilkinson, Symmetric decompositions of a positive definite matrix, Numer. Math. 7 (1965).
[4] R.S. Martin, G. Peters and J.H. Wilkinson, Iterative refinement of the solution of a positive definite system of equations, in: F.L. Bauer (Ed.), Linear Algebra, Handbook for Automatic Computation, Vol. II, Springer-Verlag, Berlin.
[5] C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM.
[6] K.Y. Volokh and O. Vilnay, Pin-pointing solution of ill-conditioned square systems of linear equations, Appl. Math. Lett. 13 (2000).
[7] X. Wu, R. Shao and Y. Zhu, New iterative improvement of a solution for an ill-conditioned system of linear equations based on a linear dynamic system, Comput. Math. Appl. 44 (2002).
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More information5 Linear Algebra and Inverse Problem
5 Linear Algebra and Inverse Problem 5.1 Introduction Direct problem ( Forward problem) is to find field quantities satisfying Governing equations, Boundary conditions, Initial conditions. The direct problem
More informationReview of similarity transformation and Singular Value Decomposition
Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm
More informationSolving a class of nonlinear two-dimensional Volterra integral equations by using two-dimensional triangular orthogonal functions.
Journal of Mathematical Modeling Vol 1, No 1, 213, pp 28-4 JMM Solving a class of nonlinear two-dimensional Volterra integral equations by using two-dimensional triangular orthogonal functions Farshid
More informationSTA141C: Big Data & High Performance Statistical Computing
STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude
More informationBanach Journal of Mathematical Analysis ISSN: (electronic)
Banach J. Math. Anal. 6 (2012), no. 1, 139 146 Banach Journal of Mathematical Analysis ISSN: 1735-8787 (electronic) www.emis.de/journals/bjma/ AN EXTENSION OF KY FAN S DOMINANCE THEOREM RAHIM ALIZADEH
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationMatrix decompositions
Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers
More informationSingular Value and Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices
Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 8 2017 Singular Value Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices Aliaa Burqan Zarqa University,
More informationThe Singular Value Decomposition and Least Squares Problems
The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition
More informationThe Deflation Accelerated Schwarz Method for CFD
The Deflation Accelerated Schwarz Method for CFD J. Verkaik 1, C. Vuik 2,, B.D. Paarhuis 1, and A. Twerda 1 1 TNO Science and Industry, Stieltjesweg 1, P.O. Box 155, 2600 AD Delft, The Netherlands 2 Delft
More informationAM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition
AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized
More informationAMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.
J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable
More informationLecture 5: Web Searching using the SVD
Lecture 5: Web Searching using the SVD Information Retrieval Over the last 2 years the number of internet users has grown exponentially with time; see Figure. Trying to extract information from this exponentially
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationSome Algorithms Providing Rigourous Bounds for the Eigenvalues of a Matrix
Journal of Universal Computer Science, vol. 1, no. 7 (1995), 548-559 submitted: 15/12/94, accepted: 26/6/95, appeared: 28/7/95 Springer Pub. Co. Some Algorithms Providing Rigourous Bounds for the Eigenvalues
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten
More information(Linear equations) Applied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB (Linear equations) Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots
More information