On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point

Applied Mathematics E-Notes, 7(2007), 65-70. ISSN 1607-2510. Available free at mirror sites of http://www.math.nthu.edu.tw/~amen/

On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point

Claudia Fassino

Received 17 June 2005

Abstract

We propose a criterion for establishing, when dealing with perturbed data, whether a vector $v$ belongs to a subspace $W$ of a vector space $U$ from a numerical point of view. The criterion formalizes the intuitive idea that, because of the errors affecting our knowledge of the vector and the subspace, we consider the exact vector to lie in the exact subspace if a sufficiently small perturbation of $v$ belongs to a sufficiently small perturbation of $W$. The criterion, chosen to be independent of the norm of $v$ and of the basis of $W$, is computationally simple to use, but it requires the choice of a threshold. Suitable values of the threshold, for which the criterion adheres to this intuitive idea, are found both when a basis of the exact subspace is known and when an orthonormal basis of the perturbed subspace is given.

1 Introduction

Several mathematical problems involve detecting whether an assigned vector $v_0$ of a vector space $U$ lies in an assigned subspace $W_0$ of $U$. When $v_0$ and $W_0$ are known exactly, the theoretical approach to this detection gives a well known, completely satisfactory answer. However, when we deal with real data, for example with physical measurements, the coordinates of the vectors are in general affected by errors whose size can be estimated. In this case the exact approach can lead to meaningless answers, especially when the vector is close to lying in the subspace. In this paper we present a suitable criterion for detecting numerical dependence.
This criterion is powerful enough to overcome the difficulties arising in the presence of perturbed data, in the sense that if the perturbed vector $v$ does not belong, from the numerical point of view, to the perturbed subspace $W$, we show that the exact vector $v_0$ does not lie in the exact subspace $W_0$ either. Vice versa, if $v$ numerically lies in $W$, we prove that there exists at least one small perturbation $\hat v$ of $v$ exactly belonging to a subspace $\hat W$ which is a slight perturbation of $W$. In this case, since $\hat v$ and $\hat W$ could correspond to the exact vector $v_0$ and the exact subspace $W_0$, which are obviously unknown, the criterion points out the possibility that $v_0$ belongs exactly to $W_0$. Therefore the criterion formalizes the intuitive idea that we consider the exact vector $v_0$ as lying in $W_0$ from a numerical point of view if $v$ is sufficiently near to $W$, and vice versa.

The criterion, presented in Section 2, has been chosen to be independent of the norm of $v$ and of the basis of the subspace $W$; moreover, it is computationally simple to use. However, it requires the choice of a suitable threshold in order to adhere to the intuitive idea described above. In Section 3 we show how to choose the threshold both in the simpler case, when a basis of the exact subspace $W_0$ is known, and when a set of orthonormal perturbed vectors, regarded as a basis of the perturbed subspace $W$, is given.

Our interest in this problem arises from the analysis of the Buchberger-Möller algorithm [1] for computing the Gröbner basis of an ideal of polynomials of $\mathbb{Q}[x_1,\dots,x_k]$ vanishing on a finite set of perturbed points of $\mathbb{Q}^k$. At each step, the numerical version of the Buchberger-Möller algorithm executes a fundamental check based on the linear dependence of a perturbed vector $v$ on a set of orthonormal perturbed vectors $v_1,\dots,v_n$, and so we need a criterion for the numerical belonging of a vector to a subspace. Since this algorithm is implemented using the CoCoA package [2], which performs all computations on the rational numbers in exact arithmetic, we do not deal with roundoff errors and are only interested in the effects of the errors that perturb the input points. Moreover, for the same reason, the vectors $v_1,\dots,v_n$, computed from perturbed data, are exactly orthonormal, and so we are interested in the case where an orthonormal basis of the perturbed subspace is known.

(Mathematics Subject Classifications: 15A03, 15A18, 65F99, 65G99. Author's address: Dipartimento di Matematica, Università di Genova, via Dodecaneso 35, 16146 Genova, Italy.)
2 On the Numerical Dependence

Intuitively, when the exact data are unknown, we consider the exact vector $v_0$ as lying in the exact subspace $W_0$ if a perturbation $v$ of $v_0$ is near to a perturbation $W$ of $W_0$. We formalize this idea by analyzing the ratio $\|\rho\|/\|v\|$, where $\rho$ is the component of $v$ orthogonal to $W$ and $\|\cdot\|$ is the vector 2-norm.

DEFINITION 2.1. Given a subspace $W$ and a threshold $S>0$, a vector $v$ numerically lies in $W$ if
$$\frac{\|\rho\|}{\|v\|} \le S, \qquad (1)$$
where $\rho$ is the component of $v$ orthogonal to $W$. In this case we write $v \in_S W$.

Condition (1) formalizes the intuitive idea described above in a way that depends only on the subspace $W$ and on the direction of $v$, and so is independent of $\|v\|$ and of the choice of the basis of $W$. Therefore we can suppose $\|v\|=1$ and $W$ described by an orthonormal basis, if necessary. Moreover, note that, given a basis $\{v_1,\dots,v_n\}$ of $W$, the vector $\rho$ can be obtained by computing the residual of a least squares problem with matrix $[v_1,\dots,v_n]$ and right-hand side $v$.

However, condition (1) depends on the choice of the threshold $S$, and an arbitrary choice of $S$ can lead to results that do not reflect the intuitive idea. It is therefore necessary to analyze how the threshold can be chosen in relation to the errors on the data. In the next section we show how to choose a suitable value of $S$ in both situations: when we know the exact subspace $W_0$ and when we know only an approximation of it.

3 The Choice of Suitable Thresholds

We first analyze the simpler case, when the exact subspace $W_0 \subseteq U$ is known. Since the vector $v$ is perturbed by errors, it can be considered as a representative element of a set $I$ which contains the unknown exact vector $v_0$ and its admissible perturbations. Given an estimation $\varepsilon$ of the relative error which perturbs $v$, we denote by $I$ the set of admissible perturbations of $v$:
$$I = \{t \in U : \|t - v\| \le \varepsilon \|v\|\}.$$
Following the intuitive idea of numerical dependence, a suitable threshold $S$ must be such that if $v \notin_S W_0$ then no vector $t \in I$ lies in $W_0$, which implies $v_0 \notin W_0$. Vice versa, if $v \in_S W_0$ there must exist a vector $\hat t \in I$ which belongs to $W_0$, so that it can happen that $v_0$ lies in $W_0$.

We want to show that, if $W_0$ is known, the value $S=\varepsilon$ is a suitable choice. If $\{v_1^{(0)},\dots,v_n^{(0)}\}$ is a basis of the exact subspace $W_0$, then $v$ can be written as $v=\sum_{i=1}^n \alpha_i v_i^{(0)} + \rho$, where $\rho \perp W_0$. Let $v \notin_\varepsilon W_0$: then, by condition (1), $\|\rho\| > \varepsilon\|v\|$. Since $\rho \perp W_0$, for each $t \in W_0$, $t=\sum_{i=1}^n \beta_i v_i^{(0)}$, we have
$$v - t = \sum_{i=1}^n (\alpha_i-\beta_i) v_i^{(0)} + \rho, \qquad \text{so that} \qquad \|v - t\|^2 = \Big\|\sum_{i=1}^n (\alpha_i-\beta_i) v_i^{(0)}\Big\|^2 + \|\rho\|^2.$$
We then have $\|v-t\| \ge \|\rho\| > \varepsilon\|v\|$, and therefore $t \notin I$. Vice versa, if $v \in_\varepsilon W_0$, that is $\|\rho\| \le \varepsilon\|v\|$, the vector $\hat t = v - \rho$, which belongs to $W_0$, is such that $\|v - \hat t\| = \|\rho\| \le \varepsilon\|v\|$. It follows that there exists an element of $I$, the vector $\hat t$, which also lies in $W_0$.

The more general situation, where not only the exact vector $v_0$ but also the exact subspace $W_0$ is unknown, requires a different analysis. We suppose we know a set of orthonormal vectors $\{v_1,\dots,v_n\}$, an approximation of the orthonormal basis $\{v_1^{(0)},\dots,v_n^{(0)}\}$ of $W_0$, and a perturbed vector $v$, with $\|v\|=1$, an approximation of $v_0$.
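Condition (1) and the known-subspace threshold $S=\varepsilon$ can be sketched directly with a least squares residual, as noted above. The following is our own illustration, assuming NumPy; the function name and the example data are hypothetical, not from the paper.

```python
import numpy as np

def numerically_lies_in(v, W_basis, S):
    """Condition (1): ||rho|| / ||v|| <= S, where rho is the component of v
    orthogonal to W = span of the columns of W_basis, obtained as the
    residual of a least squares problem with matrix W_basis."""
    coeffs, *_ = np.linalg.lstsq(W_basis, v, rcond=None)
    rho = v - W_basis @ coeffs
    return np.linalg.norm(rho) / np.linalg.norm(v) <= S

# W_0 = span{e1, e2} in R^3; v deviates from W_0 by a component of size 1e-6.
W0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
v = np.array([1.0, 1.0, 1e-6])

eps = 1e-4                                  # assumed relative error on v
print(numerically_lies_in(v, W0, S=eps))    # True: ||rho||/||v|| ~ 7e-7 <= eps

# When v numerically lies in W_0, t_hat = v - rho belongs to W_0 and
# satisfies ||v - t_hat|| = ||rho|| <= eps * ||v||, so t_hat is admissible.
coeffs, *_ = np.linalg.lstsq(W0, v, rcond=None)
rho = v - W0 @ coeffs
t_hat = v - rho
assert np.linalg.norm(v - t_hat) <= eps * np.linalg.norm(v)
```

With a smaller threshold, for example $S = 10^{-8}$, the same $v$ no longer numerically lies in $W_0$, matching the intuition that the answer depends on the assumed error level.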
Denoting by $W$ the subspace spanned by $\{v_1,\dots,v_n\}$, we apply condition (1) to detect whether the perturbed vector $v$ numerically belongs to the perturbed subspace $W$, and, as above, we have to choose a suitable threshold $S$. With respect to the previous case there is a conceptual difference, because in this situation we must also analyze the effects of small perturbations of the basis of $W$. For this reason, a suitable threshold is such that if $v \notin_S W$, the vectors $\{v_1,\dots,v_n,v\}$ are linearly independent even if all of them are slightly perturbed. Vice versa, if $v \in_S W$, we require that there exists a small perturbation of $\{v_1,\dots,v_n,v\}$ which is a set of linearly dependent vectors.

We suppose that $\varepsilon$ is an estimation of the relative errors such that $\|V - V_0\| \le \varepsilon\|V\|$, where $V_0=[v_1^{(0)},\dots,v_n^{(0)},v_0]$ and $V=[v_1,\dots,v_n,v]$ are, respectively, the matrices formed by the exact and the perturbed vectors, and $\|\cdot\|$ is the 2-norm of a matrix. We define the set of admissible perturbations as
$$I = \{B : \|V - B\| \le \varepsilon \|V\|\}.$$
A suitable threshold must be such that, if $v \notin_S W$, all the elements of $I$ are full rank matrices, and so, since $V_0 \in I$, the exact vectors $v_1^{(0)},\dots,v_n^{(0)},v_0$ are linearly independent. Vice versa, if $v \in_S W$, the value of $S$ must be such that a rank deficient matrix $\hat B \in I$ exists. Since $\hat B$ could correspond to $V_0$, the exact vectors $v_1^{(0)},\dots,v_n^{(0)}$ and $v_0$ could be linearly dependent.

It is well known [3] that, if $\sigma_{n+1}$ is the least singular value of $V$, then
$$\sigma_{n+1} = \min_{\mathrm{rank}(B)=n} \|B - V\|.$$
Therefore a suitable threshold must be such that the condition $v \notin_S W$ implies
$$\varepsilon \|V\| < \sigma_{n+1}, \qquad (2)$$
ensuring that all the elements of $I$ are full rank matrices. Vice versa, the condition $v \in_S W$ must imply $\varepsilon\|V\| \ge \sigma_{n+1}$. Since the largest singular value $\sigma_1$ of $V$ satisfies $\sigma_1 = \|V\|$, inequality (2) can be rewritten as $\varepsilon\sigma_1 < \sigma_{n+1}$.

Condition (2) points out that, when the relative error $\varepsilon$ is not smaller than 1, the problem of the belonging of $v$ to $W$ is intrinsically ill posed. In fact, in this case $\varepsilon\|V\| = \varepsilon\sigma_1 \ge \sigma_{n+1}$, and then, independently of the choice of $S$, there always exists a rank deficient matrix $B \in I$. Therefore from now on we suppose $\varepsilon < 1$.

The following theorem gives the singular values of the matrix $V$.

THEOREM 3.1. Let $\{v_1,\dots,v_n\}$ be a set of orthonormal vectors in $\mathbb{R}^m$, $m>n$, and let $v \in \mathbb{R}^m$ be a vector such that
$$v = \sum_{i=1}^n \alpha_i v_i + \rho \quad \text{with} \quad \rho \perp \mathrm{Span}\{v_1,\dots,v_n\}.$$
Then the singular values of the matrix $V=[v_1,\dots,v_n,v]$ are
$$\sigma_1 = \sqrt{\tfrac{1}{2}\Big(1+\|v\|^2 + \sqrt{(1+\|v\|^2)^2 - 4\|\rho\|^2}\Big)}, \qquad \sigma_2 = \dots = \sigma_n = 1, \qquad \sigma_{n+1} = \sqrt{\tfrac{1}{2}\Big(1+\|v\|^2 - \sqrt{(1+\|v\|^2)^2 - 4\|\rho\|^2}\Big)}.$$

PROOF. It is well known [4] that the values $\sigma_1^2,\dots,\sigma_{n+1}^2$ are the eigenvalues of the matrix $M = V^t V$,
$$M = \begin{pmatrix} I_n & \alpha \\ \alpha^t & \|v\|^2 \end{pmatrix},$$
where $\alpha^t = (\alpha_1,\dots,\alpha_n)$ and $I_n$ is the $n \times n$ identity matrix.

Let $M_j$ be the principal submatrix of $M$ formed by selecting the last $j$ rows and the last $j$ columns, and let $d_j$ be the determinant of $M_j - \lambda I_j$. Expanding $d_{n+1}$ with respect to the first row, we have
$$d_{n+1} = (1-\lambda)\, d_n - \alpha_1^2 (1-\lambda)^{n-1}$$
and, in an analogous way,
$$d_{j+1} = (1-\lambda)\, d_j - \alpha_{n-j+1}^2 (1-\lambda)^{j-1},$$
so that
$$d_{n+1} = (1-\lambda)^n d_1 - \|\alpha\|^2 (1-\lambda)^{n-1}.$$
Since $d_1 = \|v\|^2 - \lambda$ and $\|v\|^2 = \|\rho\|^2 + \|\alpha\|^2$, we obtain
$$d_{n+1} = (1-\lambda)^{n-1}\big[\lambda^2 - (1+\|v\|^2)\lambda + \|\rho\|^2\big],$$
so that the eigenvalues of $V^t V$ are 1, with algebraic multiplicity $n-1$, and
$$\lambda_+ = \tfrac{1}{2}\Big[1+\|v\|^2 + \sqrt{(1+\|v\|^2)^2 - 4\|\rho\|^2}\Big], \qquad \lambda_- = \tfrac{1}{2}\Big[1+\|v\|^2 - \sqrt{(1+\|v\|^2)^2 - 4\|\rho\|^2}\Big].$$
Since $\|\rho\| \le \|v\|$, it is easy to verify that $\lambda_+ \ge 1$ and $\lambda_- \le 1$, from which the theorem follows.

As a consequence of Theorem 3.1, since $\|v\|=1$, inequality (2) can be rewritten as
$$\varepsilon \sqrt{1 + \sqrt{1-\|\rho\|^2}} < \sqrt{1 - \sqrt{1-\|\rho\|^2}},$$
and therefore, when $\varepsilon < 1$, it is equivalent to
$$\|\rho\| > \frac{2\varepsilon}{1+\varepsilon^2}.$$
The previous analysis indicates that the value $S = 2\varepsilon/(1+\varepsilon^2)$ is a suitable choice for the threshold in this more general case.

4 Conclusions

We have proposed a threshold dependent criterion for establishing when a vector $v$ belongs to a subspace $W$ from a numerical point of view, when $v$ and, in the more general case, $W$ are perturbed by errors. When an estimation $\varepsilon$ of the relative error on the data is known, we have shown that the following values of $S$ are suitable thresholds:

i) $S = \varepsilon$ if a basis of the exact subspace is known;

ii) $S = 2\varepsilon/(1+\varepsilon^2)$ if an orthonormal basis of the perturbed subspace is given and $\|v\|=1$.

Several numerical tests confirm, from the computational point of view, the theoretical evaluations (i) and (ii) of the threshold $S$.
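As an illustrative check, not taken from the paper, both the closed-form singular values of Theorem 3.1 and the equivalence behind threshold (ii) can be verified numerically; the example below assumes NumPy, and all data in it are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 4

# Orthonormal v_1,...,v_n plus a unit direction orthogonal to their span.
Q, _ = np.linalg.qr(rng.standard_normal((m, n + 1)))
basis, rho_dir = Q[:, :n], Q[:, n]

# Build v = sum(alpha_i v_i) + rho with ||v|| = 1 and ||rho|| = 0.2.
norm_rho = 0.2
alpha = rng.standard_normal(n)
alpha *= np.sqrt(1 - norm_rho**2) / np.linalg.norm(alpha)
v = basis @ alpha + norm_rho * rho_dir

# Theorem 3.1: singular values of V = [v_1,...,v_n, v] in closed form.
V = np.column_stack([basis, v])
sigma = np.linalg.svd(V, compute_uv=False)
disc = np.sqrt((1 + 1.0) ** 2 - 4 * norm_rho**2)   # (1+||v||^2)^2 - 4||rho||^2, ||v|| = 1
assert np.allclose(sigma[0], np.sqrt(0.5 * (2 + disc)))
assert np.allclose(sigma[1:n], 1.0)                # sigma_2 = ... = sigma_n = 1
assert np.allclose(sigma[n], np.sqrt(0.5 * (2 - disc)))

# Threshold (ii): eps * sigma_1 < sigma_{n+1}  iff  ||rho|| > 2*eps/(1 + eps**2).
for eps in [0.05, 0.1, 0.2, 0.5]:
    S = 2 * eps / (1 + eps**2)
    assert (eps * sigma[0] < sigma[n]) == (norm_rho > S)
```

The loop at the end exercises error levels on both sides of the threshold: for $\varepsilon$ with $2\varepsilon/(1+\varepsilon^2) < \|\rho\| = 0.2$ the full-rank condition (2) holds, and for larger $\varepsilon$ it fails, exactly as the equivalence predicts.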

References

[1] B. Buchberger and H. M. Möller, The construction of multivariate polynomials with preassigned zeros, in: Proceedings of the European Computer Algebra Conference (EUROCAM '82), Vol. 144 of Lecture Notes in Computer Science, Springer, Marseille, France, 1982, pp. 24-31.

[2] J. Abbott, A. Bigatti, M. Kreuzer and L. Robbiano, Computing ideals of points, J. Symbolic Comput., 30(4)(2000), 341-356.

[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.

[4] G. H. Golub and C. F. Van Loan, Matrix Computations, Second Edition, The Johns Hopkins University Press, Baltimore, Maryland, 1991.