
International Mathematical Forum, 1, 2006, no. 22, 1067-1076

On the Solution of Constrained and Weighted Linear Least Squares Problems

Mohammedi R. Abdel-Aziz^1
Department of Mathematics and Computer Science
Faculty of Science, Kuwait University
P. O. Box 5969, Safat 13060, Kuwait
mraziz@mcs.sci.kuniv.edu.kw

Abstract

This paper investigates some theoretical characteristics of the solution of the constrained and weighted linear least squares (CWLS) problem. The investigation is based on perturbing the CWLS problem and deriving bounds for the relative error. Explicit expressions for the inverse and the Moore-Penrose inverse are used to estimate these bounds. Moreover, the effects of weights are introduced.

Mathematics Subject Classification: Primary 65F15; Secondary 65G05

Keywords: least squares problems, perturbation, weights

1 Introduction

Least squares (LS) problems are important in many areas of scientific computing, and the constrained least squares problem with a full column rank weight matrix is one class of them. We are concerned with the connection between condition numbers and the rounding error in the solution of the CWLS problem. Since conditioning is an intrinsic feature of least squares problems, it is necessary to study the characteristics of the solution. We investigate the sensitivity of the solution x ∈ R^n that satisfies the overdetermined system of equations

    Ax ≈ b,    (1.1)

where A ∈ R^{m×n} and b ∈ R^m. System (1.1) corresponds to the CWLS problem

    minimize (d - Cx)^T W_1^{-1} (d - Cx)   subject to   Bx = c,    (1.2)

where W_1 is a symmetric positive definite matrix, A = (B^T C^T)^T and b = (c^T d^T)^T, with B ∈ R^{k×n}, C ∈ R^{l×n}, c ∈ R^k and d ∈ R^l, where k + l = m and m ≥ n ≥ k. Problem (1.2) arises in many practical applications, including linear programming, electrical networks, boundary value problems, analysis of large-scale structures and inequality constrained least squares problems [10, 12].

^1 On leave from the Department of Mathematics, Faculty of Science, Alexandria University, Egypt.

We will assume that

    rank(B) = k   and   rank(A) = n.    (1.3)

The condition that B has full row rank ensures that the system Bx = c is consistent, and hence that Problem (1.2) has a solution. The second condition in (1.3) guarantees that the solution is unique [2]. An equivalent formulation of Problem (1.2) (see [6]) is

    [ O    O    B ] [ λ ]   [ c ]
    [ O    W_1  C ] [ μ ] = [ d ],    (1.4)
    [ B^T  C^T  O ] [ x ]   [ 0 ]

where λ ∈ R^k is a vector of Lagrange multipliers and W_1 μ is the residual. The augmented system (1.4) is formed from the Karush-Kuhn-Tucker conditions of the CWLS Problem (1.2) [4].

Quadratic problems have been studied intensively [13, 9]. LS problems with quadratic equality constraints and with weighted equality constraints have been treated by Gander [7], Van Loan [15], James [11], Hough et al. [10], Wei [16] and Bobrovnikova et al. [3]. Techniques for solving unconstrained and constrained LS problems are found in [2, 8]. Some algorithms for solving the unweighted class of Problem (1.2) are based on introducing a Lagrange multiplier and generating a sequence of lower and upper bounds without many computations [9]. A superlinearly convergent algorithm has been developed to solve this constrained LS class by recasting the problem as a parameterized eigenvalue problem and computing the optimal solution from the parameterized eigenvector [1].

In this contribution, we investigate the sensitivity of the solution of Problem (1.2) and the effects of the weights on it. The investigation is based on constructing bounds for the relative error of the equivalent formulation (1.4) and on studying ε-dependent weights.

Throughout the paper, R^{m×n} denotes the linear space of all m×n real matrices, and A^T denotes the transpose of A. For matrices A and S of appropriate sizes, the weighted Moore-Penrose inverse of A is defined as the unique solution Z of

    AZA = A,   ZAZ = Z,   (S^T S Z A)^T = S^T S Z A,   AZ symmetric [5].

It is given by Z = A^+_S = (I - (SP)^+ S) A^+, where P = I - A^+ A. When S = I, Z reduces to the standard Moore-Penrose inverse A^+.
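To make the formulation concrete, here is a minimal numerical sketch (not part of the paper; the sizes, weights and variable names are illustrative assumptions). It assembles the augmented system (1.4) for a small random CWLS instance and checks the computed x against a null-space-method solve of (1.2):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, n = 2, 6, 5                    # m = k + l, with m >= n >= k
m = k + l
B = rng.standard_normal((k, n))      # constraint block, rank(B) = k
C = rng.standard_normal((l, n))      # data block, rank(A) = n for A = [B; C]
c = rng.standard_normal(k)
d = rng.standard_normal(l)
w = rng.uniform(0.5, 2.0, l)
W1 = np.diag(w)                      # symmetric positive definite weight

# Augmented (KKT) system (1.4) in the unknowns (lambda, mu, x)
K = np.block([
    [np.zeros((k, k)), np.zeros((k, l)), B],
    [np.zeros((l, k)), W1,               C],
    [B.T,              C.T,              np.zeros((n, n))],
])
lam, mu, x = np.split(np.linalg.solve(K, np.concatenate([c, d, np.zeros(n)])),
                      [k, m])

# Null-space method for comparison: x = x0 + Z y with B x0 = c and B Z = 0,
# minimizing (d - C x)^T W1^{-1} (d - C x) over y
x0 = np.linalg.lstsq(B, c, rcond=None)[0]
Z = np.linalg.svd(B)[2][k:].T        # orthonormal basis of null(B)
W1inv = np.diag(1.0 / w)
y = np.linalg.solve(Z.T @ C.T @ W1inv @ C @ Z,
                    Z.T @ C.T @ W1inv @ (d - C @ x0))

print(np.linalg.norm(B @ x - c))         # constraint satisfied, ~1e-15
print(np.linalg.norm(x - (x0 + Z @ y)))  # agrees with the null-space solution
```

The null-space comparison is one standard way to solve (1.2); the augmented system is the formulation whose sensitivity the paper analyzes.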

2 Sensitivity of the solution

Throughout this section, we reformulate system (1.4) as

    [ Q    A ] [ z ]   [ b ]                 [ O   O   ]
    [ A^T  O ] [ x ] = [ 0 ],   where   Q =  [ O   W_1 ],    (2.1)

and z = (λ^T μ^T)^T. System (2.1) is a generalized formulation covering all classes of linear least squares problems. For the ordinary LS problem we set Q = I_m, and for the unweighted, constrained LS problem we set W_1 = I_{m-k}. The weighted, unconstrained LS problem, in which Q is a symmetric positive definite matrix, can be formulated as

    minimize (b - Ax)^T Q^{-1} (b - Ax),

where A ∈ R^{m×n}, b ∈ R^m, x ∈ R^n and m ≥ n. We assume that the matrices A and (Q A)^T have linearly independent columns [8].

The following lemma establishes the inverse of the coefficient matrix of system (2.1).

Lemma 2.1 Let conditions (1.3) hold, and set W = Q^{1/2}, P = I - AA^+ and G = (A^T)^+_W. Then the inverse of the coefficient matrix of (2.1) is

    [ (WP)^+ ((WP)^+)^T    G            ]
    [ G^T                  -(WG)^T (WG) ].

Proof: Eldén [5] shows that the inverse of the coefficient matrix of (2.1) is

    [ P (WP)^+ ((WP)^+)^T P^T    G            ]
    [ G^T                        -(WG)^T (WG) ].

The proof is completed by taking into account that Range((WP)^+) = Range((WP)^T) = Range(PW) ⊆ Range(P), which provides that P(WP)^+ = (WP)^+.
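The block-inverse formula can be checked numerically. The sketch below is not from the paper; it uses the expression G = (I - (WP)^+ W)(A^+)^T, obtained by applying the definition of the weighted Moore-Penrose inverse from Section 1 to A^T, and illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
k, l, n = 2, 5, 4
m = k + l
B = rng.standard_normal((k, n))
C = rng.standard_normal((l, n))
A = np.vstack([B, C])                          # rank(A) = n
Q = np.diag(np.concatenate([np.zeros(k), rng.uniform(0.5, 2.0, l)]))
W = np.sqrt(Q)                                 # W = Q^{1/2} (Q is diagonal here)

Aplus = np.linalg.pinv(A)
P = np.eye(m) - A @ Aplus                      # orthogonal projector onto null(A^T)
WPplus = np.linalg.pinv(W @ P)
G = (np.eye(m) - WPplus @ W) @ Aplus.T         # weighted pseudoinverse of A^T

M = np.block([[Q, A], [A.T, np.zeros((n, n))]])
Minv = np.linalg.inv(M)

print(np.linalg.norm(Minv[:m, :m] - WPplus @ WPplus.T))    # (1,1) block, ~1e-14
print(np.linalg.norm(Minv[:m, m:] - G))                    # (1,2) block, ~1e-14
print(np.linalg.norm(Minv[m:, m:] + (W @ G).T @ (W @ G)))  # (2,2) block, ~1e-14
```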

We wish to determine the sensitivity of the solution of Problem (2.1) to perturbations δA, δQ and δb of A, Q and b, respectively. We assume that conditions (1.3) hold for system (1.4) and for the perturbed data Â = A + δA, Q̂ = Q + δQ and b̂ = b + δb, which is certainly true if δA, δQ and δb are sufficiently small. The normwise perturbations are measured by the smallest ε for which

    ‖δA‖_F ≤ ε ‖A‖_F,   ‖δQ‖_F ≤ ε ‖Q‖_F,   ‖δb‖_2 ≤ ε ‖b‖_2,    (2.2)

where A, Q and b themselves serve as the matrices and vector of tolerances. The Frobenius norm is used for matrices since it leads to more cheaply computable bounds than the 2-norm. Thus, we define the condition numbers

    K_Q(A) = ‖A‖_F ‖G‖_2,   K_A(Q) = ‖W‖_F ‖(WP)^+‖_2   and   K_A = ‖A‖_F ‖A^+‖_2.    (2.3)

The following lemma gives normwise perturbation bounds for δx and δz.

Lemma 2.2 Let the assumptions of Lemma 2.1 hold. Then

    ‖δx‖_2 / ‖x‖_2 ≤ ε K_Q(A) ( ‖b‖_2 / (‖A‖_F ‖x‖_2) + ‖Q‖_F ‖z‖_2 / (‖A‖_F ‖x‖_2) + 1 )
                     + ε K_Q^2(A) ‖Q‖_2 ‖z‖_2 / (‖A‖_F ‖x‖_2) + O(ε²),    (2.4)

    ‖δz‖_2 / ‖z‖_2 ≤ ε K_A^2(Q) ( ‖b‖_2 / (‖Q‖_F ‖z‖_2) + ‖A‖_F ‖x‖_2 / (‖Q‖_F ‖z‖_2) + 1 )
                     + ε K_Q(A) + O(ε²).    (2.5)

Proof: The perturbed version of system (2.1) can be written as

    [ Q̂    Â ] [ ẑ ]   [ b̂ ]
    [ Â^T  O ] [ x̂ ] = [ 0 ].    (2.6)

Rearranging terms in (2.6) and taking into account that ẑ = z + δz and x̂ = x + δx, we obtain

    [ Q    A ] [ ẑ ]   [ b̂ - δQ ẑ - δA x̂ ]
    [ A^T  O ] [ x̂ ] = [ -(δA)^T ẑ       ].    (2.7)

Subtracting (2.1) from (2.7), we obtain

    [ Q    A ] [ δz ]   [ δb - δQ ẑ - δA x̂ ]
    [ A^T  O ] [ δx ] = [ -(δA)^T ẑ        ].    (2.8)

Applying Lemma 2.1 and using the definition of Q, we obtain

    δx = G^T (δb - δQ ẑ - δA x̂) + G^T Q G (δA)^T ẑ,
    δz = (WP)^+ ((WP)^+)^T (δb - δQ ẑ - δA x̂) - G (δA)^T ẑ.

Using the definitions of x̂ and ẑ, we obtain

    δx = G^T (δb - δQ z - δA x) + G^T Q G (δA)^T z - G^T (δQ δz + δA δx) + G^T Q G (δA)^T δz,
    δz = (WP)^+ ((WP)^+)^T (δb - δQ z - δA x) - G (δA)^T z - (WP)^+ ((WP)^+)^T (δQ δz + δA δx) - G (δA)^T δz.

But δx and δz are of first order in ε, so we obtain the first-order expressions

    δx = G^T (δb - δQ z - δA x) + G^T Q G (δA)^T z + O(ε²),    (2.9)
    δz = (WP)^+ ((WP)^+)^T (δb - δQ z - δA x) - G (δA)^T z + O(ε²).    (2.10)

Taking 2-norms of both sides, using (2.2), and dividing (2.9) and (2.10) by ‖x‖_2 and ‖z‖_2, respectively, we obtain

    ‖δx‖_2 / ‖x‖_2 ≤ ε ‖G‖_2 ( ‖b‖_2/‖x‖_2 + ‖Q‖_F ‖z‖_2/‖x‖_2 + ‖A‖_F ) + ε ‖G‖_2^2 ‖Q‖_2 ‖A‖_F ‖z‖_2/‖x‖_2 + O(ε²),
    ‖δz‖_2 / ‖z‖_2 ≤ ε ‖(WP)^+‖_2^2 ( ‖b‖_2/‖z‖_2 + ‖Q‖_F + ‖A‖_F ‖x‖_2/‖z‖_2 ) + ε ‖G‖_2 ‖A‖_F + O(ε²).

The proof is completed by using the definitions of the condition numbers (2.3).

The difference between the bounds obtained in Lemma 2.2 and those obtained before in [16] stems from the explicit expression for the inverse in Lemma 2.1.
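As a sanity check, the following Monte Carlo sketch (not from the paper; the sizes, the perturbation level and the choice of a symmetric positive definite Q, i.e. the weighted unconstrained case, are illustrative assumptions) perturbs A, Q and b at level ε and compares the observed relative change in x with the right-hand side of (2.4), with G taken from the block inverse of Lemma 2.1:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, eps = 6, 3, 1e-7
A = rng.standard_normal((m, n))
Q = np.diag(rng.uniform(0.5, 2.0, m))       # SPD: the weighted unconstrained case
b = rng.standard_normal(m)

M = np.block([[Q, A], [A.T, np.zeros((n, n))]])
Minv = np.linalg.inv(M)
zx = Minv @ np.concatenate([b, np.zeros(n)])
z, x = zx[:m], zx[m:]
G = Minv[:m, m:]                            # G-block of Lemma 2.1

nA, nQ, nb = np.linalg.norm(A, 'fro'), np.linalg.norm(Q, 'fro'), np.linalg.norm(b)
nx, nz = np.linalg.norm(x), np.linalg.norm(z)
KQA = nA * np.linalg.norm(G, 2)             # K_Q(A) from (2.3)

scale = lambda E, t: E * (t / np.linalg.norm(E))
dA = scale(rng.standard_normal((m, n)), eps * nA)
S = rng.standard_normal((m, m))
dQ = scale(S + S.T, eps * nQ)               # symmetric, ||dQ||_F = eps ||Q||_F
db = scale(rng.standard_normal(m), eps * nb)

Mp = np.block([[Q + dQ, A + dA], [(A + dA).T, np.zeros((n, n))]])
xp = np.linalg.solve(Mp, np.concatenate([b + db, np.zeros(n)]))[m:]

observed = np.linalg.norm(xp - x) / nx
bound = (eps * KQA * (nb / (nA * nx) + nQ * nz / (nA * nx) + 1)
         + eps * KQA**2 * np.linalg.norm(Q, 2) * nz / (nA * nx))
print(observed, bound)                      # observed <= bound, to first order
```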

The following lemma shows that the condition number K_Q(A) is bounded in terms of K_A(Q).

Lemma 2.3 Let the assumptions of Lemma 2.1 hold. Then

    K_Q(A) ≤ ‖A‖_F ‖A^+‖_2 (1 + K_A^2(Q)).

Proof: From the definition of A^+_S in the first section, we can easily obtain

    G = (A^T)^+_W = (I - (WP)^+ ((WP)^+)^T Q) (A^+)^T.

Taking norms of both sides,

    ‖G‖_2 ≤ ‖A^+‖_2 [1 + ‖(WP)^+ ((WP)^+)^T Q‖_2] ≤ ‖A^+‖_2 [1 + ‖W‖_2^2 ‖(PW)^+‖_2^2]
          = ‖A^+‖_2 [1 + (‖W‖_2 ‖(PW)^+‖_2)^2] ≤ ‖A^+‖_2 [1 + K_A^2(Q)].

The proof is completed by using formula (2.3).

The following lemma checks the error bound of Lemma 2.2 by recovering a perturbation bound for the standard LS problem.

Lemma 2.4 Let B = O, W_1 = I, A = C, c = O and b = d in (1.4), and assume that ‖b‖_2 ≤ ‖μ‖_2 + ‖A‖_F ‖x‖_2. Then

    ‖δx‖_2 / ‖x‖_2 ≤ ε K(A) ( 2 + (K(A) + 2) ‖μ‖_2 / (‖A‖_F ‖x‖_2) ) + O(ε²),    (2.11)

where K(A) = ‖A‖_F ‖A^+‖_2.

Proof: Introducing the assumptions of the lemma into the first inequality of Lemma 2.2, we obtain

    ‖δx‖_2 / ‖x‖_2 ≤ ε K(A) ( (‖μ‖_2 + ‖A‖_F ‖x‖_2) / (‖A‖_F ‖x‖_2) + ‖μ‖_2 / (‖A‖_F ‖x‖_2) + 1 )
                     + ε K^2(A) ‖μ‖_2 / (‖A‖_F ‖x‖_2) + O(ε²).

The proof is completed by rearranging the terms.
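A quick numerical sketch of the classical bound (2.11) for an ordinary LS problem (not part of the paper; sizes and the perturbation level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, eps = 8, 4, 1e-8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.linalg.lstsq(A, b, rcond=None)[0]
mu = b - A @ x                                  # residual (the mu of Lemma 2.4)
nA, nx = np.linalg.norm(A, 'fro'), np.linalg.norm(x)
KA = nA * np.linalg.norm(np.linalg.pinv(A), 2)  # K(A) = ||A||_F ||A^+||_2

dA = rng.standard_normal((m, n))
dA *= eps * nA / np.linalg.norm(dA)
db = rng.standard_normal(m)
db *= eps * np.linalg.norm(b) / np.linalg.norm(db)
xp = np.linalg.lstsq(A + dA, b + db, rcond=None)[0]

lhs = np.linalg.norm(xp - x) / nx
rhs = eps * KA * (2 + (KA + 2) * np.linalg.norm(mu) / (nA * nx))
print(lhs, rhs)                                 # lhs <= rhs, up to O(eps^2)
```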

The bound of inequality (2.4) does not yield a condition number for Problem (1.2), since it is not attainable. Thus, we must combine the two terms in δA and δQ in formulas (2.9) and (2.10). This can be done by using the vec operator, which stacks the columns of a matrix into one long column, together with the tensor product X ⊗ Y = (x_ij Y) (see [14]). The following lemma establishes a sharp bound for the solution of Problem (1.2) by applying the vec operator and using the property vec(ZXY) = (Y^T ⊗ Z) vec(X).

Lemma 2.5 Let the assumptions of Lemma 2.1 hold and set

    Γ_1 = (1/‖x‖_2) ( ‖G^T‖_2 ‖b‖_2 + ‖z^T ⊗ G^T‖_2 ‖Q‖_F + ‖(z^T ⊗ (G^T Q G)) Π - x^T ⊗ G^T‖_2 ‖A‖_F ),
    Γ_2 = (1/‖z‖_2) ( ‖D‖_2 ‖b‖_2 + ‖z^T ⊗ D‖_2 ‖Q‖_F + ‖(z^T ⊗ G) Π + x^T ⊗ D‖_2 ‖A‖_F ).

Then

    ‖δx‖_2 / ‖x‖_2 ≤ Γ_1 ε + O(ε²)   and   ‖δz‖_2 / ‖z‖_2 ≤ Γ_2 ε + O(ε²).

Proof: Applying the vec operator and the property given above to formulas (2.9) and (2.10), and setting D = (WP)^+ ((WP)^+)^T, we obtain

    δx = G^T δb - (z^T ⊗ G^T) vec(δQ) - (x^T ⊗ G^T) vec(δA) + (z^T ⊗ (G^T Q G)) vec(δA^T) + O(ε²),    (2.12)
    δz = D δb - (z^T ⊗ D) vec(δQ) - (x^T ⊗ D) vec(δA) - (z^T ⊗ G) vec(δA^T) + O(ε²).    (2.13)

Introducing the identity vec(δA^T) = Π vec(δA) into formulas (2.12) and (2.13), where Π is the vec-permutation matrix [14], we obtain

    δx = G^T δb - (z^T ⊗ G^T) vec(δQ) + [ (z^T ⊗ (G^T Q G)) Π - (x^T ⊗ G^T) ] vec(δA) + O(ε²),    (2.14)
    δz = D δb - (z^T ⊗ D) vec(δQ) - [ (x^T ⊗ D) + (z^T ⊗ G) Π ] vec(δA) + O(ε²).    (2.15)

The proof is completed by taking 2-norms of formulas (2.14) and (2.15), setting ‖vec(δA)‖_2 = ‖δA‖_F, and using formula (2.2).
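The expansion (2.14) is easy to validate numerically. The sketch below is not from the paper; its setup mirrors the earlier sketches, and the loop-built Π is just one way to form the vec-permutation (commutation) matrix with vec(E^T) = Π vec(E):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, eps = 5, 3, 1e-7
A = rng.standard_normal((m, n))
Q = np.diag(rng.uniform(0.5, 2.0, m))
b = rng.standard_normal(m)

M = np.block([[Q, A], [A.T, np.zeros((n, n))]])
Minv = np.linalg.inv(M)
zx = Minv @ np.concatenate([b, np.zeros(n)])
z, x = zx[:m], zx[m:]
G = Minv[:m, m:]

vec = lambda X: X.reshape(-1, order='F')        # column-stacking vec operator
Pi = np.zeros((m * n, m * n))                   # vec-permutation matrix
for i in range(m):
    for j in range(n):
        Pi[j + i * n, i + j * m] = 1.0          # vec(E^T) = Pi @ vec(E)

dA = eps * rng.standard_normal((m, n))
S = rng.standard_normal((m, m))
dQ = eps * (S + S.T)
db = eps * rng.standard_normal(m)

# first-order formula (2.14)
dx1 = (G.T @ db
       - np.kron(z[None, :], G.T) @ vec(dQ)
       + (np.kron(z[None, :], G.T @ Q @ G) @ Pi
          - np.kron(x[None, :], G.T)) @ vec(dA))

# exact perturbed solve for comparison
Mp = np.block([[Q + dQ, A + dA], [(A + dA).T, np.zeros((n, n))]])
dx = np.linalg.solve(Mp, np.concatenate([b + db, np.zeros(n)]))[m:] - x
print(np.linalg.norm(dx - dx1))                 # O(eps^2), ~1e-13
```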

3 Effects of Weights

The goal of this section is to study the effect of the weight matrix W = diag(w_1, w_2, ..., w_{m+n}), with w_j ≥ 0 for 1 ≤ j ≤ m+n, on the solution of the weighted least squares (WLS) problem

    minimize ‖W [Hx - h]‖_2,   rank(H) = n.    (3.1)

Consider the solution x of the unweighted LS problem

    minimize ‖Hx - h‖_2.    (3.2)

The solutions x_W and x of problems (3.1) and (3.2) satisfy

    x_W - x = [H^T W^2 H]^{-1} H^T [W^2 - I] [h - Hx].

This shows that row weighting affects the LS solution, and that x_W = x when h ∈ Range(H). These effects can be clarified through the entries of the residual. For simplicity, we define

    W(α) = diag(1, 1, ..., 1+α, 1, ..., 1) ∈ R^{(m+n)×(m+n)},

where α > -1, e_k^T W e_k = 1 + α, and e_k ∈ R^{m+n} is the coordinate vector with 1 in its k-th position. Thus we can write W(α) = I + α e_k e_k^T, and H is a full column rank matrix.

Lemma 3.1 If x(α) is the solution of the WLS problem

    minimize ‖W(α) [Hx - h]‖_2    (3.3)

and its residual is r(α) = h - H x(α), then

    r(α) = ( I - β H [H^T H]^{-1} H^T e_k e_k^T / (1 + β e_k^T H [H^T H]^{-1} H^T e_k) ) r,    (3.4)

where β = α(α + 2).

Proof: The normal equations for problem (3.3) are

    [H^T W^2(α) H] x(α) = H^T W^2(α) h.    (3.5)

But W^2(α) = I + β e_k e_k^T. Hence formula (3.5) becomes

    [H^T H + β H^T e_k e_k^T H] x(α) = H^T h + β H^T e_k e_k^T h.    (3.6)

Multiplying both sides of equality (3.6) by H [H^T H]^{-1} and subtracting each side from h, we obtain

    (h - H x(α)) - β H [H^T H]^{-1} H^T e_k e_k^T H x(α) = h - H [H^T H]^{-1} H^T h - β H [H^T H]^{-1} H^T e_k e_k^T h.

Introducing r(α) = h - H x(α), r = h - Hx and x = [H^T H]^{-1} H^T h into the previous equality, we obtain

    r(α) = [ I + β H [H^T H]^{-1} H^T e_k e_k^T ]^{-1} r.    (3.7)

The inverse of the rank-one update [I + e_k e_k^T] takes the form [I - (1/2) e_k e_k^T]. Thus the inverse of the matrix [I + β H [H^T H]^{-1} H^T e_k e_k^T] takes the form [I - γβ H [H^T H]^{-1} H^T e_k e_k^T], and it remains to determine the parameter γ. Since

    [I + β H [H^T H]^{-1} H^T e_k e_k^T] [I - γβ H [H^T H]^{-1} H^T e_k e_k^T] = I,    (3.8)

equating the (k,k) entries in the matrix equation (3.8) gives

    [1 + β e_k^T H [H^T H]^{-1} H^T e_k] [1 - γβ e_k^T H [H^T H]^{-1} H^T e_k] = 1.

After some simple algebraic operations, we obtain

    γ = 1 / [1 + β e_k^T H [H^T H]^{-1} H^T e_k].    (3.9)

Introducing formula (3.9) into (3.8), we obtain

    [I + β H [H^T H]^{-1} H^T e_k e_k^T]^{-1} = I - β H [H^T H]^{-1} H^T e_k e_k^T / (1 + β e_k^T H [H^T H]^{-1} H^T e_k).    (3.10)

The proof is completed by introducing the inverse (3.10) into formula (3.7).
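A small numerical check of formula (3.4), and of the componentwise formula (3.11) established next (not part of the paper; the sizes and the weighted row index k are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
rows, n, k, alpha = 9, 4, 2, 1.5        # weight row k by 1 + alpha
H = rng.standard_normal((rows, n))
h = rng.standard_normal(rows)
beta = alpha * (alpha + 2)

x = np.linalg.lstsq(H, h, rcond=None)[0]
r = h - H @ x                           # unweighted residual

wdiag = np.ones(rows)
wdiag[k] = 1 + alpha                    # W(alpha) = I + alpha e_k e_k^T
xa = np.linalg.lstsq(wdiag[:, None] * H, wdiag * h, rcond=None)[0]
ra = h - H @ xa                         # r(alpha)

T = H @ np.linalg.inv(H.T @ H) @ H.T    # orthogonal projector onto Range(H)
ra_formula = r - beta * T[:, k] * r[k] / (1 + beta * T[k, k])
print(np.linalg.norm(ra - ra_formula))      # formula (3.4), ~1e-15
print(ra[k] - r[k] / (1 + beta * T[k, k]))  # formula (3.11), ~1e-16
```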

Lemma 3.2 If x(α) minimizes ‖W(α)[Hx - h]‖_2 and r_k(α) is the k-th component of the residual r(α), then

    r_k(α) = r_k / (1 + β e_k^T H [H^T H]^{-1} H^T e_k).    (3.11)

Proof: The result follows directly from Lemma 3.1 by equating the k-th components in equality (3.4).

Expression (3.11) shows that r_k(α) is a monotone decreasing function of α. It is known that the equality constrained least squares problem

    minimize_{x ∈ R^m} ‖W^{1/2} [H_1^T x - h_1]‖_2   subject to   H_2^T x = h_2    (3.12)

can be viewed as the limiting case of an unconstrained problem in which an infinite diagonal weight is associated with H_2 [12]. We partition H as H = (H_1 H_2), where H_1 ∈ R^{m×n_1}, H_2 ∈ R^{m×n_2} and n = n_1 + n_2; similarly, h = (h_1^T, h_2^T)^T. The weight matrix W ∈ R^{n_1×n_1} is a nonsingular diagonal matrix. We assume that

    rank(H) = m   and   rank(H_2) = n_2.    (3.13)

The following lemma reviews these results.

Lemma 3.3 Let H ∈ R^{m×n} and let Problem (3.12) satisfy conditions (3.13), with solution x(W, H_1, H_2, h_1, h_2). Then, with x as the solution of the linear system

    [ W^{-1}  O    H_1^T ] [ r_1 ]   [ h_1 ]
    [ O       O    H_2^T ] [ r_2 ] = [ h_2 ],    (3.14)
    [ H_1     H_2  O     ] [ x   ]   [ 0   ]

it holds that x = x(W, H_1, H_2, h_1, h_2). Moreover, if

    W(ε) = [ W   O       ]
           [ O   (1/ε) I ]

for ε > 0, then

    lim_{ε→0+} ( H W(ε) H^T )^{-1} H W(ε) h = x(W, H_1, H_2, h_1, h_2).

Proof: Problem (3.12) is a convex quadratic program whose objective function is bounded from below, so the first order optimality conditions are necessary and sufficient, i.e.,

    H_1 W H_1^T x - H_2 r_2 = H_1 W h_1,    (3.15)
    H_2^T x = h_2,    (3.16)

with r_1 = W (h_1 - H_1^T x). Formulas (3.15) and (3.16) are equivalent to (3.14). To show that the system (3.14) is nonsingular, assume that there is a solution of

    W^{-1} v_1 + H_1^T v_3 = 0,    (3.17)
    H_2^T v_3 = 0,    (3.18)
    H_1 v_1 + H_2 v_2 = 0.    (3.19)

After some simple calculations and rearrangement, we obtain

    v_1^T W^{-1} v_1 + v_1^T H_1^T v_3 = v_1^T W^{-1} v_1 - v_2^T H_2^T v_3 = v_1^T W^{-1} v_1 = 0.

Since W is positive definite, this means v_1 = 0. But if v_1 = 0, the full row rank of H in conjunction with (3.17) and (3.18) implies v_3 = 0.

Similarly, (3.19) and the full column rank of H_2 imply that v_2 = 0. Hence (r_1, r_2, x) is the unique solution of (3.14), and x = x(W, H_1, H_2, h_1, h_2).

If x(ε) = (H W(ε) H^T)^{-1} H W(ε) h with W(ε) defined as before, Problem (3.12) implies that r_1(ε) and r_2(ε) are uniquely defined by

    [ W^{-1}  O    H_1^T ] [ r_1(ε) ]   [ h_1 ]
    [ O       ε I  H_2^T ] [ r_2(ε) ] = [ h_2 ].    (3.20)
    [ H_1     H_2  O     ] [ x(ε)   ]   [ 0   ]

But the systems (3.14) and (3.20) are nonsingular; hence they are identical when ε = 0, and since the solution of (3.20) is a continuous function of ε, we obtain lim_{ε→0+} x(ε) = x.
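The limit in Lemma 3.3 is the classical method of weighting (cf. Van Loan [15]). A minimal sketch, not from the paper, recasts the limit in stacked-row form in the notation of Problem (1.2); the sizes and weight levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, l = 5, 2, 8
B = rng.standard_normal((k, n)); c = rng.standard_normal(k)   # constraints
C = rng.standard_normal((l, n)); d = rng.standard_normal(l)   # data

# reference: equality-constrained LS solution via its KKT system
K = np.block([[C.T @ C, B.T], [B, np.zeros((k, k))]])
x_ref = np.linalg.solve(K, np.concatenate([C.T @ d, c]))[:n]

# method of weighting: give the constraint rows weight 1/sqrt(eps)
for eps in [1e-2, 1e-5, 1e-8]:
    w = 1.0 / np.sqrt(eps)
    x_eps = np.linalg.lstsq(np.vstack([w * B, C]),
                            np.concatenate([w * c, d]), rcond=None)[0]
    print(eps, np.linalg.norm(x_eps - x_ref))   # error shrinks as eps -> 0
```

Solving the weighted problem by an orthogonalization method (as lstsq does) keeps the computation stable even for large weights, which is why the weighting approach is practical.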

4 Concluding Remarks

We have presented some characteristics of the solution of constrained and weighted least squares problems. The relative error of the solution is bounded by a combination of the normwise perturbations in the matrices A and Q. The effects of row weighting satisfy a certain monotonicity condition in the components of the residual.

References

[1] M. R. Abdel-Aziz and M. M. El-Alem, Solving large-scale constrained least squares problems, Appl. Math. Comput., Vol. 137 (2003), 571-587.

[2] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, USA, 1996.

[3] E. Y. Bobrovnikova and S. A. Vavasis, Accurate solution of weighted least squares by iterative methods, SIAM J. Matrix Anal. Appl., Vol. 22, No. 4 (2001), 1153-1174.

[4] D. James, Implicit nullspace iterative methods for constrained least squares problems, SIAM J. Matrix Anal. Appl., Vol. 13, No. 3 (1992), 944-961.

[5] L. Eldén, A weighted pseudoinverse, generalized singular values, and constrained least squares problems, BIT, Vol. 22 (1982), 487-502.

[6] A. Forsgren, On linear least squares with diagonally dominant weight matrices, Dept. of Mathematics, Royal Institute of Technology, S-10044 Stockholm, Sweden, 1995.

[7] W. Gander, Least squares with a quadratic constraint, Numer. Math., Vol. 36 (1981), 291-307.

[8] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, 1996.

[9] G. H. Golub and U. von Matt, Quadratically constrained least squares and quadratic problems, Numer. Math., Vol. 59 (1991), 561-580.

[10] P. D. Hough and S. A. Vavasis, A complete orthogonal decomposition for weighted least squares, SIAM J. Matrix Anal. Appl., Vol. 18, No. 2 (1997), 369-392.

[11] D. James, Implicit nullspace iterative methods for constrained least squares problems, SIAM J. Matrix Anal. Appl., Vol. 13, No. 3 (1992), 962-978.

[12] C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, SIAM, Philadelphia, PA, USA, 1995.

[13] K. V. Ngo, An approach of eigenvalue perturbation theory, Appl. Numer. Anal. Comput. Math., Vol. 2, No. 1 (2005), 108-125.

[14] G. W. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, New York, USA, 1990.

[15] C. F. Van Loan, On the method of weighting for equality-constrained least-squares problems, SIAM J. Numer. Anal., Vol. 22, No. 5 (1985), 851-864.

[16] M. Wei, Perturbation theory for the rank-deficient equality constrained least squares problem, SIAM J. Numer. Anal., Vol. 29, No. 5 (1992), 1462-1481.

Received: October 10, 2005