# On the Solution of Constrained and Weighted Linear Least Squares Problems


International Mathematical Forum, 1, 2006, no. 22

**Mohammedi R. Abdel-Aziz**¹
Department of Mathematics and Computer Science, Faculty of Science, Kuwait University, P. O. Box 5969, Safat 13060, Kuwait

**Abstract.** This paper investigates some theoretical characteristics of the solution of the constrained and weighted linear least squares (CWLS) problem. The investigation is based on perturbing the CWLS problem and deriving bounds for the relative error of its solution. Explicit expressions for the inverse and the Moore-Penrose inverse are used to estimate these bounds. Moreover, the effects of weights are discussed.

**Mathematics Subject Classification:** Primary 65F15; Secondary 65G05

**Keywords:** least squares problems, perturbation, weights

## 1 Introduction

Least squares problems are important in many areas of scientific computing. The constrained least squares problem with a full-rank weight matrix is one such class. We are concerned with the connection between condition numbers and the rounding error in the solution of the CWLS problem. Since conditioning is an intrinsic feature of least squares problems, it is necessary to study the characteristics of the solution. We investigate the sensitivity of the solution $x \in \mathbb{R}^n$ of the overdetermined system

$$Ax \approx b, \tag{1.1}$$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. System (1.1) corresponds to the CWLS problem

$$\text{minimize}\ (d - Cx)^T W_1^{-1}(d - Cx) \quad \text{subject to}\ Bx = c, \tag{1.2}$$

where $W_1$ is a symmetric positive definite matrix, $A = (B^T\ C^T)^T$ and $b = (c^T\ d^T)^T$, with $B \in \mathbb{R}^{k \times n}$, $C \in \mathbb{R}^{l \times n}$, $c \in \mathbb{R}^k$ and $d \in \mathbb{R}^l$, where $k + l = m$ and $m \ge n \ge k$. Problem (1.2) arises in many practical applications, including linear programming, electrical networks, boundary value problems, the analysis of large-scale structures, and inequality-constrained least squares problems [10, 12].

¹On leave from the Department of Mathematics, Faculty of Science, Alexandria University, Egypt.
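Since $W_1$ is symmetric positive definite, the weighted objective in (1.2) reduces to an ordinary equality-constrained LS objective after whitening by a Cholesky factor of $W_1$: with $W_1 = LL^T$, $(d - Cx)^T W_1^{-1}(d - Cx) = \|L^{-1}(d - Cx)\|_2^2$. The following numpy sketch (our own illustration on random data, not code from the paper) solves (1.2) this way by the null-space method:

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, n = 2, 6, 4                      # m = k + l = 8, with m >= n >= k
B = rng.standard_normal((k, n))        # constraint block of A
C = rng.standard_normal((l, n))
c = rng.standard_normal(k)
d = rng.standard_normal(l)
M = rng.standard_normal((l, l))
W1 = M @ M.T + l * np.eye(l)           # symmetric positive definite weight

# Whitening: with W1 = L L^T, the objective (d - Cx)^T W1^{-1} (d - Cx)
# equals ||L^{-1}(d - Cx)||_2^2, an ordinary LS objective.
L = np.linalg.cholesky(W1)
Cw = np.linalg.solve(L, C)
dw = np.linalg.solve(L, d)

# Null-space method for the constraint Bx = c: x = x0 + N t,
# where B x0 = c and the columns of N span null(B).
x0 = np.linalg.lstsq(B, c, rcond=None)[0]
N = np.linalg.svd(B)[2][k:].T          # orthonormal basis of null(B)
t = np.linalg.lstsq(Cw @ N, dw - Cw @ x0, rcond=None)[0]
x = x0 + N @ t

print(np.allclose(B @ x, c))           # constraint satisfied
```

Any equality-constrained LS solver applies after this whitening step; the null-space basis here is taken from the SVD of $B$ for simplicity.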

We will assume that

$$\operatorname{rank}(B) = k \quad \text{and} \quad \operatorname{rank}(A) = n. \tag{1.3}$$

The condition that $B$ has full row rank ensures that the system $Bx = c$ is consistent, and hence that Problem (1.2) has a solution; the second condition in (1.3) guarantees that this solution is unique [2]. An equivalent formulation of Problem (1.2) (see [6]) is

$$\begin{pmatrix} O & O & B \\ O & W_1 & C \\ B^T & C^T & O \end{pmatrix} \begin{pmatrix} \lambda \\ \mu \\ x \end{pmatrix} = \begin{pmatrix} c \\ d \\ 0 \end{pmatrix}, \tag{1.4}$$

where $\lambda \in \mathbb{R}^k$ is a vector of Lagrange multipliers and $W_1\mu$ is the residual. The augmented system (1.4) is formed from the Karush-Kuhn-Tucker conditions of the CWLS Problem (1.2) [4].

Quadratic problems have been studied intensively [13, 9]. Least squares problems with quadratic equality constraints and with weighted equality constraints have been treated by Gander [7], Van Loan [15], James [11], Hough et al. [10], Wei [16] and Bobrovnikova et al. [3]. Techniques for solving unconstrained and constrained problems can be found in [2, 8]. Some algorithms for the unweighted class of Problem (1.2) are based on introducing a Lagrange multiplier and generating a sequence of lower and upper bounds with little computation [9]. A superlinearly convergent algorithm has been developed for this constrained LS class by recasting the problem as a parameterized eigenvalue problem and computing the optimal solution from the parameterized eigenvector [1].

In this contribution, we investigate the sensitivity of the solution of Problem (1.2) and the effects of the weights on it. The investigation is based on constructing bounds for the relative error via the equivalent formulation (1.4) and on studying $\epsilon$-dependent weights. Throughout the paper, $\mathbb{R}^{m \times n}$ denotes the linear space of all $m \times n$ real matrices, and $A^T$ denotes the transpose of $A$. For matrices $A$ and $S$ of appropriate sizes, the weighted Moore-Penrose inverse of $A$ is defined as the unique solution $Z$ of

$$AZA = A, \qquad ZAZ = Z, \qquad (S^TSZA)^T = S^TSZA, \qquad AZ \text{ symmetric} \quad [5].$$
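The augmented formulation (1.4) can be checked numerically: solving the KKT system directly and solving Problem (1.2) by null-space elimination of the constraint should produce the same $x$. A small numpy sketch (our own, on random data):

```python
import numpy as np

rng = np.random.default_rng(1)
k, l, n = 2, 6, 4                      # m = k + l, with m >= n >= k
B = rng.standard_normal((k, n))
C = rng.standard_normal((l, n))
c = rng.standard_normal(k)
d = rng.standard_normal(l)
M = rng.standard_normal((l, l))
W1 = M @ M.T + l * np.eye(l)           # symmetric positive definite

# Augmented (KKT) system (1.4) with unknowns (lambda, mu, x).
K = np.block([
    [np.zeros((k, k)), np.zeros((k, l)), B],
    [np.zeros((l, k)), W1,               C],
    [B.T,              C.T,              np.zeros((n, n))],
])
sol = np.linalg.solve(K, np.concatenate([c, d, np.zeros(n)]))
x = sol[k + l:]

# Independent check: eliminate Bx = c with a null-space basis and solve
# the reduced weighted normal equations.
x0 = np.linalg.lstsq(B, c, rcond=None)[0]
N = np.linalg.svd(B)[2][k:].T          # orthonormal basis of null(B)
W1inv = np.linalg.inv(W1)
t = np.linalg.solve(N.T @ C.T @ W1inv @ C @ N,
                    N.T @ C.T @ W1inv @ (d - C @ x0))
x_ns = x0 + N @ t

print(np.allclose(B @ x, c), np.allclose(x, x_ns))
```

The agreement of the two solutions illustrates why (1.4) is a convenient single linear system to analyze: all the data of (1.2) enter one coefficient matrix.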
The weighted Moore-Penrose inverse of $A$ is given explicitly by $Z = A_S^+ = (I - (SP)^+S)A^+$, where $P = I - A^+A$; when $S = I$, $Z$ reduces to the standard Moore-Penrose inverse of $A$.

## 2 Sensitivity of the Solution

Throughout this section, we reformulate system (1.4) as

$$\begin{pmatrix} Q & A \\ A^T & O \end{pmatrix}\begin{pmatrix} z \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} O & O \\ O & W_1 \end{pmatrix}, \tag{2.1}$$

where $z = (\lambda^T\ \mu^T)^T$. System (2.1) is a generalized formulation covering the whole range of linear least squares problems. For an ordinary LS problem we set $Q = I_m$,

and for unweighted constrained LS problems we set $W_1 = I_{m-k}$. The weighted unconstrained LS problem, in which $Q$ is a symmetric positive definite matrix, can be formulated as

$$\text{minimize}\ (b - Ax)^TQ^{-1}(b - Ax),$$

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $x \in \mathbb{R}^n$ and $m \ge n$. We assume that the matrices $A$ and $(Q\ A)^T$ have linearly independent columns [8]. The following lemma establishes the inverse of the coefficient matrix of system (2.1).

**Lemma 2.1** Under conditions (1.3), and setting $W = Q^{1/2}$ and $G = (A_W^+)^T$, the inverse of the matrix in question is

$$\begin{pmatrix} (WP)^+(WP)^{+T} & G \\ G^T & -(WG)^T(WG) \end{pmatrix}.$$

**Proof:** Eldén [5] shows that the inverse of the coefficient matrix of (2.1) is

$$\begin{pmatrix} P(WP)^+(WP)^{+T}P^T & G \\ G^T & -(WG)^T(WG) \end{pmatrix}.$$

The proof is completed by taking into account that $\operatorname{Range}((WP)^+) = \operatorname{Range}((WP)^T) = \operatorname{Range}(PW) \subseteq \operatorname{Range}(P)$, which provides $P(WP)^+ = (WP)^+$. ∎

We wish to determine the sensitivity of the solution of Problem (2.1) to perturbations $\delta A$, $\delta Q$ and $\delta b$ of $A$, $Q$ and $b$, respectively. We assume that conditions (1.3) hold for system (1.4) and for the perturbed data $\hat A = A + \delta A$, $\hat Q = Q + \delta Q$ and $\hat b = b + \delta b$, which is certainly true if $\delta A$, $\delta Q$ and $\delta b$ are sufficiently small. The perturbations are measured normwise by the smallest $\epsilon$ for which

$$\|\delta A\|_F \le \epsilon\|A\|_F, \qquad \|\delta Q\|_F \le \epsilon\|Q\|_F, \qquad \|\delta b\|_2 \le \epsilon\|b\|_2. \tag{2.2}$$

The Frobenius norm is used for matrices since it leads to more cheaply computable bounds than the 2-norm. We define the condition numbers

$$K_Q(A) = \|A\|_F\|G\|_2, \qquad K_A(Q) = \|W\|_F\|(WP)^+\|_2, \qquad K(A) = \|A\|_F\|A^+\|_2. \tag{2.3}$$

The following lemma gives first-order perturbation bounds for $\delta x$ and $\delta z$.

**Lemma 2.2** Let the assumptions of Lemma 2.1 hold. Then

$$\frac{\|\delta x\|_2}{\|x\|_2} \le \epsilon K_Q(A)\left(\frac{\|b\|_2}{\|A\|_F\|x\|_2} + \frac{\|Q\|_F\|z\|_2}{\|A\|_F\|x\|_2} + 1\right) + \epsilon K_Q^2(A)\frac{\|Q\|_2\|z\|_2}{\|A\|_F\|x\|_2} + O(\epsilon^2), \tag{2.4}$$

$$\frac{\|\delta z\|_2}{\|z\|_2} \le \epsilon K_A^2(Q)\left(\frac{\|b\|_2}{\|Q\|_F\|z\|_2} + \frac{\|A\|_F\|x\|_2}{\|Q\|_F\|z\|_2} + 1\right) + \epsilon K_Q(A) + O(\epsilon^2). \tag{2.5}$$
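The structure asserted by Lemma 2.1 is easiest to see in the special case where $Q$ is nonsingular (symmetric positive definite): there the blocks of the inverse reduce to the classical saddle-point formulas $G = Q^{-1}A(A^TQ^{-1}A)^{-1}$ and $(WG)^T(WG) = G^TQG = (A^TQ^{-1}A)^{-1}$. A numerical check of that special case (our own sketch, not from the paper, which also verifies that (2.1) with $Q = I_m$ reproduces the ordinary LS solution):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
A = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
Q = M @ M.T + m * np.eye(m)            # SPD, so all formulas below are defined

K = np.block([[Q, A], [A.T, np.zeros((n, n))]])
Kinv = np.linalg.inv(K)

Qinv = np.linalg.inv(Q)
S = np.linalg.inv(A.T @ Qinv @ A)      # plays the role of G^T Q G
G = Qinv @ A @ S                       # plays the role of G in Lemma 2.1

print(np.allclose(Kinv[:m, m:], G))    # (1,2) block is G
print(np.allclose(Kinv[m:, m:], -S))   # (2,2) block is -(WG)^T (WG)

# Special case Q = I_m: the x-block of system (2.1) is the LS solution.
b = rng.standard_normal(m)
K1 = np.block([[np.eye(m), A], [A.T, np.zeros((n, n))]])
sol = np.linalg.solve(K1, np.concatenate([b, np.zeros(n)]))
print(np.allclose(sol[m:], np.linalg.lstsq(A, b, rcond=None)[0]))
```

The paper's $Q$ in (2.1) is singular, so this check only illustrates the block structure; the weighted-pseudoinverse expressions of the lemma are what extend it to the singular case.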

**Proof:** Consider the perturbed version of system (2.1),

$$\begin{pmatrix} \hat Q & \hat A \\ \hat A^T & O \end{pmatrix}\begin{pmatrix} \hat z \\ \hat x \end{pmatrix} = \begin{pmatrix} \hat b \\ 0 \end{pmatrix}. \tag{2.6}$$

Rearranging terms in (2.6) and taking into account that $\hat z = z + \delta z$ and $\hat x = x + \delta x$, we obtain

$$\begin{pmatrix} Q & A \\ A^T & O \end{pmatrix}\begin{pmatrix} \hat z \\ \hat x \end{pmatrix} = \begin{pmatrix} \hat b - \delta Q\hat z - \delta A\hat x \\ -(\delta A)^T\hat z \end{pmatrix}. \tag{2.7}$$

Subtracting (2.1) from (2.7), we obtain

$$\begin{pmatrix} Q & A \\ A^T & O \end{pmatrix}\begin{pmatrix} \delta z \\ \delta x \end{pmatrix} = \begin{pmatrix} \delta b - \delta Q\hat z - \delta A\hat x \\ -(\delta A)^T\hat z \end{pmatrix}. \tag{2.8}$$

Applying Lemma 2.1 and using the definition of $Q$, we obtain

$$\delta x = G^T(\delta b - \delta Q\hat z - \delta A\hat x) + G^TQG(\delta A)^T\hat z,$$
$$\delta z = (WP)^+(WP)^{+T}(\delta b - \delta Q\hat z - \delta A\hat x) - G(\delta A)^T\hat z.$$

Using the definitions of $\hat x$ and $\hat z$, we obtain

$$\delta x = G^T(\delta b - \delta Qz - \delta Ax) + G^TQG(\delta A)^Tz - G^T(\delta Q\delta z + \delta A\delta x) + G^TQG(\delta A)^T\delta z,$$
$$\delta z = (WP)^+(WP)^{+T}(\delta b - \delta Qz - \delta Ax) - G(\delta A)^Tz - (WP)^+(WP)^{+T}(\delta Q\delta z + \delta A\delta x) - G(\delta A)^T\delta z.$$

Since $\delta x$ and $\delta z$ are of first order in $\epsilon$, we obtain the first-order expressions

$$\delta x = G^T(\delta b - \delta Qz - \delta Ax) + G^TQG(\delta A)^Tz + O(\epsilon^2), \tag{2.9}$$
$$\delta z = (WP)^+(WP)^{+T}(\delta b - \delta Qz - \delta Ax) - G(\delta A)^Tz + O(\epsilon^2). \tag{2.10}$$

Taking 2-norms of both sides, using (2.2), and dividing (2.9) and (2.10) by $\|x\|_2$ and $\|z\|_2$, respectively, we obtain

$$\frac{\|\delta x\|_2}{\|x\|_2} \le \epsilon\|G\|_2\left(\frac{\|b\|_2}{\|x\|_2} + \frac{\|Q\|_F\|z\|_2}{\|x\|_2} + \|A\|_F\right) + \epsilon\|G\|_2^2\|Q\|_2\frac{\|A\|_F\|z\|_2}{\|x\|_2} + O(\epsilon^2),$$

$$\frac{\|\delta z\|_2}{\|z\|_2} \le \epsilon\|(WP)^+\|_2^2\left(\frac{\|b\|_2}{\|z\|_2} + \|Q\|_F + \frac{\|A\|_F\|x\|_2}{\|z\|_2}\right) + \epsilon\|G\|_2\|A\|_F + O(\epsilon^2).$$

The proof is completed by using the definitions of the condition numbers (2.3). ∎

The difference between the bounds of Lemma 2.2 and those obtained earlier in [16] stems from the explicit expression of the inverse in Lemma 2.1. The following lemma shows that the condition number $K_Q(A)$ is bounded in terms of $K_A(Q)$.

**Lemma 2.3** Let the assumptions of Lemma 2.1 hold. Then

$$K_Q(A) \le \|A\|_F\|A^+\|_2\left(1 + K_A^2(Q)\right).$$

**Proof:** From the definition of $A_S^+$ in the first section, we can obtain

$$G = (A_W^+)^T = A^{+T}\left(I - Q(WP)^+(WP)^{+T}\right).$$

Taking norms of both sides,

$$\|G\|_2 \le \|A^+\|_2\left(1 + \left\|Q(WP)^+(WP)^{+T}\right\|_2\right) \le \|A^+\|_2\left(1 + \left(\|W\|_F\|(WP)^+\|_2\right)^2\right) = \|A^+\|_2\left(1 + K_A^2(Q)\right).$$

The proof is completed by using formula (2.3). ∎

The following lemma checks the error bound of Lemma 2.2 by deriving a perturbation bound for the standard LS problem.

**Lemma 2.4** Let $B = O$, $W_1 = I$, $A = C$, $c = 0$ and $b = d$ in (1.4), and assume that $\|b\|_2 \le \|\mu\|_2 + \|A\|_F\|x\|_2$. Then

$$\frac{\|\delta x\|_2}{\|x\|_2} \le \epsilon K(A)\left(2 + (K(A) + 2)\frac{\|\mu\|_2}{\|A\|_F\|x\|_2}\right) + O(\epsilon^2). \tag{2.11}$$

**Proof:** Introducing the assumptions of the lemma into the first inequality of Lemma 2.2, we obtain

$$\frac{\|\delta x\|_2}{\|x\|_2} \le \epsilon K(A)\left(\frac{\|\mu\|_2 + \|A\|_F\|x\|_2}{\|A\|_F\|x\|_2} + \frac{\|\mu\|_2}{\|A\|_F\|x\|_2} + 1\right) + \epsilon K^2(A)\frac{\|\mu\|_2}{\|A\|_F\|x\|_2} + O(\epsilon^2).$$

The proof is completed by rearranging the terms. ∎

The bound (2.4) does not yield a condition number for Problem (1.2), since it is not attainable. We must therefore combine the $\delta A$ and $\delta Q$ terms in formulas (2.9) and (2.10). This can be done by using the vec operator, which stacks the columns of a matrix into one long column vector, together with the tensor product $X \otimes Y = (x_{ij}Y)$ (see [14]). The following lemma establishes a sharp bound for the solution of Problem (1.2) by applying the vec operator and using the property $\operatorname{vec}(ZXY) = (Y^T \otimes Z)\operatorname{vec}(X)$.

**Lemma 2.5** Let the assumptions of Lemma 2.1 hold, let $D = (WP)^+(WP)^{+T}$, and set

$$\Gamma_1 = \frac{1}{\|x\|_2}\left(\|G^T\|_2\|b\|_2 + \|z^T \otimes G^T\|_2\|Q\|_F + \left\|(z^T \otimes (G^TQG))\Pi - x^T \otimes G^T\right\|_2\|A\|_F\right),$$

$$\Gamma_2 = \frac{1}{\|z\|_2}\left(\|D\|_2\|b\|_2 + \|z^T \otimes D\|_2\|Q\|_F + \left\|(z^T \otimes G)\Pi + x^T \otimes D\right\|_2\|A\|_F\right).$$

Then

$$\frac{\|\delta x\|_2}{\|x\|_2} \le \Gamma_1\epsilon + O(\epsilon^2) \quad \text{and} \quad \frac{\|\delta z\|_2}{\|z\|_2} \le \Gamma_2\epsilon + O(\epsilon^2).$$
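Before turning to the proof of Lemma 2.5, the standard-LS bound (2.11) of Lemma 2.4 is easy to exercise numerically: perturb $A$ and $b$ by a relative amount $\epsilon$ and compare the observed relative change in $x$ with the first-order bound. A numpy sketch (our own experiment on random data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(m)

x = np.linalg.lstsq(A, b, rcond=None)[0]
mu = b - A @ x                         # residual of the unperturbed problem

# Perturbations scaled so that ||dA||_F = eps ||A||_F and ||db||_2 = eps ||b||_2,
# as in (2.2).
eps = 1e-8
dA = rng.standard_normal((m, n))
dA *= eps * np.linalg.norm(A, 'fro') / np.linalg.norm(dA, 'fro')
db = rng.standard_normal(m)
db *= eps * np.linalg.norm(b) / np.linalg.norm(db)

x_hat = np.linalg.lstsq(A + dA, b + db, rcond=None)[0]
rel = np.linalg.norm(x_hat - x) / np.linalg.norm(x)

# First-order bound (2.11), with K(A) = ||A||_F ||A^+||_2.
KA = np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.pinv(A), 2)
bound = eps * KA * (2 + (KA + 2) * np.linalg.norm(mu)
                    / (np.linalg.norm(A, 'fro') * np.linalg.norm(x)))
print(rel <= bound)
```

For a generic random perturbation the observed relative change typically sits well below the bound, which is tight only for worst-case perturbation directions.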

**Proof:** Applying the vec operator and the property above to formulas (2.9) and (2.10), and setting $D = (WP)^+(WP)^{+T}$, we obtain

$$\delta x = G^T\delta b - (z^T \otimes G^T)\operatorname{vec}(\delta Q) - (x^T \otimes G^T)\operatorname{vec}(\delta A) + (z^T \otimes (G^TQG))\operatorname{vec}(\delta A^T) + O(\epsilon^2), \tag{2.12}$$

$$\delta z = D\delta b - (z^T \otimes D)\operatorname{vec}(\delta Q) - (x^T \otimes D)\operatorname{vec}(\delta A) - (z^T \otimes G)\operatorname{vec}(\delta A^T) + O(\epsilon^2). \tag{2.13}$$

Introducing the formula $\operatorname{vec}(\delta A^T) = \Pi\operatorname{vec}(\delta A)$ into (2.12) and (2.13), where $\Pi$ is the vec-permutation matrix [14], we obtain

$$\delta x = G^T\delta b - (z^T \otimes G^T)\operatorname{vec}(\delta Q) + \left[(z^T \otimes (G^TQG))\Pi - (x^T \otimes G^T)\right]\operatorname{vec}(\delta A) + O(\epsilon^2), \tag{2.14}$$

$$\delta z = D\delta b - (z^T \otimes D)\operatorname{vec}(\delta Q) - \left[(x^T \otimes D) + (z^T \otimes G)\Pi\right]\operatorname{vec}(\delta A) + O(\epsilon^2). \tag{2.15}$$

The proof is completed by taking 2-norms of (2.14) and (2.15), noting that $\|\operatorname{vec}(\delta A)\|_2 = \|\delta A\|_F$, and using (2.2). ∎

## 3 Effects of Weights

The goal of this section is to study the effect of the weight matrix $W = \operatorname{diag}(w_1, w_2, \dots, w_{m+n})$, with $w_j > 0$ for each $1 \le j \le m+n$, on the solution of the weighted least squares (WLS) problem

$$\text{minimize}\ \|W(Hx - h)\|_2, \qquad \operatorname{rank}(H) = n. \tag{3.1}$$

Consider the solution $x$ of the unweighted LS problem

$$\text{minimize}\ \|Hx - h\|_2. \tag{3.2}$$

The solutions $x_W$ and $x$ of Problems (3.1) and (3.2) satisfy

$$x_W - x = [H^TW^2H]^{-1}H^T[W^2 - I][h - Hx].$$

This shows that row weighting affects the LS solution, and that $x_W = x$ when $h \in \operatorname{Range}(H)$. These effects can be clarified through the entries of the residual. For simplicity, we define

$$W(\alpha) = \operatorname{diag}(1, 1, \dots, 1+\alpha, 1, \dots, 1) \in \mathbb{R}^{(m+n) \times (m+n)},$$

where $\alpha > -1$, $e_k^TWe_k = 1 + \alpha$, and $e_k \in \mathbb{R}^{m+n}$ is the coordinate vector with 1 in its $k$-th position. Thus we can write $W(\alpha) = I + \alpha e_ke_k^T$; $H$ is a full-column-rank matrix.

**Lemma 3.1** If $x(\alpha)$ is the solution of the WLS problem

$$\text{minimize}\ \|W(\alpha)(Hx - h)\|_2 \tag{3.3}$$

and its residual is $r(\alpha) = h - Hx(\alpha)$, then

$$r(\alpha) = \left(I - \frac{\beta}{1 + \beta e_k^TH[H^TH]^{-1}H^Te_k}\,H[H^TH]^{-1}H^Te_ke_k^T\right)r, \tag{3.4}$$

where $\beta = \alpha(\alpha + 2)$.

**Proof:** The normal equations for problem (3.3) are

$$[H^TW^2(\alpha)H]\,x(\alpha) = H^TW^2(\alpha)h. \tag{3.5}$$

But $W^2(\alpha) = I + \beta e_ke_k^T$, so formula (3.5) becomes

$$[H^TH + \beta H^Te_ke_k^TH]\,x(\alpha) = H^Th + \beta H^Te_ke_k^Th. \tag{3.6}$$

Multiplying both sides of equality (3.6) by $H[H^TH]^{-1}$ and subtracting each side from $h$, we obtain

$$h - Hx(\alpha) - \beta H[H^TH]^{-1}H^Te_ke_k^THx(\alpha) = h - H[H^TH]^{-1}H^Th - \beta H[H^TH]^{-1}H^Te_ke_k^Th.$$

Introducing $r(\alpha) = h - Hx(\alpha)$, $r = h - Hx$ and $x = [H^TH]^{-1}H^Th$ into the previous equality, we obtain

$$r(\alpha) = \left[I + \beta H[H^TH]^{-1}H^Te_ke_k^T\right]^{-1}r. \tag{3.7}$$

The inverse of the rank-one update $I + e_ke_k^T$ takes the form $I - \frac{1}{2}e_ke_k^T$. Similarly, the inverse of $I + \beta H[H^TH]^{-1}H^Te_ke_k^T$ takes the form $I - \gamma\beta H[H^TH]^{-1}H^Te_ke_k^T$ for some scalar $\gamma$, which we determine from

$$\left[I + \beta H[H^TH]^{-1}H^Te_ke_k^T\right]\left[I - \gamma\beta H[H^TH]^{-1}H^Te_ke_k^T\right] = I. \tag{3.8}$$

Equating the $(k,k)$ entries in the matrix equation (3.8), we obtain

$$\left[1 + \beta e_k^TH[H^TH]^{-1}H^Te_k\right]\left[1 - \gamma\beta e_k^TH[H^TH]^{-1}H^Te_k\right] = 1,$$

and after some simple algebraic operations,

$$\gamma = \frac{1}{1 + \beta e_k^TH[H^TH]^{-1}H^Te_k}. \tag{3.9}$$

Introducing formula (3.9) into (3.8), we obtain

$$\left[I + \beta H[H^TH]^{-1}H^Te_ke_k^T\right]^{-1} = I - \frac{\beta}{1 + \beta e_k^TH[H^TH]^{-1}H^Te_k}\,H[H^TH]^{-1}H^Te_ke_k^T. \tag{3.10}$$

The proof is completed by introducing the inverse (3.10) into formula (3.7). ∎

**Lemma 3.2** If $x(\alpha)$ minimizes $\|W(\alpha)(Hx - h)\|_2$ and $r_k(\alpha)$ is the $k$-th component of the residual $r(\alpha)$, then

$$r_k(\alpha) = \frac{r_k}{1 + \beta e_k^TH[H^TH]^{-1}H^Te_k}. \tag{3.11}$$
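Formula (3.11) is easy to confirm numerically: up-weight one row of an ordinary LS problem and compare the $k$-th residual component with the predicted value. A numpy sketch (our own, using an $m \times m$ diagonal weight for simplicity):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, kk = 9, 4, 2                 # kk: index of the up-weighted row
H = rng.standard_normal((m, n))
h = rng.standard_normal(m)

def residual(wdiag):
    # weighted LS: minimize ||diag(wdiag) (H x - h)||_2
    x = np.linalg.lstsq(wdiag[:, None] * H, wdiag * h, rcond=None)[0]
    return h - H @ x

r = residual(np.ones(m))           # unweighted residual
alpha = 3.0
w = np.ones(m); w[kk] = 1.0 + alpha
r_alpha = residual(w)              # residual after up-weighting row kk

# Prediction (3.11): r_k(alpha) = r_k / (1 + beta * e_k^T H (H^T H)^{-1} H^T e_k)
beta = alpha * (alpha + 2.0)
Hproj = H @ np.linalg.solve(H.T @ H, H.T)   # orthogonal projector onto range(H)
pred = r[kk] / (1.0 + beta * Hproj[kk, kk])
print(np.isclose(r_alpha[kk], pred))
```

Increasing `alpha` increases `beta`, so the magnitude of the $k$-th residual component shrinks monotonically, as the lemma predicts.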

**Proof:** Straightforward from Lemma 3.1 by taking the $k$-th component of equality (3.4). ∎

Expression (3.11) shows that $r_k(\alpha)$ is a monotone decreasing function of $\alpha$. It is known that the equality-constrained least squares problem

$$\text{minimize}_{x \in \mathbb{R}^m}\ \|W^{1/2}(H_1^Tx - h_1)\|_2 \quad \text{subject to}\ H_2^Tx = h_2 \tag{3.12}$$

can be viewed as the limiting case of an unconstrained problem in which an infinite diagonal weight is associated with the constraints [12]. We partition $H$ as $H = (H_1\ H_2)$, where $H_1 \in \mathbb{R}^{m \times n_1}$, $H_2 \in \mathbb{R}^{m \times n_2}$ and $n = n_1 + n_2$; we partition $h$ as $h = (h_1^T, h_2^T)^T$; and the weight matrix $W \in \mathbb{R}^{n_1 \times n_1}$ is a nonsingular diagonal matrix. We assume that

$$\operatorname{rank}(H) = m \quad \text{and} \quad \operatorname{rank}(H_2) = n_2. \tag{3.13}$$

The following lemma reviews these results.

**Lemma 3.3** Let $H \in \mathbb{R}^{m \times n}$ and let Problem (3.12) satisfy conditions (3.13), with solution $x(W, H_1, H_2, h_1, h_2)$. Then, with $x$ as the solution of the linear system

$$\begin{pmatrix} W^{-1} & O & H_1^T \\ O & O & H_2^T \\ H_1 & H_2 & O \end{pmatrix}\begin{pmatrix} r_1 \\ r_2 \\ x \end{pmatrix} = \begin{pmatrix} h_1 \\ h_2 \\ 0 \end{pmatrix}, \tag{3.14}$$

it holds that $x = x(W, H_1, H_2, h_1, h_2)$. Moreover, if

$$W(\epsilon) = \begin{pmatrix} W & O \\ O & \frac{1}{\epsilon}I \end{pmatrix} \quad \text{for } \epsilon > 0,$$

then

$$\lim_{\epsilon \to 0^+}\left(HW(\epsilon)H^T\right)^{-1}HW(\epsilon)h = x(W, H_1, H_2, h_1, h_2).$$

**Proof:** Problem (3.12) is a convex quadratic program whose objective function is bounded from below, so the first-order optimality conditions are necessary and sufficient:

$$H_1WH_1^Tx - H_2r_2 = H_1Wh_1, \tag{3.15}$$

$$H_2^Tx = h_2, \tag{3.16}$$

with $r_1 = W(h_1 - H_1^Tx)$. Formulas (3.15) and (3.16) are equivalent to (3.14). To show that system (3.14) is nonsingular, assume that there is a solution of

$$W^{-1}v_1 + H_1^Tv_3 = 0, \tag{3.17}$$

$$H_2^Tv_3 = 0, \tag{3.18}$$

$$H_1v_1 + H_2v_2 = 0. \tag{3.19}$$

After some simple calculations and rearrangement, we obtain

$$v_1^TW^{-1}v_1 + v_1^TH_1^Tv_3 = v_1^TW^{-1}v_1 - v_2^TH_2^Tv_3 = v_1^TW^{-1}v_1 = 0.$$

Since $W$ is positive definite, this means $v_1 = 0$. But if $v_1 = 0$, the full row rank of $H$ in conjunction with (3.17) and (3.18) implies $v_3 = 0$. Similarly, (3.19) and the full

column rank of $H_2$ imply that $v_2 = 0$. Hence $(r_1, r_2, x)$ is the unique solution of (3.14) and $x = x(W, H_1, H_2, h_1, h_2)$.

If $x(\epsilon) = (HW(\epsilon)H^T)^{-1}HW(\epsilon)h$ with $W(\epsilon)$ defined as before, then $r_1(\epsilon)$ and $r_2(\epsilon)$ are uniquely defined by

$$\begin{pmatrix} W^{-1} & O & H_1^T \\ O & \epsilon I & H_2^T \\ H_1 & H_2 & O \end{pmatrix}\begin{pmatrix} r_1(\epsilon) \\ r_2(\epsilon) \\ x(\epsilon) \end{pmatrix} = \begin{pmatrix} h_1 \\ h_2 \\ 0 \end{pmatrix}. \tag{3.20}$$

Both (3.14) and (3.20) are nonsingular; they coincide when $\epsilon = 0$, and the solution of (3.20) is a continuous function of $\epsilon$. Hence $\lim_{\epsilon \to 0^+} x(\epsilon) = x$. ∎

## 4 Concluding Remarks

We have presented some characteristics of the solution of constrained and weighted least squares problems. The relative error of the solution is bounded by a combination of the normwise perturbations in the matrices $A$ and $Q$. The effects of row weighting satisfy a monotonicity property in the components of the residual.

## References

[1] M. R. Abdel-Aziz and M. M. El-Alem, Solving large-scale constrained least squares problems, J. Appl. Math. Comput.

[2] Å. Björck, Least Squares Methods, SIAM, Philadelphia, PA, USA.

[3] E. Y. Bobrovnikova and S. A. Vavasis, Accurate solution of weighted least squares by iterative methods, SIAM J. Matrix Anal. Appl., Vol. 22.

[4] D. James, Implicit nullspace iterative methods for constrained least squares problems, SIAM J. Matrix Anal. Appl., Vol. 13.

[5] L. Eldén, A weighted pseudoinverse, generalized singular values and constrained least squares problems, BIT.

[6] A. Forsgren, On linear least squares with diagonally dominant weight matrices, Dept. of Mathematics, Royal Institute of Technology, Stockholm, Sweden.

[7] W. Gander, Least squares with a quadratic constraint, Numer. Math.

[8] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, 1996.

[9] G. H. Golub and U. von Matt, Quadratically constrained least squares and quadratic problems, Numer. Math.

[10] P. D. Hough and S. A. Vavasis, A complete orthogonal decomposition for weighted least squares, SIAM J. Matrix Anal. Appl., Vol. 18.

[11] D. James, Implicit nullspace iterative methods for constrained least squares problems, SIAM J. Matrix Anal. Appl., Vol. 13.

[12] C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, SIAM, Philadelphia, PA, USA.

[13] K. V. Ngo, An approach of eigenvalue perturbation theory, Appl. Num. Anal. Comp. Math., Vol. 2 (2005).

[14] G. W. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, New York, USA.

[15] C. F. Van Loan, On the method of weighting for equality-constrained least-squares problems, SIAM J. Numer. Anal., Vol. 22.

[16] M. Wei, Perturbation theory for the rank-deficient equality constrained least squares problem, SIAM J. Numer. Anal., Vol. 29.

Received: October 10, 2005


### ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

### Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

### Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

### Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

### Linear Algebraic Equations

Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

### Linear Algebra, part 3 QR and SVD

Linear Algebra, part 3 QR and SVD Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2012 Going back to least squares (Section 1.4 from Strang, now also see section 5.2). We

### Numerical Linear Algebra Notes

Numerical Linear Algebra Notes Brian Bockelman October 11, 2006 1 Linear Algebra Background Definition 1.0.1. An inner product on F n (R n or C n ) is a function, : F n F n F satisfying u, v, w F n, c

### Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for At a high level, there are two pieces to solving a least squares problem:

1 Trouble points Notes for 2016-09-28 At a high level, there are two pieces to solving a least squares problem: 1. Project b onto the span of A. 2. Solve a linear system so that Ax equals the projected

### Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

### Tikhonov Regularization of Large Symmetric Problems

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

### MATH 5524 MATRIX THEORY Problem Set 4

MATH 5524 MATRIX THEORY Problem Set 4 Posted Tuesday 28 March 217. Due Tuesday 4 April 217. [Corrected 3 April 217.] [Late work is due on Wednesday 5 April.] Complete any four problems, 25 points each.

### Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

### Multiplicative Perturbation Analysis for QR Factorizations

Multiplicative Perturbation Analysis for QR Factorizations Xiao-Wen Chang Ren-Cang Li Technical Report 011-01 http://www.uta.edu/math/preprint/ Multiplicative Perturbation Analysis for QR Factorizations

### σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

### B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

### Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

### ETNA Kent State University

C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

### FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG. Lehrstuhl für Informatik 10 (Systemsimulation)

FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG INSTITUT FÜR INFORMATIK (MATHEMATISCHE MASCHINEN UND DATENVERARBEITUNG) Lehrstuhl für Informatik 0 (Systemsimulation) On a regularization technique for

### Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

### LECTURE 7. Least Squares and Variants. Optimization Models EE 127 / EE 227AT. Outline. Least Squares. Notes. Notes. Notes. Notes.

Optimization Models EE 127 / EE 227AT Laurent El Ghaoui EECS department UC Berkeley Spring 2015 Sp 15 1 / 23 LECTURE 7 Least Squares and Variants If others would but reflect on mathematical truths as deeply

### Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

### The matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0.

) Find all solutions of the linear system. Express the answer in vector form. x + 2x + x + x 5 = 2 2x 2 + 2x + 2x + x 5 = 8 x + 2x + x + 9x 5 = 2 2 Solution: Reduce the augmented matrix [ 2 2 2 8 ] to

### Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

### Subset selection for matrices

Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,

### MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

### Componentwise perturbation analysis for matrix inversion or the solution of linear systems leads to the Bauer-Skeel condition number ([2], [13])

SIAM Review 4():02 2, 999 ILL-CONDITIONED MATRICES ARE COMPONENTWISE NEAR TO SINGULARITY SIEGFRIED M. RUMP Abstract. For a square matrix normed to, the normwise distance to singularity is well known to

### a Λ q 1. Introduction

International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

### Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

### Solution of Linear Equations

Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

### AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

### Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

### A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

### 5 Solving Systems of Linear Equations

106 Systems of LE 5.1 Systems of Linear Equations 5 Solving Systems of Linear Equations 5.1 Systems of Linear Equations System of linear equations: a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1 a 21 x 1 +

### NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

### Least Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 26, 2010 Linear system Linear system Ax = b, A C m,n, b C m, x C n. under-determined

### Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

### On condition numbers for the canonical generalized polar decompostion of real matrices

Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 57 2013 On condition numbers for the canonical generalized polar decompostion of real matrices Ze-Jia Xie xiezejia2012@gmail.com

### ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this