Aplikace matematiky. Gene Howard Golub: Numerical methods for solving linear least squares problems


Aplikace matematiky

Gene Howard Golub
Numerical methods for solving linear least squares problems

Aplikace matematiky, Vol. 10 (1965), No. 3, 213–216

Persistent URL: http://dml.cz/dmlcz/102951

Terms of use:

© Institute of Mathematics AS CR, 1965

Institute of Mathematics of the Academy of Sciences of the Czech Republic provides access to digitized documents strictly for personal use. Each copy of any part of this document must contain these Terms of use.

This paper has been digitized, optimized for electronic delivery and stamped with digital signature within the project DML-CZ: The Czech Digital Mathematics Library http://dml.cz

NUMERICAL METHODS FOR SOLVING LINEAR LEAST SQUARES PROBLEMS¹)

GENE H. GOLUB

(to topic d)

One of the problems which arises most frequently in a Computer Laboratory is that of finding linear least squares solutions. These problems arise in a variety of contexts, e.g., statistical applications, numerical solution of integral equations of the first kind, etc. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data, and they are ill-conditioned by their nature.

Let $A$ be a given $m \times n$ real matrix with $m \ge n$ and of rank $r$, and let $b$ be a given vector. We wish to determine a vector $x$ such that

$$(1) \qquad \|b - Ax\| = \min,$$

where $\|\cdot\|$ indicates the euclidean norm. It is well known (cf. [4]) that $x$ satisfies the equation

$$(2) \qquad A^T A x = A^T b.$$

Unfortunately, the matrix $A^T A$ is frequently ill-conditioned [4] and influenced greatly by roundoff errors. The following example of LÄUCHLI [2] illustrates this well. Suppose

$$A = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ \varepsilon & 0 & 0 & 0 & 0 \\ 0 & \varepsilon & 0 & 0 & 0 \\ 0 & 0 & \varepsilon & 0 & 0 \\ 0 & 0 & 0 & \varepsilon & 0 \\ 0 & 0 & 0 & 0 & \varepsilon \end{pmatrix};$$

then

$$(3) \qquad A^T A = \begin{pmatrix} 1+\varepsilon^2 & 1 & 1 & 1 & 1 \\ 1 & 1+\varepsilon^2 & 1 & 1 & 1 \\ 1 & 1 & 1+\varepsilon^2 & 1 & 1 \\ 1 & 1 & 1 & 1+\varepsilon^2 & 1 \\ 1 & 1 & 1 & 1 & 1+\varepsilon^2 \end{pmatrix}.$$

Clearly for $\varepsilon \ne 0$, the rank of $A^T A$ is five, since the eigenvalues of $A^T A$ are $5 + \varepsilon^2$, $\varepsilon^2$, $\varepsilon^2$, $\varepsilon^2$, $\varepsilon^2$. Let us assume that the elements of $A^T A$ are computed using double precision arithmetic and then rounded to single precision accuracy. Now let $\eta$ be the largest number on the computer such that $\mathrm{fl}(1.0 + \eta) = 1.0$, where $\mathrm{fl}(\ldots)$ indicates the floating point computation. Then if $\varepsilon < \sqrt{\eta}$, the rank of the computed representation of (3) will be one. Consequently, no matter how accurate the linear equation solver, it is impossible to solve the normal equations (2).

¹) This report was supported in part by Office of Naval Research Contract Nonr 225(37) (NR 044-11) at Stanford University.
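As a quick numerical illustration of this rank collapse (my addition, not part of the paper; "single precision" is modelled here by NumPy's float32 type):

```python
import numpy as np

# Läuchli matrix with eps chosen so that eps^2 falls below the single
# precision rounding threshold: fl(1 + eps^2) = 1 in float32, whose
# eta is roughly 6e-8.
eps = 1e-4                                     # eps^2 = 1e-8
A = np.vstack([np.ones(5), eps * np.eye(5)])   # 6 x 5, rank 5

G = A.T @ A                                    # normal equations matrix, double precision
G_single = G.astype(np.float32)                # round to single precision

print(np.linalg.matrix_rank(A))         # 5: A itself has full rank
print(np.linalg.matrix_rank(G))         # 5: the rank survives in double precision
print(np.linalg.matrix_rank(G_single))  # 1: every entry has rounded to 1.0
```

No linear equation solver, however accurate, can recover the least squares solution from the rounded matrix `G_single`.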

In [1], HOUSEHOLDER stressed the use of orthogonal transformations for solving linear least squares problems. In this paper, we shall exploit these transformations. Since the euclidean norm of a vector is unitarily invariant,

$$\|b - Ax\| = \|c - QAx\|,$$

where $c = Qb$ and $Q$ is an orthogonal matrix. We choose $Q$ so that

$$(4) \qquad QA = \tilde R = \begin{pmatrix} R \\ 0 \end{pmatrix} \begin{matrix} \}\; n \times n \\ \}\; (m-n) \times n \end{matrix}$$

where $R$ is an upper triangular matrix. Clearly, $x = R^{-1} \tilde c$, where $\tilde c$ consists of the first $n$ components of $c$, and consequently

$$\min_x \|b - Ax\|^2 = \sum_{j=n+1}^{m} c_j^2.$$

Since $R$ is an upper triangular matrix and $R^T R = A^T A$, $R^T R$ is simply the Choleski decomposition of $A^T A$.

There are a number of ways to achieve the decomposition (4); e.g., one could apply a sequence of plane rotations to annihilate the elements below the diagonal of $A$. A very effective method to realize the decomposition (4) is via Householder transformations [1]. Let $A = A^{(1)}$, and let $A^{(2)}, A^{(3)}, \ldots, A^{(n+1)}$ be defined as follows:

$$A^{(k+1)} = P^{(k)} A^{(k)} \qquad (k = 1, 2, \ldots, n).$$

$P^{(k)}$ is a symmetric, orthogonal matrix of the form $P^{(k)} = I - 2 w^{(k)} w^{(k)T}$ for suitable $w^{(k)}$ such that $w^{(k)T} w^{(k)} = 1$. A derivation of $P^{(k)}$ is given in [5]. In order to simplify the calculations, we redefine $P^{(k)}$ as follows:

$$P^{(k)} = I - \beta_k u^{(k)} u^{(k)T},$$

where

$$\sigma_k = \Bigl( \sum_{i=k}^{m} \bigl( a_{ik}^{(k)} \bigr)^2 \Bigr)^{1/2}, \qquad \beta_k = \bigl[ \sigma_k \bigl( \sigma_k + | a_{kk}^{(k)} | \bigr) \bigr]^{-1},$$

and

$$u_i^{(k)} = 0 \;\; (i < k), \qquad u_k^{(k)} = \operatorname{sgn}\bigl( a_{kk}^{(k)} \bigr) \bigl( \sigma_k + | a_{kk}^{(k)} | \bigr), \qquad u_i^{(k)} = a_{ik}^{(k)} \;\; (i > k).$$

Thus

$$A^{(k+1)} = A^{(k)} - u^{(k)} \bigl( \beta_k u^{(k)T} A^{(k)} \bigr).$$

After $P^{(k)}$ has been applied to $A^{(k)}$, $A^{(k+1)}$ appears as follows:

$$A^{(k+1)} = \begin{pmatrix} R^{(k+1)} & * \\ 0 & * \end{pmatrix},$$

where $R^{(k+1)}$ is a $k \times k$ upper triangular matrix which is unchanged by subsequent transformations. Now $a_{kk}^{(k+1)} = -\bigl( \operatorname{sgn} a_{kk}^{(k)} \bigr) \sigma_k$, so that the rank of $A$ is less than $n$ if $\sigma_k = 0$. Clearly,

$$\tilde R = A^{(n+1)} \qquad \text{and} \qquad Q = P^{(n)} P^{(n-1)} \cdots P^{(1)},$$

although one need not compute $Q$ explicitly.

Let $\bar x$ be the initial solution obtained, and let $x = \bar x + e$. Then

$$b - Ax = r - Ae, \qquad \text{where} \quad r = b - A\bar x,$$

the residual vector. Thus the correction vector $e$ is itself the solution to a linear least squares problem. Once $A$ has been decomposed, it is a fairly simple matter to compute $r$ and solve for $e$. Since $e$ depends critically upon the residual vector, the components of $r$ should be computed using double precision inner products and then rounded to single precision accuracy. Naturally, one should continue to iterate as long as improved estimates of $x$ are obtained.
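The decomposition and the refinement loop can be summarized in a short NumPy sketch (my addition, not part of the paper: the function names and the threshold c = 0.25 are illustrative choices, and the residual is accumulated in ordinary double precision, whereas the paper prescribes inner products of higher precision than the working precision). The termination test anticipates the criterion stated in the closing paragraph below.

```python
import numpy as np

def householder_qr(A):
    """Decompose A (m x n, m >= n) as QA = (R; 0) via the transformations
    P(k) = I - beta_k u(k) u(k)^T, with sigma_k, beta_k, u(k) as above."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    U = np.zeros((m, n))                       # column k stores u(k)
    beta = np.zeros(n)
    for k in range(n):
        sigma = np.linalg.norm(A[k:, k])       # sigma_k
        if sigma == 0.0:
            raise np.linalg.LinAlgError("rank of A is less than n")
        s = 1.0 if A[k, k] >= 0.0 else -1.0    # sgn, with sgn(0) taken as +1
        u = np.zeros(m)
        u[k] = s * (sigma + abs(A[k, k]))
        u[k + 1:] = A[k + 1:, k]
        beta[k] = 1.0 / (sigma * (sigma + abs(A[k, k])))
        A -= np.outer(u, beta[k] * (u @ A))    # A(k+1) = A(k) - u (beta_k u^T A(k))
        U[:, k] = u
    return U, beta, np.triu(A[:n, :])          # R is the leading n x n triangle

def apply_q(U, beta, v):
    """Form Qv = P(n) ... P(1) v without computing Q explicitly."""
    v = np.array(v, dtype=float)
    for k in range(U.shape[1]):
        v -= U[:, k] * (beta[k] * (U[:, k] @ v))
    return v

def lstsq_refined(A, b, c=0.25, max_iter=10):
    """Least squares solution with iterative refinement; each correction
    e solves min ||r - Ae|| using the stored decomposition."""
    U, beta, R = householder_qr(A)
    n = R.shape[0]
    # x(1): solve R x = c~ with x(0) = 0 (a triangular solver would
    # exploit the structure of R; np.linalg.solve is used for brevity).
    x = np.linalg.solve(R, apply_q(U, beta, b)[:n])
    e_prev = np.linalg.norm(x)                 # ||e(0)|| = ||x(1)||
    for _ in range(max_iter):
        r = b - A @ x        # the paper asks for double precision products here
        e = np.linalg.solve(R, apply_q(U, beta, r)[:n])
        if np.linalg.norm(e) > c * e_prev:
            break            # ||e(q+1)|| > c ||e(q)||; for q = 0 this is also
                             # the safeguard ||e(1)|| / ||x(1)|| > c
        x += e
        e_prev = np.linalg.norm(e)
    return x
```

For a consistent system built from the Läuchli matrix above, `lstsq_refined(A, A @ np.ones(5))` should agree with `np.linalg.lstsq(A, A @ np.ones(5), rcond=None)[0]`.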

The above iteration technique will converge only if the initial approximation to $x$ is sufficiently accurate. Let $x^{(q+1)} = x^{(q)} + e^{(q)}$ $(q = 0, 1, \ldots)$ with $x^{(0)} = 0$. Then if $\|e^{(1)}\| / \|x^{(1)}\| > c$, where $c < \tfrac{1}{2}$ (i.e., one requires that "at least one bit of the initial solution is correct"), one should not iterate, since there is little likelihood that the iterative method will converge. Since convergence tends to be linear, one should terminate the procedure as soon as $\|e^{(k+1)}\| > c \, \|e^{(k)}\|$.

References

[1] A. S. Householder: Unitary Triangularization of a Nonsymmetric Matrix, J. Assoc. Comput. Mach., Vol. 5 (1958), pp. 339–342.

[2] P. Läuchli: Jordan-Elimination und Ausgleichung nach kleinsten Quadraten, Numer. Math., Vol. 3 (1961), pp. 226–240.

[3] Y. Linnik: Method of Least Squares and Principles of the Theory of Observations, translated from Russian by R. C. Elandt, Pergamon Press, New York, 1961.

[4] E. E. Osborne: On Least Squares Solutions of Linear Equations, J. Assoc. Comput. Mach., Vol. 8 (1961), pp. 628–636.

[5] J. H. Wilkinson: Householder's Method for the Solution of the Algebraic Eigenproblem, Comput. J., Vol. 3 (1960), pp. 23–27.

Gene H. Golub, Computation Center, Stanford University, Stanford, California, U.S.A.