Lecture # 5  The Linear Least Squares Problem

Let $X \in \mathbb{R}^{m \times n}$, $m \ge n$, be such that $\mathrm{rank}(X) = n$; that is, $Xy = 0$ iff $y = 0$. The problem is to find $y_{LS}$ such that
$$\|b - X y_{LS}\|_2 = \min_{y \in \mathbb{R}^n} \|b - X y\|_2. \tag{1}$$
We also want the residual
$$r_{LS} = b - X y_{LS}.$$
Our approach: compute the QR decomposition, that is,
$$X = Q \begin{pmatrix} R \\ 0 \end{pmatrix},$$
where the blocks have $n$ and $m-n$ rows, $Q \in \mathbb{R}^{m \times m}$ is orthogonal, and $R \in \mathbb{R}^{n \times n}$ is upper triangular and nonsingular. That yields a procedure of the form:

1. Use [Q,R] = qr(X) to get $Q$ and $R$.
2. Compute
$$c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = Q^T b, \qquad c_1 \in \mathbb{R}^n, \; c_2 \in \mathbb{R}^{m-n}.$$
3. Solve $R\, y_{LS} = c_1$ and compute
$$r_{LS} = Q \begin{pmatrix} 0 \\ c_2 \end{pmatrix}.$$

These come from the following MATLAB commands.

[m,n]=size(x; % Find dimensions of X [Q,R]=qr(X; % Compute orthogonal factorization c = Q b; % c = Q T b yls = R(1:n, 1:n\c(1:n; % Solve Ry LS = c 1 if m > n rls = Q [zeros(n, 1;c(n + 1:m]; % More efficient ( is rls = Q(:,n + 1:mc(n + 1:m if we actually have Q % r LS = Q else rls = zeros(m, 1; end; Note that X T r LS = ( R T ( Q T Q = ( R T ( = Thus r LS is orthogonal to the columns of X There are three well known ways to construct a Q R decomposition: Householder Golub factorization Modified Gram Schmidt orthogonalization Givens Q R factorization We will say a lot about the first two and just touch on the third MATLAB uses the first and we will start with that The matrix H = I 2ww T, w 2 = 1 is called a Householder transformation Often it is defined in terms of any nonzero vector v as H = I 2vv T /(v T v As you will show, H = H T, H T H = I, H 2 = I 2

It is common to choose $H$ such that for a given vector $x$,
$$Hx = \alpha e_1 = (\alpha, 0, \ldots, 0)^T.$$
Since $H$ is orthogonal,
$$\|Hx\|_2 = \|x\|_2 = |\alpha| \, \|e_1\|_2 = |\alpha|.$$
We use this transformation to insert zeros into a matrix. The choice of $w$ is
$$w = (x - \alpha e_1)/\|x - \alpha e_1\|_2.$$
We note that
$$\|x - \alpha e_1\|_2^2 = x^T x + \alpha^2 e_1^T e_1 - 2\alpha x^T e_1 = x^T x + \alpha^2 - 2\alpha x_1.$$
To prevent cancellation in the first entry of the Householder vector, it is recommended to choose
$$\alpha = -\mathrm{sign}(x_1)\,\|x\|_2,$$
so that
$$\|x - \alpha e_1\|_2^2 = 2\|x\|_2^2 + 2\|x\|_2 |x_1|.$$
If you go to my notes for CSE/Math 55, they show a way to choose $\alpha$ with the opposite sign: http://www.cse.psu.edu/~barlow/cse55/chap4.pdf

Show for yourself that $Hx = (I - 2ww^T)x = \alpha e_1$.

To apply a Householder transformation, that is, to compute $C = HB$, we use
$$C = (I - 2ww^T)B = B - 2ww^T B = B - w f^T,$$
where $f = 2B^T w$. Thus applying a Householder transformation is the result of a matrix-vector product and an outer product; the latter computes the components of $C$ from $c_{ij} = b_{ij} - w_i f_j$. This is only about $4mn$ arithmetic operations, much less work than forming $H$ and multiplying two matrices. Never apply a Householder transformation any other way.
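A minimal MATLAB sketch of this construction (house is our own illustrative name, not a built-in; MATLAB's qr does this internally with more care):

function [w, alpha] = house(x)
% Householder vector: (I - 2*w*w')*x = alpha*e_1, with sign(alpha)
% opposite to sign(x(1)) to prevent cancellation in x - alpha*e_1.
s = sign(x(1)); if s == 0, s = 1; end
alpha = -s*norm(x);
v = x; v(1) = v(1) - alpha;        % v = x - alpha*e_1
w = v/norm(v);
end

Applying $H$ to a matrix $B$ by the outer-product rule above is then a single line:

C = B - w*(2*(B'*w))';             % C = H*B = B - w*f', with f = 2*B'*w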

To compute the QR factorization of $X$, let $X = (x_1, \ldots, x_n)$. Choose $H_1$, a Householder transformation such that $H_1 x_1 = r_{11} e_1$; thus
$$X_1 = H_1 X = \begin{pmatrix} r_{11} & R_{12} \\ 0 & \widetilde X_1 \end{pmatrix},$$
where the blocks have $1$ and $m-1$ rows ($1$ and $n-1$ columns) and
$$\widetilde X_1 = (x_2^{(1)}, \ldots, x_n^{(1)}).$$
Choose $\widetilde H_2$ such that $\widetilde H_2 x_2^{(1)} = r_{22} e_1$. Then let
$$H_2 = \begin{pmatrix} 1 & 0 \\ 0 & \widetilde H_2 \end{pmatrix},$$
which leaves the first row unaffected. Then
$$X_2 = H_2 X_1 = H_2 H_1 X = \begin{pmatrix} R_{11}^{(2)} & R_{12}^{(2)} \\ 0 & \widetilde X_2 \end{pmatrix},$$
where $R_{11}^{(2)} \in \mathbb{R}^{2 \times 2}$ is upper triangular.

Suppose
$$X_{k-1} = H_{k-1} \cdots H_1 X = \begin{pmatrix} R_{11}^{(k-1)} & R_{12}^{(k-1)} \\ 0 & \widetilde X_{k-1} \end{pmatrix},$$
where $R_{11}^{(k-1)} \in \mathbb{R}^{(k-1) \times (k-1)}$ is upper triangular and
$$\widetilde X_{k-1} = (x_k^{(k-1)}, \ldots, x_n^{(k-1)}).$$
Choose $\widetilde H_k$ such that
$$\widetilde H_k x_k^{(k-1)} = r_{kk} e_1$$
and let
$$H_k = \begin{pmatrix} I_{k-1} & 0 \\ 0 & \widetilde H_k \end{pmatrix}.$$
Then
$$X_k = H_k X_{k-1} = H_k \cdots H_1 X = \begin{pmatrix} R_{11}^{(k)} & R_{12}^{(k)} \\ 0 & \widetilde X_k \end{pmatrix}.$$
At $k = n$ we have
$$X_n = H_n \cdots H_1 X = \begin{pmatrix} R \\ 0 \end{pmatrix},$$
where $R \in \mathbb{R}^{n \times n}$ is upper triangular and nonsingular. Thus
$$Q^T = H_n \cdots H_1 \quad \text{and} \quad Q = H_1 \cdots H_n. \tag{2}$$
MATLAB explicitly computes $Q$, but that is not necessary; it is done to conform with a software philosophy. If we were to write a routine in C++ or some other compiled language for this, we might prefer to store the vectors $w_1, \ldots, w_n$ that define
$$H_k = I - 2 w_k w_k^T.$$
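Assembling the steps above into a bare-bones MATLAB sketch (using the hypothetical house from earlier; here the $w_k$ are stored in a separate matrix W, and this is illustration only, not a replacement for qr):

function [W, R] = house_qr(X)
% Householder QR sketch: produces W = [w_1 ... w_n] and upper
% triangular R with H_n*...*H_1*X = [R; 0], H_k = I - 2*w_k*w_k'.
[m, n] = size(X);
W = zeros(m, n);
for k = 1:n
    [w, alpha] = house(X(k:m, k));
    W(k:m, k) = w;                 % first k-1 entries of w_k stay zero
    % apply H_k to the trailing submatrix via the outer-product form
    X(k:m, k:n) = X(k:m, k:n) - w*(2*(X(k:m, k:n)'*w))';
    X(k, k) = alpha;               % enforce exact zeros below diagonal
    X(k+1:m, k) = 0;
end
R = triu(X(1:n, 1:n));
end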

Since the first $k-1$ components of $w_k$ are zero, in many codes $X$ is overwritten by (in a $4 \times 3$ case)
$$\begin{pmatrix} w_{11} & r_{12} & r_{13} \\ w_{21} & w_{22} & r_{23} \\ w_{31} & w_{32} & w_{33} \\ w_{41} & w_{42} & w_{43} \end{pmatrix}.$$
The diagonal of $R$ can be stored in an extra vector $(r_{11}, r_{22}, r_{33})^T$, say.

Now we can fill in some blanks. If $Q$ is as in (2), we compute $c$ from
$$c = H_n \cdots H_1 b = Q^T b$$
and $r_{LS}$ from
$$r_{LS} = H_1 \cdots H_n \begin{pmatrix} 0 \\ c_2 \end{pmatrix}.$$
Then we recover the solution from $R\, y_{LS} = c_1$.
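With the $w_k$ stored, both products can be formed without ever building $Q$. A sketch, assuming W and R come from the hypothetical house_qr above and b is the right-hand side:

% c = H_n*...*H_1*b = Q'*b: apply H_1 first
c = b;
for k = 1:n
    w = W(:, k);
    c = c - w*(2*(w'*c));
end
yls = R\c(1:n);                    % solve R*yls = c_1
% r_LS = H_1*...*H_n*[0; c_2]: apply H_n first
rls = [zeros(n,1); c(n+1:m)];
for k = n:-1:1
    w = W(:, k);
    rls = rls - w*(2*(w'*rls));
end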

It turns out that the Householder QR decomposition gives us left orthogonal basis matrices for two important subspaces.

Range and Null Spaces

The range of a matrix $X \in \mathbb{R}^{m \times n}$ is the set
$$\mathrm{Range}(X) = \{ Xy : y \in \mathbb{R}^n \}.$$
The range is a subspace of $\mathbb{R}^m$. The rank of $X$ is the dimension of its range space and is written $\mathrm{rank}(X)$. Clearly, $\mathrm{rank}(X) \le \min\{m, n\}$. The span of a set of vectors $x_1, \ldots, x_n$ is given by
$$\mathrm{span}\{x_1, \ldots, x_n\} = \mathrm{Range}(X), \quad \text{where } X = (x_1, \ldots, x_n).$$
If $\mathrm{rank}(X) = n$, then $x_1, \ldots, x_n$ is a basis for $\mathrm{Range}(X)$ and $X$ is a basis matrix. A matrix $X$ is rank deficient if $\mathrm{rank}(X) < \min\{m, n\}$; it is said to have full column rank if $\mathrm{rank}(X) = n$ and full row rank if $\mathrm{rank}(X) = m$. The null space of $X$ is the linear subspace of $\mathbb{R}^n$ given by
$$\mathrm{Null}(X) = \{ y \in \mathbb{R}^n : Xy = 0 \}.$$

The columns of $Q$ contain bases for two important subspaces. Let
$$Q = (Q_1 \;\; Q_2), \qquad Q_1 \in \mathbb{R}^{m \times n}, \; Q_2 \in \mathbb{R}^{m \times (m-n)}.$$
The matrices $Q_1$ and $Q_2$ are left orthogonal matrices satisfying $Q_1^T Q_2 = 0$. Since $X = Q_1 R$, it is easily verified that
$$\mathrm{Range}(Q_1) = \mathrm{Range}(X).$$
That is, the columns of $Q_1$ are an orthonormal basis for $\mathrm{Range}(X)$. Since
$$X^T Q_2 = R^T Q_1^T Q_2 = 0,$$
one can show that
$$\mathrm{Range}(Q_2) = \mathrm{Range}(X)^{\perp} = \mathrm{Null}(X^T).$$
From these two matrices we get the orthogonal projections
$$P_1 = Q_1 Q_1^T, \qquad P_2 = Q_2 Q_2^T$$
onto $\mathrm{Range}(X)$ and $\mathrm{Null}(X^T)$, respectively. However, since
$$P_2 = I - P_1, \qquad P_1 = I - P_2,$$
if we have one, we have the other!
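For instance, one could form these projections in MATLAB and verify the identity numerically (random data, arbitrary sizes):

m = 8; n = 3;
X = randn(m, n);
[Q, R] = qr(X);                    % full QR: Q is m-by-m
Q1 = Q(:, 1:n); Q2 = Q(:, n+1:m);
P1 = Q1*Q1';                       % projection onto Range(X)
P2 = Q2*Q2';                       % projection onto Null(X')
disp(norm(P1 + P2 - eye(m)))       % ~1e-15: P2 = I - P1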