Householder reflectors are matrices of the form $P = I - 2ww^T$, where $w$ is a unit vector (a vector of 2-norm unity).


1 Householder QR

Householder reflectors are matrices of the form
$$P = I - 2ww^T,$$
where $w$ is a unit vector (a vector of 2-norm unity).

[Figure: a vector $x$, its image $Px$, and the direction $w$.]

Geometrically, $Px$ represents the mirror image of $x$ with respect to the hyperplane $\mathrm{span}\{w\}^{\perp}$.

[Reading: TB 10, 19; AB 2.3.3; GvL 5.1]

2 A few simple properties:

- For real $w$: $P$ is symmetric. It is also orthogonal ($P^TP = I$).
- In the complex case, $P = I - 2ww^H$ is Hermitian and unitary.
- $P$ can be written as $P = I - \beta vv^T$ with $\beta = 2/\|v\|_2^2$, where $v$ is a multiple of $w$. [storage: $v$ and $\beta$]
- $Px$ can be evaluated as $x - \beta(x^Tv)\,v$ (op count?)
- Similarly: $PA = A - vz^T$, where $z^T = \beta\, v^TA$.

NOTE: we work in $\mathbb{R}^m$, so all vectors are of length $m$, $P$ is of size $m\times m$, etc.
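The last two bullets say that $P$ is never formed explicitly. A minimal MATLAB sketch of this (the variable names are mine, not from the notes):

% Apply P = I - beta*v*v' to a vector x and to a matrix A
% without ever forming the m-by-m matrix P.
m = 5;
v = randn(m,1);                      % any nonzero v defines a reflector
beta = 2/(v'*v);
x = randn(m,1);  A = randn(m,4);

Px = x - beta*(v'*x)*v;              % O(m) work
z  = beta*(A'*v);                    % z' = beta * v' * A
PA = A - v*z';                       % rank-1 update, O(mn) work

% check against the explicitly formed reflector
P = eye(m) - beta*(v*v');
norm(Px - P*x), norm(PA - P*A)       % both ~ machine precision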

3 Problem 1: Given a vector $x \neq 0$, find $w$ such that
$$(I - 2ww^T)x = \alpha e_1,$$
where $\alpha$ is a (free) scalar.

Writing $(I - \beta vv^T)x = \alpha e_1$ yields $\beta(v^Tx)\,v = x - \alpha e_1$.

The desired $w$ is a multiple of $x - \alpha e_1$, i.e., we can take $v = x - \alpha e_1$.

To determine $\alpha$, just recall that $\|(I - 2ww^T)x\|_2 = \|x\|_2$.

As a result: $|\alpha| = \|x\|_2$, i.e., $\alpha = \pm\|x\|_2$.

4 Should verify that both signs work, i.e., that in both cases we indeed get $Px = \alpha e_1$ [exercise].

Which sign is best? To reduce cancellation, the resulting $x - \alpha e_1$ should not be small, so take $\alpha = -\mathrm{sign}(\xi_1)\,\|x\|_2$:
$$v = x + \mathrm{sign}(\xi_1)\,\|x\|_2\, e_1 \quad\text{and}\quad \beta = 2/\|v\|_2^2,$$
i.e., $v = (\hat\xi_1, \xi_2, \ldots, \xi_m)^T$ with
$$\hat\xi_1 = \begin{cases}\xi_1 + \|x\|_2 & \text{if } \xi_1 > 0\\ \xi_1 - \|x\|_2 & \text{if } \xi_1 \leq 0.\end{cases}$$

OK, but this will yield a negative multiple of $e_1$ if $\xi_1 > 0$.

5 Exercise: Show that $(I - \beta vv^T)x = \alpha e_1$ when $v = x - \alpha e_1$ and $\alpha = \pm\|x\|_2$.

Solution: Equivalent to showing that $x - \beta(x^Tv)\,v = \alpha e_1$, i.e., that $x - \alpha e_1 = \beta(x^Tv)\,v$. But recall that $v = x - \alpha e_1$, so we need to show that $\beta\, x^Tv = 1$, i.e., that
$$\frac{2\,x^Tv}{\|x - \alpha e_1\|_2^2} = 1.$$

Denominator $= \|x\|_2^2 + \alpha^2 - 2\alpha\, e_1^Tx = 2(\|x\|_2^2 - \alpha\, e_1^Tx)$, since $\alpha^2 = \|x\|_2^2$.
Numerator $= 2\,x^Tv = 2\,x^T(x - \alpha e_1) = 2(\|x\|_2^2 - \alpha\, x^Te_1)$.
Hence Numerator/Denominator $= 1$.

6 Alternative: Define $\sigma = \sum_{i=2}^m \xi_i^2$ and always set $\hat\xi_1 = \xi_1 - \|x\|_2$ (i.e., take $\alpha = \|x\|_2$). This is fine as written when $\xi_1 \leq 0$. When $\xi_1 > 0$, avoid the cancellation by computing $\hat\xi_1$ as
$$\hat\xi_1 = \xi_1 - \|x\|_2 = \frac{\xi_1^2 - \|x\|_2^2}{\xi_1 + \|x\|_2} = \frac{-\sigma}{\xi_1 + \|x\|_2}.$$
So:
$$\hat\xi_1 = \begin{cases}\dfrac{-\sigma}{\xi_1 + \|x\|_2} & \text{if } \xi_1 > 0\\[4pt] \xi_1 - \|x\|_2 & \text{if } \xi_1 \leq 0.\end{cases}$$

It is customary to compute a vector $v$ such that $v_1 = 1$, so $v$ is scaled by its first component. If $\sigma == 0$, we will get $v = [1;\ x(2{:}m)]$ and $\beta = 0$.

7 Matlab function:

function [v,bet] = house(x)
%% computes the Householder vector for x
m = length(x);
v = [1 ; x(2:m)];
sigma = v(2:m)' * v(2:m);
if (sigma == 0)
   bet = 0;
else
   xnrm = sqrt(x(1)^2 + sigma);
   if (x(1) <= 0)
      v(1) = x(1) - xnrm;
   else
      v(1) = -sigma / (x(1) + xnrm);
   end
   bet = 2 / (1 + sigma/v(1)^2);
   v = v/v(1);
end
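A quick sanity check of house (a sketch; the test vector is arbitrary): for this x the resulting reflector maps x onto $\|x\|_2\, e_1$.

x = [3; 1; 5; 1];
[v, bet] = house(x);
Px = x - bet*(v'*x)*v;     % same as (I - bet*v*v')*x
disp(Px')                  % [6 0 0 0], and norm(x) = 6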

8 Problem 2: Generalization. Given an $m\times n$ matrix $X$, find $w_1, w_2, \ldots, w_n$ such that
$$(I - 2w_nw_n^T)\cdots(I - 2w_2w_2^T)(I - 2w_1w_1^T)X = R,$$
where $r_{ij} = 0$ for $i > j$.

First step is easy: select $w_1$ so that the first column of $X$ becomes $\alpha e_1$.
Second step: select $w_2$ so that $x_2$ has zeros below the 2nd component.
Etc.
After $k-1$ steps, $X_k \equiv P_{k-1}\cdots P_1X$ has the following shape:

9 $$X_k = \begin{pmatrix}
x_{11} & x_{12} & x_{13} & \cdots & \cdots & \cdots & x_{1n}\\
       & x_{22} & x_{23} & \cdots & \cdots & \cdots & x_{2n}\\
       &        & x_{33} & \cdots & \cdots & \cdots & x_{3n}\\
       &        &        & \ddots &        &        & \vdots\\
       &        &        &        & x_{kk} & \cdots & x_{kn}\\
       &        &        &        & x_{k+1,k} & \cdots & x_{k+1,n}\\
       &        &        &        & \vdots &        & \vdots\\
       &        &        &        & x_{m,k} & \cdots & x_{m,n}
\end{pmatrix}$$

To do: transform this matrix into one which is upper triangular up to the $k$-th column, while leaving the previous columns untouched.

10 To leave the first $k-1$ columns unchanged, $w$ must have zeros in positions 1 through $k-1$:
$$P_k = I - 2w_kw_k^T, \qquad w_k = \frac{v}{\|v\|_2},$$
where the vector $v$ can be expressed via a Householder vector for a shorter vector, using the matlab function house:
$$v = \begin{pmatrix} 0 \\ \mathrm{house}(X(k{:}m,\,k)) \end{pmatrix}.$$
The result is that work is done only on the $X(k{:}m,\,k{:}n)$ submatrix.

11 ALGORITHM 1: Householder QR

1. For k = 1:n do
2.    [v, bet] = house(X(k:m, k))
3.    X(k:m, k:n) = (I - bet*v*v')*X(k:m, k:n)
4.    If (k < m)
5.       X(k+1:m, k) = v(2:m-k+1)
6.    End
7. End

In the end: $X_n = P_nP_{n-1}\cdots P_1X$ = upper triangular.
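The algorithm written out as a runnable MATLAB function (a sketch building on house above; the function name hqr and the explicit accumulation of Q at the end are my additions, since in practice Q is usually kept in this factored form):

function [Q, R] = hqr(X)
% Householder QR: R overwrites the upper triangle of X, the essential
% parts of the Householder vectors are stored below the diagonal.
[m, n] = size(X);
B = zeros(n,1);                          % the beta's
for k = 1:n
   [v, bet] = house(X(k:m, k));
   X(k:m, k:n) = X(k:m, k:n) - (bet*v)*(v'*X(k:m, k:n));
   if (k < m)
      X(k+1:m, k) = v(2:m-k+1);
   end
   B(k) = bet;
end
R = triu(X);
Q = eye(m);                              % accumulate Q = P_1 P_2 ... P_n
for k = n:-1:1
   v = [1; X(k+1:m, k)];
   Q(k:m, :) = Q(k:m, :) - (B(k)*v)*(v'*Q(k:m, :));
end

For a random A, [Q, R] = hqr(A) should give norm(Q*R - A) and norm(Q'*Q - eye(m)) at the level of machine precision.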

12 This yields the factorization $X = QR$, where $Q = P_1P_2\cdots P_n$ and $R = X_n$.

Example: Reduce the system of vectors $X = [x_1, x_2, x_3]$ (a $4\times 3$ matrix).

Answer:

13-16 (Worked example; the numerical details of the intermediate matrices are abridged here.)

Stage 1: with $\|x_1\|_2 = 2$, form $v_1 = x_1 - 2e_1$, $w_1 = v_1/\|v_1\|_2$ and $P_1 = I - 2w_1w_1^T$; then $P_1X$ has first row $(2,\ 1,\ 2)$ and zeros below the $(1,1)$ entry.

Stage 2: work on the trailing part $\tilde x_2$ of the second column; here $\|\tilde x_2\|_2 = 1$, and $P_2 = I - \frac{2}{\|v_2\|_2^2}\,v_2v_2^T$ with $v_2$ built as on the previous slides.

Stage 3 (last stage): the same construction on the remaining trailing column gives $P_3$.

In the end
$$P_3P_2P_1X = R \quad(\text{upper triangular, with first row } (2,\ 1,\ 2)),$$
so we end up with the factorization
$$X = \underbrace{P_1P_2P_3}_{Q}\,R.$$

17 MAJOR difference with Gram-Schmidt: $Q$ is $m\times m$ and $R$ is $m\times n$ (same shape as $X$). The matrix $R$ has zeros below the $n$-th row.

Note also: this factorization always exists.

Cost of Householder QR? Compare with Gram-Schmidt.

Question: How to obtain $X = Q_1R_1$, where $Q_1$ has the same size as $X$ and $R_1$ is $n\times n$ (as in MGS)?

18 Answer: simply use the partitioning
$$X = \begin{pmatrix}Q_1 & Q_2\end{pmatrix}\begin{pmatrix}R_1\\ 0\end{pmatrix} \;\Rightarrow\; X = Q_1R_1.$$

Referred to as the thin QR factorization (or economy-size QR factorization in matlab).

How to solve a least-squares problem $Ax \approx b$ using the Householder factorization?

Answer: no need to compute $Q_1$. Just apply $Q^T$ to $b$. This entails applying the successive Householder reflections to $b$.
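A sketch of that idea in MATLAB, assuming A has been overwritten as in the Householder QR loop above (R in the upper triangle, Householder vectors below the diagonal, the beta's in a separate array); the function name ls_house is mine:

function x = ls_house(A, bet, b)
% Least-squares solution of min ||b - A*x||_2 from the factored
% Householder QR of A; Q is never formed.
[m, n] = size(A);
for k = 1:n                              % b <- P_k*b for k = 1,...,n, so b = Q'*b
   v = [1; A(k+1:m, k)];
   b(k:m) = b(k:m) - (bet(k)*(v'*b(k:m)))*v;
end
x = triu(A(1:n, 1:n)) \ b(1:n);          % solve R_1 * x = (Q'*b)(1:n)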

19 The rank-deficient case

Result of Householder QR: $Q_1$ and $R_1$ such that $Q_1R_1 = X$. In the rank-deficient case, we can have $\mathrm{span}\{Q_1\} \neq \mathrm{span}\{X\}$ because $R_1$ may be singular.

Remedy: Householder QR with column pivoting. The result will be
$$X\Pi = Q\begin{pmatrix}R_{11} & R_{12}\\ 0 & 0\end{pmatrix}$$
with $R_{11}$ nonsingular. So $\mathrm{rank}(X)$ = size of $R_{11}$ = $\mathrm{rank}(Q_1)$, and $Q_1$ and $X$ span the same subspace. $\Pi$ permutes the columns of $X$.

20 Algorithm: At step $k$, the active matrix is $X(k{:}m,\,k{:}n)$. Swap the $k$-th column with the column of largest 2-norm in $X(k{:}m,\,k{:}n)$. If all the columns have zero norm, stop.

[Figure: the active block $X(k{:}m,\,k{:}n)$; its $k$-th column is swapped with the column of largest norm.]
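A naive version of this pivoting loop (a sketch only: it recomputes the column norms of the active block from scratch at every step, which the next slide asks you to improve):

function [X, piv] = hqrp(X)
% Householder QR with column pivoting (naive norm recomputation).
[m, n] = size(X);
piv = 1:n;
for k = 1:n
   nrms = sqrt(sum(X(k:m, k:n).^2, 1));      % 2-norms of the active columns
   [nmax, j] = max(nrms);
   if (nmax == 0), break; end                % all remaining columns are zero
   j = j + k - 1;
   X(:, [k j]) = X(:, [j k]);                % swap columns k and j
   piv([k j]) = piv([j k]);
   [v, bet] = house(X(k:m, k));
   X(k:m, k:n) = X(k:m, k:n) - (bet*v)*(v'*X(k:m, k:n));
   if (k < m), X(k+1:m, k) = v(2:m-k+1); end
end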

21 Practical Question: How to implement this???

Suppose you know the norms of each column of $X$ at the start. What happens to each of the norms of $X(2{:}m,\,j)$ for $j = 2,\ldots,n$?

Generalize this to step $k$ and obtain a procedure to inexpensively compute the desired norms at each step.

22 Properties of the QR factorization

Consider the thin factorization $A = QR$ (size(Q) = [m, n] = size(A)). Assume $r_{ii} > 0$, $i = 1,\ldots,n$.

1. When $A$ is of full column rank this factorization exists and is unique.
2. It satisfies: $\mathrm{span}\{a_1,\ldots,a_k\} = \mathrm{span}\{q_1,\ldots,q_k\}$, $k = 1,\ldots,n$.
3. $R$ is identical with the Cholesky factor $G^T$ of $A^TA$.

When $A$ is rank-deficient and Householder with pivoting is used, then $\mathrm{Ran}\{Q_1\} = \mathrm{Ran}\{A\}$.
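Property 3 is easy to check numerically (a sketch; the signs of the rows of R are normalized first so that $r_{ii} > 0$, as assumed above):

A = randn(7, 4);                 % full column rank with probability 1
[Q1, R1] = qr(A, 0);             % thin QR
D = diag(sign(diag(R1)));
R1 = D*R1;                       % enforce r_ii > 0 (and Q1 = Q1*D accordingly)
G = chol(A'*A);                  % upper-triangular factor: A'*A = G'*G
norm(R1 - G)                     % small (at the level of rounding errors)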

23 Givens Rotations

Matrices of the form
$$G(i,k,\theta) = \begin{pmatrix}
1 &        &    &        &    &        &   \\
  & \ddots &    &        &    &        &   \\
  &        & c  & \cdots & s  &        &   \\
  &        & \vdots & \ddots & \vdots &  &   \\
  &        & -s & \cdots & c  &        &   \\
  &        &    &        &    & \ddots &   \\
  &        &    &        &    &        & 1
\end{pmatrix}$$
(the $c$'s and $s$'s sit in rows and columns $i$ and $k$), with $c = \cos\theta$ and $s = \sin\theta$, represent a rotation in the span of $e_i$ and $e_k$.

24 Main idea of Givens rotations: consider $y = Gx$; then
$$y_i = c\,x_i + s\,x_k, \qquad y_k = -s\,x_i + c\,x_k, \qquad y_j = x_j \ \text{ for } j \neq i, k.$$

Can make $y_k = 0$ by selecting
$$s = x_k/t, \qquad c = x_i/t, \qquad t = \sqrt{x_i^2 + x_k^2}.$$

This is used to introduce zeros in the first column of a matrix $A$ (for example $G(m-1,m)$, $G(m-2,m-1)$, ..., $G(1,2)$). See text for details.
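A minimal MATLAB sketch of one such step (my own illustration; it assumes the two entries involved are not both zero):

% Zero out A(k,1) against A(i,1) with one Givens rotation;
% only rows i and k of A are touched.
A = randn(5, 3);  i = 1;  k = 5;
t = sqrt(A(i,1)^2 + A(k,1)^2);
c = A(i,1)/t;  s = A(k,1)/t;
G = [c s; -s c];                  % the 2x2 active block of G(i,k,theta)
A([i k], :) = G * A([i k], :);    % now A(k,1) == 0 (up to rounding)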

25 Orthogonal projectors and subspaces

Notation: Given a subspace $\mathcal{X}$ of $\mathbb{R}^m$, define
$$\mathcal{X}^\perp = \{y \mid y \perp x \ \ \forall x \in \mathcal{X}\}.$$

Let $Q = [q_1,\ldots,q_r]$ be an orthonormal basis of $\mathcal{X}$.

How would you obtain such a basis?

Then define the orthogonal projector $P = QQ^T$.

[Reading: AB 2.4.4; GvL 5.4]

26 Properties:
(a) $P^2 = P$
(b) $(I-P)^2 = I-P$
(c) $\mathrm{Ran}(P) = \mathcal{X}$
(d) $\mathrm{Null}(P) = \mathcal{X}^\perp$
(e) $\mathrm{Ran}(I-P) = \mathrm{Null}(P) = \mathcal{X}^\perp$

Note that (b) means that $I - P$ is also a projector.

Proof. (a), (b) are trivial.

(c): Clearly $\mathrm{Ran}(P) = \{x \mid x = QQ^Ty,\ y \in \mathbb{R}^m\} \subseteq \mathcal{X}$. Conversely, any $x \in \mathcal{X}$ is of the form $x = Qy$, $y \in \mathbb{R}^r$, and $Px = QQ^T(Qy) = Qy = x$. Since $x = Px$, $x \in \mathrm{Ran}(P)$, so $\mathcal{X} \subseteq \mathrm{Ran}(P)$. In the end $\mathcal{X} = \mathrm{Ran}(P)$.

(d): $x \in \mathcal{X}^\perp \iff (x,y) = 0\ \forall y \in \mathcal{X} \iff (x, Qz) = 0\ \forall z \in \mathbb{R}^r \iff (Q^Tx, z) = 0\ \forall z \in \mathbb{R}^r \iff Q^Tx = 0 \iff QQ^Tx = 0 \iff Px = 0$.
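These properties are easy to sanity-check numerically (a sketch; MATLAB's orth is used only to get an orthonormal basis of some r-dimensional subspace):

m = 6;  r = 3;
Q = orth(randn(m, r));             % orthonormal basis of a random subspace X
P = Q*Q';                          % orthogonal projector onto X
norm(P*P - P)                      % (a): ~ 0
norm((eye(m)-P)^2 - (eye(m)-P))    % (b): ~ 0
x = randn(m, 1);
norm(Q'*((eye(m)-P)*x))            % (d)/(e): (I-P)x lies in X-perp, ~ 0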

27 (e): Need to show the inclusion both ways.
$x \in \mathrm{Null}(P) \Rightarrow Px = 0 \Rightarrow (I-P)x = x \Rightarrow x \in \mathrm{Ran}(I-P)$.
$x \in \mathrm{Ran}(I-P) \Rightarrow \exists\, y \in \mathbb{R}^m:\ x = (I-P)y \Rightarrow Px = P(I-P)y = 0 \Rightarrow x \in \mathrm{Null}(P)$.

Result: Any $x \in \mathbb{R}^m$ can be written in a unique way as
$$x = x_1 + x_2, \qquad x_1 \in \mathcal{X},\ x_2 \in \mathcal{X}^\perp.$$
Proof: Just set $x_1 = Px$, $x_2 = (I-P)x$.

Called the Orthogonal Decomposition.

28 Orthogonal decomposition

In other words
$$\mathbb{R}^m = P\mathbb{R}^m \oplus (I-P)\mathbb{R}^m$$
or: $\mathbb{R}^m = \mathrm{Ran}(P) \oplus \mathrm{Ran}(I-P)$
or: $\mathbb{R}^m = \mathrm{Ran}(P) \oplus \mathrm{Null}(P)$
or: $\mathbb{R}^m = \mathrm{Ran}(P) \oplus \mathrm{Ran}(P)^\perp$

Can complete the basis $\{q_1,\ldots,q_r\}$ into an orthonormal basis of $\mathbb{R}^m$, $q_{r+1},\ldots,q_m$; then $\{q_{r+1},\ldots,q_m\}$ is a basis of $\mathcal{X}^\perp$, and $\dim(\mathcal{X}^\perp) = m - r$.
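One way to carry out this completion in MATLAB (a sketch; the full QR factorization of Q is just one of several possible tools):

m = 6;  r = 2;
Q = orth(randn(m, r));            % orthonormal basis q_1,...,q_r of X
[Qfull, ~] = qr(Q);               % full m-by-m orthogonal matrix
Qperp = Qfull(:, r+1:m);          % q_{r+1},...,q_m: orthonormal basis of X-perp
norm(Q'*Qperp)                    % ~ 0: the two blocks are mutually orthogonal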

29 Four fundamental subspaces - URV decomposition

Let $A \in \mathbb{R}^{m\times n}$ and consider $\mathrm{Ran}(A)^\perp$.

Property 1: $\mathrm{Ran}(A)^\perp = \mathrm{Null}(A^T)$.
Proof: $x \in \mathrm{Ran}(A)^\perp$ iff $(Ay, x) = 0$ for all $y$, iff $(y, A^Tx) = 0$ for all $y$, ...

Property 2: $\mathrm{Ran}(A^T)^\perp = \mathrm{Null}(A)$.

Take $\mathcal{X} = \mathrm{Ran}(A)$ in the orthogonal decomposition.

30 Result:
$$\mathbb{R}^m = \mathrm{Ran}(A) \oplus \mathrm{Null}(A^T), \qquad \mathbb{R}^n = \mathrm{Ran}(A^T) \oplus \mathrm{Null}(A).$$

The 4 fundamental subspaces: $\mathrm{Ran}(A)$, $\mathrm{Null}(A)$, $\mathrm{Ran}(A^T)$, $\mathrm{Null}(A^T)$.

31 Express the above with bases, for $\mathbb{R}^m$:
$$[\underbrace{u_1, u_2, \ldots, u_r}_{\mathrm{Ran}(A)},\ \underbrace{u_{r+1}, u_{r+2}, \ldots, u_m}_{\mathrm{Null}(A^T)}]$$
and for $\mathbb{R}^n$:
$$[\underbrace{v_1, v_2, \ldots, v_r}_{\mathrm{Ran}(A^T)},\ \underbrace{v_{r+1}, v_{r+2}, \ldots, v_n}_{\mathrm{Null}(A)}]$$

Observe: $u_i^TAv_j = 0$ for $i > r$ or $j > r$. Therefore
$$U^TAV = R = \begin{pmatrix}C & 0\\ 0 & 0\end{pmatrix}, \qquad C \in \mathbb{R}^{r\times r},\ R \in \mathbb{R}^{m\times n}$$
$$\Rightarrow\quad A = URV^T.$$

General class of URV decompositions.
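A small sketch of one such pair U, V in MATLAB, built here with orth and null (any orthonormal bases of the four fundamental subspaces would do):

A = randn(6, 4)*randn(4, 5);      % 6-by-5 matrix of rank 4 (almost surely)
r = rank(A);
U = [orth(A),  null(A')];         % basis of Ran(A), then of Null(A')
V = [orth(A'), null(A)];          % basis of Ran(A'), then of Null(A)
R = U'*A*V;                       % should be [C 0; 0 0] with C of size r-by-r
norm(R(r+1:end, :)), norm(R(:, r+1:end))   % both ~ 0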

32 Far from unique.

Exercise: Show how you can get a decomposition in which $C$ is lower (or upper) triangular, from the above factorization.

Can select the decomposition so that $R$ is upper triangular: URV decomposition.
Can select the decomposition so that $R$ is lower triangular: ULV decomposition.
The SVD is the special case of URV in which $R$ is diagonal.

How can you get the ULV decomposition by using only the Householder QR factorization (possibly with pivoting)? [Hint: you must use Householder twice]
