LU Factorization · Cholesky Factorization · The QR Factorization
LU Factorization: Gaussian Elimination Matrices

Gaussian elimination transforms vectors of the form
\[
\begin{pmatrix} a \\ \alpha \\ b \end{pmatrix},
\qquad a \in \mathbb{R}^k,\quad 0 \neq \alpha \in \mathbb{R},\quad b \in \mathbb{R}^{n-k-1},
\]
to those of the form \(\begin{pmatrix} a \\ \alpha \\ 0 \end{pmatrix}\). This is accomplished by left matrix multiplication as follows:
\[
\begin{pmatrix}
I_{k \times k} & 0 & 0 \\
0 & 1 & 0 \\
0 & -\alpha^{-1} b & I_{(n-k-1) \times (n-k-1)}
\end{pmatrix}
\begin{pmatrix} a \\ \alpha \\ b \end{pmatrix}
=
\begin{pmatrix} a \\ \alpha \\ 0 \end{pmatrix}.
\]
The matrix on the left is called a Gaussian elimination matrix.
Gaussian Elimination Matrices

The matrix
\[
G = \begin{pmatrix}
I_{k \times k} & 0 & 0 \\
0 & 1 & 0 \\
0 & -\alpha^{-1} b & I_{(n-k-1) \times (n-k-1)}
\end{pmatrix}
\]
has ones on the diagonal and so is invertible. Indeed,
\[
G^{-1} = \begin{pmatrix}
I_{k \times k} & 0 & 0 \\
0 & 1 & 0 \\
0 & \alpha^{-1} b & I_{(n-k-1) \times (n-k-1)}
\end{pmatrix}.
\]
Also note that
\[
G \begin{pmatrix} x \\ 0 \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \\ y \end{pmatrix}.
\]
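The elimination step and its sign-flip inverse can be sketched in NumPy; the function name `gauss_elim_matrix` is ours, introduced only for illustration:

```python
import numpy as np

def gauss_elim_matrix(x, k):
    """Gaussian elimination matrix zeroing the entries of x below position k.

    x = (a, alpha, b) with a in R^k, alpha = x[k] != 0, b in R^{n-k-1}.
    Returns G with G @ x = (a, alpha, 0).
    """
    n = len(x)
    G = np.eye(n)
    G[k + 1:, k] = -x[k + 1:] / x[k]   # the -b/alpha column
    return G

x = np.array([3.0, 2.0, 4.0, -6.0])
G = gauss_elim_matrix(x, 1)           # zero everything below entry k = 1
assert np.allclose(G @ x, [3.0, 2.0, 0.0, 0.0])

# The inverse simply flips the sign of the -b/alpha block:
Ginv = np.eye(4)
Ginv[2:, 1] = x[2:] / x[1]
assert np.allclose(G @ Ginv, np.eye(4))
```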
LU Factorization

Suppose
\[
A = \begin{bmatrix} a_1 & v_1^T \\ u_1 & \tilde{A}_1 \end{bmatrix} \in \mathbb{C}^{m \times n},
\]
with \(0 \neq a_1 \in \mathbb{C}\), \(u_1 \in \mathbb{C}^{m-1}\), \(v_1 \in \mathbb{C}^{n-1}\), and \(\tilde{A}_1 \in \mathbb{C}^{(m-1) \times (n-1)}\). Then
\[
\begin{bmatrix} 1 & 0 \\ -\frac{u_1}{a_1} & I \end{bmatrix}
\begin{bmatrix} a_1 & v_1^T \\ u_1 & \tilde{A}_1 \end{bmatrix}
=
\begin{bmatrix} a_1 & v_1^T \\ 0 & A_1 \end{bmatrix}, \tag{*}
\]
where \(A_1 = \tilde{A}_1 - u_1 v_1^T / a_1\). Repeating this on each successive trailing block gives
\[
L_{m-1}^{-1} \cdots L_2^{-1} L_1^{-1} A = U,
\]
where \(U\) is upper triangular, so \(A = LU\) with \(L = L_1 L_2 \cdots L_{m-1}\) lower triangular with ones on the diagonal.
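The recursion above translates directly into code. A minimal NumPy sketch (no pivoting, so it assumes every pivot \(a_1\) encountered is nonzero; production code would pivot for stability):

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization A = L U by repeated elimination, no pivoting.

    Assumes every pivot encountered is nonzero, as in the derivation.
    """
    A = A.astype(float)
    m, n = A.shape
    L = np.eye(m)
    U = A.copy()
    for k in range(min(m, n) - 1):
        l = U[k + 1:, k] / U[k, k]              # u_1 / a_1
        L[k + 1:, k] = l                        # L collects the multipliers
        U[k + 1:, k:] -= np.outer(l, U[k, k:])  # A_1 = A~_1 - u_1 v_1^T / a_1
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)            # A = L U
assert np.allclose(U, np.triu(U))       # U is upper triangular
assert np.allclose(np.diag(L), 1.0)     # unit diagonal in L
```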
Cholesky Factorization

Suppose \(M \in \mathbb{R}^{n \times n}\) is symmetric and positive definite with LU factorization \(M = LU\). Then \(L^{-1} M L^{-T} = U L^{-T}\) is an upper triangular symmetric matrix, that is, diagonal: \(U L^{-T} = D\). Since \(M\) is positive definite, \(D\) has positive diagonal entries, so
\[
M = L D L^T = \hat{L} \hat{L}^T, \qquad \hat{L} = L D^{1/2}.
\]
This is called the Cholesky factorization of \(M\).
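Following the derivation, one can compute \(L\) and \(D\) directly and then form \(\hat{L} = L D^{1/2}\). A sketch in NumPy (the helper name `cholesky_from_ldl` is ours), checked against `numpy.linalg.cholesky`:

```python
import numpy as np

def cholesky_from_ldl(M):
    """Cholesky factor L_hat = L D^{1/2} via the M = L D L^T decomposition."""
    M = M.astype(float)
    n = M.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = M[j, j] - L[j, :j] ** 2 @ D[:j]
        for i in range(j + 1, n):
            L[i, j] = (M[i, j] - L[i, :j] * L[j, :j] @ D[:j]) / D[j]
    return L * np.sqrt(D)   # scales column j by sqrt(D_j): L_hat = L D^{1/2}

M = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])     # symmetric positive definite
Lhat = cholesky_from_ldl(M)
assert np.allclose(Lhat @ Lhat.T, M)
assert np.allclose(Lhat, np.linalg.cholesky(M))
```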
The QR Factorization: Householder Reflections

Given \(w \in \mathbb{R}^n\) we can associate the matrix
\[
U = I - 2 \frac{w w^T}{w^T w},
\]
which reflects \(\mathbb{R}^n\) across the hyperplane \(\mathrm{Span}\{w\}^\perp\). The matrix \(U\) is called the Householder reflection across this hyperplane. Given a pair of vectors \(x\) and \(y\) with \(\|x\|_2 = \|y\|_2\) and \(x \neq y\), there is a Householder reflection such that \(y = Ux\):
\[
U = I - 2 \frac{(x-y)(x-y)^T}{(x-y)^T (x-y)}.
\]
Householder reflections are symmetric unitary transformations: \(U^{-1} = U^T = U\).
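A small numerical check of these properties (the vectors \(x\) and \(y\) below are our own example, chosen with equal 2-norms):

```python
import numpy as np

def householder(w):
    """Householder reflection U = I - 2 w w^T / (w^T w)."""
    w = np.asarray(w, dtype=float)
    return np.eye(len(w)) - 2.0 * np.outer(w, w) / (w @ w)

x = np.array([3.0, 4.0, 0.0])
y = np.array([0.0, 0.0, 5.0])          # same 2-norm as x
U = householder(x - y)

assert np.allclose(U @ x, y)           # U reflects x onto y
assert np.allclose(U, U.T)             # symmetric
assert np.allclose(U @ U, np.eye(3))   # involutory, hence U^{-1} = U^T = U
```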
The QR Factorization

Given \(A \in \mathbb{R}^{m \times n}\), write
\[
A_0 = A = \begin{bmatrix} \alpha_0 & a_0^T \\ b_0 & \hat{A}_0 \end{bmatrix}
\qquad\text{and}\qquad
\nu_0 = \left\| \begin{pmatrix} \alpha_0 \\ b_0 \end{pmatrix} \right\|_2 .
\]
Set
\[
H_0 = I - 2 \frac{w w^T}{w^T w},
\qquad\text{where}\qquad
w = \begin{pmatrix} \alpha_0 \\ b_0 \end{pmatrix} - \nu_0 e_1
  = \begin{pmatrix} \alpha_0 - \nu_0 \\ b_0 \end{pmatrix}.
\]
Then
\[
H_0 A = \begin{bmatrix} \nu_0 & a_1^T \\ 0 & A_1 \end{bmatrix}.
\]
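This single step can be verified numerically; the matrix below is our own example. (In practice the sign of \(\nu_0\) is chosen to avoid cancellation in \(\alpha_0 - \nu_0\); the sketch uses the formula exactly as written.)

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0],
              [0.0, 5.0]])

nu0 = np.linalg.norm(A[:, 0])    # nu_0 = ||(alpha_0, b_0)||_2
w = A[:, 0].copy()
w[0] -= nu0                      # w = (alpha_0 - nu_0, b_0)
H0 = np.eye(3) - 2.0 * np.outer(w, w) / (w @ w)

H0A = H0 @ A
# The first column of A is mapped to nu_0 e_1:
assert np.allclose(H0A[:, 0], [nu0, 0.0, 0.0])
```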
QR Factorization

Starting from
\[
H_0 A = \begin{bmatrix} \nu_0 & a_1^T \\ 0 & A_1 \end{bmatrix},
\]
repeat on each trailing block to get
\[
Q^T A = H_{n-1} H_{n-2} \cdots H_0 A = R,
\]
where \(R\) is upper triangular and \(Q\) is unitary. Then \(A = QR\) is called the QR factorization of \(A\).
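The full procedure can be sketched as follows (again without the usual sign trick that avoids cancellation; the function name `householder_qr` is ours):

```python
import numpy as np

def householder_qr(A):
    """QR factorization via successive Householder reflections: A = Q R."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for k in range(min(m - 1, n)):
        w = R[k:, k].copy()
        w[0] -= np.linalg.norm(w)     # w = (alpha_k - nu_k, b_k)
        if np.allclose(w, 0):
            continue                  # column already reduced
        Hk = np.eye(m)
        Hk[k:, k:] -= 2.0 * np.outer(w, w) / (w @ w)
        R = Hk @ R                    # accumulate H_k ... H_0 A
        Q = Q @ Hk                    # Q^T = H_{n-1} ... H_0, each H_k symmetric
    return Q, R

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))   # Q is orthogonal
assert np.allclose(R, np.triu(R))        # R is upper triangular
```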
Orthogonal Projections

Suppose \(A \in \mathbb{R}^{m \times n}\) with \(m > n\). The QR factorization of \(A\) looks like
\[
A = [Q_1,\, Q_2] \begin{bmatrix} R \\ 0 \end{bmatrix} = Q_1 R,
\]
where the columns of \(Q_1\) and \(Q_2\) together form an orthonormal basis for \(\mathbb{R}^m\). The columns of \(Q_1\) form an orthonormal basis for the range of \(A\), with
\[
Q_1 Q_1^T = \text{the orthogonal projector onto } \mathrm{Ran}(A)
\]
and
\[
I - Q_1 Q_1^T = Q_2 Q_2^T = \text{the orthogonal projector onto } \mathrm{Ran}(A)^\perp.
\]
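A quick check with NumPy's built-in QR (its default "reduced" mode returns exactly the thin factor \(Q_1\); the tall matrix below is our own example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])       # m = 3 > n = 2
Q1, R = np.linalg.qr(A)          # reduced QR: A = Q1 R, Q1 is m x n
P = Q1 @ Q1.T                    # orthogonal projector onto Ran(A)

assert np.allclose(P @ A, A)     # P fixes the range of A
assert np.allclose(P @ P, P)     # idempotent
assert np.allclose(P, P.T)       # symmetric
```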
Orthogonal Projections

Similarly, if \(A \in \mathbb{R}^{m \times n}\) with \(m < n\), consider \(A^T\). The QR factorization of \(A^T\) looks like
\[
A^T = [Q_1,\, Q_2] \begin{bmatrix} R \\ 0 \end{bmatrix} = Q_1 R,
\]
where the columns of \(Q_1\) and \(Q_2\) together form an orthonormal basis for \(\mathbb{R}^n\). The columns of \(Q_1\) form an orthonormal basis for the range of \(A^T\), with
\[
Q_1 Q_1^T = \text{the orthogonal projector onto } \mathrm{Ran}(A^T)
\]
and
\[
I - Q_1 Q_1^T = Q_2 Q_2^T = \text{the orthogonal projector onto } \mathrm{Ran}(A^T)^\perp = \mathrm{Nul}(A).
\]
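The projector onto \(\mathrm{Nul}(A)\) needs the block \(Q_2\), so here one asks NumPy for the complete (square) \(Q\); the wide matrix below is our own example:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])             # m = 2 < n = 3
Q, R = np.linalg.qr(A.T, mode="complete")   # Q = [Q1, Q2] is n x n
Q1, Q2 = Q[:, :2], Q[:, 2:]

P_null = Q2 @ Q2.T                 # = I - Q1 Q1^T, projector onto Nul(A)
assert np.allclose(A @ P_null, 0)  # columns of P_null lie in Nul(A)
assert np.allclose(P_null + Q1 @ Q1.T, np.eye(3))
```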