AMS526: Numerical Analysis I (Numerical Linear Algebra)
Lecture 3: Positive-Definite Systems; Cholesky Factorization
Xiangmin Jiao, Stony Brook University

Symmetric Positive-Definite Matrices

- A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite (SPD) if $x^T A x > 0$ for all $x \in \mathbb{R}^n \setminus \{0\}$.
- A Hermitian matrix $A \in \mathbb{C}^{n \times n}$ is Hermitian positive definite (HPD) if $x^* A x > 0$ for all $x \in \mathbb{C}^n \setminus \{0\}$.
- SPD matrices have positive real eigenvalues and orthogonal eigenvectors.
- Note: most textbooks only discuss SPD or HPD matrices, but a positive-definite matrix need not be symmetric or Hermitian. A real matrix $A$ is positive definite iff $A + A^T$ is SPD.
- If $x^T A x \ge 0$ for all $x \in \mathbb{R}^n \setminus \{0\}$, then $A$ is said to be positive semidefinite.
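A quick numerical check of these definitions (a NumPy sketch added here, not part of the original slides): definiteness of a nonsymmetric real matrix can be tested through its symmetric part $A + A^T$.

    import numpy as np

    # SPD example: eigenvalues are positive and real
    A = np.array([[4.0, 2.0],
                  [2.0, 5.0]])
    print(np.linalg.eigvalsh(A))          # all positive -> A is SPD

    # A nonsymmetric positive-definite matrix: x^T B x > 0 for all x != 0
    # even though B is not symmetric; test via the symmetric part B + B^T
    B = np.array([[ 2.0, 1.0],
                  [-1.0, 2.0]])
    print(np.linalg.eigvalsh(B + B.T))    # all positive -> B is positive definite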

Properties of Symmetric Positive-Definite Matrices

- SPD matrices often arise as Hessian matrices of convex functionals, e.g., in least squares problems and partial differential equations.
- If $A$ is SPD, then $A$ is nonsingular.
- Let $X$ be any $n \times m$ matrix with full rank and $n \ge m$. Then $X^T X$ is symmetric positive definite, and $X X^T$ is symmetric positive semidefinite. (Indeed, $x^T X^T X x = \|Xx\|^2 > 0$ for $x \ne 0$, since $X$ has full column rank.)
- If $A$ is $n \times n$ SPD and $X \in \mathbb{R}^{n \times m}$ has full rank with $n \ge m$, then $X^T A X$ is SPD.
- Any principal submatrix of $A$ (obtained by picking some rows and the corresponding columns) is SPD, and in particular $a_{ii} > 0$.
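A small NumPy illustration of two of these facts (an added sketch, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 3))       # full column rank with probability 1

    G = X.T @ X                           # Gram matrix: SPD
    print(np.linalg.eigvalsh(G))          # all positive

    H = X @ X.T                           # 5x5 but rank 3: positive semidefinite
    print(np.linalg.eigvalsh(H))          # nonnegative up to rounding error

    # A principal submatrix of an SPD matrix is SPD
    idx = [0, 2]
    print(np.linalg.eigvalsh(G[np.ix_(idx, idx)]))  # all positive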

Cholesky Factorization

- If $A$ is symmetric positive definite, then there is a factorization
  $$A = R^T R$$
  where $R$ is upper triangular and all its diagonal entries are positive.
- Note: the textbook calls this the Cholesky decomposition, but Cholesky factorization is a better term.
- Key idea: take advantage of, and preserve, symmetry and positive-definiteness during the factorization.
- Eliminate below the diagonal and to the right of the diagonal:
  $$A = \begin{bmatrix} a_{11} & b^T \\ b & K \end{bmatrix}
      = \begin{bmatrix} r_{11} & 0 \\ b/r_{11} & I \end{bmatrix}
        \begin{bmatrix} r_{11} & b^T/r_{11} \\ 0 & K - bb^T/a_{11} \end{bmatrix}$$
  $$\phantom{A}
      = \begin{bmatrix} r_{11} & 0 \\ b/r_{11} & I \end{bmatrix}
        \begin{bmatrix} 1 & 0 \\ 0 & K - bb^T/a_{11} \end{bmatrix}
        \begin{bmatrix} r_{11} & b^T/r_{11} \\ 0 & I \end{bmatrix}
      = R_1^T A_1 R_1,$$
  where $r_{11} = \sqrt{a_{11}}$ (note $a_{11} > 0$).
- $K - bb^T/a_{11}$ is a principal submatrix of $A_1 = R_1^{-T} A R_1^{-1}$ and therefore is SPD, with positive diagonal entries.
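As a concrete illustration of this first step (a worked example added here, not from the slides), take $a_{11} = 4$, $b = 2$, $K = 5$, so $r_{11} = \sqrt{4} = 2$:
$$\begin{bmatrix} 4 & 2 \\ 2 & 5 \end{bmatrix}
  = \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 0 \\ 0 & 5 - 2 \cdot 2/4 \end{bmatrix}
    \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix}
    \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}
    \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix}.$$
The Schur complement $K - bb^T/a_{11} = 4 > 0$ is itself (trivially) SPD, so the recursion continues; one more step gives $R = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$ with $A = R^T R$.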

Cholesky Factorization (Continued)

- Applying the same step recursively yields
  $$A = R_1^T R_2^T \cdots R_n^T \, R_n \cdots R_2 R_1 = R^T R, \qquad r_{jj} > 0,$$
  which is known as the Cholesky factorization.
- How do we obtain $R$ from $R_n, \ldots, R_2, R_1$? Writing $A_1 = \hat{R}^T \hat{R}$ for the Cholesky factorization of the Schur complement,
  $$A = \begin{bmatrix} r_{11} & 0 \\ s & I \end{bmatrix}
        \begin{bmatrix} 1 & 0 \\ 0 & A_1 \end{bmatrix}
        \begin{bmatrix} r_{11} & s^T \\ 0 & I \end{bmatrix}
      = \begin{bmatrix} r_{11} & 0 \\ s & I \end{bmatrix}
        \begin{bmatrix} 1 & 0 \\ 0 & \hat{R}^T \end{bmatrix}
        \begin{bmatrix} 1 & 0 \\ 0 & \hat{R} \end{bmatrix}
        \begin{bmatrix} r_{11} & s^T \\ 0 & I \end{bmatrix}
      = \begin{bmatrix} r_{11} & 0 \\ s & \hat{R}^T \end{bmatrix}
        \begin{bmatrix} r_{11} & s^T \\ 0 & \hat{R} \end{bmatrix}
      = R^T R$$
- $R$ is the union of the $k$th rows of the $R_k$ (and $R^T$ is the union of the $k$th columns of the $R_k^T$).
- The matrix $A_1$ is called the Schur complement of $a_{11}$ in $A$.

Existence and Uniqueness

- Every SPD matrix has a unique Cholesky factorization.
- Existence: the algorithm for Cholesky factorization always succeeds for SPD matrices.
- Uniqueness: once $\alpha = \sqrt{a_{11}}$ is determined at each step, the entire column $w/\alpha$ is determined.
- Question: how can we check whether a symmetric matrix is positive definite?
- Answer: run Cholesky factorization; it succeeds iff the matrix is positive definite (see the sketch below).
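This test is easy to carry out in practice. A minimal NumPy sketch (an added example, not from the slides): np.linalg.cholesky raises LinAlgError exactly when the factorization breaks down.

    import numpy as np

    def is_positive_definite(A):
        """Test whether a symmetric matrix A is positive definite
        by attempting a Cholesky factorization."""
        try:
            np.linalg.cholesky(A)   # lower-triangular L with A = L L^T
            return True
        except np.linalg.LinAlgError:
            return False

    print(is_positive_definite(np.array([[4.0, 2.0], [2.0, 5.0]])))  # True
    print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False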

Algorithm of Cholesky Factorization

Factorize an SPD matrix $A \in \mathbb{R}^{n \times n}$ into $A = R^T R$.

Algorithm: Cholesky factorization

    R = A
    for k = 1 to n
        for j = k + 1 to n
            r_{j,j:n} ← r_{j,j:n} − (r_{kj} / r_{kk}) r_{k,j:n}
        r_{k,k:n} ← r_{k,k:n} / √r_{kk}

Note: $r_{j,j:n}$ denotes the subvector of the $j$th row with columns $j, j+1, \ldots, n$.

Operation count:
$$\sum_{k=1}^{n} \sum_{j=k+1}^{n} 2(n - j) \approx 2 \sum_{k=1}^{n} \sum_{j=1}^{k} j \approx \sum_{k=1}^{n} k^2 \approx \frac{n^3}{3}$$

In practice, $R$ overwrites $A$, and only the upper-triangular part is stored.
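A direct NumPy transcription of this algorithm (a minimal sketch, assuming 0-based indexing; it is written for clarity, not performance):

    import numpy as np

    def cholesky_factor(A):
        """Return upper-triangular R with A = R^T R for SPD A,
        following the row-oriented algorithm above (0-based indexing)."""
        n = A.shape[0]
        R = np.triu(A).astype(float)      # work on the upper-triangular part
        for k in range(n):
            if R[k, k] <= 0.0:
                raise np.linalg.LinAlgError("matrix is not positive definite")
            for j in range(k + 1, n):
                R[j, j:] -= (R[k, j] / R[k, k]) * R[k, j:]
            R[k, k:] /= np.sqrt(R[k, k])
        return R

    A = np.array([[4.0, 2.0], [2.0, 5.0]])
    R = cholesky_factor(A)
    print(np.allclose(R.T @ R, A))        # True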

Bordered Form of Cholesky Factorization

Another version of Cholesky factorization is the bordered form. Let $A_j$ denote the $j \times j$ leading principal submatrix of $A$. Then
$$A_j = \begin{bmatrix} A_{j-1} & c \\ c^T & a_{jj} \end{bmatrix}
      = \begin{bmatrix} R_{j-1}^T & 0 \\ h^T & r_{jj} \end{bmatrix}
        \begin{bmatrix} R_{j-1} & h \\ 0 & r_{jj} \end{bmatrix},$$
where $A_{j-1} = R_{j-1}^T R_{j-1}$, $c = R_{j-1}^T h$, and $a_{jj} = h^T h + r_{jj}^2$.

Therefore $h = R_{j-1}^{-T} c$ and $r_{jj} = \sqrt{a_{jj} - h^T h}$, where $h$ is computed using forward substitution.
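A sketch of the bordered form in NumPy/SciPy (an added example, not from the slides; scipy.linalg.solve_triangular performs the forward substitution for $R_{j-1}^T h = c$):

    import numpy as np
    from scipy.linalg import solve_triangular

    def cholesky_bordered(A):
        """Bordered-form Cholesky: build R one column at a time."""
        n = A.shape[0]
        R = np.zeros_like(A, dtype=float)
        for j in range(n):
            c = A[:j, j]
            # forward substitution: solve R[:j,:j]^T h = c
            h = solve_triangular(R[:j, :j], c, trans='T') if j > 0 else c
            d = A[j, j] - h @ h
            if d <= 0.0:
                raise np.linalg.LinAlgError("matrix is not positive definite")
            R[:j, j] = h
            R[j, j] = np.sqrt(d)
        return R

    A = np.array([[4.0, 2.0], [2.0, 5.0]])
    R = cholesky_bordered(A)
    print(np.allclose(R.T @ R, A))        # True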

Notes on Cholesky Factorization Implementations

- The different versions of Cholesky factorization can all use block-matrix operations to achieve better performance; the actual performance depends on the block sizes.
- Different versions may expose different amounts of parallelism.
- Hermitian positive definite (HPD) matrices: a Cholesky factorization $A = R^* R$ exists for HPD matrices, where $R$ is upper triangular and its diagonal entries are positive real values.
- Question: what happens if $A$ is symmetric but not positive definite?

LDL^T Factorization

- The Cholesky factorization is sometimes written as $A = LDL^T$, where $D$ is a diagonal matrix and $L$ is a unit lower-triangular matrix. This form avoids computing square roots.
- Symmetric indefinite systems can be factorized as $PAP^T = LDL^T$, where $P$ is a permutation matrix and $D$ is block diagonal with $1 \times 1$ and $2 \times 2$ blocks. The cost is similar to that of Cholesky factorization (a SciPy sketch follows).
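SciPy exposes a pivoted factorization of this kind as scipy.linalg.ldl (a hedged sketch; the routine returns the permuted unit-lower-triangular factor along with the block-diagonal $D$):

    import numpy as np
    from scipy.linalg import ldl

    # A symmetric *indefinite* matrix: plain Cholesky would fail here
    A = np.array([[1.0,  2.0, 0.0],
                  [2.0, -1.0, 3.0],
                  [0.0,  3.0, 2.0]])

    L, D, perm = ldl(A)                  # pivoted; D has 1x1/2x2 blocks
    print(np.allclose(L @ D @ L.T, A))   # True
    print(D)                             # block-diagonal factor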

Banded Positive Definite Systems

- A matrix $A$ is banded if there is a narrow band around the main diagonal such that all entries of $A$ outside of the band are zero.
- If $A$ is $n \times n$ and there is an $s \ll n$ such that $a_{ij} = 0$ whenever $|i - j| > s$, then we say $A$ is banded with bandwidth $2s + 1$.
- For symmetric matrices, only half of the band is stored; we then say that $A$ has semi-bandwidth $s$.

Theorem. Let $A$ be a banded, symmetric positive definite matrix with semi-bandwidth $s$. Then its Cholesky factor $R$ also has semi-bandwidth $s$.

- This is easy to prove using the bordered form of Cholesky factorization.
- The total flop count of Cholesky factorization is then only about $n s^2$.
- However, $A^{-1}$ of a banded matrix may be dense, so it is not economical to compute $A^{-1}$.
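A quick NumPy check of the bandwidth-preservation theorem (an added sketch; for production use, banded-storage routines such as scipy.linalg.cholesky_banded avoid forming the full matrix):

    import numpy as np

    n, s = 8, 2
    # Build a banded SPD matrix: strictly diagonally dominant, semi-bandwidth s
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - s), min(n, i + s + 1)):
            A[i, j] = -1.0
        A[i, i] = 2.0 * s + 1.0          # dominance implies SPD

    L = np.linalg.cholesky(A)            # lower-triangular factor, A = L L^T
    # Verify the factor has the same semi-bandwidth s (no fill-in)
    ii, jj = np.nonzero(np.abs(L) > 1e-14)
    print((ii - jj).max() <= s)          # True

    # The inverse, by contrast, is typically dense
    print(np.count_nonzero(np.abs(np.linalg.inv(A)) > 1e-14))  # ~ n*n entries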