SPD Matrices and the Cholesky Decomposition. Jed Duersch, Applied Mathematics PhD candidate, Physics MA, UC Berkeley. Lecture, April 26, 2013.


Symmetric positive-definite I

Definition. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is positive definite iff $x^T A x > 0$ holds for all $x \ne 0$ in $\mathbb{R}^n$.

Such matrices naturally arise in a variety of math, science, and engineering applications. For example:
- the Hamiltonian operator of a physical system,
- the Hessian of a convex function,
- the covariance matrix of linearly independent random variables.

Symmetric positive-definite II

Theorem. The following statements are equivalent:
1. $A$ is symmetric positive definite.
2. $(x, y) := x^T A y$ is an inner product on $\mathbb{R}^n$.
3. All eigenvalues of $A$ are positive and its eigenvectors can be constructed to form an orthonormal basis.
4. $A$ is symmetric with positive leading principal minors.
5. The Cholesky decomposition of $A$ exists.
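As a quick numerical sanity check (an addition, not part of the original slides), the NumPy sketch below builds an SPD matrix and verifies conditions 1, 3, 4, and 5:

```python
# Build an SPD matrix and check several of the equivalent conditions.
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # G G^T is PSD; adding n*I makes it SPD

# (1) x^T A x > 0 for a random nonzero x
x = rng.standard_normal(n)
assert x @ A @ x > 0

# (3) all eigenvalues positive
assert np.all(np.linalg.eigvalsh(A) > 0)

# (4) positive leading principal minors
assert all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

# (5) the Cholesky factorization exists (raises LinAlgError otherwise)
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
```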

Existence I

To prove the existence of the Cholesky decomposition for symmetric positive definite matrices, we apply induction.

Induction hypothesis: given any spd $A_{n-1} \in \mathbb{R}^{(n-1) \times (n-1)}$, assume there exists a lower-triangular $L_{n-1}$ such that $A_{n-1} = L_{n-1} L_{n-1}^T$.

Base case: $A_1 = [a_{1,1}] = [\sqrt{a_{1,1}}][\sqrt{a_{1,1}}]^T$. (Note $a_{1,1} = e_1^T A_1 e_1 > 0$, so the square root is real.)

Consistency: show that for any spd $A_n \in \mathbb{R}^{n \times n}$ there exists $L_n$ such that $A_n = L_n L_n^T$.

Existence II

Let us represent $A_n$ as
$$A_n = \begin{bmatrix} \alpha & c^T \\ c & B \end{bmatrix}$$
where $\alpha := a_{1,1}$ is a scalar, $c := a_{2:n,1}$ is a column vector, and $B := a_{2:n,2:n}$ is the remaining $\mathbb{R}^{(n-1) \times (n-1)}$ lower-right sub-matrix. Then we have
$$A_n = \begin{bmatrix} \sqrt{\alpha} & 0 \\ \tfrac{1}{\sqrt{\alpha}} c & I_{n-1} \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & B - \tfrac{1}{\alpha} c c^T \end{bmatrix}
\begin{bmatrix} \sqrt{\alpha} & \tfrac{1}{\sqrt{\alpha}} c^T \\ 0 & I_{n-1} \end{bmatrix}$$

Existence III

Let $y$ be an arbitrary nonzero column vector in $\mathbb{R}^{n-1}$. We can solve
$$\begin{bmatrix} \sqrt{\alpha} & \tfrac{1}{\sqrt{\alpha}} c^T \\ 0 & I_{n-1} \end{bmatrix} x = \begin{bmatrix} 0 \\ y \end{bmatrix}$$
It follows that
$$y^T \left( B - \tfrac{1}{\alpha} c c^T \right) y = x^T A_n x > 0$$
Thus $B - \tfrac{1}{\alpha} c c^T$ is spd. From the induction hypothesis,
$$B - \tfrac{1}{\alpha} c c^T = L_{n-1} L_{n-1}^T$$

Existence IV

Putting it all together, we have
$$A_n = \begin{bmatrix} \sqrt{\alpha} & 0 \\ \tfrac{1}{\sqrt{\alpha}} c & I_{n-1} \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & B - \tfrac{1}{\alpha} c c^T \end{bmatrix}
\begin{bmatrix} \sqrt{\alpha} & \tfrac{1}{\sqrt{\alpha}} c^T \\ 0 & I_{n-1} \end{bmatrix}
= \begin{bmatrix} \sqrt{\alpha} & 0 \\ \tfrac{1}{\sqrt{\alpha}} c & I_{n-1} \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & L_{n-1} L_{n-1}^T \end{bmatrix}
\begin{bmatrix} \sqrt{\alpha} & \tfrac{1}{\sqrt{\alpha}} c^T \\ 0 & I_{n-1} \end{bmatrix}$$
$$= \begin{bmatrix} \sqrt{\alpha} & 0 \\ \tfrac{1}{\sqrt{\alpha}} c & L_{n-1} \end{bmatrix}
\begin{bmatrix} \sqrt{\alpha} & \tfrac{1}{\sqrt{\alpha}} c^T \\ 0 & L_{n-1}^T \end{bmatrix}
= L_n L_n^T$$
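The recursion in this proof can be transcribed almost line for line into code. The following Python sketch (an illustration added here, not from the slides) peels off $\alpha$ and $c$ and recurses on the Schur complement $B - \tfrac{1}{\alpha} c c^T$:

```python
# A recursive Cholesky factorization that mirrors the inductive proof.
import numpy as np

def cholesky_recursive(A):
    """Return lower-triangular L with A = L L^T, following the proof's recursion."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:                                   # base case: A_1 = [sqrt(a11)][sqrt(a11)]^T
        return np.array([[np.sqrt(A[0, 0])]])
    alpha, c, B = A[0, 0], A[1:, 0], A[1:, 1:]
    L = np.zeros_like(A)
    L[0, 0] = np.sqrt(alpha)                     # first column of L
    L[1:, 0] = c / L[0, 0]
    # induction step: factor the Schur complement, which is again spd
    L[1:, 1:] = cholesky_recursive(B - np.outer(c, c) / alpha)
    return L
```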

Implementation I

Note that the proof of existence was constructive. That means we can follow the same procedure to calculate the Cholesky factor.

1 Loop over columns. for k=1:n
2 Set diagonal element of L. L(k,k)=sqrt(A(k,k))
3 Finish column. L(k+1:n,k)=A(k+1:n,k)/L(k,k)
4 Update lower-right sub-matrix of A. A(k+1:n,k+1:n)=A(k+1:n,k+1:n)-L(k+1:n,k)*L(k+1:n,k)'
5 End for loop. end

(The loop runs through k=n so that the final diagonal entry of L is set; steps 3 and 4 involve empty index ranges when k=n.)
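For reference, a runnable NumPy translation of the column-by-column procedure above (an added sketch; 0-based indexing, no pivoting or error checking):

```python
# Column-by-column Cholesky: each pass sets one column of L and applies
# a rank-1 update to the trailing sub-matrix, exactly as in the steps above.
import numpy as np

def cholesky_columns(A):
    A = np.array(A, dtype=float)        # local copy; the copy is consumed by updates
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k])                          # diagonal entry
        L[k+1:, k] = A[k+1:, k] / L[k, k]                   # rest of column k
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])   # rank-1 update
    return L

# Quick check against NumPy's built-in factorization
rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4))
A = G @ G.T + 4 * np.eye(4)
assert np.allclose(cholesky_columns(A), np.linalg.cholesky(A))
```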

Band matrices I

We can similarly take advantage of the structure of band matrices to optimize the calculation of the LU decomposition.

Definition. A band matrix $A \in \mathbb{R}^{n \times n}$ of band width $2m+1$ has nonzero entries only on the main diagonal, the $m$ super-diagonals, and the $m$ sub-diagonals.

Band matrices II

For example, a tridiagonal matrix (band width 3) has the form:
$$A = \begin{bmatrix}
a_{1,1} & a_{1,2} & & & 0 \\
a_{2,1} & a_{2,2} & a_{2,3} & & \\
& a_{3,2} & a_{3,3} & a_{3,4} & \\
& & \ddots & \ddots & a_{n-1,n} \\
0 & & & a_{n,n-1} & a_{n,n}
\end{bmatrix}$$

Band matrices III

We can easily show that the LU decomposition will have the form:
$$A = \begin{bmatrix}
1 & & & 0 \\
l_{2,1} & 1 & & \\
& \ddots & \ddots & \\
0 & & l_{n,n-1} & 1
\end{bmatrix}
\begin{bmatrix}
u_{1,1} & u_{1,2} & & 0 \\
& u_{2,2} & \ddots & \\
& & \ddots & u_{n-1,n} \\
0 & & & u_{n,n}
\end{bmatrix}$$
Because we know so many entries must be zero, we can optimize the algorithm to avoid calculating them. Similarly, the procedures for forward and backward substitution can also be optimized in these cases (a sketch of the forward pass follows below).
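As an illustration of that last point (added here, not from the slides), forward substitution with a unit lower-triangular band factor only needs to look back $m$ entries per row, giving O(nm) work instead of O(n^2):

```python
# Forward substitution specialized to a banded, unit lower-triangular L,
# e.g. the L produced by the banded LU procedure later in these notes.
import numpy as np

def forward_substitution_banded(L, b, m):
    """Solve L y = b where L is unit lower triangular with m sub-diagonals."""
    n = len(b)
    y = np.array(b, dtype=float)
    for i in range(n):
        lo = max(0, i - m)              # only m entries left of the diagonal are nonzero
        y[i] -= L[i, lo:i] @ y[lo:i]
    return y
```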

Diagonal dominance I

Partial pivoting can be implemented, but it will increase the band width of U from m+1 to 2m+1. However, pivoting can sometimes be avoided.

Definition. A matrix $A \in \mathbb{R}^{n \times n}$ is column diagonally dominant iff
$$|a_{j,j}| > \sum_{i \ne j} |a_{i,j}|$$
for every column $j$.

We can show that a column diagonally dominant matrix never requires pivoting.
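A small helper (an added sketch, not from the slides) makes the definition concrete:

```python
# Test strict column diagonal dominance: each diagonal entry must exceed
# the sum of the absolute values of the other entries in its column.
import numpy as np

def is_column_diagonally_dominant(A):
    A = np.abs(np.asarray(A, dtype=float))
    off_diag_sums = A.sum(axis=0) - np.diag(A)   # off-diagonal sums per column
    return bool(np.all(np.diag(A) > off_diag_sums))

assert is_column_diagonally_dominant([[ 4, -1,  0],
                                      [-1,  4, -1],
                                      [ 0, -1,  4]])
```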

Diagonal dominance II

Proof: let $a^{(i)}_{k,j}$ represent element $(k,j)$ after zeroing column $i$ with the standard LU algorithm. Assume the induction hypothesis: for every remaining column $j > i$,
$$|a^{(i)}_{j,j}| > \sum_{k=i+1,\, k \ne j}^{n} |a^{(i)}_{k,j}|$$
When column $i+1$ is zeroed below the diagonal, the elements of A update as follows:
$$a^{(i+1)}_{k,j} = a^{(i)}_{k,j} - \frac{a^{(i)}_{k,i+1}\, a^{(i)}_{i+1,j}}{a^{(i)}_{i+1,i+1}}$$

Diagonal dominance III

$$|a^{(i+1)}_{j,j}| = \left| a^{(i)}_{j,j} - \frac{a^{(i)}_{j,i+1}\, a^{(i)}_{i+1,j}}{a^{(i)}_{i+1,i+1}} \right|
\ge |a^{(i)}_{j,j}| - \frac{|a^{(i)}_{j,i+1}|\, |a^{(i)}_{i+1,j}|}{|a^{(i)}_{i+1,i+1}|}$$
$$> \sum_{k=i+1,\, k \ne j}^{n} |a^{(i)}_{k,j}| - \frac{|a^{(i)}_{j,i+1}|\, |a^{(i)}_{i+1,j}|}{|a^{(i)}_{i+1,i+1}|}
= \sum_{k=i+2,\, k \ne j}^{n} |a^{(i)}_{k,j}| + |a^{(i)}_{i+1,j}| \left( 1 - \frac{|a^{(i)}_{j,i+1}|}{|a^{(i)}_{i+1,i+1}|} \right)$$

Diagonal dominance IV

To make further progress we observe:
$$|a^{(i+1)}_{k,j}| \le |a^{(i)}_{k,j}| + \frac{|a^{(i)}_{k,i+1}|\, |a^{(i)}_{i+1,j}|}{|a^{(i)}_{i+1,i+1}|}
\quad \Longrightarrow \quad
|a^{(i)}_{k,j}| \ge |a^{(i+1)}_{k,j}| - \frac{|a^{(i)}_{k,i+1}|\, |a^{(i)}_{i+1,j}|}{|a^{(i)}_{i+1,i+1}|}$$

Diagonal dominance V

Together this gives:
$$|a^{(i+1)}_{j,j}| > \sum_{k=i+2,\, k \ne j}^{n} |a^{(i)}_{k,j}| + |a^{(i)}_{i+1,j}| \left( 1 - \frac{|a^{(i)}_{j,i+1}|}{|a^{(i)}_{i+1,i+1}|} \right)
\ge \sum_{k=i+2,\, k \ne j}^{n} |a^{(i+1)}_{k,j}| + |a^{(i)}_{i+1,j}| \left( 1 - \frac{\sum_{k=i+2}^{n} |a^{(i)}_{k,i+1}|}{|a^{(i)}_{i+1,i+1}|} \right)$$
The final parenthesized factor is nonnegative, because the induction hypothesis applied to column $i+1$ gives $\sum_{k=i+2}^{n} |a^{(i)}_{k,i+1}| < |a^{(i)}_{i+1,i+1}|$. This shows consistency of the induction hypothesis, and the base case follows directly from the definition of column diagonal dominance. The desired result immediately follows, since the diagonal elements are always largest in magnitude in their columns and are therefore always acceptable pivots.

Implementation I

We can follow the standard LU procedure and simply avoid elements that do not update. If A has band width 2m+1, we have:

1 Loop over columns. for k=1:n-1
2 Construct column k of L. p=min(k+m,n); L(k+1:p,k)=A(k+1:p,k)/A(k,k)
3 Set column k of A to zero. A(k+1:p,k)=0
4 Update lower-right sub-matrix of A. A(k+1:p,k+1:p)=A(k+1:p,k+1:p)-L(k+1:p,k)*A(k,k+1:p)
5 End for loop. Identify U. end; U=A

(The clamp p=min(k+m,n) keeps the band indices from running past the edge of the matrix.)
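A NumPy translation of this banded procedure might look as follows (an added sketch; 0-based indexing and no pivoting, so the input should satisfy a condition such as column diagonal dominance):

```python
# Banded LU without pivoting: only the m rows below the pivot are touched.
import numpy as np

def lu_banded(A, m):
    A = np.array(A, dtype=float)        # local copy; becomes U as elimination proceeds
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        p = min(k + m, n - 1)           # clamp the band at the matrix edge
        L[k+1:p+1, k] = A[k+1:p+1, k] / A[k, k]
        A[k+1:p+1, k] = 0.0
        A[k+1:p+1, k+1:p+1] -= np.outer(L[k+1:p+1, k], A[k, k+1:p+1])
    return L, A                          # A now holds U

# Quick check on a small tridiagonal (m = 1) matrix
T = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
L, U = lu_banded(T, m=1)
assert np.allclose(L @ U, T)
```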

Implementation II

In the case of a tridiagonal matrix, the procedure becomes exceptionally simple and efficient. Let us write the diagonals of A as vectors. Then L and U can be written as:
$$A = \begin{bmatrix}
d_1 & e_1 & & 0 \\
c_1 & d_2 & \ddots & \\
& \ddots & \ddots & e_{n-1} \\
0 & & c_{n-1} & d_n
\end{bmatrix}
= \begin{bmatrix}
1 & & & 0 \\
l_1 & 1 & & \\
& \ddots & \ddots & \\
0 & & l_{n-1} & 1
\end{bmatrix}
\begin{bmatrix}
u_1 & e_1 & & 0 \\
& u_2 & \ddots & \\
& & \ddots & e_{n-1} \\
0 & & & u_n
\end{bmatrix}$$

Implementation III

Follow the same procedure, but avoid updating A. Build U directly. Note that the super-diagonal doesn't change and can be copied directly into U.

1 Set first diagonal entry of U. u(1)=d(1)
2 Loop over columns. for k=1:n-1
3 Construct column k of L. l(k)=c(k)/u(k)
4 Update A and place the result directly in U. u(k+1)=d(k+1)-l(k)*e(k)
5 End for loop. end
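Combined with the specialized substitution steps, this yields the classic tridiagonal (Thomas) solver. The following Python sketch (added here; function names are illustrative) implements the factorization above together with the matching solve:

```python
# Tridiagonal LU and solve: c, d, e are the sub-, main, and super-diagonals.
import numpy as np

def tridiag_factor(c, d, e):
    """Factor A = LU; only the diagonal of U changes, its super-diagonal is e."""
    n = len(d)
    u = np.empty(n)
    l = np.empty(n - 1)
    u[0] = d[0]
    for k in range(n - 1):
        l[k] = c[k] / u[k]               # column k of L
        u[k+1] = d[k+1] - l[k] * e[k]    # next diagonal entry of U
    return l, u

def tridiag_solve(l, u, e, b):
    """Solve A x = b given the factors from tridiag_factor."""
    n = len(u)
    x = np.array(b, dtype=float)
    for k in range(1, n):                 # forward substitution: L y = b
        x[k] -= l[k-1] * x[k-1]
    x[-1] /= u[-1]
    for k in range(n - 2, -1, -1):        # back substitution: U x = y
        x[k] = (x[k] - e[k] * x[k+1]) / u[k]
    return x

# Quick check on a small diagonally dominant system
c = np.array([1., 1.]); d = np.array([4., 4., 4.]); e = np.array([1., 1.])
l, u = tridiag_factor(c, d, e)
A = np.diag(d) + np.diag(c, -1) + np.diag(e, 1)
b = np.array([1., 2., 3.])
assert np.allclose(A @ tridiag_solve(l, u, e, b), b)
```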