Lecture 11. Linear systems: Cholesky method, QR decomposition. Eigensystems: Terminology. Jacobi transformations. QR transformation


Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition in which $U = L^T$, i.e. $U_{ij} = L_{ji}$. One therefore has
$$a_{ii} = \sum_{k=1}^{N} L_{ik} U_{ki} = \sum_{k=1}^{N} L_{ik} L_{ik} = \sum_{k=1}^{i} L_{ik}^2 = \sum_{k=1}^{i-1} L_{ik}^2 + L_{ii}^2,$$
or, for $j > i$,
$$a_{ij} = \sum_{k=1}^{N} L_{ik} U_{kj} = \sum_{k=1}^{N} L_{ik} L_{jk} = \sum_{k=1}^{i} L_{ik} L_{jk} = \sum_{k=1}^{i-1} L_{ik} L_{jk} + L_{ii} L_{ji}.$$
If you look at these carefully, you'll see that solving the first equation for $L_{ii}$ and the second for $L_{ji}$ only requires $L$'s that have already been computed. The whole decomposition costs $O(N^3)$ operations.
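A minimal NumPy sketch of these recurrences (not the lecture's code; the loop ordering, variable names, and the positive-pivot check are my own choices):

```python
import numpy as np

def cholesky(a):
    """Return lower-triangular L with A = L @ L.T for a symmetric positive definite A."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    L = np.zeros_like(a)
    for i in range(n):
        # Diagonal: a_ii = sum_{k<=i} L_ik^2  =>  L_ii = sqrt(a_ii - sum_{k<i} L_ik^2)
        s = a[i, i] - np.dot(L[i, :i], L[i, :i])
        if s <= 0.0:
            raise ValueError("matrix is not positive definite (within roundoff)")
        L[i, i] = np.sqrt(s)
        # Column i below the diagonal: L_ji = (a_ij - sum_{k<i} L_ik L_jk) / L_ii
        for j in range(i + 1, n):
            L[j, i] = (a[i, j] - np.dot(L[i, :i], L[j, :i])) / L[i, i]
    return L

# quick check on a small symmetric positive definite matrix
A = np.array([[4., 2., 2.], [2., 3., 1.], [2., 1., 3.]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))   # True
```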

The Cholesky decomposition is quite stable without pivoting. If it fails, it means that your matrix was not (within roundoff accuracy) positive definite. In fact, it can be used as a quick method to determine if a matrix is positive definite!
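As a small illustration of that last point, one can wrap the attempt itself in a test; this sketch uses NumPy's built-in routine rather than the hand-written one above:

```python
import numpy as np

def is_positive_definite(a):
    """A symmetric matrix is positive definite iff its Cholesky decomposition succeeds."""
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2., 1.], [1., 2.]])))   # True
print(is_positive_definite(np.array([[1., 2.], [2., 1.]])))   # False (eigenvalues 3 and -1)
```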

QR decomposition: $A = QR$, where $Q$ is orthogonal ($Q^T Q = I$) and $R$ is upper triangular. The solution of the system $Ax = b$ is obtained by rewriting it as $Rx = Q^T b$ and then backward substituting. This decomposition actually exists for rectangular matrices as well.
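A short sketch of solving $Ax = b$ this way, assuming a square nonsingular $A$; numpy.linalg.qr supplies the factorization and the back substitution is written out by hand:

```python
import numpy as np

def solve_via_qr(A, b):
    """Solve A x = b by A = QR, then R x = Q^T b via back substitution."""
    Q, R = np.linalg.qr(A)          # Q orthogonal, R upper triangular
    y = Q.T @ b
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):  # back substitution, bottom row up
        x[i] = (y[i] - np.dot(R[i, i + 1:], x[i + 1:])) / R[i, i]
    return x

A = np.array([[3., 1., 2.], [1., 4., 1.], [2., 1., 5.]])
b = np.array([1., 2., 3.])
print(np.allclose(A @ solve_via_qr(A, b), b))   # True
```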

The way to perform a QR decomposition is through a method we will discuss soon in the context of eigensystems, called the Householder transformation. This transformation consists in multiplying a matrix by an orthogonal factor that can be chosen so as to zero out all elements in a column below a given one. So we perform a Householder transformation that zeroes out all elements below the top leftmost one, $a_{11}$. Then we perform another zeroing out all elements below $a_{22}$, and so on. Since the product of the Householder matrices is orthogonal, we end up with
$$H_{N-1}\cdots H_2 H_1 A = HA = R \quad\Longrightarrow\quad A = QR, \qquad Q = H^T.$$
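A compact sketch of this construction for a square matrix (the sign convention for the reflection vector and the variable names are mine, following the common textbook recipe rather than anything specific from the lecture):

```python
import numpy as np

def householder_qr(A):
    """Build A = Q R by successive Householder reflections (square A assumed)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    R = A.copy()
    for k in range(n - 1):
        x = R[k:, k]
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue                              # column already zero below the diagonal
        v = x.copy()
        v[0] += normx if x[0] >= 0 else -normx    # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        H = np.eye(n)
        H[k:, k:] -= 2.0 * np.outer(v, v)         # H = I - 2 v v^T: orthogonal, symmetric
        R = H @ R                                 # zeroes column k below the diagonal
        Q = Q @ H                                 # accumulate the orthogonal factor
    return Q, R

A = np.array([[2., 1., 3.], [1., 5., 1.], [4., 1., 2.]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)), np.allclose(np.tril(R, -1), 0.0))
```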

Another nice feature of the QR decomposition is that there are many procedures for zeroing out components of matrices that operate through an orthogonal matrix. One we will discuss soon is the Jacobi (plane) rotation. This can be used effectively when one requires the repeated solution of systems of equations in which the matrix $A$ changes only a little from one solution to the next (e.g. by an update of the form $A \to A + s\otimes t$): rather than redoing the full decomposition, one updates $R$, which is then no longer quite upper triangular, and restores its upper triangular form via Jacobi rotations, which get bundled up in the orthogonal matrix $Q$.
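The following is only an illustration of the building block mentioned here, a single plane rotation that zeroes a chosen element through an orthogonal matrix, not the full update algorithm (function name and conventions are mine):

```python
import numpy as np

def plane_rotation_zero(M, i, j, col):
    """Apply a plane (Jacobi/Givens) rotation in the (i, j) plane, chosen so that
    the rotated matrix has a zero in position (j, col). Returns the rotated matrix."""
    a, b = M[i, col], M[j, col]
    r = np.hypot(a, b)
    c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
    G = np.eye(M.shape[0])
    G[[i, j], [i, j]] = c            # G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s         # off-diagonal entries of the rotation
    return G @ M                     # orthogonal update: (G M)[j, col] == 0

M = np.array([[3., 1.], [4., 2.]])
print(plane_rotation_zero(M, 0, 1, 0))   # the (1, 0) entry is rotated to zero
```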

Eigensystems. Terminology: $A$ is an $N\times N$ matrix, $x$ is its eigenvector and $\lambda$ the eigenvalue associated with that eigenvector: $Ax = \lambda x$. Such a system of equations only has a solution if
$$\det(A - \lambda I) = 0.$$
This is an $N$th order polynomial equation in $\lambda$, so in principle there can be up to $N$ different eigenvalues. It is not used numerically in practice. One can add $x$ times a constant $\tau$ to both members of the equation, $Ax + \tau x = (\lambda + \tau)x$, thereby shifting the value of the eigenvalue without changing the eigenvector. This is useful numerically. It also highlights that there is nothing special about a zero eigenvalue: any eigenvalue can be shifted to zero.
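A tiny numerical illustration of the shift, using an arbitrary 2x2 symmetric matrix: adding $\tau$ times the identity shifts every eigenvalue by $\tau$ and leaves the eigenvectors alone.

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])
tau = 5.0
w, V = np.linalg.eigh(A)
w_shifted, V_shifted = np.linalg.eigh(A + tau * np.eye(2))

print(np.allclose(w_shifted, w + tau))             # eigenvalues shifted by tau
print(np.allclose(np.abs(V_shifted), np.abs(V)))   # same eigenvectors (up to sign)
```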

Symmetric: $A = A^T$. Hermitian: $A = A^\dagger \equiv (A^T)^*$. Orthogonal: $A\,A^T = A^T A = I$. Unitary: $A^{-1} = A^\dagger$. Normal: $A\,A^\dagger = A^\dagger A$. Hermitian matrices have real eigenvalues; real symmetric matrices therefore also have real eigenvalues. Any matrix whose columns (or rows) form an orthonormal basis of vectors is orthogonal. Normal matrices are important because if they have $N$ distinct eigenvalues, their eigenvectors form a complete orthogonal basis. Even if the eigenvalues are not distinct, the eigenvectors can be made orthogonal (Gram-Schmidt). The matrix of eigenvectors can therefore be made unitary.

Right eigenvectors: $A\,x_R = \lambda\,x_R$. Left eigenvectors: $x_L^T A = \lambda\,x_L^T$. The transpose of a right eigenvector of a given matrix is a left eigenvector of the transpose of that matrix, and the eigenvalues are the same. Let $X_R$ be the matrix whose columns are the right eigenvectors and $X_L$ the matrix whose rows are the left eigenvectors. Then we have
$$A\,X_R = X_R\,\mathrm{diag}(\lambda_1,\dots,\lambda_N), \qquad X_L\,A = \mathrm{diag}(\lambda_1,\dots,\lambda_N)\,X_L.$$
If we left-multiply the first equation by $X_L$ and right-multiply the second one by $X_R$, we get
$$X_L A X_R = (X_L X_R)\,\mathrm{diag}(\lambda) = \mathrm{diag}(\lambda)\,(X_L X_R).$$
That means that the matrix of products of eigenvectors, $X_L X_R$, commutes with the diagonal matrix of eigenvalues; when the eigenvalues are distinct, this means it is itself diagonal. Therefore left eigenvectors are orthogonal to right eigenvectors with different eigenvalues. This is true even for non-normal matrices.
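A quick numerical check of this biorthogonality (the test matrix is an arbitrary nonsymmetric one constructed to have distinct real eigenvalues; pairing the left and right eigenvectors by sorting the eigenvalues is my own bookkeeping):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(4, 4))
A = S @ np.diag([1.0, 2.0, 3.0, 4.0]) @ np.linalg.inv(S)   # nonsymmetric, distinct eigenvalues

wr, XR = np.linalg.eig(A)      # right eigenvectors (columns of XR)
wl, XL = np.linalg.eig(A.T)    # eigenvectors of A^T, i.e. left eigenvectors of A

# Pair them up by sorting the (distinct, real) eigenvalues.
XR = XR[:, np.argsort(wr.real)]
XL = XL[:, np.argsort(wl.real)]

P = XL.T @ XR                  # should be diagonal: left and right eigenvectors
off = P - np.diag(np.diag(P))  # with different eigenvalues are orthogonal
print(np.allclose(off, 0.0, atol=1e-8))   # True
```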

From the previous formulae we have that $X_L$ is, up to normalization, the inverse of $X_R$, so
$$X_R^{-1}\,A\,X_R = \mathrm{diag}(\lambda_1,\dots,\lambda_N).$$
This is just a particular case of a similarity transformation, $A \to Z^{-1} A Z$. These transformations are important because they leave the eigenvalues unchanged:
$$\det(Z^{-1} A Z - \lambda I) = \det\!\big(Z^{-1}(A - \lambda I)Z\big) = \det(A - \lambda I).$$
The main strategy for solving eigensystems is to use similarity transformations to nudge matrices towards diagonal form.
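A short numerical check that a similarity transformation leaves the spectrum unchanged (the matrices here are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])              # triangular: eigenvalues 1, 4, 6
rng = np.random.default_rng(1)
Z = rng.normal(size=(3, 3))               # generic, hence invertible
B = np.linalg.inv(Z) @ A @ Z              # similar to A, but no longer triangular

print(np.sort_complex(np.linalg.eigvals(A)))   # [1, 4, 6]
print(np.sort_complex(np.linalg.eigvals(B)))   # same values, up to roundoff
```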

There are two strategies to implement this idea for diagonalization; most canned routines (like those in EISPACK) use a combination of both. The first strategy is to use a finite number of similarity transformations designed to achieve a specific goal, for instance zeroing out a certain off-diagonal component. In general a finite sequence of these operations cannot diagonalize the matrix. One then either uses them to take the matrix to a simple special form (e.g. tridiagonal), or applies the operations repeatedly until the off-diagonal elements are small enough. The second strategy is that of factorization methods. Suppose the matrix can be factored into a left and a right factor,
$$A = F_L F_R, \qquad\text{or}\qquad F_L^{-1} A = F_R.$$
Then the matrix with the factors multiplied in the reverse order, $F_R F_L = F_L^{-1} A F_L$, is related to $A$ by a similarity transformation. We will come back to the QR method, which uses this idea.

Jacobi transformations: These are just plane rotations designed to zero out some particular element of the matrix of interest. When one applies a second Jacobi transformation, the element one managed to zero out before will in general not be zero anymore. However, successive applications of the transformation make the off-diagonal elements smaller and smaller.

If one writes out explicitly the transformed matrix elements of $A' = P_{pq}^T A\, P_{pq}$ (with $c = \cos\phi$, $s = \sin\phi$),
$$a'_{rp} = c\,a_{rp} - s\,a_{rq}, \qquad a'_{rq} = c\,a_{rq} + s\,a_{rp} \qquad (r \neq p,\ r \neq q),$$
$$a'_{pp} = c^2 a_{pp} + s^2 a_{qq} - 2sc\,a_{pq}, \qquad a'_{qq} = s^2 a_{pp} + c^2 a_{qq} + 2sc\,a_{pq},$$
$$a'_{pq} = (c^2 - s^2)\,a_{pq} + sc\,(a_{pp} - a_{qq}).$$
We now choose the angle of the rotation to zero out $a'_{pq}$:
$$\cot 2\phi = \frac{a_{qq} - a_{pp}}{2\,a_{pq}}.$$
One can rearrange the equations a bit: with $t \equiv s/c$ and $\theta \equiv \cot 2\phi$, the condition becomes $t^2 + 2t\theta - 1 = 0$, and one takes the smaller root for numerical stability. Then one can easily see that the sum of the squares of the off-diagonal components decreases at each rotation,
$$S' = S - 2\,a_{pq}^2, \qquad S \equiv \sum_{r \neq s} a_{rs}^2.$$
To see this, notice (through a little trig algebra) that
$$a'^{\,2}_{rp} + a'^{\,2}_{rq} = a^2_{rp} + a^2_{rq}.$$

One eventually obtains a matrix that is diagonal to machine precision. Better yet, since one obtained it through a sequence of orthogonal transformations (and the product of orthogonal matrices is orthogonal), the accumulated product of the rotations gives the matrix of orthonormal eigenvectors. The only thing left for a practical implementation is to decide which element to rotate away. Jacobi's original prescription was to sweep the upper triangular portion of the matrix and pick the largest element. For large matrices this search is expensive, so another possibility is simply to go element by element. Numerical implementations also set to zero elements that are smaller than the diagonal by a certain amount, and then test and skip such elements in further sweeps. In practice one needs about 6-10 sweeps, or 3n^2-5n^2 rotations, each costing about 4n operations, so one is looking at 12n^3-20n^3 operations.
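A compact cyclic-by-rows Jacobi sketch, with none of the thresholding refinements mentioned above; the angle choice follows the $\cot 2\phi$ formula given earlier, and the explicit rotation matrices make it $O(n^2)$ per rotation rather than the optimal $O(n)$:

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Diagonalize a symmetric matrix by repeated Jacobi (plane) rotations.
    Returns (eigenvalues, V) with A ~ V @ diag(eigenvalues) @ V.T."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):               # sweep the upper triangle
                if abs(A[p, q]) < 1e-15:
                    continue
                # theta = cot(2*phi); solve t^2 + 2*t*theta - 1 = 0, smaller root
                theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                t = 1.0 / (abs(theta) + np.sqrt(theta * theta + 1.0))
                if theta < 0.0:
                    t = -t
                c = 1.0 / np.sqrt(t * t + 1.0)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J                     # similarity transform: zeroes A[p, q]
                V = V @ J                           # accumulate the eigenvector matrix
    return np.diag(A), V

A = np.array([[4., 1., 2.], [1., 3., 0.], [2., 0., 5.]])
w, V = jacobi_eigen(A)
print(np.sort(w))
print(np.linalg.eigvalsh(A))   # same values
```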

The QR and QL algorithms: As we discussed before, any matrix can be decomposed into the product of an orthogonal matrix and an upper (or lower) triangular one, $A = QR$. Consider the matrix formed by multiplying the factors in the reverse order, $A' = RQ$. Since $Q$ is orthogonal, we can solve for $R$ in the first equation, $R = Q^T A$, and we get
$$A' = RQ = Q^T A\,Q,$$
a similarity transformation. Similar considerations hold for a QL decomposition. The algorithm consists in forming the sequence
$$A_s = Q_s R_s, \qquad A_{s+1} = R_s Q_s = Q_s^T A_s Q_s.$$
A (long) theorem (see Bulirsch and Stoer) shows that $A_s$ tends asymptotically to a triangular matrix. The eigenvalues of a triangular matrix are just its diagonal elements, so they can then be read off trivially.
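A minimal unshifted QR-iteration sketch, applied to a symmetric matrix so that the limit is actually diagonal and easy to inspect:

```python
import numpy as np

def qr_algorithm(A, iterations=200):
    """Unshifted QR iteration: A_{s+1} = R_s Q_s = Q_s^T A_s Q_s."""
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q                      # similarity transform, eigenvalues preserved
    return A                           # (nearly) triangular; eigenvalues on the diagonal

A = np.array([[4., 1., 2.], [1., 3., 0.], [2., 0., 5.]])
T = qr_algorithm(A)
print(np.round(np.diag(T), 6))
print(np.round(np.sort(np.linalg.eigvalsh(A))[::-1], 6))   # same values
```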

The rate at which the off-diagonal element $a_{ij}$ (with $i > j$) converges to zero is governed by the ratio of eigenvalues, $(\lambda_i/\lambda_j)^s$. To accelerate the convergence, one can use the technique of shifting: the matrix $A - k\,\mathbb{1}$ has eigenvalues $\lambda_i - k$, so if one decomposes this matrix instead of $A$, the convergence is governed by $\big((\lambda_i - k_s)/(\lambda_j - k_s)\big)^s$. Ideally one would like to choose $k_s$ close to the first eigenvalue; then one would quickly zero out the first row of the matrix, and so on. In practice one does not know that eigenvalue in advance. A good guess is to consider the $2\times 2$ matrix formed by $a_{11}, a_{12}, a_{21}, a_{22}$ and compute its eigenvalues as guidance. For subsequent eigenvalues one takes the corresponding $2\times 2$ sub-matrix.
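For concreteness, a hedged sketch of one shifted QR step; note that it uses the more common bottom-corner convention (shift taken from the trailing 2x2 block, so it is the last row that collapses) rather than the top-corner phrasing above, but the idea is the same:

```python
import numpy as np

def shifted_qr_step(A):
    """One QR step with a shift taken from the trailing 2x2 block."""
    n = A.shape[0]
    # eigenvalue of the trailing 2x2 block closest to the corner element A[n-1, n-1]
    mu = np.linalg.eigvals(A[n - 2:, n - 2:])
    k = mu[np.argmin(np.abs(mu - A[n - 1, n - 1]))].real
    Q, R = np.linalg.qr(A - k * np.eye(n))
    return R @ Q + k * np.eye(n)        # still similar to A

A = np.array([[5., 1., 0.], [1., 4., 1.], [0., 1., 1.]])
for step in range(6):
    A = shifted_qr_step(A)
    print(step, abs(A[-1, -2]))          # last sub-diagonal element shrinks rapidly
```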

Summary Cholesky and QR decompositions can be used to solve systems of equations. Jacobi transformations can be used to zero out elements of matrices. QL and QR algorithms efficiently yield a triangular matrix, and therefore the eigenvalues.