Note Set 3 Numerical Linear Algebra


3.1 Overview

Numerical linear algebra includes a class of problems which involves multiplying vectors and matrices, solving linear equations, computing the inverse of a matrix, and computing various matrix decompositions. It is one of the areas where knowledge of appropriate numerical methods can lead to rather substantial speed improvements.

3.2 Types of Matrices

In numerical linear algebra, efficient algorithms take advantage of the special structure of a matrix. In addition to the general rectangular matrix, special matrix types include symmetric matrices, symmetric positive definite matrices, band-diagonal matrices, block-diagonal matrices, lower and upper triangular matrices, lower and upper band-diagonal matrices, Toeplitz matrices, and sparse matrices.

Each of these matrices is stored in a different way, generally in an array in memory. A general matrix is usually stored as one row after another in memory. A symmetric matrix can be stored in either packed or unpacked storage, and we may store either the lower triangle, the upper triangle, or both. Suppose that n is the dimension of a symmetric matrix. Unpacked storage requires n^2 locations in memory, while packed storage requires n(n+1)/2. In both cases, we typically store symmetric matrices row after row. Lower and upper triangular matrices may also be stored in both packed and unpacked form. Figures 3.1 and 3.2 illustrate the storage of lower, upper, and symmetric matrices in packed and unpacked storage. Bold numbers indicate a location in memory that contains an element of the matrix; non-bold numbers indicate a location in memory that contains garbage. The advantage of packed storage is, of course, efficient use of memory. The advantage of unpacked storage is that it usually allows for faster computation. This occurs for a number of reasons, but one important reason is that block versions of common algorithms can be applied only if matrices are stored in unpacked form.

Band-diagonal matrices are stored in a different fashion. Let k_l denote the number of lower bands, let k_u denote the number of upper bands, and let d = k_l + k_u + 1. A band-diagonal matrix is stored in an n*d element array. Each group of d units in the array corresponds to the nonzero elements of a row of A. For example, consider a 5x5 band-diagonal matrix with k_l = 2 and k_u = 1. Figure 3.3 illustrates the storage of such a matrix. The advantage of storing the matrix this way is both efficient use of memory and computational speed.
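For concreteness, here is how the symmetric layouts translate into index arithmetic, assuming 0-based row-after-row storage as described above (the exact convention is an assumption; libraries differ):

/* Index of element (i,j), i >= j, of a lower triangular or symmetric
   matrix of dimension n, stored one row after another (0-based). */

/* Unpacked storage: the full n-by-n array is allocated. */
#define UNPACKED(i, j, n)  ((i) * (n) + (j))

/* Packed storage: row i holds i+1 elements, so rows 0..i-1 occupy
   i*(i+1)/2 slots before row i begins. */
#define PACKED(i, j)       ((i) * ((i) + 1) / 2 + (j))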

Figure 3.1: Storage of Lower, Upper, and Symmetric Matrices (Unpacked Storage)

Figure 3.2: Storage of Lower, Upper, and Symmetric Matrices (Packed Storage)

Figure 3.3: Storage of Band Matrix
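The band layout of Figure 3.3 admits a similar index formula. This sketch assumes the row-oriented layout described above, where row i of the matrix occupies slots i*d through i*d + d - 1 (note that LAPACK's own band storage is column-oriented, so this is an illustration rather than the library convention):

/* Element A(i,j) of a band matrix with kl lower and ku upper bands,
   stored row after row in an n*d array with d = kl + ku + 1.
   Valid only when -kl <= j - i <= ku (0-based indices). */
#define BAND(i, j, kl, ku)  ((i) * ((kl) + (ku) + 1) + ((j) - (i) + (kl)))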

A sparse matrix is a large matrix that contains few non-zero elements. Sparse matrices are most often stored in compressed row format. Compressed row format requires 3 arrays, which we label r, c, and v. The array v contains the values of all of the nonzero elements in the matrix. The array c contains the column in which each corresponding nonzero element is located. The array r points to the location in c and v where each row's elements begin, with a final entry that points one past the last nonzero element. For example, consider the following sparse matrix,

    [ 1  0  2  0  0 ]
    [ 0  0  0  0  0 ]
    [ 5  1  0  0  3 ]
    [ 2  0  0  0  2 ]

We could store this matrix as,

    r = (1, 3, 3, 6, 8)
    c = (1, 3, 1, 2, 5, 1, 5)
    v = (1, 2, 5, 1, 3, 2, 2)

The final type of matrix (which is often useful in time-series econometrics applications) is a Toeplitz matrix. A Toeplitz matrix has the form,

    [ a1  a2  a3  a4  a5 ]
    [ a5  a1  a2  a3  a4 ]
    [ a4  a5  a1  a2  a3 ]
    [ a3  a4  a5  a1  a2 ]
    [ a2  a3  a4  a5  a1 ]

A Toeplitz matrix can be stored as (a1, a2, a3, a4, a5).
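One payoff of compressed row storage is that a matrix-vector product touches only the nonzero elements. A minimal sketch (using 0-based arrays, whereas the example above is written with 1-based indices):

/* y = A*x for an m-row sparse matrix in compressed row format.
   r has m+1 entries; r[i] .. r[i+1]-1 index the nonzeros of row i. */
void csr_matvec(int m, const int *r, const int *c, const double *v,
                const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        y[i] = 0.0;
        for (int p = r[i]; p < r[i + 1]; p++)
            y[i] += v[p] * x[c[p]];
    }
}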

3.3 Elementary Linear Algebra and the BLAS

Elementary linear algebra operations include operations such as y ← Ax, C ← A^T B, and y ← L^(-1) x. Often, these operations can be implemented in a few lines of code. There are, however, more efficient ways to implement these types of algorithms. The difference is most important in matrix-matrix computations (e.g. C ← A^T B), but can occasionally be useful in matrix-vector computations (e.g. y ← Ax, y ← L^(-1) x). The Basic Linear Algebra Subroutines (BLAS) contain efficient implementations of these and many other algorithms. The BLAS are a set of FORTRAN subroutines, but CBLAS ports these subroutines to C using an automatic converter called Fortran2C. In order to apply the CBLAS, you must be relatively familiar with pointers. Consider DGEMM (which is the routine for multiplying two rectangular matrices).(1) The routine computes C ← α op(A) op(B) + β C, where op(A) = A or op(A) = A^T.

int dgemm_(char *transa, char *transb, integer *m, integer *n,
           integer *k, doublereal *alpha, doublereal *a, integer *lda,
           doublereal *b, integer *ldb, doublereal *beta,
           doublereal *c, integer *ldc)

Notice that every variable here is a pointer. The variable transa determines whether op(A) = A or op(A) = A^T. The variables m and n are the dimensions of C. The variable k is the number of columns of A, which is also equal to the number of rows in B. The variables lda, ldb, and ldc store the leading dimensions of A, B, and C. This option allows the algorithm to refer to the following types of matrices.

(1) A quick reference for the BLAS can be found at,

In this case, we have a 4 by 5 matrix, with a leading dimension of 8, starting at location 3. Notice that this allows us to call the BLAS algorithms on sub-matrices without reallocating memory.

The BLAS is divided into level-1, level-2, and level-3 algorithms. Level-1 algorithms perform operations on vectors (and are O(n)), level-2 algorithms operate on a vector and a matrix (and are O(n^2)), and level-3 algorithms operate on two matrices (and are O(n^3)).
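As a concrete illustration, the following call multiplies the 4 by 5 sub-matrix just described by a 5 by 3 matrix B through the CBLAS interface (the surrounding setup is hypothetical; only cblas_dgemm itself is the library's API):

#include <cblas.h>

/* C = 1.0 * A_sub * B + 0.0 * C, where A_sub is the 4x5 sub-matrix of a
   column-major array a with leading dimension 8, starting at a[2]
   (location 3 in 1-based terms). */
void multiply_submatrix(const double *a, const double *b, double *c)
{
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                4, 3, 5,           /* m, n, k           */
                1.0, a + 2, 8,     /* alpha, A_sub, lda */
                b, 5,              /* B, ldb            */
                0.0, c, 4);        /* beta, C, ldc      */
}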

3.4 Matrix Decompositions

Any matrix A can be decomposed into the product of a unit lower triangular matrix and an upper triangular matrix, A = LU.(2) This decomposition has a number of advantages. First, we can solve the linear systems Lx = a and Uy = b in O(n^2) operations using a relatively simple algorithm. Second, we can compute the inverse A^(-1) = U^(-1) L^(-1) (in those cases where A is invertible). Third, we can compute the condition number and the determinant of the matrix.

First, let us consider the problem of computing the LU decomposition. Notice that,

    [ 1              ] [ U11 U12 U13 U14 ]
    [ L21 1          ] [     U22 U23 U24 ]
    [ L31 L32 1      ] [         U33 U34 ]
    [ L41 L42 L43 1  ] [             U44 ]

      [ U11      U12              U13                      U14                          ]
    = [ L21*U11  L21*U12+U22      L21*U13+U23              L21*U14+U24                  ]
      [ L31*U11  L31*U12+L32*U22  L31*U13+L32*U23+U33      L31*U14+L32*U24+U34          ]
      [ L41*U11  L41*U12+L42*U22  L41*U13+L42*U23+L43*U33  L41*U14+L42*U24+L43*U34+U44  ]

Equating this with A, we have,

    U11 = A11,  U12 = A12,  U13 = A13,  U14 = A14

    L21 = A21/U11
    U22 = A22 - L21*U12,  U23 = A23 - L21*U13,  U24 = A24 - L21*U14

    L31 = A31/U11,  L32 = (A32 - L31*U12)/U22
    U33 = A33 - L31*U13 - L32*U23,  U34 = A34 - L31*U14 - L32*U24

    L41 = A41/U11,  L42 = (A42 - L41*U12)/U22,  L43 = (A43 - L41*U13 - L42*U23)/U33
    U44 = A44 - L41*U14 - L42*U24 - L43*U34

Clearly, we have a relatively simple algorithm for computing the LU decomposition,

(2) A unit lower triangular matrix is a lower triangular matrix that has ones along the diagonal.

for (int i = 0; i < n; i++) {
    /* Compute row i of U. */
    for (int j = i; j < n; j++) {
        U(i,j) = A(i,j);
        for (int k = 0; k < i; k++)
            U(i,j) -= L(i,k) * U(k,j);
    }
    /* Compute column i of L. */
    L(i,i) = 1.0;
    for (int j = i + 1; j < n; j++) {
        L(j,i) = A(j,i);
        for (int k = 0; k < i; k++)
            L(j,i) -= L(j,k) * U(k,i);
        L(j,i) /= U(i,i);
    }
}

We can rewrite this algorithm in a way that factorizes A in place. We have,

for (int i = 0; i < n; i++) {
    /* Row i of U overwrites the upper part of A. */
    for (int j = i; j < n; j++)
        for (int k = 0; k < i; k++)
            A(i,j) -= A(i,k) * A(k,j);
    /* Column i of L overwrites the strictly lower part of A. */
    for (int j = i + 1; j < n; j++) {
        for (int k = 0; k < i; k++)
            A(j,i) -= A(j,k) * A(k,i);
        A(j,i) /= A(i,i);
    }
}

Notice that this algorithm is O(n^3). In general, this algorithm is not numerically stable. To make this algorithm more effective, we can employ pivoting. We consider the decomposition A = PLU, where P is a permutation matrix.
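A minimal sketch of the in-place factorization with partial pivoting (this variant eliminates column by column; the perm array recording row swaps is illustrative, not the LAPACK interface, and fabs requires <math.h>):

/* In-place LU with partial pivoting: A is overwritten by L (strictly
   below the diagonal, unit diagonal implied) and U; perm records swaps. */
for (int i = 0; i < n; i++) {
    int p = i;                              /* find the largest pivot */
    for (int j = i + 1; j < n; j++)
        if (fabs(A(j,i)) > fabs(A(p,i))) p = j;
    perm[i] = p;
    if (p != i)
        for (int j = 0; j < n; j++) {       /* swap rows i and p */
            double t = A(i,j); A(i,j) = A(p,j); A(p,j) = t;
        }
    for (int j = i + 1; j < n; j++) {       /* eliminate below the pivot */
        A(j,i) /= A(i,i);
        for (int k = i + 1; k < n; k++)
            A(j,k) -= A(j,i) * A(i,k);
    }
}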

Special types of matrices admit special types of decompositions. Suppose that A is symmetric and positive definite. Then there exists a lower triangular matrix L such that A = LL^T. The algorithm for computing L is called the Cholesky decomposition. The Cholesky decomposition is stable without pivoting. The Cholesky decomposition algorithm is quite easy to derive as well. Let,

    A = [ A11 A12 ... A1n ]      L = [ L11             ]
        [ A21 A22 ... A2n ]          [ L21 L22         ]
        [  .   .       .  ]          [  .   .    .     ]
        [ An1 An2 ... Ann ]          [ Ln1 Ln2 ... Lnn ]

Multiplying L with L^T yields the following system of equations,

    L*L^T = [ L11^2                                 ]
            [ L21*L11  L21^2 + L22^2                ]
            [ L31*L11  L31*L21 + L32*L22  ...       ]
            [  .                                    ]

This indicates that we can follow the following process to determine the L_ij:

    L11 = sqrt(A11)
    L21 = A21 / L11
    L22 = sqrt(A22 - L21^2)
    L31 = A31 / L11
    L32 = (A32 - L31*L21) / L22
    L33 = sqrt(A33 - L31^2 - L32^2)

and so on. Once we have determined the Cholesky factor L of a matrix A, we can solve the linear system Ax = b in O(n^2) operations. We can write Ax = LL^T x = b. Define y = L^T x. We can thus solve the system by first solving Ly = b for y and then L^T x = y for x. Each of these systems can be solved in O(n^2).
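A minimal sketch of the two triangular solves, each O(n^2), assuming the same L(i,j) accessor as the code above:

/* Forward substitution: solve L*y = b. */
for (int i = 0; i < n; i++) {
    y[i] = b[i];
    for (int j = 0; j < i; j++)
        y[i] -= L(i,j) * y[j];
    y[i] /= L(i,i);
}

/* Back substitution: solve L^T*x = y, using L^T(i,j) = L(j,i). */
for (int i = n - 1; i >= 0; i--) {
    x[i] = y[i];
    for (int j = i + 1; j < n; j++)
        x[i] -= L(j,i) * x[j];
    x[i] /= L(i,i);
}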

Now, suppose we want to invert a general or symmetric positive definite matrix. Computing the inverse then simply requires inverting upper and lower triangular matrices. The inverse of a lower triangular matrix is lower triangular itself. Let A be a lower triangular matrix and let B be its inverse. Then we have,

    [ b11                 ] [ a11                 ]   [ 1          ]
    [ b21 b22             ] [ a21 a22             ] = [    1       ]
    [  .   .    .         ] [  .   .    .         ]   [      .     ]
    [ bn1 bn2 ... bnn     ] [ an1 an2 ... ann     ]   [          1 ]

We can solve this system recursively as before.
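In code, the recursion amounts to forward substitution against each column of the identity (a sketch, with the same accessor conventions as before):

/* Invert a lower triangular matrix A into B. Column j of B solves
   A * B(:,j) = e_j by forward substitution; B is lower triangular. */
for (int j = 0; j < n; j++) {
    for (int i = 0; i < n; i++) {
        double s = (i == j) ? 1.0 : 0.0;
        for (int k = j; k < i; k++)
            s -= A(i,k) * B(k,j);
        B(i,j) = (i < j) ? 0.0 : s / A(i,i);
    }
}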

The final case is the symmetric indefinite matrix. Here, the Bunch-Kaufman algorithm provides one fast algorithm for matrix decomposition, which is implemented in LAPACK. The details of this algorithm are hard to get a hold of, but good implementations compare in speed to the Cholesky decomposition algorithms and are faster than the LU decomposition by about a factor of 3.

Table 3.4 reports the running time for each of these three algorithms on a number of large symmetric positive definite matrices. I tested the CLAPACK version of all three algorithms. I tested the NR C++ version of the LU and Cholesky decomposition algorithms, as well as NR's recommended Gauss-Jordan algorithm for computing a matrix inverse. Finally, I tested LU and Cholesky algorithms that I optimized for my machine (which probably won't be as fast as properly optimized algorithms). For both the LU and Cholesky decompositions, the un-optimized CLAPACK implementations are faster than the NR C++ code by a factor of 3. My optimized code is somewhat faster than the CLAPACK implementations. The LU decomposition takes about twice as long as the BK and Cholesky decompositions (which take about an equal amount of time). The Gauss-Jordan inverse is substantially slower.

Table 3.4: Computation Time for the Inverse of a Symmetric Positive Definite Matrix

    Algorithm                  Implementation      My Desktop (AMD)      My Laptop (Intel)
                                                   n=500    n=1,000      n=500    n=1,000
    Gauss-Jordan Elimination   NR C++
    LU Decomposition           NR C++
                               CLAPACK
                               My Optimizations
    BK Decomposition           LAPACK
    Cholesky Decomposition     NR C++
                               LAPACK
                               My Optimizations

All three matrix decomposition algorithms can be further specialized. For example, we can develop versions of the LU, Bunch-Kaufman, and Cholesky decompositions for band-diagonal matrices that require O(n*k^2) operations (here, k is the number of bands in the matrix). For block-diagonal matrices, we can simply apply each of these algorithms one block at a time. Specializing the Cholesky decomposition to Toeplitz matrices provides an algorithm that requires O(n^2) operations.

The decomposition can be used to perform a number of operations. The BLAS implements an O(n^2) algorithm for solving Lx = y, where L is a lower triangular matrix. A similar algorithm applies to an upper triangular matrix. Now, given the LU decomposition A = LU, we can solve Ax = y by solving Lz = y and then solving Ux = z. We can compute the determinant of the matrix by computing the product of the diagonal terms of U (recall that L is unit lower triangular). We can compute the inverse of a matrix and the condition number of a matrix using related algorithms.

Specialized algorithms exist to factorize a matrix that is subject to a rank-one update. A rank-one update to the matrix A is a matrix of the form A' = A + uv^T. If we already have an LU or Cholesky decomposition of A, we can determine the decomposition of A' in O(n^2) operations (see the sketch at the end of this section). This type of update is used extensively in optimization and nonlinear equations: the Broyden update is a rank-one update, and the BFGS update is a rank-two update.

Sparse matrices have their own special algorithms. Let s denote the number of non-zero elements in a sparse matrix. A matrix-vector multiply then requires O(s) operations. Algorithms for performing the decomposition of sparse matrices fall into two classes. The first class of algorithms relies on direct factorization. These algorithms rely on the fact that if the matrix is sparse, then it may be possible to generate a decomposition that preserves this sparsity. Pivoting is used here as a way to ensure sparsity of the decomposition rather than to ensure numerical stability. The second class of algorithms is iterative in nature. They rely on the fact that the operation y ← Ax is relatively cheap.
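The factor-update algorithms themselves are involved. As a simpler illustration of why rank-one structure helps, the Sherman-Morrison formula (a standard identity, not the factor-update algorithm itself) solves A'x = b in O(n^2) given a routine that applies the existing factorization of A; solve_A, z, and w are hypothetical names:

/* Solve (A + u*v^T) x = b, where solve_A(rhs, out) applies the existing
   factorization of A to a right-hand side in O(n^2). */
solve_A(b, z);                                   /* z = A^{-1} b */
solve_A(u, w);                                   /* w = A^{-1} u */
double vz = 0.0, vw = 0.0;
for (int i = 0; i < n; i++) { vz += v[i] * z[i]; vw += v[i] * w[i]; }
for (int i = 0; i < n; i++)
    x[i] = z[i] - w[i] * vz / (1.0 + vw);        /* Sherman-Morrison */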

3.5 The Schur and Singular Value Decompositions

A related problem is computing the eigenvalues of a matrix. Computing the eigenvalues of a matrix is more complicated than solving a set of linear equations. We can immediately see that no algorithm will be able to exactly compute the eigenvalues of a matrix. This is a direct consequence of Galois theory, which proves that the problem of finding the roots of a polynomial of order 5 or larger does not admit a closed-form expression in radicals. Now, recall that the eigenvalues of an n-dimensional matrix can be found by solving an nth-order polynomial. Hence, we cannot expect an algorithm of the type we used for the LU and Cholesky decompositions. Actual algorithms for computing the eigenvalues and the singular value decomposition are iterative in nature. The actual algorithms for both cases are quite complicated. Numerical Recipes suggests that this is one of the few areas where relying on black-box code is actually recommended. In Table 3.5, we summarize the various linear algebra problems for a number of different types of matrices.

3.6 Estimation of Linear ARMA Models

Consider the following time series regression model,

    y_t = β'x_t + σε_t

where ε_t = u_t + ρu_{t-1} and u_t ~ N(0,1). This process for ε_t is called a first-order moving average process, or an MA(1) process. Notice that,

Table 3.5: Matrix Algorithms

General Matrix
    Factor: LU, O(n^3), dgetrf
    Solve linear system: triangular solve, O(n^2) given LU, dgetrs
    Compute inverse: triangular inverse, O(n^3) given LU, dgetri
    Compute eigenvalues: Schur (general), iterative, dgeev
    Determinant: multiply diagonal, O(n) given LU
    Condition number: O(n^2) given LU, dgecon

Symmetric Indefinite
    Factor: Bunch-Kaufman (BK), O(n^3), dsytrf
    Solve linear system: specialized solve, O(n^2) given BK, dsytrs
    Compute inverse: specialized inverse, O(n^3) given BK, dsytri
    Compute eigenvalues: Schur (symmetric), iterative, dsyev
    Determinant: ???
    Condition number: O(n^2) given BK, dsycon

Symmetric Positive Definite
    Factor: Cholesky, O(n^3), dpotrf
    Solve linear system: triangular solve, O(n^2) given Cholesky, dpotrs
    Compute inverse: Cholesky inverse, O(n^3) given Cholesky, dpotri
    Compute eigenvalues: Schur (symmetric), iterative, dsyev
    Determinant: multiply diagonal, O(n) given Cholesky
    Condition number: O(n^2) given Cholesky, dpocon

Lower/Upper Triangular
    Factor: N/A
    Solve linear system: triangular solve, O(n^2), dtrsv (BLAS)
    Compute inverse: triangular inverse, O(n^3), dtrtri
    Compute eigenvalues: direct, O(n)
    Determinant: direct, O(n)
    Condition number: direct, O(n), dtrcon

Band
    Factor: band LU, O(n*k^2), dgbtrf
    Solve linear system: triangular band solve, O(n*k) given LU, dgbtrs
    Compute inverse: O(n^2*k) given LU, dgbtri
    Compute eigenvalues: Schur (general), iterative, dgeev
    Determinant: band LU, O(n) given LU
    Condition number: band LU, O(n*k) given LU, dgbcon

Symmetric Indefinite Band
    Factor: band LU, O(n*k^2), dgbtrf
    Solve linear system: triangular band solve, O(n*k) given LU, dgbtrs
    Compute inverse: O(n^2*k) given LU, dgbtri
    Compute eigenvalues: Schur (band symmetric), iterative, dsbev
    Determinant: band LU, O(n) given LU
    Condition number: band LU, O(n*k) given LU, dgbcon

Symmetric Positive Definite Band
    Factor: band Cholesky, O(n*k^2), dpbtrf
    Solve linear system: band Cholesky, O(n*k^2), dpbtrs
    Compute inverse: band Cholesky, O(n^2*k), dpbtri
    Compute eigenvalues: Schur (band SPD), iterative, dsbev
    Determinant: band Cholesky, O(n^2*k)
    Condition number: band Cholesky, O(n^2*k), dpbcon

Lower/Upper Band
    Factor: N/A
    Solve linear system: triangular solve, O(n*k), dtbsv (BLAS)
    Compute inverse: ???
    Compute eigenvalues: direct, O(n)
    Determinant: direct, O(n)
    Condition number: direct, O(n), dtbcon

Block Diagonal (b blocks of size nb)
    Factor: LU by block, O(b*nb^3)
    Solve linear system: triangular solve by block, O(b*nb^2)
    Compute inverse: triangular inverse by block, O(b*nb^3)
    Compute eigenvalues: ???
    Determinant: multiply diagonal, O(n)
    Condition number: LU by block, O(b*nb^2)

Toeplitz SPD
    Factor: Toeplitz Cholesky, O(n^2)

Band Toeplitz SPD
    Factor: band Toeplitz Cholesky, O(n*k)

    Y - X'β ~ N(0, Ω)

where,

    Ω = [ σ^2(1+ρ^2)  σ^2 ρ        0            0            ... ]
        [ σ^2 ρ       σ^2(1+ρ^2)   σ^2 ρ        0            ... ]
        [ 0           σ^2 ρ        σ^2(1+ρ^2)   σ^2 ρ        ... ]
        [ 0           0            σ^2 ρ        σ^2(1+ρ^2)   ... ]
        [ ...                                                    ]

The maximum likelihood estimator is given by,

    (β̂, ρ̂) = argmax over (β, ρ) of -(Y - X'β)' Ω^(-1) (Y - X'β)

A naive approach to computing this estimator would form Ω^(-1) using a standard algorithm (e.g. the Cholesky decomposition). This system is special in a number of ways. First, the matrix Ω is band diagonal. A band-diagonal matrix can be decomposed in O(n*k^2) operations instead of O(n^3), where n is the number of rows and k is the number of bands. Second, forming Ω^(-1) using the band Cholesky decomposition is not efficient either, because the inverse of a band matrix is not necessarily a band matrix with the same number of bands. More generally, sparse matrices do not have sparse inverses.

Instead, we can take the following approach. First, form the Cholesky decomposition Ω = LL' using the band Cholesky decomposition. This decomposition preserves the banded structure. Next, compute v = Y - X'β. Third, solve Lx = v for x. This can be performed in O(n*k) operations. Finally, evaluate the likelihood function as x'x. All the operations together mean that the likelihood can be evaluated in O(n) operations, since k is fixed at 2.
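A sketch of this procedure using the LAPACK band Cholesky routine dpbtrf and the BLAS band triangular solve dtbsv. The CLAPACK-style binding (integer widths, the extern declaration) and the band-storage layout here are assumptions; check the library documentation before relying on them:

#include <stdlib.h>
#include <cblas.h>

extern int dpbtrf_(char *uplo, int *n, int *kd, double *ab,
                   int *ldab, int *info);

/* Evaluate x'x = (Y - X'b)' Omega^{-1} (Y - X'b) for the MA(1) model,
   where Omega is tridiagonal with diagonal s2*(1+rho^2) and
   off-diagonal s2*rho. On entry v holds Y - X'b; it is overwritten. */
double ma1_quadform(int n, double s2, double rho, double *v)
{
    int kd = 1, ldab = 2, info;
    double *ab = malloc(sizeof(double) * ldab * n);

    /* Lower band storage, column-major: ab row 0 holds the diagonal,
       ab row 1 the subdiagonal (last entry unused). */
    for (int j = 0; j < n; j++) {
        ab[j * ldab + 0] = s2 * (1.0 + rho * rho);
        ab[j * ldab + 1] = s2 * rho;
    }
    dpbtrf_("L", &n, &kd, ab, &ldab, &info);  /* Omega = L L' in O(n*k^2) */

    /* Solve L x = v in O(n*k); x overwrites v. */
    cblas_dtbsv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                n, kd, ab, ldab, v, 1);

    double q = cblas_ddot(n, v, 1, v, 1);     /* x'x */
    free(ab);
    return q;
}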

Now consider the more general problem of the ARMA(1,1) model. We have

    y_t = β'x_t + σε_t

where ε_t = ρε_{t-1} + u_t + γu_{t-1} and u_t ~ N(0,1). In this case, we have,

    (β̂, ρ̂, γ̂) = argmax over (β, ρ, γ) of -(Y - X'β)' Ω^(-1) (Y - X'β)

where Ω_ij = σ^2 c_{|i-j|} with

    c_0 = 1 + (ρ+γ)^2 / (1-ρ^2)
    c_1 = (ρ+γ) + (ρ+γ)^2 ρ / (1-ρ^2)
    c_j = ρ^(j-1) c_1   for j >= 1

This matrix is no longer a band-diagonal matrix. It is, however, a Toeplitz matrix. Hence, we can apply a similar procedure to the one above. In this case, we have an O(n^2) algorithm for the Cholesky factorization. When n is quite large, the elements far away from the diagonal become quite small. Suppose that we trim all elements that fall below some small tolerance. In this case, we will have a band Toeplitz matrix. We can then compute the Cholesky decomposition in O(n*k^2) operations. This is a huge difference from the naive O(n^3)! For example, when n = 1,000 and k = 30, n^3 is about 1,111 times larger than n*k^2.
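A sketch of choosing the bandwidth by trimming, following the autocovariance formulas above (tol and the function name are illustrative assumptions):

#include <math.h>

/* Autocovariance coefficients c[j] of the ARMA(1,1) error process, and
   the bandwidth k beyond which |c[j]| < tol, so Omega (= sigma^2 times
   the Toeplitz matrix of the c[j]) may be treated as band Toeplitz. */
int arma11_bandwidth(double rho, double gamma, double tol, int n, double *c)
{
    double r = rho + gamma;
    c[0] = 1.0 + r * r / (1.0 - rho * rho);
    c[1] = r + r * r * rho / (1.0 - rho * rho);
    int k = 1;
    for (int j = 2; j < n; j++) {
        c[j] = rho * c[j - 1];
        if (fabs(c[j]) >= tol)
            k = j;
    }
    return k;
}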

3.7 Solving Linear Differential Equations

Initial Value Problems

A linear first-order differential equation can be written as,

    dy/dx = f(x) y + g(x)

The simplest method of solving such an equation is to use finite difference methods. We compute a solution y_k on a grid of equally spaced points [x_0, x_1, ..., x_K] with a spacing of δ. We can then approximate dy/dx using (y_{k+1} - y_k)/δ. Using this, we can write,

    (y_{k+1} - y_k)/δ = f_k y_k + g_k

This system will be pinned down if we specify y_0. Then, we can compute all future values using y_{k+1} = y_k + δ f_k y_k + δ g_k. This type of problem is called an initial value problem, because it can be solved by iteration.
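In code, the iteration is a single loop (a sketch under the assumptions above; f and g are passed as function pointers):

/* Forward iteration for dy/dx = f(x)*y + g(x) on x_k = x0 + k*delta,
   starting from the initial value stored in y[0]. */
void solve_ivp(int K, double x0, double delta,
               double (*f)(double), double (*g)(double), double *y)
{
    for (int k = 0; k < K; k++) {
        double xk = x0 + k * delta;
        y[k + 1] = y[k] + delta * f(xk) * y[k] + delta * g(xk);
    }
}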

Boundary Value Problems

The above example is somewhat atypical. In most social science applications, we have a second-order derivative as follows,

    d^2y/dx^2 = f(x) dy/dx + g(x) y + h(x)

We can similarly use a finite difference scheme to give,

    (y_{k+1} - 2y_k + y_{k-1})/δ^2 = f_k (y_{k+1} - y_k)/δ + g_k y_k + h_k

We can rearrange this to give,

    y_{k+1} = δ (y_{k+1} - y_k) f_k + δ^2 g_k y_k + δ^2 h_k + 2y_k - y_{k-1}

If we have the values y_0 and y_1, then we can solve the system by iteration. Usually, however, this is not the case. In most real problems, we have boundary value problems. Suppose that we have y_0 and y_K. It turns out that combining the above equation with the equations y_0 = a and y_K = b gives a tri-diagonal system of linear equations. This turns out to be a rather general property of boundary value problems: they can be solved by applying band-diagonal factorizations.

3.8 References

[1] Press, Teukolsky, Vetterling, and Flannery. Numerical Recipes in C.
[2] Golub and Van Loan. Matrix Computations.
[3] Hamilton. Time Series Analysis.
