A Divide-and-Conquer Algorithm for Functions of Triangular Matrices

Ç. K. Koç
Electrical & Computer Engineering
Oregon State University
Corvallis, Oregon

Technical Report, June 1996

This work is supported in part by the National Science Foundation under grant ECS.

Abstract

We propose a divide-and-conquer algorithm for computing arbitrary functions of upper triangular matrices which requires approximately the same number of arithmetic operations as Parlett's algorithm. However, the new algorithm performs better on computers with two levels of memory, owing to its block structure and the resulting reduction in memory-cache traffic. Like Parlett's algorithm, the new algorithm requires that the eigenvalues (the main diagonal elements) of the input matrix be distinct, and it computes the matrix function nearly as accurately.

1 Introduction

Let $A$ be an $n \times n$ matrix with entries from the real or complex field, and let $f(\cdot)$ be an analytic function. We are interested in computing the matrix function $f(A)$. Specific methods have been developed for specific functions, e.g., the matrix square root, the matrix sign function, the matrix exponential, and so on. The preferred approach for computing an arbitrary function of a square matrix is via the Schur decomposition: we first decompose $A$ as $A = QTQ^H$, and then compute $f(A) = Q f(T) Q^H$, where $T$ is an upper triangular matrix [3]. This way, the computation of $f(A)$ for an arbitrary matrix $A$ is reduced to the computation of $F = f(T)$ for an upper triangular matrix $T$. Parlett has given an $O(n^3)$ algorithm for computing arbitrary functions of upper triangular matrices [5]; in fact, to the best of our knowledge, Parlett's algorithm is the only algorithm known for performing this task. Parlett's algorithm is derived using the property that the matrices $T$ and $F$ commute, i.e.,

$$FT = TF. \qquad (1)$$

Parlett shows that by expanding the matrix multiplication and solving for $f_{ij}$ in the above, we obtain the summation formula

$$f_{ij} = t_{ij}\,\frac{f_{jj} - f_{ii}}{t_{jj} - t_{ii}} + \frac{1}{t_{jj} - t_{ii}} \sum_{k=i+1}^{j-1} \bigl( t_{ik} f_{kj} - f_{ik} t_{kj} \bigr). \qquad (2)$$

Parlett's algorithm starts with computing the main diagonal entries of $F$. Since the main diagonal entries $t_{ii}$ are the eigenvalues of $T$, $f_{ii}$ is calculated by applying $f$ to each $t_{ii}$, i.e., $f_{ii} = f(t_{ii})$.
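The full procedure, which then fills in the superdiagonals one at a time using (2), can be sketched in a few lines of Matlab (our illustration, not code from the report; it assumes the diagonal entries of t are distinct):

   % Parlett's recurrence (2): diagonal first, then the L-th
   % superdiagonal for L = 1,...,n-1.
   function f = parlett(t,fun);
   [n,nn] = size(t);
   f = diag(feval(fun,diag(t)));        % f(i,i) = fun(t(i,i))
   for L = 1:n-1
      for i = 1:n-L
         j = i + L;
         s = t(i,j)*(f(j,j) - f(i,i));
         for k = i+1:j-1
            s = s + t(i,k)*f(k,j) - f(i,k)*t(k,j);
         end
         f(i,j) = s/(t(j,j) - t(i,i));
      end
   end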

Computing the main diagonal requires $Kn$ arithmetic operations, assuming that a single scalar function evaluation requires $K$ arithmetic operations. After computing the main diagonal entries, the algorithm computes the superdiagonals one at a time, using the summation expression (2). The $L$th superdiagonal contains $n - L$ elements for $L = 1, 2, \ldots, n-1$. Since the computation of each superdiagonal element requires $4L$ arithmetic operations, the number of arithmetic operations for computing $F$ is found as

$$Kn + \sum_{L=1}^{n-1} (n - L)(4L) = Kn + \frac{2}{3}(n^3 - n). \qquad (3)$$

We must remark that when $T$ has repeated (or very close) eigenvalues, i.e., $t_{ii} = t_{jj}$ (or $t_{ii} \approx t_{jj}$) for some $i \ne j$, Parlett's algorithm cannot be used (or will give inaccurate results). Alternative techniques for the repeated-eigenvalue case are discussed in [5, 3].

In this paper we provide a divide-and-conquer algorithm as an alternative to Parlett's algorithm, also derived from the commutativity relationship (1). The new algorithm is of the same order of complexity as Parlett's algorithm; however, it has some practical advantages.

2 Derivation of the Algorithm

Let $n = 2k$ and let the matrices $T$ and $F$ be partitioned as

$$T = \begin{bmatrix} T_1 & T_2 \\ 0 & T_3 \end{bmatrix} \quad \text{and} \quad F = \begin{bmatrix} F_1 & F_2 \\ 0 & F_3 \end{bmatrix},$$

respectively. Here $T_1, F_1 \in C^{k \times k}$ and $T_3, F_3 \in C^{k \times k}$ are upper triangular, and $T_2, F_2 \in C^{k \times k}$ are full matrices. Expanding the commutativity relationship (1), $FT = TF$, in terms of the products of the matrix blocks gives

$$T_1 F_1 = F_1 T_1, \qquad T_3 F_3 = F_3 T_3, \qquad T_1 F_2 + T_2 F_3 = F_1 T_2 + F_2 T_3.$$

Since $T_1$ and $T_3$ are upper triangular, we have $F_1 = f(T_1)$ and $F_3 = f(T_3)$. Assuming $F_1$ and $F_3$ have already been computed, we calculate $C = F_1 T_2 - T_2 F_3$ and proceed to solve the matrix equation

$$T_1 F_2 - F_2 T_3 = C \qquad (4)$$

in order to compute $F_2$. This matrix equation is known as the Sylvester equation [3]. Let $\lambda_i$ and $\mu_i$ for $i = 1, 2, \ldots, k$ be the eigenvalues (diagonal elements) of $T_1$ and $T_3$, respectively. The Sylvester equation (4) has a unique solution $F_2$ if and only if $\lambda_i \ne \mu_j$ for all $i$ and $j$. This unique solution can be found using the Bartels-Stewart algorithm [1]. Thus, the divide-and-conquer matrix function evaluation algorithm first calls itself twice in order to compute the half-sized matrices $F_1 = f(T_1)$ and $F_3 = f(T_3)$, and then solves a Sylvester equation using the Bartels-Stewart algorithm in order to compute $F_2$.
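A small sanity check (our example, not from the report): take $f(x) = \sqrt{x}$, $k = 1$, $T_1 = 1$, $T_2 = 1$, $T_3 = 4$, so that $F_1 = 1$ and $F_3 = 2$. Then $C = F_1 T_2 - T_2 F_3 = 1 - 2 = -1$, and (4) reads $F_2 - 4F_2 = -1$, giving $F_2 = 1/3$. Indeed,

$$\begin{bmatrix} 1 & 1/3 \\ 0 & 2 \end{bmatrix}^2 = \begin{bmatrix} 1 & 1 \\ 0 & 4 \end{bmatrix}.$$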

In Figure 1, we give the recursive matrix function evaluation algorithm as a Matlab routine, which accepts the matrix $T$ of size $n$ (not required to be a power of 2) and the function $f(\cdot)$, and computes the upper triangular matrix $F = f(T)$.

Figure 1: Recursive matrix function evaluation.

   function f = tfun(t,fun);
   [n,mm] = size(t);
   if n < 2
      f = feval(fun,t);
   else
      m = floor(n/2);
      u = 1:m; v = m+1:n;
      f1 = tfun(t(u,u),fun);
      f3 = tfun(t(v,v),fun);
      f2 = sylvester(t(u,u),t(v,v),f1*t(u,v)-t(u,v)*f3);
      f = [f1 f2; zeros(n-m,m) f3];
   end

The subroutine sylvester in the above Matlab routine solves the Sylvester equation $AX - XB = C$ using the Bartels-Stewart algorithm, where $A \in C^{k \times k}$ and $B \in C^{m \times m}$ are upper triangular matrices and $C \in C^{k \times m}$ is a full matrix. Let $C_i$ and $X_i$ be the $i$th rows of the matrices $C$ and $X$, respectively. The Bartels-Stewart algorithm first solves the lower triangular system

$$(a_{kk} I_m - B^T)\,X_k^T = C_k^T, \qquad (5)$$

and obtains $X_k$, i.e., the last row of $X$. The remaining rows of $X$ are obtained by back-substitution as

$$(a_{ii} I_m - B^T)\,X_i^T = C_i^T - \sum_{j=i+1}^{k} a_{ij} X_j^T \qquad (6)$$

for $i = k-1, k-2, \ldots, 1$. We give the Matlab routine in Figure 2.

Figure 2: The Bartels-Stewart algorithm for the Sylvester equation.

   function x = sylvester(a,b,c);
   [k,kk] = size(a);
   [m,mm] = size(b);
   b = -b';
   x = zeros(k,m);
   x(k,:) = ((b+a(k,k)*eye(m))\(c(k,:))')';
   for i = k-1:-1:1
      t = zeros(m,1);
      for j = i+1:k
         t = t + a(i,j)*(x(j,:))';
      end
      r = (c(i,:))' - t;
      x(i,:) = ((b+a(i,i)*eye(m))\r)';
   end
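As a quick usage illustration (our example, not from the report; it assumes tfun.m and sylvester.m of Figures 1 and 2 are on the Matlab path, where the local sylvester.m takes precedence over the builtin of the same name in recent Matlab releases):

   % Square root of a random upper triangular matrix with
   % distinct, well-separated eigenvalues.
   n = 8;
   t = triu(rand(n),1) + diag(1:n);   % diagonal entries 1,2,...,n
   f = tfun(t,'sqrt');                % F = f(T) with f(x) = sqrt(x)
   norm(f*f - t)/norm(t)              % relative residual, near machine eps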

The new matrix function evaluation algorithm as given in Figure 1 is a recursive algorithm; however, it can be unrolled to obtain a non-recursive algorithm. Let $n$ be a power of 2, i.e., $n = 2^d$. The non-recursive algorithm first applies the function $f$ to the main diagonal. It then goes through $d$ steps for $i = 1, 2, \ldots, d$. Prior to the $i$th step, the evaluation of the $n/2^{i-1}$ matrix blocks (of dimension $2^{i-1} \times 2^{i-1}$) on the main diagonal has been completed. During the $i$th step, the algorithm uses these $n/2^{i-1}$ matrix blocks in pairs, and solves $n/2^i$ Sylvester equations in order to obtain the $n/2^i$ matrix blocks (of dimension $2^i \times 2^i$) required for the next step. The non-recursive algorithm is illustrated in Figure 3 for $n = 8$ and the square-root function.

Figure 3: Computation of the square root of an $8 \times 8$ matrix.

We give the non-recursive algorithm in Figure 4 as a Matlab routine. This routine accepts the upper triangular matrix $T$ of any size and the function $f(\cdot)$, and computes the upper triangular matrix $F = f(T)$.

Figure 4: The non-recursive matrix function evaluation.

   function f = tfun(t,fun);
   [n,mm] = size(t);
   f = diag(feval(fun,diag(t)));
   d = log(n)/log(2);
   for i = 1:d
      s = 2^i;
      for j = 0:n/s-1
         u = j*s+1 : j*s+s/2;
         v = j*s+s/2+1 : (j+1)*s;
         f(u,v) = sylvester(t(u,u),t(v,v),f(u,u)*t(u,v)-t(u,v)*f(v,v));
      end
      if mod(n,s) ~= mod(n,2*s) & i ~= d
         u = n-s-mod(n,s)+1 : n-mod(n,s);
         v = n-mod(n,s)+1 : n;
         f(u,v) = sylvester(t(u,u),t(v,v),f(u,u)*t(u,v)-t(u,v)*f(v,v));
      end
   end

In the above routine, the function mod(a,b) returns the remainder of a divided by b, and can be implemented in Matlab as

   function m = mod(a,b)
   m = a - floor(a/b)*b;
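Thanks to the fix-up block inside the i loop, the routine also handles sizes that are not powers of two; for example (our illustration, not from the report):

   % Non-recursive tfun on a size that is not a power of two.
   n = 6;
   t = triu(rand(n),1) + diag(1:n);   % distinct eigenvalues 1,...,6
   f = tfun(t,'sqrt');                % routine of Figure 4
   norm(f*f - t)/norm(t)              % relative residual, near machine eps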

3 Computational Complexity

In this section we analyze the computational complexity of the new algorithm for computing an arbitrary function of an $n \times n$ triangular matrix $T$. We will assume $n = 2^d$ for simplicity of analysis, although the algorithm is suitable for any $n$. As seen in the Matlab routine given in Figure 1, we apply the matrix function evaluation algorithm to each of the half-sized blocks $F_1 = f(T_1)$ and $F_3 = f(T_3)$, and then solve a Sylvester matrix equation in order to compute $F_2$. Thus, the number of arithmetic operations required to compute $F = f(T)$ for an $n \times n$ matrix is given as

$$T(n) = 2T(n/2) + U(n/2) + S(n/2),$$

where $S(k)$ is the number of arithmetic operations required for solving a Sylvester matrix equation of size $k$, and $U(k)$ is the number of arithmetic operations needed to compute the $k \times k$ matrix $C = F_1 T_2 - T_2 F_3$. Since $F_1$ and $F_3$ are triangular, each of the two products costs about $k^3$ operations and the subtraction another $k^2$, so $U(k) = 2k^3 + k^2$. When $n = 1$, the algorithm performs a scalar function evaluation $f(\cdot)$, which we assume takes $K$ arithmetic steps, i.e., $T(1) = K$.

The Bartels-Stewart algorithm solves the Sylvester matrix equation by first obtaining $X_k$ as given by equation (5). The algorithm then proceeds to solve the remaining $X_i$ for $i = k-1, k-2, \ldots, 1$ using equation (6). As seen from the two nested loops in Figure 2, step $i$ involves $(k-i)$ scalar-vector products, $(k-i)$ vector sums, and a single scalar addition to the main diagonal of the matrix $-B^T$; finally, a lower triangular system of size $k$ is solved. Thus, $S(k)$ can be given as

$$S(k) = L(k) + k + \sum_{i=1}^{k-1} \bigl[\, 2k(k-i) + k + L(k) \,\bigr] = k^3 + k L(k),$$

where $L(k)$ is the number of arithmetic operations required to solve a lower triangular system of size $k$, which is easily found to be $L(k) = k^2$; thus $S(k) = 2k^3$. Therefore, the divide-and-conquer algorithm requires

$$T(n) = 2T(n/2) + \frac{n^3}{2} + \frac{n^2}{4}$$

arithmetic operations, with the initial condition $T(1) = K$. The solution of this recursion is found as

$$T(n) = Kn + \frac{2n^3}{3} + \frac{n^2}{2} - \frac{7n}{6}. \qquad (7)$$
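For completeness, (7) can be checked by unrolling the recursion over the $d = \log_2 n$ levels (a standard calculation, not spelled out in the report): level $j$ contributes $2^j$ subproblems of size $n/2^j$, so

$$T(n) = Kn + \sum_{j=0}^{d-1} 2^j \left[ \frac{(n/2^j)^3}{2} + \frac{(n/2^j)^2}{4} \right] = Kn + \frac{2n^3}{3}\Bigl(1 - \frac{1}{n^2}\Bigr) + \frac{n^2}{2}\Bigl(1 - \frac{1}{n}\Bigr) = Kn + \frac{2n^3}{3} + \frac{n^2}{2} - \frac{7n}{6},$$

using the geometric sums $\sum_{j=0}^{d-1} 4^{-j} = \frac{4}{3}(1 - n^{-2})$ and $\sum_{j=0}^{d-1} 2^{-j} = 2(1 - n^{-1})$.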

This analysis applies to both versions (recursive and non-recursive) of the new algorithm. Comparing this figure to that of Parlett's algorithm given by (3), we conclude that the new algorithm requires approximately the same number of arithmetic operations.

Although we found the arithmetic complexity of the two methods to be the same, we observed that the new algorithm has a much better performance than Parlett's algorithm when implemented on a scientific workstation. Table 1 gives the timing results of Parlett's algorithm, the new recursive algorithm, and its non-recursive version for computing the square root of matrices of sizes ranging upward from 8. We used the Matlab function funm.m, which implements Parlett's algorithm, and the routines given in Figures 1, 2, and 4. The Matlab (Version 4.1) routines were run on an HP Apollo Model 735 workstation with 256 KB instruction and 256 KB data caches and 144 MB of main memory; the clock speed of the processor is 99 MHz.

Table 1: Timing and speedup values for the algorithms (matrix size; time in ms for the Parlett, recursive, and non-recursive routines; speedups of the latter two relative to Parlett).

As can be seen from Table 1, the recursive and non-recursive versions of the new algorithm are up to 3 times faster than Parlett's algorithm. The reason behind this speedup is the block structure of the new algorithm. Scientific workstations (and most computers) come with two levels of memory: the cache and the main memory. The cache is the smaller and faster of these two, and if an element is not found in the cache, a whole block of data containing this element is brought from the main memory to the cache. If there is a large amount of data swapping between the cache and the main memory, then the computer spends much of its time performing these operations, and the performance is degraded. Thus, it is crucial that we use the data in the cache as much as possible. Parlett's algorithm computes the elements of the matrix $F$ one superdiagonal element at a time, and requires a large number of data swaps due to its data dependency requirements.

The recursive and non-recursive algorithms presented in this paper, on the other hand, are block algorithms, and tend to use the data much longer before requiring a new data block. It was pointed out by Golub and van Loan [3, p. 47] that "... computers having a cache tend to perform better on block algorithms." In Appendix I, we give a simplified analysis of the data swaps between the cache and the main memory for Parlett's algorithm and the new algorithm. This analysis shows that the divide-and-conquer algorithm requires fewer data swaps than Parlett's algorithm, and thus is expected to run faster. Furthermore, the non-recursive algorithm has better performance than the recursive algorithm, since the overhead of recursive function calls is avoided.

4 Numerical Experiments

We have performed some numerical experiments comparing the results of the divide-and-conquer algorithm to those of Parlett's algorithm. In the first experiment, we computed the square root, cube root, exponential, and logarithm of randomly generated $64 \times 64$ upper triangular matrices $T$ with a selected eigenvalue separation $\min |t_{ii} - t_{jj}|$ for $1 \le i, j \le 64$, $i \ne j$. Let $\hat{F}$ and $F$ be the matrices computed by the divide-and-conquer and Parlett's algorithms, respectively. Table 2 shows the relative error values computed by $\|\hat{F} - F\| / \|F\|$, where $\|\cdot\|$ denotes the 2-norm of a matrix.

Table 2: Error values for some matrix functions and eigenvalue separations (rows: $\min |t_{ii} - t_{jj}|$; columns: square root, cube root, logarithm, exponential).

Also, in Table 3, we compare the new algorithm to Parlett's algorithm for the computation of the square root and the cube root of upper triangular matrices. Here we calculate the relative error terms using $\|\hat{F}^2 - T\| / \|T\|$ and $\|F^2 - T\| / \|T\|$ for the square-root function, and $\|\hat{F}^3 - T\| / \|T\|$ and $\|F^3 - T\| / \|T\|$ for the cube-root function. Here $\hat{F}$ and $F$ are again the matrices computed by the divide-and-conquer and Parlett's algorithms, respectively.

Table 3: Relative error for the square-root and cube-root functions (rows: $\min |t_{ii} - t_{jj}|$; columns: Parlett and the new algorithm for each function).

Examining the tables, we conclude that the new algorithm computes these matrix functions almost as accurately as Parlett's algorithm, perhaps slightly less accurately. Both Parlett's algorithm and the new algorithm produce poor results when the matrix $T$ has close eigenvalues.
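The residual tests of Table 3 can be reproduced along the following lines (our reconstruction, not the report's script; it assumes equally spaced eigenvalues at distance sep and the routines of Figures 1 and 2 on the path):

   % Relative residual of the computed square root for a
   % prescribed eigenvalue separation.
   n   = 64;
   sep = 1e-3;                           % min |t(i,i) - t(j,j)|
   t   = triu(randn(n),1) + diag(1 + sep*(0:n-1));
   f   = tfun(t,'sqrt');
   norm(f*f - t)/norm(t)                 % relative residual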

The numerical problems in the new algorithm are due to the solution of the Sylvester equation. It is shown in [2] that the error in the computed solution of the Sylvester equation satisfies

$$\frac{\|\hat{F}_2 - F_2\|_F}{\|F_2\|_F} \le 4u \bigl( \|T_1\|_F + \|T_3\|_F \bigr)\,\phi^{-1}, \qquad (8)$$

where $u$ denotes the unit roundoff, $\|\cdot\|_F$ is the Frobenius matrix norm, and

$$\phi = \min_{X \ne 0} \frac{\|T_1 X - X T_3\|_F}{\|X\|_F}.$$

It can be shown that

$$\min_{X \ne 0} \frac{\|T_1 X - X T_3\|_F}{\|X\|_F} \le \min |\lambda - \mu|,$$

where $\lambda \in \sigma(T_1)$ and $\mu \in \sigma(T_3)$. Thus, the error in the computed solution $\hat{F}_2$ of the Sylvester equation grows as the eigenvalues of $T$ come close.

5 Conclusions

We have given a divide-and-conquer algorithm for computing functions of upper triangular matrices. The algorithm requires approximately the same number of arithmetic operations as Parlett's algorithm; however, it runs up to 3 times faster on computers having a cache, due to its block structure. The numerical properties of the algorithm appear to be similar to those of Parlett's algorithm. Finally, we note that the divide-and-conquer algorithm can be parallelized to compute an arbitrary function of an $n \times n$ triangular matrix in $O(\log^3 n)$ time using $O(n^6)$ processors [4].

References

[1] R. H. Bartels and G. W. Stewart. Solution of the matrix equation AX + XB = C. Communications of the ACM, 15(9):820-826, 1972.

[2] G. H. Golub, S. Nash, and C. F. van Loan. A Hessenberg-Schur method for the problem AX + XB = C. IEEE Transactions on Automatic Control, 24(6):909-913, December 1979.

[3] G. H. Golub and C. F. van Loan. Matrix Computations. Baltimore, MD: The Johns Hopkins University Press, 3rd edition, 1996.

[4] Ç. K. Koç and B. Bakkaloǧlu. A parallel algorithm for functions of triangular matrices. Computing, 57(1):85-92, 1996.

[5] B. N. Parlett. A recurrence among the elements of functions of triangular matrices. Linear Algebra and its Applications, 14:117-121, 1976.

Appendix I

Our analysis is similar to that of Golub and van Loan [3]. We partition the matrices $T$ and $F$ into blocks of rows such that each block contains $m$ rows. We assume that the cache can hold approximately $2m + 1$ rows; thus, only one block of $T$ and one block of $F$ are present in the cache at a given time. If an element of $T$ or $F$ is not found in the cache, then a whole block ($m$ rows) is loaded from the main memory. This operation is called a swap. In the following analysis we count the total number of swaps required by Parlett's algorithm and by the divide-and-conquer algorithm.

Parlett's algorithm first computes the main diagonal entries of $F$, and proceeds by computing the superdiagonals one at a time for $L = 1, 2, \ldots, n-1$. An element on the $L$th superdiagonal requires its horizontal and vertical neighbors [5]. In order to obtain the vertical neighbors, the algorithm requires approximately $L/m$ swap operations. Since there are $n - L$ elements on the $L$th superdiagonal, the total number of swaps is calculated as

$$\sum_{L=1}^{n-1} (n - L)\,\frac{L}{m} = \frac{n^3 - n}{6m} \approx \frac{n^3}{6m}. \qquad (9)$$

On the other hand, the divide-and-conquer algorithm goes through $d = \log_2 n$ steps for $i = 1, 2, \ldots, d$. During the $i$th step, the algorithm performs $2(n/2^i) = 2^{d-i+1}$ matrix products and $n/2^i = 2^{d-i}$ calls to the subroutine sylvester with matrices of size $2^{i-1} \times 2^{i-1}$. Let $\tau_1(k)$ and $\tau_2(k)$ be the number of swap operations required by the matrix product and the Sylvester routines, respectively. Then, the number of swap operations required by the non-recursive matrix function evaluation algorithm is found as

$$\sum_{i=1}^{d} \bigl[\, 2^{d-i+1}\,\tau_1(2^{i-1}) + 2^{d-i}\,\tau_2(2^{i-1}) \,\bigr].$$

It is shown in [3] that $\tau_1(k) = 2k/m + k^2/m^2$. In order to calculate $\tau_2(k)$, we take a closer look at the Matlab subroutine sylvester given in Figure 2. First, a scalar is added to the diagonal elements of a $k \times k$ matrix: a single swap is required to obtain the scalar element a(k,k), and $k/m$ swaps are required to add it to the diagonal of the matrix b. We use a single swap operation to obtain a row of c, while $k/m$ swaps are required to solve the lower triangular system. Therefore, the solution of the first system requires $2k/m + 2$ swap operations. Then, $k - 1$ such systems are solved. For each $i$, the j loop needs $k - i$ rows of x, requiring $(k-i)/m$ swap operations; there is a single swap operation to obtain a(i,j) for all $j$, and similarly a single swap operation to obtain the $i$th row of c. Finally, $2k/m + 2$ swap operations are required to obtain the solution of the lower triangular system. Thus, the total number of swap operations is found as

$$\tau_2(k) = 2 + \frac{2k}{m} + \sum_{i=1}^{k-1} \left( \frac{k-i}{m} + 4 + \frac{2k}{m} \right) = \frac{5k^2 - k}{2m} + 4(k-1) + 2.$$

The total number of swap operations required by the non-recursive matrix function evaluation is then calculated as

$$\frac{5m+4}{4m^2}\,n^2 - \frac{8m^2+5m+4}{4m^2}\,n + \frac{8m^2+7m}{4m^2}\,n \log_2 n + 2 \approx \frac{5n^2}{4m}. \qquad (10)$$

Comparing (9) to (10), we conclude that the divide-and-conquer algorithm requires fewer swap operations than Parlett's algorithm.
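To illustrate the gap between (9) and (10) numerically (our example; the values of $n$ and $m$ are chosen arbitrarily), take $n = 512$ and $m = 16$:

$$\frac{n^3}{6m} = \frac{512^3}{96} \approx 1.4 \times 10^6 \qquad \text{versus} \qquad \frac{5n^2}{4m} = \frac{5 \cdot 512^2}{64} \approx 2.0 \times 10^4,$$

roughly a 70-fold reduction in cache-main memory traffic.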
