Solving Updated Systems of Linear Equations in Parallel


P. Blaznik (a) and J. Tasic (b)

(a) Jozef Stefan Institute, Computer Systems Department, Jamova 9, 1111 Ljubljana, Slovenia. polona.blaznik@ijs.si
(b) Faculty of Elect. Eng. and Comp. Science, Ljubljana, Slovenia

Technical Report CSD951, August 1995

Abstract

In this paper, updating algorithms for solving linear systems of equations are presented using a systolic array model. First, a parallel algorithm for computing the inverse of a rank-one modified matrix using the Sherman-Morrison formula is proposed. This algorithm is then extended to solving updated systems of linear equations on a linear systolic array. Finally, the generalisation to updates of higher rank is shown.

Keywords: Matrix updating, Linear systems, Systolic arrays

1 Introduction

In many signal processing applications, we need to solve a sequence of linear systems in which each successive matrix is closely related to the previous matrix. For example, we may have a recursive process in which the matrix is modified by low-rank, typically rank-one, updates at each iteration, i.e.,

$$A_k = A_{k-1} + u_{k-1}v_{k-1}^T.$$

Clearly, we would like to solve the system $A_k x_k = b$ by modifying $A_{k-1}$ and $x_{k-1}$, without computing a complete refactorisation of $A_k$, which is too costly.

This work has been supported by the Ministry of Science and Technology of the Republic of Slovenia under Grant Number J188. This report will be published in the Proc. of the Parallel Numerics 95 Workshop, Sorrento, Italy, September 1995.

We choose systolic arrays (H.T. Kung and C. Leiserson, 1978) as a parallel computing model to describe our algorithms. Although systolic arrays have so far not made their way into many practical applications, the systolic description still reveals the fundamental parallelism available in an algorithm. Therefore, it provides useful information when the algorithm has to be implemented on an available parallel architecture.

2 Updating techniques

Techniques for updating matrix factorisations play an important role in modern linear algebra and optimisation. We often need to solve a sequence of linear systems in which each successive matrix is closely related to the previous matrix. By using $A$ and $\Delta A$, systems of the form $(A + \Delta A)x = b$ can be solved in order $n^2$ flops, rather than order $n^3$ flops.

In this section, we restrict ourselves first to the rank-one modification. First, a systolic version of the Sherman-Morrison formula for computing the inverse of the modified matrix from the inverse of the original matrix is described. Then, we discuss its application to solving linear systems of equations with one or more right-hand sides. Finally, we present the systolic array for the rank-two modified inverse and for solving rank-two modified systems of equations as possible generalisations.

2.1 Rank-one modification

When $\bar{A}$ is equal to $A$ plus a rank-one matrix $uv^T$,

$$\bar{A} = A + uv^T,$$

we say $\bar{A}$ is a rank-one modification of $A$. Standard operations, such as column and row replacement, are special cases of rank-one modification. Let $A$ be an $n \times n$ nonsingular matrix, and let $u$ and $v$ be $n$-vectors. We want to find the inverse of the rank-one modified matrix $\bar{A} = A + uv^T$. The matrix $A + uv^T$ is nonsingular if and only if $1 + v^TA^{-1}u \neq 0$. Its inverse is then

$$(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}. \qquad (1)$$

This is the well-known Sherman-Morrison formula for computing the inverse of the rank-one modified matrix (Gill et al., 1991).
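As a quick illustration (a NumPy sketch with arbitrary, well-conditioned test data, not taken from the report), formula (1) can be checked numerically against a directly computed inverse:

    import numpy as np

    # Illustrative check of the Sherman-Morrison formula (1).
    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)

    Ainv = np.linalg.inv(A)
    denom = 1.0 + v @ Ainv @ u          # nonsingularity condition: denom != 0
    Abar_inv = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom

    assert np.allclose(Abar_inv, np.linalg.inv(A + np.outer(u, v)))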

2.2 Systolic algorithm - SASM

To derive a systolic array for the evaluation of the Sherman-Morrison formula (SASM), we would like to make use of already known systolic designs that solve some basic matrix problems. Let us define the following matrix transformation on the compound matrix given below (Megson, 1991):

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \rightarrow \begin{bmatrix} MA_{11} & MA_{12} \\ A_{21} & A_{22} \end{bmatrix} \rightarrow \begin{bmatrix} MA_{11} & MA_{12} \\ A_{21} + NMA_{11} & A_{22} + NMA_{12} \end{bmatrix}, \qquad (2)$$

where $M$ is selected so that the matrix $MA_{11}$ is triangular, and $N$ is chosen to annihilate $A_{21}$. Applying the Faddeev algorithm (Faddeev and Faddeeva, 1963), $M$ and $N$ can easily be constructed using elementary row operations on the compound matrix. It follows that $A_{21} = -NMA_{11}$ and thus $N = -A_{21}A_{11}^{-1}M^{-1}$, so that the bottom-right partition $A_{22} + NMA_{12}$ is given by $A_{22} - A_{21}A_{11}^{-1}A_{12}$.

Now we reformulate (1), using (2), as a sequence of the following transformations:

$$\begin{bmatrix} I & A^{-1}u \\ -v^T & 1 \end{bmatrix} \rightarrow \begin{bmatrix} I & A^{-1}u \\ 0 & 1 + v^TA^{-1}u \end{bmatrix}, \qquad (3)$$

$$\begin{bmatrix} I & A^{-1} \\ -v^T & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I & A^{-1} \\ 0 & v^TA^{-1} \end{bmatrix}, \qquad (4)$$

$$\begin{bmatrix} 1 + v^TA^{-1}u & v^TA^{-1} \\ A^{-1}u & A^{-1} \end{bmatrix} \rightarrow \begin{bmatrix} 1 + v^TA^{-1}u & v^TA^{-1} \\ 0 & A^{-1} - \dfrac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u} \end{bmatrix}. \qquad (5)$$

Equations (3)-(5) describe Gaussian elimination steps in which the multipliers $v^T$ are known in advance. Therefore, no explicit computation of multipliers is required in the array, and we do not need the part of the array concerned with the computation and pipelining of multipliers. Hence, a rectangular array of $n \times (n+1)$ inner product cells is sufficient (Figure 3).
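To see the mechanics of the compound transformation (2) in conventional (non-systolic) terms, the following NumPy sketch, with arbitrary test blocks, eliminates the lower-left partition and reads off $A_{22} - A_{21}A_{11}^{-1}A_{12}$ in the lower-right partition:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    A11 = rng.standard_normal((n, n)) + n * np.eye(n)  # nonsingular pivot block
    A12 = rng.standard_normal((n, n))
    A21 = rng.standard_normal((n, n))
    A22 = rng.standard_normal((n, n))

    C = np.block([[A11, A12], [A21, A22]])
    # Row operations with multipliers NM = -A21 A11^{-1} annihilate the
    # lower-left block, as in transformation (2).
    C[n:, :] -= A21 @ np.linalg.solve(A11, C[:n, :])

    assert np.allclose(C[n:, :n], 0.0)
    assert np.allclose(C[n:, n:], A22 - A21 @ np.linalg.solve(A11, A12))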

Before describing the systolic array, we introduce the following representation:

$$\begin{bmatrix} I & A^{-1}u & A^{-1} \\ -v^T & 1 & 0 \\ -I & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I & A^{-1}u & A^{-1} \\ 0 & 1 + v^TA^{-1}u & v^TA^{-1} \\ 0 & A^{-1}u & A^{-1} \end{bmatrix}. \qquad (6)$$

It is evident that the computation of (6) can be done on an $n \times (n+1)$ rectangular array. The cells are IPS (inner product step) processors (Figure 1), accepting a multiplier from the left and updating the elements moving vertically. Each cell has two modes of operation, a load state and a normal computation state. During the load state, the matrices $A^{-1}u$ and $A^{-1}$ are input in row-ordered format, suitably delayed to allow synchronisation with the multipliers input on the left boundary. During that phase, the two matrices are loaded one element per cell, and become stationary. The next stage can be described in two phases. First, the vector $[1\ 0\ \dots\ 0]$ is input on the top boundary of the array, and $v^T$ on the left boundary. The components of $v^T$ are used as multipliers to compute $1 + v^TA^{-1}u$ and $v^TA^{-1}$. The data is non-stationary, and leaves the array on the south boundary. Second, the null matrix is input on the top boundary and the matrix $I$ on the left. This forces the computation of $A^{-1}u$ and $A^{-1}$. All the phases can be overlapped, so that the total computation time is T = (n+1) + (n+1) + n = 3n + 2 inner product steps.

    x := 0
    for i = 1 to total.time
      if init                          -- load state: latch the element moving down
        x.out := x + m.in * x.in
        x := x.in
      else                             -- computation state: inner product step
        x.out := x.in + m.in * x

Fig. 1. The PE definition for an IPS cell of the rectangular array.

Once the transformation (6) is known, we use Gaussian elimination to evaluate the transformation (5). Because $1 + v^TA^{-1}u$ is a scalar value, only a single column elimination is required. A linear array of n+1 cells, corresponding to one row of the triangular array for LU decomposition (Wan and Evans, 1993), is sufficient. Again, the cells have two modes of operation, a load state and a normal computation state. One cell is a divider cell, whose function is described in Figure 2; the others perform the same operations as the cells of the rectangular part of the array (Figure 1). The delay through this extra array is a single elimination step.
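As a plain-matrix stand-in for the array (a sketch of the data flow just described, not the report's occam cell code), the load phase stores $[A^{-1}u \mid A^{-1}]$ and the two computation phases of (6) reduce to one multiplier sweep with $v^T$ and one pass-through sweep with $I$:

    import numpy as np

    def sasm_rectangular_array(Ainv, u, v):
        """Mirror the two computation phases of the n x (n+1) array for (6)."""
        S = np.column_stack([Ainv @ u, Ainv])  # stationary contents: [A^{-1}u | A^{-1}]
        top = np.zeros(S.shape[1])
        top[0] = 1.0                           # [1 0 ... 0] fed in on the top boundary
        row0 = top + v @ S                     # phase 1: [1 + v^T A^{-1}u, v^T A^{-1}]
        return np.vstack([row0, S])            # phase 2 passes [A^{-1}u | A^{-1}] through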

    x := 0
    for i = 1 to total.time
      if init                          -- load state
        if x.in <> 0
          m.out := -(x / x.in)
        else
          m.out := 0
        x := x.in
      else                             -- computation state: emit the multiplier
        if x <> 0
          m.out := -(x.in / x)
        else
          m.out := 0

Fig. 2. The PE definition for a divider cell.

Fig. 3. Systolic array SASM for the Sherman-Morrison formula.

To sum up, the rank-one modification of the matrix inverse using the Sherman-Morrison formula can be computed on an $(n+1) \times (n+1)$ mesh of cells (Figure 3) in 3n + 3 inner product steps.

2.3 Solving the updated linear systems

Solving updated systems of linear equations is a more important application than finding the inverse of the modified matrix. In this section, we show how to use equation (1) implicitly to solve updated systems of equations without computing the inverse of the modified matrix.

Let $A$ be an $n \times n$ nonsingular matrix, and $b$ a vector of dimension $n$. Let us assume we know the solution $x$ of $Ax = b$. We want to find the solution $\bar{x}$ of the rank-one modified system $(A + uv^T)\bar{x} = b$. Using the Sherman-Morrison formula, it follows that

$$\bar{x} = (A + uv^T)^{-1}b = \left(A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}\right)b = A^{-1}b - \frac{A^{-1}uv^TA^{-1}b}{1 + v^TA^{-1}u} = x - \frac{w}{1 + v^Tw}\,v^Tx, \qquad (7)$$

where $w$ is the solution of the system $Aw = u$. To derive the systolic array, we follow a similar procedure as before. We define the following Gaussian transformations:

$$\begin{bmatrix} I & w \\ -v^T & 1 \end{bmatrix} \rightarrow \begin{bmatrix} I & w \\ 0 & 1 + v^Tw \end{bmatrix}, \qquad (8)$$

$$\begin{bmatrix} I & x \\ -v^T & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I & x \\ 0 & v^Tx \end{bmatrix}, \qquad (9)$$

$$\begin{bmatrix} 1 + v^Tw & v^Tx \\ w & x \end{bmatrix} \rightarrow \begin{bmatrix} 1 + v^Tw & v^Tx \\ 0 & x - \dfrac{w}{1 + v^Tw}\,v^Tx \end{bmatrix}. \qquad (10)$$

The evaluation of (8)-(9) can be performed on an $n \times 2$ rectangular array of IPS cells (Figure 1). Since $1 + v^Tw$ is a scalar, we need to eliminate only one column. Therefore, the systolic array in Figure 4 of 2(n+1) cells gives us the result in O(n) inner product steps. Recall that, in general, solving linear equations using the Faddeev array (Blaznik, 1995) takes 5n + 1 inner product steps on an array of n(n+1)/2 + n cells. On some specific arrays this can be done faster, but it is still not competitive with the array in Figure 4.
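In conventional software terms, (7) amounts to one extra solve with the original matrix plus O(n) vector work. A minimal sketch, assuming a reusable SciPy LU factorization of A (the function name is ours):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def rank_one_updated_solve(lu_piv, x, u, v):
        """Solve (A + u v^T) xbar = b via (7), given lu_piv = lu_factor(A)
        and the known solution x of A x = b; A is never refactorised."""
        w = lu_solve(lu_piv, u)                 # solve A w = u
        return x - (v @ x) / (1.0 + v @ w) * w  # equation (7)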

2.3.1 Numerical example

Fig. 4. Systolic array for updating linear systems.

The algorithm was simulated using occam on a small numerical example: given a linear system $Ax = b$ with the inverse $A^{-1}$ known, the solution $\bar{x}$ of the system $\bar{A}\bar{x} = b$, where $\bar{A}$ differs from $A$ in a single element of the first row, is obtained from equation (7) by choosing $u^T = (1, 0, \dots, 0)$, letting $v^T$ carry the change to that row, and forming $\bar{A} = A + uv^T$.
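An analogous experiment is easy to reproduce in NumPy; the values below are illustrative stand-ins, not the report's data:

    import numpy as np

    A = np.array([[4., 1., 0., 0.],
                  [1., 4., 1., 0.],
                  [0., 1., 4., 1.],
                  [0., 0., 1., 4.]])
    b = np.ones(4)
    x = np.linalg.solve(A, b)

    # Change the (1,2) element of A by 2: u = e_1, v^T = (0, 2, 0, 0).
    u = np.array([1., 0., 0., 0.])
    v = np.array([0., 2., 0., 0.])

    w = np.linalg.solve(A, u)
    xbar = x - (v @ x) / (1.0 + v @ w) * w       # equation (7)
    assert np.allclose((A + np.outer(u, v)) @ xbar, b)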

2.3.2 Successive updates

Our next aim is to perform successive updates. For example, if the changes in the matrix always occur in the same row, we can proceed as follows. On the $k$-th step of the computation, we want to find the solution $x^{(k)}$ of the system $A^{(k)}x^{(k)} = b$. We know the solution $x^{(k-1)}$ of the system $A^{(k-1)}x^{(k-1)} = b$, and the relation between $A^{(k-1)}$ and $A^{(k)}$,

$$A^{(k)} = A^{(k-1)} + u\,v^{(k)T}.$$

Using (7), we can write

$$x^{(k)} = x^{(k-1)} - \frac{(A^{(k-1)})^{-1}u}{1 + v^{(k)T}(A^{(k-1)})^{-1}u}\,v^{(k)T}x^{(k-1)}.$$

On every step $k$, we therefore need the previous solution $x^{(k-1)}$, the value of $(A^{(k-1)})^{-1}u$, and the value of $v^{(k)T}$. The array in Figure 5 can handle the successive rank-one updates. It is important that all data arrive in the appropriate order. To assure this, we use the so-called switch cells introduced by D.J. Evans and C.R. Wan (Evans and Wan, 1993). They function as a data interface that rearranges the results of the previous computation into the right order (Blaznik, 1995). The proposed data interface is shown in Figure 6. The array needs the original data and the output from the previous iteration; the desired input is selected, as shown in Figure 6, according to the processing phase of the cell. The second column of IPS cells is used for the evaluation of $(A^{(k)})^{-1}u$ on the $k$-th step. The result is then fed back to the top of the array to be used in the next computation.
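The recursion can be sketched in NumPy/SciPy as follows (our own dense stand-in for the array of Figure 5; both the solution and the carried quantity $(A^{(k)})^{-1}u$ are refreshed in O(n) work per step):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def successive_rank_one_solves(A, b, u, vs):
        """Track x^{(k)} for A^{(k)} = A^{(k-1)} + u v^{(k)T}, k = 1, 2, ..."""
        lu_piv = lu_factor(A)
        x = lu_solve(lu_piv, b)          # x^{(0)}
        z = lu_solve(lu_piv, u)          # (A^{(0)})^{-1} u, carried between steps
        for v in vs:                     # v = v^{(k)}
            denom = 1.0 + v @ z
            x = x - (v @ x) / denom * z  # solution update, as in the recursion above
            z = z / denom                # (A^{(k)})^{-1} u = (A^{(k-1)})^{-1} u / (1 + v^T z)
        return x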

Fig. 5. Systolic array for successive updates of linear systems.

    for i = 1 to n+1
      x.out := x.in                    -- first pass: original data
    for j = 1 to no.of.updates - 1
      for i = 1 to n
        x.out := z.in                  -- feed back results of the previous iteration
      for i = 1 to n+1
        x.out := x.in, sink z.in

Fig. 6. Data interface.

2.4 Rank-two modification

The idea of rank-one modification can be extended further to rank-m modification. The result is the Sherman-Morrison-Woodbury formula (Gill et al., 1991)

$$(A + UV^T)^{-1} = A^{-1} - A^{-1}U(I + V^TA^{-1}U)^{-1}V^TA^{-1}, \qquad (11)$$

where $U$ and $V$ are $n \times m$ matrices. It is obvious that when $m = 1$, this reduces to the Sherman-Morrison formula (1), with $I + V^TA^{-1}U$ a scalar quantity.
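A direct NumPy check of (11) with arbitrary test data (not from the report), here for m = 2:

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 5, 2
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    U = rng.standard_normal((n, m))
    V = rng.standard_normal((n, m))

    Ainv = np.linalg.inv(A)
    K = np.eye(m) + V.T @ Ainv @ U            # the small m x m system in (11)
    Abar_inv = Ainv - Ainv @ U @ np.linalg.solve(K, V.T @ Ainv)

    assert np.allclose(Abar_inv, np.linalg.inv(A + U @ V.T))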

We want to derive a systolic version of the Sherman-Morrison-Woodbury formula for the rank-two modification, i.e., m = 2. The transformations (3)-(5) become in this case the following:

$$\begin{bmatrix} I_n & A^{-1}U \\ -V^T & I_2 \end{bmatrix} \rightarrow \begin{bmatrix} I_n & A^{-1}U \\ 0 & I_2 + V^TA^{-1}U \end{bmatrix}, \qquad (12)$$

$$\begin{bmatrix} I_n & A^{-1} \\ -V^T & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I_n & A^{-1} \\ 0 & V^TA^{-1} \end{bmatrix}, \qquad (13)$$

where $U, V$ are $n \times 2$ matrices. Then

$$\begin{bmatrix} I_2 + V^TA^{-1}U & V^TA^{-1} \\ A^{-1}U & A^{-1} \end{bmatrix} \rightarrow \begin{bmatrix} M(I_2 + V^TA^{-1}U) & MV^TA^{-1} \\ 0 & A^{-1} - A^{-1}U(I + V^TA^{-1}U)^{-1}V^TA^{-1} \end{bmatrix}, \qquad (14)$$

where the matrix $M$ is chosen so that $M(I_2 + V^TA^{-1}U)$ is an upper triangular matrix. The computation of equations (12)-(14) can be done by one transformation of a matrix of size $(2n+2) \times (2n+2)$:

$$\begin{bmatrix} I_n & A^{-1}U & A^{-1} \\ -V^T & I_2 & 0 \\ -I_n & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I_n & A^{-1}U & A^{-1} \\ 0 & I_2 + V^TA^{-1}U & V^TA^{-1} \\ 0 & A^{-1}U & A^{-1} \end{bmatrix}. \qquad (15)$$

These computations can be performed on an $n \times (n+2)$ rectangular array. The cells are inner product step processors, accepting multipliers from the left and updating the elements moving vertically (Figure 1). Since $I + V^TA^{-1}U$ is a $2 \times 2$ matrix, two column eliminations are required to evaluate the transformation (14). Thus, we use two rows of the triangular array for LU decomposition. We need two divider cells (their function is described in Figure 2), with the other cells being IPS cells. The systolic array in Figure 7 computes the rank-two modified inverse of an $n \times n$ matrix $A$ in O(n) inner product steps.

Fig. 7. Systolic array for rank-two modification of the matrix inverse.

2.4.1 Solving rank-two modified linear systems

Let $A$ be an $n \times n$ nonsingular matrix, and $b$ a vector of dimension $n$. Let us assume we know the solution $x$ of $Ax = b$. For solving the rank-two modified system of linear equations

$$(A + UV^T)\bar{x} = b,$$

where $U, V$ are $n \times 2$ matrices, we need to evaluate the following transformations:

$$\begin{bmatrix} I_n & A^{-1}U \\ -V^T & I_2 \end{bmatrix} \rightarrow \begin{bmatrix} I_n & A^{-1}U \\ 0 & I_2 + V^TA^{-1}U \end{bmatrix}, \qquad (16)$$

$$\begin{bmatrix} I_n & x \\ -V^T & 0 \end{bmatrix} \rightarrow \begin{bmatrix} I_n & x \\ 0 & V^Tx \end{bmatrix}, \qquad (17)$$

and then

$$\begin{bmatrix} I_2 + V^TA^{-1}U & V^Tx \\ A^{-1}U & x \end{bmatrix} \rightarrow \begin{bmatrix} M(I_2 + V^TA^{-1}U) & MV^Tx \\ 0 & x - A^{-1}U(I + V^TA^{-1}U)^{-1}V^Tx \end{bmatrix}, \qquad (18)$$

where the matrix $M$ is chosen so that $M(I_2 + V^TA^{-1}U)$ is an upper triangular matrix. The evaluation of equations (16)-(18) can be performed on an array of n + 5 processors in O(n) inner product steps (Blaznik, 1995).
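In dense terms, the computation behind (16)-(18) costs m extra solves with A plus one small m x m system; a sketch (our own function, again assuming a reusable SciPy LU factorization):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def rank_m_updated_solve(lu_piv, x, U, V):
        """Solve (A + U V^T) xbar = b from the known solution x of A x = b."""
        Z = lu_solve(lu_piv, U)                     # A^{-1} U  (m extra solves)
        K = np.eye(U.shape[1]) + V.T @ Z            # I + V^T A^{-1} U
        return x - Z @ np.linalg.solve(K, V.T @ x)  # the correction in (18)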

3 Conclusions

In this paper, we have presented some parallel updating techniques for solving rank-one modified linear systems of equations. We have proposed a systolic algorithm for solving rank-one modified systems of linear equations. We have also described its generalisation to solving rank-two modified systems. The algorithms were simulated on the Sequent Balance multiprocessor system.

References

[Bla95] P. Blaznik. Parallel Updating Methods in Multidimensional Filtering. PhD thesis, University of Ljubljana, 1995.

[EW93] D.J. Evans and C.R. Wan. Systolic array for Schur complement computation. Intern. J. Computer Math., 1993.

[FF63] D.K. Faddeev and V.N. Faddeeva. Computational Methods of Linear Algebra. W.H. Freeman and Company, 1963.

[GMW91] P.E. Gill, W. Murray, and M.H. Wright. Numerical Linear Algebra and Optimization, volume 1. Addison-Wesley, 1991.

[KL78] H.T. Kung and C.E. Leiserson. Systolic arrays for VLSI. In I.S. Duff and G.W. Stewart, editors, Proc. Sparse Matrix Symp., pages 256-282. SIAM, 1978.

[Meg91] G.M. Megson. Systolic rank updating and the solution of nonlinear equations. In Proc. 5th International Parallel Processing Symposium. IEEE Press, 1991.

[WE93] C. Wan and D.J. Evans. Systolic array architecture for linear and inverse matrix systems. Parallel Computing, 19, 1993.
