Updating and downdating an upper trapezoidal sparse orthogonal factorization


IMA Journal of Numerical Analysis (2006) 26, 1-10
doi:10.1093/imanum/dri027
Advance Access publication on July 20, 2005

Updating and downdating an upper trapezoidal sparse orthogonal factorization

ÁNGEL SANTOS-PALOMO AND PABLO GUERRERO-GARCÍA
Departamento de Matemática Aplicada, University of Málaga, Complejo Tecnológico, Campus Teatinos, 29071 Málaga, Spain

[Received on 20 August 2001; revised on 22 February 2005]

We describe how to update and downdate an upper trapezoidal sparse orthogonal factorization, namely the sparse QR factorization of A_k^T, where A_k is a tall and thin full column rank matrix formed with a subset of the columns of a fixed matrix A. In order to do this, we have adapted Saunders' techniques of the early 1970s for square matrices to rectangular matrices (with fewer columns than rows) by using the static data structure of George and Heath of the early 1980s, but allowing row downdating on it. An implicitly determined column permutation allows us to dispense with computing a new ordering after each update/downdate; it fits well into the LINPACK downdating algorithm and ensures that the updated trapezoidal factor will remain sparse. We give all the necessary formulae even if the orthogonal factor is not available, and we comment on our implementation using the sparse toolbox of MATLAB 5.

Keywords: sparse linear algebra; orthogonalization; static structure; factorization update and downdate.

1. Motivation and related work

In certain non-simplex active-set methods (also currently known as basis-deficiency-allowing simplex variations) for linear programming we need to solve a sequence of sparse compatible systems of the form

    A_k^T x = b   and   A_k y = c,    (1.1)

or a combined block system with an additional unknown vector d, where b and c are iteration-dependent vectors, and x, y and d are unknown vectors. Furthermore, the matrix A_k ∈ R^{n×m_k} is a tall and thin iteration-dependent full column rank matrix with m_k ≤ n and rank(A_k) = m_k. Such a matrix A_k is formed with a subset of the columns of a fixed matrix A ∈ R^{n×m} with m ≥ n and rank(A) = n, and A_{k+1} is obtained by appending/deleting columns to/from A_k with rank(A_{k+1}) = rank(A_k) ± 1; when exchanging, deletion is done before appending and rank(A_{k+1}) = rank(A_k).

In this paper we describe how to update and downdate the sparse orthogonal QR factorization of A_k^T. Saunders (1972) used it in his implementation of the simplex method for linear programming (hence with m_k = n for all k), and what we do here is to adapt Saunders' methodology to the case m_k < n (in fact, his proposal to do an exchange was to delete before adding, and he proclaimed the numerical stability of this exchange in spite of the rank reduction after the deletion). We use the data structure of George and Heath (George & Heath, 1980; Heath, 1982), but we allow row downdating on such a static structure. As we work on top of a static structure, we do not have to deal with intricate data structures as in dynamic approaches like Davis & Hager (1999) and Edlund (2002).

Abridged version of a talk presented at the 19th Biennial Conference on Numerical Analysis, Dundee, Scotland, June 2001, and at the 5th FoCM Workshop on Numerical Linear Algebra, Santander, Spain, July 2005.
Email: santos@ctima.uma.es
Corresponding author. Email: pablito@ctima.uma.es

© The author. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

Furthermore, small non-zeros which actually would be structural zeros (Barlow, 1990) can be avoided. As pointed out by a referee, the algorithms proposed depend on knowing the structure of all possible updates (i.e. the structure of the larger fixed matrix A) in advance, and thus they are not applicable to situations where potential updates are not known in advance. The interested reader can find a more detailed survey of related work in Santos & Guerrero (2003, Section 1).

As we shall see in the following sections, the key point is the management of an implicitly defined column permutation in order to dispense with determining a new ordering in each step; we shall also show that it fits well into the LINPACK downdating algorithm, and this ensures that the updated trapezoidal factor will remain sparse. We also provide mathematically rigorous descriptions of both the updating and downdating processes, which are usually avoided when dealing with static structures in sparse settings, as well as illustrative examples.

2. Initial factorization and sparse issues

Let us consider the sparse QR factorization of a short and fat matrix A_k^T, e.g. with m_k = 3 rows and n = 7 columns. When we compute this factorization with the MATLAB 5 qr black-box we obtain A_k^T = Q_k (R_k Π_k^T). [Wilkinson diagram omitted in this transcription: the sparsity patterns of A_k^T, Q_k and R_k Π_k^T.] Note that R_k Π_k^T is a staircase-shaped, permuted upper trapezoidal matrix, where the permutation Π_k is implicitly defined by the staircase shape of the structure; in this case Π_k = [e_1, e_3, e_6, e_2, e_4, e_5, e_7] or Π_k^T = [e_1, e_4, e_2, e_5, e_6, e_3, e_7], where e_i ∈ R^7 is the i-th column of the identity matrix I_7. It could even be the case that the staircase has its steps elsewhere [second Wilkinson diagram omitted in this transcription], with Π_k = [e_3, e_4, e_7, e_1, e_2, e_5, e_6] or Π_k^T = [e_4, e_5, e_1, e_2, e_6, e_7, e_3].

In a more general framework, let A_k ∈ R^{n×m_k} be sparse with rank(A_k) = m_k ≤ n. We know that there exists an implicitly defined permutation Π_k of the columns of A_k^T such that

    A_k^T Π_k = Q_k R_k := Q_k [R_i  R_d],   Q_k, R_i ∈ R^{m_k×m_k} and R_d ∈ R^{m_k×(n−m_k)},

with R_k upper trapezoidal, R_i upper triangular and non-singular, and Q_k orthogonal. (We have used R_i and R_d rather than the more standard notation R_1 and R_2 to avoid subindex clashes with R_k.) Nevertheless, the permutation Π_k is not explicitly performed, thus R_k Π_k^T is what we are going to maintain.
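For concreteness, here is a small MATLAB sketch (ours, not the authors' toolbox code; active is a hypothetical index set and A_k is assumed to have full column rank) that computes the Q-less sparse QR of A_k^T and reads the implicit permutation off the staircase:

    Akt = A(:, active)';             % A_k^T: m_k-by-n, a subset of the rows of A'
    R = qr(Akt);                     % Q-less sparse QR: R plays the role of R_k*Pi_k'
    [mk, n] = size(R);
    f = (mk + 1) * ones(1, n);       % f(j) > m_k for columns beyond the staircase
    for i = 1:mk
        j = min(find(R(i, :)));      % leading non-zero of row i: a diagonal of R_k
        f(j) = i;                    % column e_j occupies position i of Pi_k
    end

The vector f built here is exactly the map f_j used in Section 3 to select the Givens rotations.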

Note also that a reordering of the columns of A_k has no impact on R_k, for S_k A_k^T Π_k = (S_k Q_k) R_k, thus only a reordering of the rows of Q_k would take place. This factorization is computable with the black-box qr in MATLAB 5 (but not every A_k^T can be factorized in this way with the sparse sqr package of Matstoms (1994), because the latter is intended for matrices with more rows than columns). However, this factorization is not updatable with qrupdate in MATLAB 5, because qrupdate only works with dense matrices. Also, there are as yet no efficient ways to update this QR factorization using multifrontal techniques.

Enlarging R_k below with n − m_k zero rows, we obtain the sparse Cholesky factor of the positive semidefinite matrix Π_k^T A_k A_k^T Π_k (see Higham, 2002, Theorem 10.9, p. 201). Computing this factor from the sparse QR given above makes it unnecessary to form the product A_k A_k^T, and it avoids the complete pivoting that we would have to perform if a dense semidefinite Cholesky method on A_k A_k^T were used; in other words, our Π_k is defined only for sparsity purposes rather than for numerical purposes. This is not expected to cause numerical difficulties.

It is well known that when A_k is a subset of the columns of a fixed matrix A ∈ R^{n×m} with m ≥ n and rank(A) = n, a static (i.e. a priori set up) structure can be used to try to minimize fill in the triangular factor. The sparse structure of A_k A_k^T is a subset of that of A A^T (George & Heath, 1980), hence an a priori permutation P (not related to Π_k) of the rows of A can be chosen to sparsify the Cholesky factor R of P A A^T P^T. Moreover, when a row is deleted, there must be space in the static structure, as hyperbolic rotations modify the structure in the same way as Givens rotations. Additional remarks on the suitability of the different row orderings that can be obtained with MATLAB 5 can be found in Santos & Guerrero (2003, Section 2). The column order of A does not affect the density of the Cholesky factor R of P A A^T P^T, but it does affect the amount of intermediate work to compute it. When we have to refactorize A_k (or to calculate an initial factorization of A_0), an a priori ordering of the columns of A can be beneficial (Björck, 1996).

Since access to A and R^T is done by columns, they are stored in a simple compressed column scheme that allows us to take advantage of locality of reference (Stewart, 1998, p. 110). As pointed out by Davis & Hager (1999, Section 7.1), in MATLAB every change in the structure of a sparse matrix entails a new copy of the entire matrix. Hence, when MATLAB annihilates an element of a sparse matrix, the operating system does not deallocate the memory locations previously allocated for that element. Thus, rather than overlaying a data structure on the triangular factor (as Vempati et al., 1991, did), we mark with a special value (e.g. NaN in MATLAB) the elements not used in the structure previously set up, and we convert them into zeros just before selecting the submatrix to operate with. Thus, the numerical linear algebra primitives of the sparse toolbox (Gilbert et al., 1992) can be used, but we have to recover the structure after the algebraic operation involved, reinserting NaNs in those elements where fill-in has not yet occurred; this overhead could be avoided with low-level programming.
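The following sketch (again ours, with assumed variable names; p is the a priori row ordering) sets up the NaN-marked static structure from the symbolic factorization and strips the markers around a sparse-toolbox call:

    [cnt, h, par, post, Rsym] = symbfact(A(p, :)', 'col');  % structure of chol(P*A*A'*P')
    Rt = NaN * double(Rsym');       % store R^T by columns; NaN = reserved, still unused
    % ... Givens sweeps overwrite some of these NaNs with numerical values ...
    W = Rt;
    W(isnan(W)) = 0;                % markers become zeros before an algebraic operation
    % ... operate with W via the sparse-toolbox primitives, copy the results back
    %     into Rt, and reinsert NaN wherever fill-in has not yet occurred ...

Since the 'col' analysis of symbfact works on the cross-product of its argument, passing A(p,:)' yields the predicted structure for the Cholesky factor of P A A^T P^T, as required.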

3. Updating the factorization

When we add the p-th constraint to the active set, we have to add a new column to the right of A_k, i.e. we have to add a new row a_p^T to the bottom of A_k^T:

    A_{k+1}^T = [ A_k^T ; a_p^T ].

Then, given the QR factorization of A_k^T Π_k, we want to determine the new QR factorization after the addition of a_p^T; this problem is known as updating the QR factorization. Since A_k^T = Q_k R_k Π_k^T, we can write

    A_{k+1}^T = [ A_k^T ; a_p^T ] = [ Q_k  O ; O  1 ] [ R_k Π_k^T ; a_p^T ],

so we have

    [ Q_k^T  O ; O  1 ] A_{k+1}^T = [ R_k Π_k^T ; a_p^T ].    (3.1)

[Wilkinson diagram omitted in this transcription: the staircase of R_k Π_k^T with the new row a_p^T appended below.]

Let f_j be the index of the column of Π_k where e_j appears; e.g. regarding the first Wilkinson diagram of Section 2, f_1 = 1, f_3 = 2, f_6 = 3 and f_2 = 4 > 3 = m_k; or, with respect to the second Wilkinson diagram of Section 2, f_3 = 1, f_4 = 2, f_7 = 3 and f_1 = 4 > 3 = m_k. For each j ∈ 1:n, we proceed to annihilate the j-th element (if non-zero) of the updated version of a_p^T; such an annihilation is done by a Givens rotation G(f_j, m_k+1) ∈ R^{(m_k+1)×(m_k+1)} (Golub & Van Loan, 1996, Section 5.1.8) only if f_j ≤ m_k, since otherwise the factorization is complete. Hence, after m_k rotations (in the worst case) we have, with

    G := G(f_n, m_k+1) ··· G(f_2, m_k+1) G(f_1, m_k+1),

    F^T G [ R_k Π_k^T ; a_p^T ] = R_{k+1} Π_{k+1}^T,    (3.2)

where Π_{k+1} is a permutation matrix such that R_{k+1} is trapezoidal with non-zero diagonal elements, and F is a suitable row permutation for R_{k+1} Π_{k+1}^T to be staircase-shaped. Then, from (3.1) and (3.2),

    A_{k+1}^T Π_{k+1} = [ Q_k  O ; O  1 ] G^T F · F^T G [ R_k Π_k^T ; a_p^T ] Π_{k+1} =: Q_{k+1} R_{k+1}.

It is well known that in practical algorithms we can dispense with the orthogonal factor, since R_k can be updated even if Q_k is not available. Furthermore, as we shall illustrate in the following example, the static sparsity structure itself implicitly defines both the updated column permutation Π_{k+1} and the suitable row permutation F; in fact, the actual Givens rotation used to annihilate the j-th element is G(j, n+1) ∈ R^{(n+1)×(n+1)}. This ensures that the updated trapezoidal factor will remain sparse.
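A dense conceptual rendering of this sweep (a sketch under our conventions, not the paper's code: Rbar is the n×n enlarged factor of Section 2, i.e. R_k Π_k^T padded with zero rows, and p indexes the incoming column of A):

    w = full(A(:, p))';                        % incoming row a_p' over all n positions
    for j = 1:n
        if w(j) ~= 0
            if Rbar(j, j) == 0                 % empty row: w slots in as a new row of R,
                Rbar(j, :) = w;                % which realizes F and the growth of the
                break                          % implicit permutation
            end
            [G, y] = planerot([Rbar(j, j); w(j)]);  % the rotation G(j, n+1) of the text
            B = G * [Rbar(j, j:n); w(j:n)];         % fill confined to row j and to w
            Rbar(j, j:n) = B(1, :);
            w(j:n) = B(2, :);
        end
    end

In the sparse code the same loop runs over the NaN-marked static structure, so only the predicted pattern of row j is ever touched.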

3.1 Example of updating

Let us consider the matrix whose structure is given in George et al. (1988, p. 104). In MATLAB, symbfact(A, 'col') accomplishes the symbolic analysis of A A^T directly on A and consequently returns an upper bound structure for R^T. [Structure diagrams omitted in this transcription: the patterns of A and of the predicted R^T.] Note that we have considered R^T instead of R because of locality-of-reference issues (MATLAB stores sparse matrices by columns). We have chosen to proceed row-wise in the mathematical description of the updating for familiarity reasons; however, for the example to be as real as possible, we shall work column-wise, postmultiplying by Givens matrices rather than premultiplying by their transposes.

We have rotated rows a_6^T and a_7^T into R^T and now we are going to rotate a_3^T (i.e. k = 2, A_k = [a_6  a_7] and A_{k+1} = [a_6  a_7  a_3], where a_i is column i of A). [Matrix displays omitted in this transcription: the patterns of [R_2^T | a_3] and of [Π_2 R_2^T | a_3], with the working vector located to the right of the vertical bar.] Postmultiplying by 2×2 Givens rotations with entries 1/√2 (acting on columns 1 and 3) and with entries √6/3 and 1/√3 (acting on columns 2 and 3), the elements of the working vector are annihilated downhill. [Resulting display of Π_3 R_3^T omitted in this transcription: its numerical entries, involving 1/√2, √6/2, 2/√3 and √3/3, are garbled.] Note that the fill is restricted to the working vector and the column with respect to which we are rotating. The column just created has appeared in the last position (hence F = I), but this does not happen in general.

Although we have shown here only the columns of R^T that occur in Π_k R_k^T, we actually work within the whole structure to avoid the explicit computation of the permutations Π_k and F, so the mathematical application of F amounts to a simple copy of the working vector to the right position within the sparsity structure.

Finally, the addition of the 5th constraint entails no rotations. [Display of Π_4 R_4^T omitted in this transcription: its numerical entries are garbled.] We thus obtain the permuted trapezoidal matrix for A_4 = [a_6  a_7  a_3  a_5], or for any column permutation A_4 S_4.

4. Downdating the factorization

When we delete the q-th constraint from the active set, we can assume that it corresponds to the first row of A_k^T,

    A_k^T = [ a_q^T ; A_{k+1}^T ],

because if it were not the first row we would only have to exchange beforehand the corresponding row and the first row of A_k^T. Then, given the QR factorization of A_k^T Π_k, we want to determine the new QR factorization after a_q^T is deleted; this problem is known as downdating the QR factorization. It can be shown (see Guerrero, 2002, Section 3.2.4) that the presence of the permutation matrix does not modify in essence the classical downdating algorithm, although the latter turns out not to be suitable for combination with the static structure unless a separate working vector is introduced. As was pointed out by one of the referees, the computations of the classical downdating method could be reorganized to rotate into the bottom row (more precisely, into the row corresponding to the bottom-most non-zero element of the vector q ∈ R^{m_k} defined below), with that row being the working vector. What we are (equivalently) going to do here is to reorganize the computation to rotate into a new top row, with that row being the working vector, to illustrate the suitability of the proposal for use in a LINPACK setting.

In a sparse setting it is usual to dispense with Q_k. Let us show how the implicitly determined column permutation fits well into the LINPACK algorithm (Saunders, 1972) to downdate the Cholesky factor: the system Π_k R_k^T q = a_q is compatible with exactly one solution, and solving it we obtain q; furthermore, the process does not destroy the predicted static structure. First we enlarge the matrix R_k Π_k^T with a suitable number δ_n ≥ 0 to be determined below:

    R̃_k := [ δ_n  O ; q  R_k Π_k^T ] ∈ R^{(m_k+1)×(n+1)}.

[Wilkinson diagram omitted in this transcription: the staircase of R_k Π_k^T bordered by the new first row and column.] Now, if we apply to R̃_k several Givens rotations G(1, j) ∈ R^{(m_k+1)×(m_k+1)} with j = m_k+1 : −1 : 2 to annihilate the non-zeros of q, we obtain

    F^T G R̃_k := F^T G(1,2) ··· G(1,m_k) G(1,m_k+1) R̃_k =: R̃_{k+1},

where R̃_{k+1} is the enlarged matrix corresponding to the new factor, with R_{k+1} trapezoidal with non-zero diagonal elements:

    R̃_{k+1} = [ α  α v^T Π_{k+1}^T ; O  R_{k+1} Π_{k+1}^T ]   (α = ±1),

and F is the permutation needed to move the zero row just created to the (m_k+1)-st position.
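Rendering the sweep in the same dense conceptual setting as before (our sketch with assumed names: Rrows holds the m_k rows of R_k Π_k^T and qv solves Π_k R_k^T qv = a_q, with δ_n = 0 as derived below):

    E = [0, zeros(1, n); qv, Rrows];           % enlarged matrix; delta_n = 0 here
    for i = size(E, 1):-1:2                    % j = m_k+1 : -1 : 2 in the text
        if E(i, 1) ~= 0
            [G, y] = planerot([E(1, 1); E(i, 1)]);  % zero the i-th entry of column 1
            E([1 i], :) = G * E([1 i], :);          % fill confined to the working row
        end
    end
    % On exit abs(E(1,1)) = 1, E(1, 2:end) = alpha * aq' (compare it with a_q' to
    % trigger a refactorization), and one row of E(2:end, 2:end) has become zero.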

Since G and F are orthogonal matrices, R̃_{k+1}^T R̃_{k+1} = R̃_k^T R̃_k. Multiplying out and comparing blocks, we have that Π_{k+1} v = Π_k R_k^T q = a_q and δ_n^2 = 1 − ‖q‖_2^2 = 0, because

    R̃_k^T R̃_k = [ δ_n^2 + ‖q‖_2^2   q^T R_k Π_k^T ; Π_k R_k^T q   Π_k R_k^T R_k Π_k^T ]

and

    R̃_{k+1}^T R̃_{k+1} = [ 1   v^T Π_{k+1}^T ; Π_{k+1} v   Π_{k+1} (R_{k+1}^T R_{k+1} + v v^T) Π_{k+1}^T ].

Thus Π_{k+1} R_{k+1}^T R_{k+1} Π_{k+1}^T = Π_k R_k^T R_k Π_k^T − a_q a_q^T, which is what we wanted, for

    A_{k+1} A_{k+1}^T = Π_{k+1} R_{k+1}^T R_{k+1} Π_{k+1}^T = Π_k R_k^T R_k Π_k^T − a_q a_q^T = A_k A_k^T − a_q a_q^T.

As in the algorithm given above, the Givens matrices are (m_k+1)×(m_k+1). Using the additional working row we can get useful information to trigger a refactorization when Π_{k+1} v and a_q differ substantially; furthermore, the data structure always has enough space allocated to work in, and the fill-in is confined to the working row, as in the updating algorithm but not in the classical downdating algorithm (Guerrero, 2002, Section 3.2.4).

There is a subtle difference from the m_k > n LINPACK case, in which q is the first row of an orthonormal matrix (a matrix whose columns are orthonormal vectors, cf. Stewart, 1998, p. 56) and hence ‖q‖_2 < 1 in general. On the other hand, we can ensure ‖q‖_2 = 1 in exact arithmetic when m_k ≤ n, since q is then the first row of an orthogonal matrix. This case was excluded from LINPACK because it leads to a reduction in rank after the downdate but, as was pointed out in 1972 by Saunders (1972), numerical difficulties are not expected.

4.1 Example of downdating

Let us go on with the example of Section 3.1; now we want to delete a_7^T. First we solve the compatible system Π_4 R_4^T q = a_7 to obtain q = [0; √6/3; 1/√3; 0]; as was pointed out by Heath (1982, p. 229) (with his R_1 being the same as our R_i),

    "It is unnecessary to extract R_1 from the data structure or write a special back substitution routine to skip over the zero rows, since the same effect may be obtained by simply setting the zero diagonal elements equal to 1 and noting that the corresponding components of the right-hand sides will already be zero."

Hence solving this system does not need any special back-substitution routine: we can proceed by overlaying Π_4 R_4^T on I_6 to obtain [0; 0; √6/3; 1/√3; 0; 0] and then select the first, third, fourth and fifth components.
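In MATLAB the quoted device needs only a couple of lines (a sketch with assumed names: Rt holds Π_4 R_4^T embedded in its 6×6 static structure, with the NaN markers already stripped):

    T = Rt;                                  % lower triangular, with empty columns
    z = find(diag(T) == 0);                  % positions outside the staircase
    T(sub2ind(size(T), z, z)) = 1;           % unit "dummy" diagonals, as Heath suggests
    qfull = T \ a7;                          % one ordinary sparse triangular solve
    qv = qfull(setdiff(1:length(qfull), z)); % keeps components 1, 3, 4 and 5 here

The components of qfull at the dummy positions come out zero by compatibility, so nothing has to be skipped explicitly.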

Now we enlarge Π_4 R_4^T and rotate with respect to the first column to annihilate the non-zero elements of q westbound. [Matrix display omitted in this transcription: the enlarged matrix [δ_n, q^T; O, Π_4 R_4^T], with the working vector now located to the left of the vertical bar.] Postmultiplying by 2×2 Givens rotations with entries 0 and 1 (acting on columns 1 and 4) and with entries 1/√3 and √6/3 (acting on columns 1 and 3), we obtain the transposed enlarged factor [R̃_5^T  O] F. [Resulting display omitted in this transcription: its numerical entries, involving 1/√2, √6/2, 2/√3 and √3/3, are garbled.]

Note that the result is the same (barring signs and the last column) as just after having annihilated the first element when we were rotating a_3^T, that the vector a_7 has been recovered in the additional column, and that in this case α = 1. We have to reset to NaN the zeros just created in order to conserve the predicted structure for R^T in the columns involved. The zero column just created has not appeared in the last position, hence F ≠ I in general. Once again, the work with the static structure avoids the explicit computation of the permutation F. We can check the result using MATLAB with [Q,R] = qr(sparse([...])).

5. Conclusions and future work

We have described how to update and downdate an upper trapezoidal sparse orthogonal factorization. We have adapted the techniques of Saunders (1972) for square matrices to rectangular matrices (with fewer columns than rows), by using the static data structure of George and Heath (George & Heath, 1980; Heath, 1982) but allowing sparse row downdating on it. An implicitly determined column permutation allows us to dispense with its recomputation; we have shown how it fits well into the LINPACK downdating algorithm and ensures that the updated trapezoidal factor will remain sparse.

A FORTRAN implementation of the updating process (without the downdating) can be found in the SPARSPAK package, which is currently being translated into C++ as described by George & Liu (1999).

The availability of this SPARSPAK++ package would allow rapid prototyping of a low-level implementation of the sparse proposal given here. The updatable sparse factorization described in this paper has now been implemented in a MATLAB toolbox, which has been used to solve the sequence of sparse compatible systems needed to implement certain non-simplex active-set methods for linear programming. The formulae needed and the computational results obtained when solving the 15 smallest NETLIB problems with a highly degenerate Phase I are the subject of a forthcoming article (Guerrero & Santos, 2003), in which we also comment on several advantages of using QR rather than LU, its suitability for a combined interior-point/simplex methodology, and several parallelization issues.

The numerical analysis of the behaviour in floating-point arithmetic when solving dense quadratic programs with an orthogonal factorization of A_k similar to the one given in Section 2 was developed by Powell (1981, and references therein). When numerical difficulties with the rank arise, the matrix R_k can be postprocessed as indicated by Heath (1982) to obtain a rank-revealing factorization. At present we deal only with problems in which we can choose an a priori permutation P of the rows of A such that the Cholesky factor of P A A^T P^T is sparse. When this cannot be done (e.g. when A has dense columns), it would be better to use dynamic techniques as in Davis & Hager (1999) and Edlund (2002), adapting them to the semidefinite case to work with A_k A_k^T, or else regularizing the problem to work with σ I_n + A_k A_k^T.

Acknowledgements

The authors thank the referees for having put a lot of work into producing the reports and marking up our previous manuscript, and for providing us with some references that we were not aware of. We also want to thank John Gilbert for confirming that the sparse orthogonal factorization in MATLAB 5 is row-oriented and Givens-based, and for providing us with technical details about the interface, as well as Christof Vömel at CERFACS for several remarks on the first sections.

REFERENCES

BARLOW, J. L. (1990) On the use of structural zeros in orthogonal factorization. SIAM J. Sci. Stat. Comput., 11.
BJÖRCK, Å. (1996) Numerical Methods for Least Squares Problems. Philadelphia, PA: SIAM.
DAVIS, T. & HAGER, W. (1999) Modifying a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl., 20.
EDLUND, O. (2002) A software package for sparse orthogonal factorization and updating. ACM Trans. Math. Softw., 28.
GEORGE, A. & HEATH, M. T. (1980) Solution of sparse linear least squares problems using Givens rotations. Linear Algebra Appl., 34.
GEORGE, A. & LIU, J. (1999) An object-oriented approach to the design of a user interface for a sparse matrix package. SIAM J. Matrix Anal. Appl., 20.
GEORGE, A., LIU, J. & NG, E. (1988) A data structure for sparse QR and LU factorizations. SIAM J. Sci. Stat. Comput., 9.
GILBERT, J. R., MOLER, C. B. & SCHREIBER, R. S. (1992) Sparse matrices in MATLAB: design and implementation. SIAM J. Matrix Anal. Appl., 13.
GOLUB, G. H. & VAN LOAN, C. F. (1996) Matrix Computations, 3rd edn. Baltimore, MD: Johns Hopkins University Press.
GUERRERO-GARCÍA, P. (2002) Range space methods for sparse linear programs. Ph.D. Thesis, Department of Applied Mathematics, University of Málaga. (In Spanish.)

GUERRERO-GARCÍA, P. & SANTOS-PALOMO, Á. (2003) Solving a sequence of sparse compatible systems. Technical Report MA-02-03, Department of Applied Mathematics, University of Málaga.
HEATH, M. T. (1982) Some extensions of an algorithm for sparse linear least squares problems. SIAM J. Sci. Stat. Comput., 3.
HIGHAM, N. J. (2002) Accuracy and Stability of Numerical Algorithms, 2nd edn. Philadelphia, PA: SIAM.
MATSTOMS, P. (1994) Sparse QR factorization with applications to linear least squares problems. Ph.D. Thesis, Department of Mathematics, Linköping University.
POWELL, M. J. D. (1981) An upper triangular matrix method for quadratic programming. Nonlinear Programming 4 (O. L. Mangasarian, R. R. Meyer & S. M. Robinson eds). New York: Academic Press.
SANTOS-PALOMO, Á. & GUERRERO-GARCÍA, P. (2003) Updating and downdating an upper trapezoidal sparse orthogonal factorization. Technical Report MA-02-02, Department of Applied Mathematics, University of Málaga.
SAUNDERS, M. A. (1972) Large-scale linear programming using the Cholesky factorization. Technical Report CS-252, Computer Science Department, Stanford University.
STEWART, G. W. (1998) Matrix Algorithms I: Basic Decompositions. Philadelphia, PA: SIAM.
VEMPATI, N., SLUTSKER, I. W. & TINNEY, W. F. (1991) Enhancements to Givens rotations for power system state estimation. IEEE Trans. Power Syst., 6.
