Exploiting off-diagonal rank structures in the solution of linear matrix equations

Stefano Massei: Exploiting off-diagonal rank structures in the solution of linear matrix equations. Based on joint works with D. Kressner (EPFL), M. Mazza (IPP of Munich), D. Palitta (MPI DCTS of Magdeburg) and L. Robol (CNR of Pisa). Lyon, 15 May 2018.

Introduction

In many different settings, such as control problems and PDEs, we deal with solving

$$AX + XA = C \quad \text{(Lyapunov equation)}, \qquad AX + XB = C \quad \text{(Sylvester equation)},$$

where $A, B, C, X \in \mathbb{C}^{m \times m}$. A Sylvester equation is equivalent to the $m^2 \times m^2$ linear system

$$(I \otimes A + B^T \otimes I)\,\mathrm{vec}(X) = \mathrm{vec}(C).$$
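As a sanity check, this equivalence can be verified numerically on a small instance. A minimal sketch (illustrative names and sizes, not from the talk), using SciPy's solve_sylvester on the $m \times m$ equation:

```python
# Verify (I x A + B^T x I) vec(X) = vec(C) against a direct Sylvester solve.
import numpy as np
from scipy.linalg import solve_sylvester

m = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m)) + m * np.eye(m)   # shifts separate the spectra
B = rng.standard_normal((m, m)) + m * np.eye(m)
C = rng.standard_normal((m, m))

X = solve_sylvester(A, B, C)                      # m x m equation

# Equivalent m^2 x m^2 linear system (column-major vec).
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(m))
x = np.linalg.solve(K, C.flatten(order="F"))
print(np.linalg.norm(x - X.flatten(order="F")))   # ~ machine precision
```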

Introduction

In the small-scale scenario, the state-of-the-art techniques, e.g. the Bartels and Stewart algorithm, require computing the Schur forms of A and B by the QR method. This costs $O(m^3)$ flops and $O(m^2)$ storage, much better than the $O(m^6)$ flops that usual direct methods on the big linear system would require! In the case of large-scale matrices ($m \gtrsim 10^4$) it is essential to exploit structure in the coefficients and in the solution X.

Introduction

A favorable case is when the right-hand side C has low rank and the spectra of A and $-B$ are well separated (for example, separated by a line). In fact, in this situation the solution X exhibits a low numerical rank:

$$X \approx UV^T, \qquad U, V \in \mathbb{C}^{m \times k}, \quad k \ll m.$$

In many applications the matrices A and B are sparse and positive definite, which implies the separation of the spectra. Under these assumptions, we can employ low-rank iterative algorithms, like Krylov methods, which have $O(m)$ cost in flops and storage.
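A minimal sketch illustrating this phenomenon, assuming a symmetric positive definite tridiagonal A and a rank-1 right-hand side (seeds and sizes are illustrative):

```python
# The Lyapunov solution for a rank-1 RHS has rapidly decaying singular values.
import numpy as np
from scipy.linalg import solve_sylvester

m = 200
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # trid(-1, 2, -1), SPD
u = np.random.default_rng(1).standard_normal((m, 1))
C = u @ u.T                                            # rank-1 right-hand side

X = solve_sylvester(A, A, C)
s = np.linalg.svd(X, compute_uv=False)
print(s[:20] / s[0])                                   # fast geometric-like decay
```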

Rank structure in the solution

[Figure: singular value decay in the solution of AX + XB = C with rank(C) = 1, for two different configurations of the spectra of A and B.]

Rank structure in the solution

Theorem (Beckermann-Townsend). Let X be such that AX + XB = C, where C has rank k, and let A, B be normal matrices. If E and F are two sets which contain the spectra of A and $-B$, respectively, then the singular values of X verify

$$\sigma_{1+lk}(X) \le \|X\|_2 \, Z_l(E, F) := \|X\|_2 \inf_{r \in \mathcal{R}_{l,l}} \frac{\max_E |r(z)|}{\min_F |r(z)|}, \qquad l \ge 1,$$

where $\mathcal{R}_{l,l}$ is the set of rational functions of degree at most $(l, l)$.

- The quantities $Z_l(E, F)$ are known in the literature as Zolotarev numbers.
- The normality hypothesis on A and B can be relaxed by switching to numerical ranges.
- If E and F are separated by a line, this result ensures a fast decay in the singular values of the solution.
- Exact rank in the right-hand side can be replaced by numerical rank.

Zolotarev numbers decay

$$Z_l(E, F) := \inf_{r \in \mathcal{R}_{l,l}} \frac{\max_E |r(z)|}{\min_F |r(z)|}$$

If A = B with A symmetric positive definite, then E = [a, b], F = [-b, -a] and

$$Z_l([a, b], [-b, -a]) \le 4\rho^{-2l}, \qquad \rho = \exp\left(\frac{\pi^2}{2 \log(4\, b/a)}\right), \quad 0 < a < b < \infty.$$

For more general cases, one can link the decay with the logarithmic capacity of the condenser with plates E and F:

$$Z_l(E, F) \le 4\rho^{-2l}, \qquad \rho = e^{\mathrm{Cap}(E, F)^{-1}}.$$

This is not very informative unless you know something about E and F, but some cases can be solved explicitly, like $E = S_1$, $F = \mathbb{R} \setminus S_1$.
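The explicit interval bound can be compared with the actual singular values of a Lyapunov solution. A sketch, assuming A symmetric positive definite and a rank-1 right-hand side (all names illustrative):

```python
# Compare sigma_{1+l}(X)/||X||_2 with the bound 4*rho^(-2l) for E = [a, b].
import numpy as np
from scipy.linalg import solve_sylvester, eigvalsh

m = 300
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
u = np.random.default_rng(2).standard_normal((m, 1))
X = solve_sylvester(A, A, u @ u.T)

ev = eigvalsh(A)                    # spectrum of A, ascending
a, b = ev[0], ev[-1]
rho = np.exp(np.pi**2 / (2 * np.log(4 * b / a)))
s = np.linalg.svd(X, compute_uv=False)
for l in range(1, 8):
    print(l, s[l] / s[0], 4 * rho**(-2 * l))   # ratio vs. Zolotarev bound
```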

Motivation

Recently, some attention has been paid to the case in which A, B and C are banded [1, 2]. In particular, it has been shown that if A and B are additionally well conditioned, then the solution X of the Sylvester equation is numerically banded.

[1] A. Haber, M. Verhaegen. Sparse solution of the Lyapunov equation for large-scale interconnected systems, Automatica.
[2] D. Palitta, V. Simoncini. Numerical methods for Lyapunov equations with banded symmetric data.

Band structure in the solution

[Figure: log-scale plot of the entries of the solution of AX + XB = C; A, B ∈ R^{m×m} tridiagonal positive definite with κ(A), κ(B) < 50, C symmetric tridiagonal.]

Band structure in the solution

Consider the following experiment: m = 300, A = B = trid(-1, 2, -1) ∈ R^{m×m}, C a random m × m diagonal matrix, X the solution of AX + XA = C. We study the decay along the bandwidth of X by plotting the quantity

$$\max |\mathrm{diag}(X, l)|, \qquad l = 1, \ldots, m.$$

Moreover, we plot the distribution of the singular values $\sigma_l$ of the subdiagonal block $X(\tfrac{m}{2}+1 : m,\ 1 : \tfrac{m}{2})$.
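A rough reproduction of this experiment in NumPy/SciPy (the seed and the dense solver are illustrative choices):

```python
# Decay along the bandwidth of X and singular values of its subdiagonal block.
import numpy as np
from scipy.linalg import solve_sylvester

m = 300
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # trid(-1, 2, -1)
C = np.diag(np.random.default_rng(3).standard_normal(m))
X = solve_sylvester(A, A, C)

band = [np.max(np.abs(np.diag(X, -l))) for l in range(1, m)]
s = np.linalg.svd(X[m // 2:, :m // 2], compute_uv=False)  # subdiagonal block
print(band[:10])        # decay along the bandwidth
print(s[:10] / s[0])    # singular value decay of the block
```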

Band structure in the solution

[Figure: decay along the band (left) and decay of the singular values σ_l of the subdiagonal block (right).]

Quasiseparable matrices

Definition. $A \in \mathbb{R}^{m \times m}$ has quasiseparable rank k if the maximum rank among the off-diagonal submatrices of A is k.

Properties:
(i) $\mathrm{qrank}(A + B) \le \mathrm{qrank}(A) + \mathrm{qrank}(B)$,
(ii) $\mathrm{qrank}(A \cdot B) \le \mathrm{qrank}(A) + \mathrm{qrank}(B)$,
(iii) $\mathrm{qrank}(A) = \mathrm{qrank}(A^{-1})$.
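A brute-force helper (not from the talk) that estimates the quasiseparable rank by scanning the maximal off-diagonal submatrices, and illustrates property (iii) on a tridiagonal matrix:

```python
# Numerically estimate the quasiseparable rank of a dense matrix.
import numpy as np

def qrank(A, tol=1e-10):
    """Largest numerical rank among the off-diagonal submatrices of A."""
    m = A.shape[0]
    r = 0
    for i in range(1, m):
        for blk in (A[i:, :i], A[:i, i:]):      # lower-left / upper-right
            s = np.linalg.svd(blk, compute_uv=False)
            if s[0] > 0:
                r = max(r, int(np.sum(s > tol * s[0])))
    return r

T = 2 * np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)   # trid(-1, 2, -1)
print(qrank(T), qrank(np.linalg.inv(T)))               # both 1
```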

ɛ-quasiseparable matrices

Definition. $A \in \mathbb{R}^{m \times m}$ has ɛ-quasiseparable rank k if for every off-diagonal block Y of A it holds $\sigma_{k+1}(Y) \le \varepsilon$.

Lemma. Let $A \in \mathbb{R}^{m \times m}$ be of ɛ-quasiseparable rank k. Then there exists $\delta A$ such that

$$\mathrm{qrank}(A + \delta A) \le k, \qquad \|\delta A\|_2 \le 2\varepsilon\sqrt{m}.$$

Quasiseparable structure in the solution

Assume A, B and C have low quasiseparable rank, and consider the following partitioning for the Sylvester equation AX + XB = C:

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} + \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}.$$

Looking at the (2, 1) block we get the equation

$$A_{22} X_{21} + X_{21} B_{11} = C_{21} - A_{21} X_{11} - X_{22} B_{21}.$$

So $X_{21}$ solves another Sylvester equation with a low-rank right-hand side. The same holds for $X_{12}$. If the diagonal blocks have well separated spectra, then $X_{21}$ and $X_{12}$ have low numerical ranks.

Quasiseparable structure in the solution

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} + \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} \tag{1}$$

Theorem. Let A, B be Hermitian positive definite matrices with spectra contained in [a, b] and with quasiseparable ranks $k_A$ and $k_B$, respectively. Let C be quasiseparable of rank $k_C$. Then for the solution of (1) it holds

$$\sigma_{1+lk}(X_{21}) \le \|X_{21}\|_2 \, Z_l([a, b], [-b, -a]), \qquad k := k_A + k_B + k_C.$$

[Figure: singular values of X_21 versus the bound from Zolotarev. Lyapunov equation AX + XA = C with matrices of size m = 300; A is random tridiagonal positive definite with spectrum in [0.9, 3.5], C is a random diagonal matrix.]

Representing Quasiseparable matrices

Every quasiseparable matrix can be block partitioned as

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \underbrace{\begin{bmatrix} A_{11} & \\ & A_{22} \end{bmatrix}}_{\text{block quasisep.}} + \underbrace{\begin{bmatrix} & A_{12} \\ A_{21} & \end{bmatrix}}_{\text{low-rank}}.$$

- Simple idea: store low-rank blocks as outer products, and diagonal ones recursively (H-matrices, HODLR).
- More refined idea: represent interactions between levels as well, by means of nested bases (H²-matrices, HSS).

Representing Quasiseparable matrices

Within these formats:

- the storage complexity is either O(m log m) (HODLR) or O(m) (HSS);
- all the matrix operations cost either O(m log^α m) (HODLR) or O(m) (HSS);
- operations can be performed adaptively in the rank of the off-diagonal blocks, by adding a recompression stage;
- recompression does not change the complexity, which is good for handling ɛ-quasiseparability.
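A minimal HODLR compression sketch, using plain truncated SVDs for the off-diagonal blocks (a real implementation adds the fast arithmetic and recompression mentioned above; all names are illustrative):

```python
# Store off-diagonal blocks as truncated outer products, diagonals recursively.
import numpy as np

def hodlr(A, tol=1e-10, min_size=32):
    m = A.shape[0]
    if m <= min_size:
        return ("dense", A.copy())
    h = m // 2
    def lowrank(B):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        r = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
        return U[:, :r] * s[:r], Vt[:r, :]       # outer-product factors
    return ("node",
            hodlr(A[:h, :h], tol, min_size), hodlr(A[h:, h:], tol, min_size),
            lowrank(A[:h, h:]), lowrank(A[h:, :h]))

def to_dense(H):
    if H[0] == "dense":
        return H[1]
    _, H11, H22, (U12, V12), (U21, V21) = H
    top = np.hstack([to_dense(H11), U12 @ V12])
    bot = np.hstack([U21 @ V21, to_dense(H22)])
    return np.vstack([top, bot])

T = 2 * np.eye(256) - np.eye(256, k=1) - np.eye(256, k=-1)
H = hodlr(np.linalg.inv(T))                      # inverse is quasiseparable
print(np.linalg.norm(to_dense(H) - np.linalg.inv(T)))
```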

Solving linear matrix equations

If A, B are positive definite, an easy way to get a fast solver for AX + XB = C is to combine the fast HODLR/HSS arithmetic with the classical strategies:

- Exploit the relation

$$\mathrm{sign}\left(\begin{bmatrix} A & C \\ 0 & -B \end{bmatrix}\right) = \begin{bmatrix} I & 2X \\ 0 & -I \end{bmatrix}$$

by using the matrix sequences arising from Newton's method:

$$A_{j+1} = \tfrac{1}{2}(A_j + A_j^{-1}), \qquad B_{j+1} = \tfrac{1}{2}(B_j + B_j^{-1}), \qquad C_{j+1} = \tfrac{1}{2}(C_j + A_j^{-1} C_j B_j^{-1}).$$

- Discretize the closed formula

$$X = \int_0^{+\infty} e^{-tA}\, C\, e^{-tB}\, dt \approx \sum_{j=1}^{s} w_j\, e^{-\theta_j A}\, C\, e^{-\theta_j B}$$

and evaluate the exponential functions with a rational approximant, e.g. Padé with scaling and squaring.

A dense sketch of the sign iteration is given below.
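This is a minimal dense version of the sign iteration; the talk's point is to run the same recurrences in HODLR/HSS arithmetic with recompression after each operation. Names are illustrative:

```python
# Sign-function iteration for AX + XB = C with positive definite A, B:
# A_j, B_j -> I and C_j -> 2X as j grows.
import numpy as np

def sylv_sign(A, B, C, maxit=50, tol=1e-12):
    Aj, Bj, Cj = A.copy(), B.copy(), C.copy()
    Ia, Ib = np.eye(A.shape[0]), np.eye(B.shape[0])
    for _ in range(maxit):
        Ai, Bi = np.linalg.inv(Aj), np.linalg.inv(Bj)
        Cj = 0.5 * (Cj + Ai @ Cj @ Bi)           # update C before A, B
        Aj, Bj = 0.5 * (Aj + Ai), 0.5 * (Bj + Bi)
        if np.linalg.norm(Aj - Ia) < tol and np.linalg.norm(Bj - Ib) < tol:
            break
    return 0.5 * Cj                              # C_j converges to 2X

m = 100
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
C = np.random.default_rng(4).standard_normal((m, m))
X = sylv_sign(A, A, C)
print(np.linalg.norm(A @ X + X @ A - C))
```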

Laplace equation

We consider the Laplace equation on the unit square:

$$-\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = f(x, y) \quad (x, y) \in \Omega, \qquad u(x, y) = 0 \quad (x, y) \in \partial\Omega, \qquad \Omega = [0, 1]^2,$$

which provides the Lyapunov equation AX + XA = C with

$$A = \frac{1}{h^2}\begin{bmatrix} 2 & -1 & & \\ -1 & \ddots & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{bmatrix}, \qquad C_{ij} = f(x_i, y_j).$$

We choose $f(x, y) = \log(0.1 + |x - y|)$, which generates a right-hand side with low quasiseparable rank.
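A small dense instance of this example (illustrative n; for large n the structured solvers replace the dense solve):

```python
# Finite differences on the unit square: Lyapunov equation AX + XA = C.
import numpy as np
from scipy.linalg import solve_sylvester

n = 256
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h                                  # interior nodes
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
C = np.log(0.1 + np.abs(x[:, None] - x[None, :]))            # C_ij = f(x_i, y_j)

X = solve_sylvester(A, A, C)
print(np.linalg.norm(A @ X + X @ A - C) / np.linalg.norm(C))
```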

Laplace equation

[Figure/table: timings of Sign+HODLR, Exp+HODLR and dense lyap against the size n, with an O(n log² n) reference line; the quasiseparable rank and the residual of the computed solution are reported for each size.]

Room for improvement

Even if the two methods scale nicely with the dimension, they rely heavily on the recompression steps required by the fast HODLR/HSS arithmetic. This suggests that we are not exploiting the information about the ɛ-quasiseparable rank of the final solution that is provided by the theory. For this reason, we came up with another idea that fits more naturally with the structure in the data.

Updating a linear matrix equation

Suppose that we have already computed $X_0$ solving $A_0 X_0 + X_0 B_0 = C_0$, and that we are interested in finding X which verifies

$$(A_0 + \delta A)X + X(B_0 + \delta B) = C_0 + \delta C.$$

Can we do something better than starting the computation from scratch? If $\delta A$, $\delta B$ and $\delta C$ are rank structured, then yes!

Updating a linear matrix equation

Let us denote $\delta X := X - X_0$; then

$$(A_0 + \delta A)(X_0 + \delta X) + (X_0 + \delta X)(B_0 + \delta B) = C_0 + \delta C. \tag{2}$$

By subtracting $A_0 X_0 + X_0 B_0 = C_0$ from equation (2), we get

$$(A_0 + \delta A)\,\delta X + \delta X\,(B_0 + \delta B) = \delta C - \delta A\, X_0 - X_0\, \delta B. \tag{3}$$

If $\delta A$, $\delta B$ and $\delta C$ are low-rank matrices, then the same holds for the right-hand side $UV^* := \delta C - \delta A X_0 - X_0 \delta B$; in particular, both U and V have at most $\mathrm{rank}(\delta A) + \mathrm{rank}(\delta B) + \mathrm{rank}(\delta C)$ columns. Finally, if $A_0 + \delta A$ and $-(B_0 + \delta B)$ have well separated spectra, then $\delta X$ is numerically low-rank.

Updating a linear matrix equation

Algorithm 1 Solving $(A_0 + \delta A)(X_0 + \delta X) + (X_0 + \delta X)(B_0 + \delta B) = C_0 + \delta C$
1: $X_0 \leftarrow$ solve_Sylv($A_0$, $B_0$, $C_0$)
2: Compute U, V such that $\delta C - \delta A X_0 - X_0 \delta B = UV^*$
3: $\delta X \leftarrow$ low_rank_Sylv($A_0 + \delta A$, $B_0 + \delta B$, U, V)
4: return $X_0 + \delta X$

The procedure low_rank_Sylv can be any low-rank solver. For the experiments shown in this presentation we employed the extended Krylov method, which projects the equation on the tensorized subspace $\mathcal{U}_t \otimes \mathcal{V}_t$, where

$$\mathcal{U}_t := \mathrm{span}\{U, A^{-1}U, AU, A^{-2}U, \ldots, A^{t-1}U, A^{-t}U\},$$
$$\mathcal{V}_t := \mathrm{span}\{V, B^{-1}V, BV, B^{-2}V, \ldots, B^{t-1}V, B^{-t}V\},$$

with $A = A_0 + \delta A$ and $B = B_0 + \delta B$.
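An illustrative dense sketch of Algorithm 1. The names are hypothetical, and the correction equation is solved by Galerkin projection onto a crude extended Krylov type basis; a production implementation would build the basis with factored solves, reorthogonalization and a residual-based stopping criterion:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def ek_basis(A, U, t):
    """Orthonormal basis of span{U, A^{-1}U, AU, ..., A^{t-1}U, A^{-t}U}."""
    Ai = np.linalg.inv(A)          # fine for a sketch; use solves in practice
    blocks, P, Q = [U], U, U
    for _ in range(t):
        P, Q = Ai @ P, A @ Q
        blocks += [P, Q]
    return np.linalg.qr(np.hstack(blocks))[0]

def update_sylv(A0, B0, C0, dA, dB, dC, t=10, tol=1e-12):
    X0 = solve_sylvester(A0, B0, C0)            # step 1: unperturbed solve
    R = dC - dA @ X0 - X0 @ dB                  # step 2: low-rank RHS
    U, s, Vt = np.linalg.svd(R)
    r = max(1, int(np.sum(s > tol * s[0])))
    U, V = U[:, :r], Vt[:r, :].T
    A, B = A0 + dA, B0 + dB
    Qu, Qv = ek_basis(A, U, t), ek_basis(B.T, V, t)
    # step 3: Galerkin projection of A dX + dX B = R onto span(Qu) x span(Qv)
    Y = solve_sylvester(Qu.T @ A @ Qu, Qv.T @ B @ Qv, Qu.T @ R @ Qv)
    return X0 + Qu @ Y @ Qv.T                   # step 4

m = 120
A0 = 3 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
C0 = np.random.default_rng(5).standard_normal((m, m))
u = np.random.default_rng(6).standard_normal((m, 1))
dA = 0.1 * (u @ u.T)                            # rank-1 perturbation
X = update_sylv(A0, A0, C0, dA, np.zeros((m, m)), np.zeros((m, m)))
print(np.linalg.norm((A0 + dA) @ X + X @ A0 - C0))
```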

A divide and conquer method

Suppose that every off-diagonal block of A, B and C has rank (at most) k, and consider the 2 × 2 block partitioning of the Sylvester equation AX + XB = C used before. Splitting A, B and C into their block diagonal and block antidiagonal parts leads to two equations,

$$\begin{bmatrix} A_{11} & \\ & A_{22} \end{bmatrix} X_0 + X_0 \begin{bmatrix} B_{11} & \\ & B_{22} \end{bmatrix} = \begin{bmatrix} C_{11} & \\ & C_{22} \end{bmatrix},$$

$$A\,\delta X + \delta X\, B = \begin{bmatrix} & C_{12} \\ C_{21} & \end{bmatrix} - \begin{bmatrix} & A_{12} \\ A_{21} & \end{bmatrix} X_0 - X_0 \begin{bmatrix} & B_{12} \\ B_{21} & \end{bmatrix},$$

one with block diagonal coefficients and the other with a low-rank right-hand side.

A divide and conquer method

In particular, the equation with block diagonal coefficients can be decoupled into two equations of dimension n/2, while the other provides a contribution of (numerical) low rank. Expanding the recursion we get:

[Figure: the recursion tree, with small diagonal blocks solved at the leaves and low-rank corrections added at every level.]


A divide and conquer method

- It is crucial to ensure the separation of the spectra down to the lower levels of the recursion. This is guaranteed if the separation holds for the numerical ranges of A and B.
- The rank in the smallest off-diagonal blocks of the solution seems to grow logarithmically. This is not the case when the coefficients are quasiseparable, so it is advisable to recompress after each sum.
- If A and B are sparse (e.g. banded), the Krylov subspaces can be generated using sparse arithmetic.

Algorithm 2 Solving AX + XB = C with A, B and C HODLR matrices
1: procedure D&C_Sylv(A, B, C)
2:   if A, B are small matrices then
3:     return solve_Sylv(A, B, C)
4:   else
5:     Decompose $A = \begin{bmatrix} A_{11} & \\ & A_{22} \end{bmatrix} + \delta A$, $B = \begin{bmatrix} B_{11} & \\ & B_{22} \end{bmatrix} + \delta B$, $C = \begin{bmatrix} C_{11} & \\ & C_{22} \end{bmatrix} + \delta C$
6:     $X_{11} \leftarrow$ D&C_Sylv($A_{11}$, $B_{11}$, $C_{11}$)
7:     $X_{22} \leftarrow$ D&C_Sylv($A_{22}$, $B_{22}$, $C_{22}$)
8:     Set $X_0 \leftarrow \begin{bmatrix} X_{11} & \\ & X_{22} \end{bmatrix}$
9:     Compute U and V such that $UV^* = \delta C - \delta A X_0 - X_0 \delta B$
10:    $\delta X \leftarrow$ low_rank_Sylv(A, B, U, V)
11:    return $X_0 + \delta X$
12: end if
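A dense-arithmetic sketch of Algorithm 2 (NumPy arrays instead of HODLR matrices, and a dense solve as a stand-in for low_rank_Sylv, so that the recursion is easy to follow):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def anti(M, h):
    """Block antidiagonal part of M with respect to the splitting at h."""
    N = np.zeros_like(M)
    N[:h, h:], N[h:, :h] = M[:h, h:], M[h:, :h]
    return N

def dc_sylv(A, B, C, min_size=64):
    m = A.shape[0]
    if m <= min_size:
        return solve_sylvester(A, B, C)          # lines 2-3: base case
    h = m // 2
    X11 = dc_sylv(A[:h, :h], B[:h, :h], C[:h, :h], min_size)   # line 6
    X22 = dc_sylv(A[h:, h:], B[h:, h:], C[h:, h:], min_size)   # line 7
    X0 = np.zeros((m, m))
    X0[:h, :h], X0[h:, h:] = X11, X22                          # line 8
    dA, dB, dC = anti(A, h), anti(B, h), anti(C, h)            # line 5
    R = dC - dA @ X0 - X0 @ dB                                 # line 9
    return X0 + solve_sylvester(A, B, R)         # line 10: dense stand-in

m = 256
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
C = np.diag(np.random.default_rng(7).standard_normal(m))
X = dc_sylv(A, A, C)
print(np.linalg.norm(A @ X + X @ A - C))
```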

Numerical results: convection-diffusion

We consider the convection-diffusion equation

$$-\Delta u + \mathbf{v} \cdot \nabla u = f(x, y) \quad (x, y) \in \Omega := [0, 1]^2, \qquad u(x, y) = 0 \quad (x, y) \in \partial\Omega,$$

where $\mathbf{v} = [10, 10]$ and $f(x, y) = \log(1 + |x - y|)$. A finite difference discretization on an $(n + 1) \times (n + 1)$ grid leads to the Lyapunov equation AX + XA = C, with a nonsymmetric matrix A and a matrix C whose off-diagonal blocks have low numerical rank.

40 Numerical results: convection diffusion 10 4 Sign+HODLR D&C+HOLDR O(n log n) n Res Sign Res D&C rank , , , , , , , On the left, timings of the two methods with respect to the size of the coefficients. On the right, residual and maximal rank in the off-diagonal blocks of the solution. Speaker: Stefano Massei 31 / 35

Numerical results: temperature model

We consider the Lyapunov equation AX + XA = C coming from a model describing the temperature change of a thermally actuated deformable mirror used in extreme ultraviolet lithography [1]. With $S_m = \mathrm{trid}(1, 0, 1) \in \mathbb{R}^{m \times m}$ and $\mathbf{1}_{m \times m}$ = ones(m, m),

$$A = I_n \otimes (-1.36\, I + S_6) + S_n \otimes I_6, \qquad C = I_n \otimes (\mathbf{1}_{6 \times 6} - I_6) + 0.1\, S_n \otimes \mathbf{1}_{6 \times 6}.$$

The coefficients are block tridiagonal, with bandwidth 6 and 11, respectively.

[Figure: sparsity patterns of A and C.]

[1] A. Haber, M. Verhaegen. Sparse solution of the Lyapunov equation for large-scale interconnected systems, Automatica.

Numerical results: temperature model

[Figure: on the left, time of CG and D&C+HSS with an O(n log n) reference line; on the right, memory consumption of the sparse and HSS representations with an O(n) reference line.]

The CG (considered in [2]) exploits the sparsity of the coefficient matrix and of the right-hand side in $(I \otimes A + A \otimes I)\,x = \mathrm{vec}(C)$. Both methods are stopped when the relative residual reaches the same prescribed tolerance.

Other applications

- Non-local operators: with fractional derivatives, we swap banded matrices for rank structured ones no matter which discretization we choose; the quasiseparable rank is $O(\log(m)\log(\varepsilon^{-1}))$. Rank structures give fast methods, especially when treating separable 2D problems.
- CAREs: one way of solving the continuous-time algebraic Riccati equation $AX + XA - XBX = C$ is to apply Newton's method. This provides the matrix sequence $\{X_k\}$ defined by the recurrence relation

$$(A - X_k B)\, X_{k+1} + X_{k+1}\, (A - B X_k) = C - X_k B X_k.$$

Under the common assumption that B is low-rank, we have a sequence of linear matrix equations with perturbed coefficients, and all the perturbations are low-rank. The updating approach manages to speed up the process after the first iteration. A sketch of this iteration follows.
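A hedged dense sketch of this Newton iteration, with each step solved from scratch (the updating approach would instead recycle the previous solution, since the coefficient changes $X_k B$ and $B X_k$ are low-rank). All names and parameters are illustrative:

```python
# Newton iteration for AX + XA - XBX = C; each step is a Sylvester equation.
import numpy as np
from scipy.linalg import solve_sylvester

def care_newton(A, B, C, maxit=30, tol=1e-12):
    X = solve_sylvester(A, A, C)                 # X_0 from the B = 0 equation
    for _ in range(maxit):
        Xn = solve_sylvester(A - X @ B, A - B @ X, C - X @ B @ X)
        if np.linalg.norm(Xn - X) <= tol * np.linalg.norm(Xn):
            return Xn
        X = Xn
    return X

m = 60
A = 3 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
b = np.random.default_rng(8).standard_normal((m, 1))
B = 0.01 * (b @ b.T)                             # low-rank B
C = np.eye(m)
X = care_newton(A, B, C)
print(np.linalg.norm(A @ X + X @ A - X @ B @ X - C))
```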

Conclusions & References

- Under reasonable assumptions, off-diagonal rank structures in the coefficients are likely to be present in the solution of a linear matrix equation.
- Low-rank perturbations in the coefficients of a linear matrix equation often translate into numerically low-rank variations of the solution.
- The use of low-rank updates can help in designing fast solvers for equations with hierarchically low-rank coefficients.
- Can we deal with 3D problems? Which tensor format is the most suitable?

Full stories:
- S. Massei, M. Mazza, L. Robol. Fast solvers for 2D fractional diffusion equations using rank structured matrices, arXiv.
- D. Kressner, S. Massei, L. Robol. Low-rank updates and a divide and conquer method for linear matrix equations, arXiv.
- S. Massei, D. Palitta, L. Robol. Solving rank structured Sylvester and Lyapunov equations, arXiv.
