Convergence Behavior of Left Preconditioning Techniques for GMRES
ECS 231: Large Scale Scientific Computing
University of California, Davis, Winter Quarter 2013
March 20, 2013
Joshua Zorn

Abstract: Jacobi, successive over-relaxation (SOR), and incomplete LU factorization (ILU) preconditioner matrices are used to alter the convergence behavior of the generalized minimum residual (GMRES) algorithm. The use of these preconditioners significantly reduced the number of iterations needed to solve linear systems. The most consistent preconditioner observed is the ILU preconditioner; in the most extreme case, it reduced the iteration count from roughly the system size down to two iterations. When usable, the SOR preconditioner is inconsistent, but it performs just as well as ILU on the SAYLR1 matrix. The Jacobi preconditioner performs the poorest, failing to significantly reduce the number of iterations needed to solve any system.

Introduction: The GMRES algorithm is a basic method for solving linear systems. The algorithm takes advantage of subspace projection theory to solve linear systems implicitly. Unfortunately, for some linear systems the convergence is slow, and the benefit of using an implicit solver is lost. This issue is addressed through preconditioning. Convergence behavior often depends on the condition number of the system matrix: a relatively large condition number is indicative of poor convergence behavior, while a small condition number is indicative of good convergence behavior.

Theory: The GMRES algorithm solves linear systems of the form shown in Equation 1, in which A is a matrix, b is a vector of known quantities, and x is an unknown vector:

    Ax = b.    (Equation 1)

A left preconditioner M modifies this linear system in the manner seen in Equation 2:

    M^-1 Ax = M^-1 b.    (Equation 2) (Bai, 2013)

Ideally, the product of M^-1 and A is the identity matrix. Unfortunately, if M were chosen equal to A, computing M^-1 would be too expensive, and the linear system would effectively be solved before the implicit algorithm is ever applied. Therefore, an approximation of A is chosen for M. Three types of preconditioners are examined here. The first and simplest is the Jacobi preconditioner, which is simply the diagonal of the system matrix, as seen in Equation 3. This preconditioner fails if any diagonal element of the system matrix is zero.

    M = diag(A) = D.    (Equation 3) (Ferronato, 2012)
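As a concrete illustration (not from the original report), the following minimal MATLAB sketch builds the Jacobi preconditioner of Equation 3 for a made-up example system and forms the left-preconditioned residual of Equation 2. In practice M^-1 is applied by solving with M rather than by explicit inversion, and since M is diagonal it can be stored as a vector, a point revisited in the Conclusion.

% Minimal sketch (example data is assumed): Jacobi left preconditioning.
A = [4 -1 0; -1 4 -1; 0 -1 4];   % small example matrix, nonzero diagonal
b = [1; 2; 3];
x_0 = zeros(3,1);                 % initial guess
M = diag(diag(A));                % Jacobi preconditioner (Equation 3)
r = M \ (b - A*x_0);              % preconditioned residual M^-1 (b - A x_0)
% Equivalent vector form: store only the diagonal and divide elementwise,
% avoiding the n-by-n matrix entirely.
d = diag(A);
r2 = (b - A*x_0) ./ d;            % same result as r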

The SOR preconditioner assumes that the system matrix decomposes in the manner shown in Equation 4, in which D is a diagonal matrix, L is a strictly lower triangular matrix, and U is a strictly upper triangular matrix:

    A = D - L - U.    (Equation 4) (Ferronato, 2012)

The SOR method uses this decomposition to form the preconditioner in Equation 5. Like the Jacobi preconditioner, this method only works if all the diagonal elements are non-zero.

    M = (1/ω)D - L.    (Equation 5) (Ferronato, 2012)

The ILU preconditioner is based on the idea that every matrix has an LU decomposition; however, forming a full LU decomposition requires O(n^3) flops. Therefore, a similar but incomplete decomposition is used, shown in Equation 6. In this decomposition L is a lower triangular matrix, U is an upper triangular matrix, and E is an error matrix measuring the departure from the true LU decomposition. Building the incomplete factors is significantly faster, with a cost that scales roughly with the number of nonzero entries retained.

    A = LU + E.    (Equation 6) (Bai, 2013)

The ILU method uses the decomposition above to form the preconditioner M = LU, whose inverse is applied through two triangular solves. This method works on any non-singular linear system.

Algorithms: The algorithm to build the SOR preconditioner is fairly simple, shown below:

1: take in the parameter ω and the matrix A
2: for i = 1 to n
3:    D(i,i) = A(i,i)
4:    for j = 1 to i-1
5:       L(i,j) = -A(i,j)
6:    end for j
7: end for i
8: M^-1 = ((1/ω)D - L)^-1
9: output M^-1
(Equation 7) (Bai, 2013)

The algorithm above is implemented in the MATLAB code shown in Appendix C. A built-in MATLAB function, luinc, is used to perform the incomplete LU factorization.
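The report does not list the ILU build step itself; the following minimal sketch shows one way it could be done with the luinc routine named above (available in the MATLAB releases of that era; newer releases provide ilu in the same role). The drop tolerance value and the assumption that a sparse A and a residual r are already in the workspace are illustrative, not taken from the report.

% Sketch (assumptions noted above): build incomplete LU factors with a
% drop tolerance and apply M^-1 = (LU)^-1 by two triangular solves.
droptol = 1e-6;              % example drop tolerance
[L,U] = luinc(A, droptol);   % incomplete factors: A = L*U + E (Equation 6)
z = U \ (L \ r);             % z = M^-1 r; no inverse is ever formed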

The preconditioned GMRES algorithm is shown below:

1: take in A and b; form the initial guess x_0 and the preconditioner inverse M^-1
2: loop until convergence or the maximum number of outer iterations (convergence loop)
3:    r_0 = M^-1(b - A x_0); β = ||r_0||_2
4:    if β/||x_0||_2 < tol, then exit convergence loop
5:    v_1 = r_0/β   (start of the Arnoldi procedure)
6:    for j = 1 to m
7:       ω = M^-1(A v_j)
8:       for i = 1 to j
9:          h(i,j) = v_i' ω
10:         ω = ω - h(i,j) v_i
11:      end for i
12:      h(j+1,j) = ||ω||_2
13:      if h(j+1,j) = 0, then exit convergence loop (lucky breakdown); else v_(j+1) = ω/h(j+1,j)
14:   end for j
15:   solve y = argmin ||β e_1 - H y||_2
16:   x_m = x_0 + V_m y
17: end convergence loop

One may note that to run the GMRES algorithm without preconditioning, one can simply use the identity matrix for M^-1. The algorithm above is implemented in the MATLAB codes shown in Appendices A and B. The code in Appendix A is the core of the algorithm. Appendix B is a template that shows where the preconditioner matrix is built and that holds the outer loop. The program is written this way so it can be modified and readapted for different purposes; in its current form, it can easily be altered to include restarts.

Implementation: Please refer to Appendix D for all figures. Figure 1 shows the sparsity patterns of the four matrices observed: WEST0479, MAHINDAS, BFW398, and SAYLR1. WEST0479 and MAHINDAS were chosen due to their use in previous tests; however, they have zeros on their diagonals, which makes the SOR and Jacobi preconditioners impossible to use. Due to their full diagonals, BFW398 and SAYLR1 are used to observe the behavior of all three preconditioners. Despite their symmetric-looking sparsity patterns, BFW398 and SAYLR1 are non-symmetric (National Institute of Standards and Technology, 2013). This asymmetry is verified by Equation 8, which yields a nonzero value for both test matrices; if the matrices were symmetric, Equation 8 would yield zero.

    ||A - A'|| ≠ 0.    (Equation 8)

Furthermore, SAYLR1 and BFW398 are chosen for their condition numbers. SAYLR1 has a fairly large condition number, while BFW398 is chosen for its significantly smaller condition number, on the order of 10^3. Due to its small condition number, BFW398 should show better convergence behavior than SAYLR1. Finally, the values of the vector b are chosen such that the solution vector x is a vector of ones; this makes it easy to verify that the algorithm converged on the correct solution. Second, a guess vector is generated using MATLAB's rand function. Because rand only yields values between zero and one, its output is multiplied by 100 to yield bad guesses. This can be seen in Appendix B.
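This setup can be reproduced in a few lines; the sketch below is not from the report and uses a stand-in matrix from MATLAB's gallery rather than the Matrix Market files, whose download step the report does not show. Note that the stand-in matrix is symmetric, so the Equation 8 check returns zero here, whereas it is nonzero for the report's test matrices.

% Sketch of the test setup with a stand-in matrix (assumption; the report
% used SAYLR1, BFW398, WEST0479, and MAHINDAS from Matrix Market).
A = gallery('poisson', 10);   % example sparse test matrix
n = size(A, 1);
b = A*ones(n, 1);             % b chosen so the exact solution is all ones
x_0 = 100*rand(n, 1);         % rand gives (0,1); scaling yields a bad guess
asym = norm(A - A', 'fro');   % Equation 8 check: zero only if A is symmetric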

Results: Figures 2, 3, and 4 show the convergence behavior of the GMRES algorithm with ILU preconditioners at different drop tolerances for WEST0479, MAHINDAS, and SAYLR1. One important feature of these graphs is the different curves for different drop tolerances. Typically, the greater the drop tolerance, the slower the convergence, but the quicker the preconditioner is built. One issue that arises is that if the drop tolerance is too large, the preconditioner can become singular, which results in immediate divergence. Figure 2 shows a case in which a large drop tolerance results in slower convergence behavior.

Figure 5 compares the convergence behavior of the SOR method for different values of ω on the SAYLR1 matrix. Different values of ω were also tested on BFW398; however, little difference in convergence behavior is observed as ω changes, and divergence is observed for ω greater than 10. For SAYLR1, the smaller the value of ω used, the faster the rate of convergence: ω is decreased by factors of 10 until convergence is obtained in one iteration, which happens at ω = 10^-4. The benefit of the SOR method is that the computational cost of building the preconditioner does not change as ω changes, but the ideal ω is system dependent. Figures 6 and 7 depict comparisons between the convergence behaviors of systems using the different preconditioners.

Finally, although BFW398 is a larger matrix with more non-zero values than SAYLR1, BFW398 converges in fewer iterations than SAYLR1 under the basic GMRES algorithm: BFW398 converges in approximately 40 iterations, while SAYLR1 converges in about 100 iterations. This is consistent with the condition numbers of the matrices.

Conclusion: The most effective preconditioner is the ILU preconditioner. Unfortunately, the ILU is also the most expensive computationally. The cheapest preconditioner, in flops and in memory, is the Jacobi preconditioner. In MATLAB the Jacobi preconditioner is stored here as a matrix, but with effective programming the diagonal Jacobi preconditioner can be reduced to a vector. However, the Jacobi preconditioner does very little to improve performance: in linear systems that take a couple hundred iterations to solve, it reduced the number of iterations needed to reach a solution by at most 30. A nice medium between the two is the SOR method, but the SOR method has the parameter ω, whose effect is problem dependent. If one needs to solve a system quickly without any knowledge of the system, the ILU preconditioner is the best choice.

Finally, based on the convergence behavior of the basic GMRES algorithm, condition number is a significant indicator of convergence behavior. The matrices with the largest condition numbers, WEST0479 and MAHINDAS, converged the slowest and needed preconditioning for the basic GMRES algorithm to converge in a reasonable number of iterations. Meanwhile, SAYLR1 and BFW398, matrices with smaller condition numbers, converged with the basic GMRES algorithm without preconditioning; the fastest convergence is observed with BFW398. Overall, preconditioning can vastly improve the convergence behavior of GMRES on matrices with very large condition numbers. This is observed with MAHINDAS and WEST0479: in the unconditioned algorithm, convergence is not obtained until the subspace size is approximately the same as the system size, whereas with preconditioning these systems converge on the second iteration.

Bibliography

Bai, Z. (2013, March 12). Preconditioning Techniques. Lecture notes, ECS 231, Winter 2013, University of California, Davis.

Ferronato, M. (2012). Preconditioning for Sparse Linear Systems at the Dawn of the 21st Century: History, Current Developments, and Future Perspectives. ISRN Applied Mathematics.

National Institute of Standards and Technology. (2013, March 20). Matrix Market. Retrieved from Math, Statistics, and Computational Science.

Appendix A: Preconditioned GMRES MATLAB code

function [x_m,r,i] = myLPGMRES(A,x_m,b,M_prime,m,eta,tol)
% Inputs:
%   A       = system matrix
%   x_m     = first guess (overwritten with the final solution)
%   b       = system input
%   M_prime = preconditioner, written as M^-1 in many algorithms
%   m       = subspace size
%   eta     = maximum number of outer iterations
%   tol     = convergence tolerance
% Overall system solved: M^-1 A x = M^-1 b
% Outputs:
%   r = residual history, used to plot convergence behavior
%   i = iteration at completion

% allocation phase
n = size(A);
h = zeros(n(1)+1,n(1));
v = zeros(n(1),n(1));
v_mplus = zeros(n(1),n(1)+1);
r = zeros(eta,1);

% preliminary phase
r_m = M_prime*(b - A*x_m);
beta = norm(r_m,2);
e1 = zeros(n(1)+1,1);
e1(1) = 1;

% convergence loop
for i = 1:eta
    % Arnoldi procedure; not written as a subroutine,
    % to facilitate detection of lucky breakdown
    v_mplus(:,1) = r_m/beta;
    for j = 1:m
        v(:,j) = v_mplus(:,j);
        omega = M_prime*(A*v(:,j));
        for k = 1:j
            h(k,j) = v(:,k)'*omega;
            omega = omega - h(k,j)*v(:,k);
        end
        h(j+1,j) = norm(omega,2);
        % lucky breakdown check
        if (h(j+1,j) == 0)
            return;
        end
        v_mplus(:,j+1) = omega/h(j+1,j);
    end
    y = h\(beta*e1);          % least-squares solve for the update
    x_m = x_m + v*y;
    r_m = M_prime*(b - A*x_m);
    beta = norm(r_m,2);
    r(i) = beta/norm(x_m,2);  % relative residual for this outer iteration
    if (r(i) < tol)
        return;
    end
end

Appendix B: Outer Loop and Preconditioner Build for Jacobi MATLAB code

function [x_m,r,i] = myLPJGMRES(A,b,tol)
% my Jacobi GMRES: the other driver programs take extra inputs
% (omega for SOR, drop tolerance for ILU)
% Inputs:
%   A   = system matrix
%   b   = system input
%   tol = solution tolerance
% Outputs:
%   x_m = final x from the process
%   i   = iteration at completion

n = size(A);
% this step is different for each preconditioner type
M_prime = (diag(diag(A)))^-1;    % inverse of the Jacobi preconditioner
r = zeros(n(1)-9,1);
i = zeros(n(1)-10,1);
% automatically generate a first guess; scaled to be deliberately bad
x_m = rand(n(1),1)*100;
r(1) = norm(M_prime*(b - A*x_m),2)/norm(x_m,2);
% call the general left-preconditioned GMRES, growing the subspace until
% convergence is reached or the subspace is the system size
for m = 1:(n(1)-10)
    [x_m,r(m+1),i(m)] = myLPGMRES(A,x_m,b,M_prime,10+m,1,tol);
    if (r(m+1) < tol)
        return;
    end
end

Appendix C: SOR Preconditioner Build MATLAB code

function [M_prime] = mySOR(A,omega)
% builds the inverted SOR preconditioner of Equation 5
n = size(A);
D = zeros(n(1),n(2));
L = zeros(n(1),n(2));
for l = 1:n(1)
    D(l,l) = A(l,l);
    for m = 1:l-1
        L(l,m) = -A(l,m);
    end
end
M_prime = ((1/omega)*D - L)^-1;
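As a usage illustration (hypothetical, not part of the report), the appendices can be tied together as follows; the stand-in matrix, ω, subspace size, and tolerance are example choices.

% Hypothetical driver: build the SOR preconditioner of Appendix C and run
% the left-preconditioned GMRES core of Appendix A on a stand-in matrix.
A = gallery('poisson', 10);
n = size(A, 1);
b = A*ones(n, 1);                        % exact solution is all ones
x_0 = 100*rand(n, 1);                    % deliberately bad initial guess
M_prime = mySOR(full(A), 1e-4);          % omega = 10^-4, as in Figure 5
[x_m, res, it] = myLPGMRES(A, x_0, b, M_prime, 30, 1, 1e-8);
norm(x_m - ones(n, 1))                   % should be small if converged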

Appendix D: Figures

Figure 1: Sparsity patterns of the matrices used. SAYLR1 has a fairly large condition number; WEST0479 has a condition number on the order of 10^12, MAHINDAS on the order of 10^13, and BFW398 on the order of 10^3 (National Institute of Standards and Technology, 2013).

Figure 2: ILU convergence behavior of WEST0479 compared to basic GMRES performance. Blue is the unconditioned GMRES baseline; red uses a drop tolerance of 0, magenta 10^-7, green 10^-6, and black the largest drop tolerance tested.

Figure 3: ILU convergence behavior of MAHINDAS compared to basic GMRES. Blue is the unconditioned GMRES baseline; red uses a drop tolerance of 0, magenta 10^-10, green 10^-9, and black the largest drop tolerance tested.

Figure 4: ILU convergence behavior of SAYLR1 compared to basic GMRES. Blue is the unconditioned baseline GMRES behavior; black uses a drop tolerance of 10^-1, magenta 10^-2, a third curve a smaller drop tolerance still, and red a drop tolerance of zero.

Figure 5: SOR convergence behavior of SAYLR1 compared to basic GMRES. Blue is the unconditioned GMRES baseline; the rest are SOR convergence behaviors: ω = 1 for red, ω = 0.1 for black, ω = 0.01 for magenta, ω = 0.001 for green, and ω = 0.0001 for cyan.

Figure 6: Comparison of the different preconditioners' convergence behaviors on SAYLR1. Blue is the unconditioned baseline GMRES; black is the convergence behavior with Jacobi preconditioning, green with SOR at ω = 0.0001, and red with ILU preconditioning at zero drop tolerance.

Figure 7: Comparison of the different preconditioners' convergence behaviors on BFW398. Blue is the unconditioned GMRES baseline; red is SOR with ω = 1, magenta is the convergence behavior with a Jacobi preconditioner, green shows ILU behavior with a drop tolerance of 10^-1, cyan with 10^-2, and black with a drop tolerance of 0.
