Linear and Non-Linear Preconditioning
1 Linear and Non-Linear Preconditioning
Martin J. Gander, martin.gander@unige.ch, University of Geneva, June 2015
Joint work with Victorita Dolean, Walid Kheriji, Felix Kwok and Roland Masson
2 Gauss Invents an Iterative Method in a Letter (1823). In a letter to Gerling, in order to compute a least squares solution based on angle measurements between the locations Berger Warte, Johannisberg, Taufstein and Milseburg:
3 Gauss's Method
4 Gauss's Method. Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = e-17. xexact =
5 Calculations Make Gauss Happy! Gauss concludes his letter with the statement: ... You will in the future hardly eliminate directly, at least not when you have more than two unknowns. The indirect procedure can be done while one is half asleep, or while thinking about other things.
6 Jacobi also Invents an Iterative Method (1845): Über eine neue Auflösungsart der bei der Methode der kleinsten Quadrate vorkommenden lineären Gleichungen (On a new way of solving the linear equations arising in the method of least squares)
7 Jacobi's Method
8 Jacobi Points out a Problem of the Method: with a simple computation, the equations can be transformed into new ones, where the problem does not appear any more.
9 Jacobi Rotations. Next, Jacobi takes an example from Gauss's Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium (1809).
10 What Jacobi's Preconditioning Does: After preconditioning, it takes only three iterations to obtain three accurate digits!
11 Classical Stationary Iterative Methods
For a linear system $Au = f$, one needs a splitting $A = M - N$ and then iterates $Mu^{n+1} = Nu^n + f$:
- Jacobi: $M = \mathrm{diag}(A)$
- Gauss-Seidel: $M = \mathrm{tril}(A)$
- domain decomposition: $M$ block diagonal
- Multigrid: $M$ represents a V-cycle or W-cycle
The iterative method $u^{n+1} = M^{-1}Nu^n + M^{-1}f = (I - M^{-1}A)u^n + M^{-1}f$
1. converges fast if $\rho(I - M^{-1}A)$ is small,
2. and is cheap if systems with $M$ can easily be solved.
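To make the splitting concrete, here is a minimal Python/NumPy sketch (not from the talk; the toy 1D Laplacian, grid size and iteration count are illustrative assumptions) of the iteration $Mu^{n+1} = Nu^n + f$ with the Jacobi and Gauss-Seidel choices of $M$:

```python
import numpy as np

def stationary_solve(A, f, M, u0, n_iter=100):
    """Iterate M u^{n+1} = N u^n + f, i.e. u^{n+1} = u^n + M^{-1}(f - A u^n)."""
    u = u0.copy()
    for _ in range(n_iter):
        u = u + np.linalg.solve(M, f - A @ u)
    return u

n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy 1D Laplacian
f = np.ones(n)

u_jac = stationary_solve(A, f, np.diag(np.diag(A)), np.zeros(n))  # Jacobi: M = diag(A)
u_gs  = stationary_solve(A, f, np.tril(A), np.zeros(n))           # Gauss-Seidel: M = tril(A)

# the convergence factor is the spectral radius rho(I - M^{-1} A)
rho = max(abs(np.linalg.eigvals(np.eye(n) - np.linalg.solve(np.tril(A), A))))
```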
12 Invention of the Conjugate Gradient Method
Stiefel and Rosser 1951: presentations at a Symposium at the National Bureau of Standards (UCLA)
Hestenes 1951: Iterative methods for solving linear equations
Stiefel 1952: Über einige Methoden der Relaxationsrechnung (On some methods of relaxation computation)
Hestenes and Stiefel 1952: Methods of Conjugate Gradients for Solving Linear Systems: "An iterative algorithm is given for solving a system Ax = k of n linear equations in n unknowns. The solution is given in n steps."
Lanczos 1952: Solution of systems of linear equations by minimized iterations (see also 1950)
13 The Conjugate Gradient Method
To solve approximately $Au = f$, $A$ spd, CG finds at step $n$, using the Krylov space $K_n(A,r_0) := \mathrm{span}\{r_0, Ar_0, \dots, A^{n-1}r_0\}$, $r_0 := f - Au_0$, an approximate solution $u_n \in u_0 + K_n(A,r_0)$ which satisfies $\|u - u_n\|_A \to \min$.
Theorem. With $\kappa(A)$ the condition number of $A$,
$$\|u - u_n\|_A \le 2\left(\frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1}\right)^n \|u - u_0\|_A.$$
The conjugate gradient method converges very fast if the condition number $\kappa(A)$ is not large.
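As an illustration, a textbook sketch of CG in Python/NumPy (an addition of this write-up, not code from the talk); it realizes the A-norm minimization over the Krylov space described above:

```python
import numpy as np

def cg(A, f, u0, tol=1e-10, max_iter=500):
    """Conjugate gradients for A spd: minimize ||u - u_n||_A over u_0 + K_n(A, r_0)."""
    u = u0.copy()
    r = f - A @ u                     # residual r_0
    p = r.copy()                      # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # exact line search in direction p
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # next A-conjugate search direction
        rs = rs_new
    return u

n = 100
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spd toy 1D Laplacian
u = cg(A, np.ones(n), np.zeros(n))
```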
14 Two Classes of Krylov Methods
Minimum residual methods (MR): find $u_n \in u_0 + K_n(A,r_0)$ such that $\|f - Au_n\|_2 \to \min$:
- MINRES for symmetric problems
- GMRES for general problems
- QMR for general problems, only approximate minimization
Methods based on orthogonalization (OR): find $u_n \in u_0 + K_n(A,r_0)$ such that $f - Au_n \perp K_n(A,r_0)$:
- SymmLQ for symmetric problems
- FOM for general problems
- BiCGstab for general problems, bi-orthogonalization
All these methods converge well if the spectrum of the matrix $A$ is clustered around 1.
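SciPy exposes several of these solvers; a minimal sketch (the matrix and right-hand side are toy assumptions) calling an MR method for a symmetric problem and GMRES for the general case:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres, gmres

n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')  # symmetric toy matrix
f = np.ones(n)

u_mr, info_mr = minres(A, f)   # minimum residual method for symmetric problems
u_gm, info_gm = gmres(A, f)    # minimum residual method for general problems
# info == 0 signals convergence to the default tolerance
```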
15 Preconditioning the System
Find a matrix $M$ such that the preconditioned system $M^{-1}Au = M^{-1}f$ is easier to solve with a Krylov method. Two goals:
1. For CG: make $\kappa(M^{-1}A)$ much smaller than $\kappa(A)$; more generally: cluster the spectrum of $M^{-1}A$ close to one (illustrated in the sketch after the next slide)
2. It should be inexpensive to apply $M^{-1}$
With very good intuition or experience, one can guess good preconditioners:
Additive Schwarz by Dryja and Widlund 1987
16 FETI by Farhat and Roux 1991 (see already Destuynder and Roux 1988)
Balancing Domain Decomposition by Mandel and Brezina 1993
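A small numerical illustration of goal 1 (Python/NumPy; the badly scaled test matrix is an assumption chosen for illustration): even the simplest choice $M = \mathrm{diag}(A)$ can reduce the condition number dramatically, and applying $M^{-1}$ costs almost nothing:

```python
import numpy as np

n = 100
d = np.logspace(0, 4, n)                        # widely spread diagonal scales
A = np.diag(d) + 0.1*np.eye(n, k=1) + 0.1*np.eye(n, k=-1)

M_inv = np.diag(1.0/np.diag(A))                 # Jacobi: applying M^{-1} is cheap
print(np.linalg.cond(A))                        # large: a Krylov method is slow
print(np.linalg.cond(M_inv @ A))                # close to 1 after preconditioning
```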
18 In preconditioning, we search for $M$ such that
1. the spectrum of $M^{-1}A$ is clustered around one,
2. it is inexpensive to apply $M^{-1}$.
For stationary iterative methods, we needed $M$ such that
1. the spectral radius $\rho(I - M^{-1}A)$ is small,
2. it is inexpensive to apply $M^{-1}$.
Note that $\rho(I - M^{-1}A)$ small $\iff$ spectrum of $M^{-1}A$ close to one.
Idea: design a good $M$ for a stationary iterative method, and then use it as a preconditioner for a Krylov method.
Theorem. Using a Krylov method with preconditioner $M$ is always better than just using the stationary iteration.
19 Proof: The stationary iterative method computes
$$u^n = (I - M^{-1}A)u^{n-1} + M^{-1}f = u^{n-1} + r^{n-1}_{\mathrm{stat}},$$
which means the residual $r^n_{\mathrm{stat}} := M^{-1}f - M^{-1}Au^n$ satisfies
$$r^n_{\mathrm{stat}} = M^{-1}f - M^{-1}A(u^{n-1} + r^{n-1}_{\mathrm{stat}}) = (I - M^{-1}A)\,r^{n-1}_{\mathrm{stat}} = (I - M^{-1}A)^n r^0.$$
The Krylov method will use the Krylov space $K_n(M^{-1}A, r^0) := \mathrm{span}\{r^0, M^{-1}Ar^0, \dots, (M^{-1}A)^{n-1}r^0\}$ to search for $u^n \in u^0 + K_n(M^{-1}A, r^0)$, i.e.
$$u^n = u^0 + \sum_{i=1}^n \alpha_i (M^{-1}A)^{i-1} r^0,$$
which means the residual $r^n_{\mathrm{kry}}$ satisfies $r^n_{\mathrm{kry}} = p^n(M^{-1}A)\,r^0$ with $p^n$ a polynomial of degree $n$, $p^n(0) = 1$. The stationary residual corresponds to the particular choice $p^n(x) = (1-x)^n$, so the Krylov method, which selects the optimal such polynomial, can only do better.
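The theorem is easy to check numerically; a sketch in Python/SciPy (the Gauss-Seidel splitting, the problem and the iteration count are illustrative assumptions; note that SciPy's gmres applies M as a left preconditioner, so both residuals below are preconditioned residuals):

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
M = np.tril(A)                                  # Gauss-Seidel splitting

# stationary iteration: preconditioned residual r^n = (I - M^{-1}A)^n r^0
r = np.linalg.solve(M, f)                       # r^0 = M^{-1} f  (u^0 = 0)
for _ in range(10):
    r = r - np.linalg.solve(M, A @ r)
print(np.linalg.norm(r))                        # stationary residual after 10 steps

# GMRES on M^{-1} A u = M^{-1} f, also limited to 10 Krylov steps
Minv = LinearOperator((n, n), matvec=lambda v: np.linalg.solve(M, v))
u10, info = gmres(A, f, M=Minv, restart=10, maxiter=1)
print(np.linalg.norm(np.linalg.solve(M, f - A @ u10)))  # never larger
```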
20 An example of non-linear preconditioning: Additive Schwarz Preconditioned Inexact Newton (ASPIN), Cai, Keyes and Young DD13 (2001), Cai and Keyes SISC (2002). "The nonlinear system is transformed into a new nonlinear system, which has the same solution as the original system. For certain applications the nonlinearities of the new function are more balanced and, as a result, the inexact Newton method converges more rapidly."
Instead of solving $F(u) = 0$, solve instead $G(F(u)) = 0$ with:
- if $G(v) = 0$ then $v = 0$
- $G \approx F^{-1}$ in some sense
- $G(F(v))$ is easy to compute
- applying Newton, $\frac{\partial}{\partial v}\big(G(F(v))\big)\,w$ should also be easy to compute
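A toy scalar example of these requirements (entirely hypothetical and chosen so that $G$ is the exact inverse of $F$; in practice $G$ is only an approximate inverse): plain Newton on $F(u) = e^{10u} - 1$ creeps toward the root $u = 0$ in steps of about $1/10$, while Newton on $G(F(u))$ with $G(v) = \log(1+v)/10$ finishes in one step, since here $G(F(u)) = u$:

```python
import numpy as np

def newton(f, df, u, steps):
    for _ in range(steps):
        u = u - f(u)/df(u)
    return u

F  = lambda u: np.exp(10*u) - 1          # root u* = 0, strongly nonlinear
dF = lambda u: 10*np.exp(10*u)
print(newton(F, dF, 2.0, steps=5))       # still around 1.5: steps of size ~1/10

G    = lambda v: np.log(1 + v)/10        # G = F^{-1}: the ideal preconditioner
GoF  = lambda u: G(F(u))                 # equals u, so one Newton step suffices
dGoF = lambda u: 1.0                     # derivative of the identity
print(newton(GoF, dGoF, 2.0, steps=1))   # 0.0
```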
21 For $F: \mathbb{R}^n \to \mathbb{R}^n$, define (overlapping) subsets $\Omega_j$ of the indices $\{1,2,\dots,n\}$, such that $\bigcup_j \Omega_j = \{1,2,\dots,n\}$, and corresponding restriction matrices $R_j$, e.g.
$$\Omega_1 = \{1,2,3\}, \qquad R_1 = \begin{pmatrix} 1 & & & & \\ & 1 & & & \\ & & 1 & & \end{pmatrix}.$$
Define the solution operator $T_j: \mathbb{R}^n \to \mathbb{R}^{\Omega_j}$ such that $R_j F(v - R_j^T T_j(v)) = 0$. Then ASPIN solves, using inexact Newton,
$$G(F(u)) := \sum_{j=1}^J R_j^T T_j(u) = 0.$$
22 ASPIN may look a bit complicated... (Cai, Keyes 2002)
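To make the definition concrete, a small sketch (Python/SciPy; the 4-dimensional toy system, the index sets and the use of fsolve/newton_krylov are assumptions of this write-up, not the setting of Cai and Keyes): each $T_j(u)$ is computed by a local nonlinear solve, and inexact Newton is applied to the sum of the extended corrections:

```python
import numpy as np
from scipy.optimize import fsolve, newton_krylov

A = np.array([[ 2.,-1., 0., 0.],
              [-1., 2.,-1., 0.],
              [ 0.,-1., 2.,-1.],
              [ 0., 0.,-1., 2.]])

def F(u):                                  # toy nonlinear system in R^4
    return A @ u + u**3 - 1.0

subsets = [np.array([0, 1, 2]), np.array([1, 2, 3])]   # overlapping Omega_j

def T(u, idx):
    """Correction t on Omega_j with R_j F(u - R_j^T t) = 0."""
    def local(t):
        v = u.copy()
        v[idx] -= t
        return F(v)[idx]
    return fsolve(local, np.zeros(len(idx)))

def GoF(u):                                # G(F(u)) = sum_j R_j^T T_j(u)
    g = np.zeros_like(u)
    for idx in subsets:
        g[idx] += T(u, idx)                # corrections are summed in the overlap
    return g

u = newton_krylov(GoF, np.zeros(4))        # inexact Newton on G(F(u)) = 0
print(F(u))                                # ~ 0: same solution as F(u) = 0
```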
23 ASPIN: Newton without preconditioning. Driven cavity flow problem (Cai, Keyes 2002)
24 ASPIN: Newton with preconditioning. Driven cavity flow problem (Cai, Keyes 2002)
25 Non-Linear Preconditioners
Recall from the linear case: from a stationary iterative method for $Au = f$,
$$u^{n+1} = (I - M^{-1}A)u^n + M^{-1}f,$$
with fast convergence, i.e. $\rho(I - M^{-1}A)$ small, we obtain a good preconditioner $M$ to solve $M^{-1}Au = M^{-1}f$ with a Krylov method.
Idea: For the non-linear problem $F(u) = 0$, construct a fixed point iteration $u^{n+1} = G(u^n)$ and then solve $\mathcal{F}(u) := G(u) - u = 0$ using Newton's method.
26 Using a Schwarz Method
One-dimensional non-linear model problem
$$L(u) := -\partial_x\big((1+u^2)\,\partial_x u\big) = f \quad \text{in } \Omega := (0,L), \qquad u(0) = 0, \; u(L) = 0.$$
Parallel Schwarz method with two subdomains $\Omega_1 := (0,\beta)$ and $\Omega_2 := (\alpha,L)$, $\alpha < \beta$:
$$L(u_1^n) = f \text{ in } \Omega_1, \quad u_1^n(0) = 0, \quad u_1^n(\beta) = u_2^{n-1}(\beta),$$
$$L(u_2^n) = f \text{ in } \Omega_2, \quad u_2^n(\alpha) = u_1^{n-1}(\alpha), \quad u_2^n(L) = 0.$$
This is a non-linear fixed point iteration. How can we apply Newton to solve for the fixed point?
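A sketch of this parallel iteration in Python/SciPy (the finite difference discretization, grid size, overlap, right-hand side and the use of fsolve for the subdomain solves are all illustrative assumptions of this write-up):

```python
import numpy as np
from scipy.optimize import fsolve

N, Lx = 101, 1.0                       # toy grid and domain length
x = np.linspace(0, Lx, N); h = x[1] - x[0]
f = 10*np.ones(N)

def residual(u):                       # FD residual of -((1+u^2) u_x)_x - f at nodes 1..N-2
    a = 1 + ((u[1:] + u[:-1])/2)**2    # coefficient at cell midpoints
    flux = a*(u[1:] - u[:-1])/h
    return -(flux[1:] - flux[:-1])/h - f[1:-1]

def subdomain_solve(u, lo, hi, gl, gr):
    """Nonlinear solve on nodes lo..hi with Dirichlet data gl, gr."""
    def local(v):
        w = u.copy(); w[lo+1:hi] = v; w[lo] = gl; w[hi] = gr
        return residual(w)[lo:hi-1]    # equations at the nodes lo+1 .. hi-1
    w = u.copy(); w[lo+1:hi] = fsolve(local, u[lo+1:hi]); w[lo] = gl; w[hi] = gr
    return w[lo:hi+1]

ia, ib = 40, 60                        # alpha = x[ia] < beta = x[ib]: the overlap
u1, u2 = np.zeros(N), np.zeros(N)
for n in range(20):                    # the parallel non-linear Schwarz iteration
    new1 = subdomain_solve(u1, 0, ib, 0.0, u2[ib])     # u_1^n uses u_2^{n-1}(beta)
    new2 = subdomain_solve(u2, ia, N-1, u1[ia], 0.0)   # u_2^n uses u_1^{n-1}(alpha)
    u1[:ib+1], u2[ia:] = new1, new2
print(np.max(np.abs(u1[ia:ib+1] - u2[ia:ib+1])))       # overlap mismatch -> 0
```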
27 One Level RASPEN
$$u^n(x) := \begin{cases} u_1^n(x) & \text{if } 0 \le x < \frac{\alpha+\beta}{2}, \\ u_2^n(x) & \text{if } \frac{\alpha+\beta}{2} \le x \le L, \end{cases}$$
or, with zero extension operators $P_i$ (often called $R_i^T$ in RAS),
$$u^n = P_1 u_1^n + P_2 u_2^n.$$
With the solution operators $G_i$ for the non-linear subdomain problems, $u_1^n = G_1(u^{n-1})$, $u_2^n = G_2(u^{n-1})$, we obtain (for $I$ subdomains)
$$u^n = \sum_{i=1}^I P_i G_i(u^{n-1}) =: \mathcal{G}_1(u^{n-1}).$$
RASPEN: solve the fixed point equation with Newton,
$$\mathcal{F}_1(u) := \mathcal{G}_1(u) - u = \sum_{i=1}^I P_i G_i(u) - u = 0.$$
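A compact algebraic sketch of this idea (Python/SciPy; the 8-dimensional toy system, the index blocks, the ownership-based partition of unity standing in for the RAS operators $P_i$, and the Jacobian-free newton_krylov outer solver are all assumptions of this write-up, not the method of the paper verbatim):

```python
import numpy as np
from scipy.optimize import fsolve, newton_krylov

m = 8
A = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
def F(u): return A @ u + np.exp(u) - 2.0        # toy nonlinear system

blocks = [np.arange(0, 5), np.arange(3, 8)]     # overlapping subdomains
owned  = [np.arange(0, 4), np.arange(4, 8)]     # partition of unity for P_i

def G(u, idx):
    """Nonlinear subdomain solution operator: F = 0 on idx, data u outside."""
    def local(v):
        w = u.copy(); w[idx] = v
        return F(w)[idx]
    return fsolve(local, u[idx])

def F1(u):                                      # F_1(u) = sum_i P_i G_i(u) - u
    g = np.zeros_like(u)
    for idx, own in zip(blocks, owned):
        g[own] = G(u, idx)[np.isin(idx, own)]   # keep only the owned unknowns
    return g - u

u = newton_krylov(F1, np.zeros(m))              # Newton on the fixed point equation
print(np.linalg.norm(F(u)))                     # the fixed point solves F(u) = 0
```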
28 Forchheimer Equation
$$\partial_x\big(q(-\lambda(x)\,\partial_x u(x))\big) = f(x) \text{ in } \Omega, \qquad u(0) = u_D^0, \quad u(L) = u_D^L,$$
with $\beta > 0$, $0 < \lambda_{\min} \le \lambda(x) \le \lambda_{\max}$ and $q(g) = \mathrm{sgn}(g)\,\frac{-1 + \sqrt{1 + 4\beta|g|}}{2\beta}$.
Figures: residual as a function of iterations for 8 subdomains (RASPEN).
29 Adding a Coarse Grid Correction
Use the Full Approximation Scheme (FAS) from multigrid: compute the correction $c$ from a non-linear coarse problem
$$L_c(R_0 u^{n-1} + c) = L_c(R_0 u^{n-1}) + R_0(f - L(u^{n-1})),$$
and add the correction $c := C_0(u^{n-1})$ to the iterate,
$$u^{n-1}_{\mathrm{new}} = u^{n-1} + P_0 C_0(u^{n-1}).$$
This gives naturally the two level fixed point iteration
$$u^n = \sum_{i=1}^I P_i G_i\big(u^{n-1} + P_0 C_0(u^{n-1})\big) =: \mathcal{G}_2(u^{n-1}).$$
Two level RASPEN means solving with Newton
$$\mathcal{F}_2(u) := \mathcal{G}_2(u) - u = \sum_{i=1}^I P_i G_i\big(u + P_0 C_0(u)\big) - u = 0.$$
30 Comparison of ASPIN and RASPEN
One level variants:
RASPEN: $\mathcal{F}_1(u) := \sum_{i=1}^I P_i G_i(u) - u = 0$
ASPIN: $\hat{\mathcal{F}}_1(u) := \sum_{i=1}^I \hat{P}_i \hat{G}_i(u) - u = 0$
Two level variants:
RASPEN: $\mathcal{F}_2(u) := \sum_{i=1}^I P_i G_i\big(u + P_0 C_0(u)\big) - u = 0$
ASPIN: $\hat{\mathcal{F}}_2(u) = P_0 C_0^A(u) + \sum_{i=1}^I \hat{P}_i \hat{G}_i(u) - u = 0$, where $F_0(C_0^A(u) + u_0) = R_0 F(u)$, $F_0(u_0) = 0$.
The main differences are:
- RASPEN does not sum corrections in the overlap like ASPIN, since it is a convergent fixed point method
- RASPEN uses the full approximation scheme for the coarse correction, whereas ASPIN does an additive ad hoc construction
- RASPEN uses exact Newton, since one obtains the exact Jacobian from the inner non-linear solves.
31 Figures: error as a function of the iteration number $n$ and of the number of subdomain solves, comparing Newton, one and two level AS and ASPIN, and Newton, one and two level RAS, one level RASPEN and two level FAS-RASPEN.
32 A Non-Linear Diffusion Problem: One Level
$$-\nabla\cdot\big((1+u^2)\nabla u\big) = f, \quad \Omega = [0,1]^2, \qquad u = 1 \text{ on } x = 1, \quad u = 0 \text{ otherwise}.$$
Results for one level RASPEN (ASPIN in parentheses), for increasing numbers $N \times N$ of subdomains:

n   ls_n^G     ls_n^in   ls_n^min   LS_n
1     (20)     3(3)      2(2)
2   17(23)     2(2)      2(2)      57(75)
3   18(26)     1(1)      1(1)

1     (37)     2(2)      2(2)
2   35(41)     2(2)      1(1)     110(129)
3   38(46)     1(1)      1(1)

1     (54)     2(2)      2(2)
2   51(59)     2(2)      1(1)     164(183)
3   57(65)     1(1)      1(1)

ls_n^G: GMRES steps in the n-th inexact Newton step; ls_n^in: maximum and ls_n^min: minimum number of inner iterations to evaluate F; LS_n: total number of subdomain solves.
33 A Non-Linear Diffusion Problem: Two Level
$$-\nabla\cdot\big((1+u^2)\nabla u\big) = f, \quad \Omega = [0,1]^2, \qquad u = 1 \text{ on } x = 1, \quad u = 0 \text{ otherwise}.$$
Results for two level RASPEN (two level ASPIN in parentheses):

n   ls_n^G     ls_n^in   ls_n^min   LS_n
1     (23)     3(3)      2(2)
2   15(26)     2(2)      2(2)      51(83)
3   17(28)     1(1)      1(1)

1     (33)     2(2)      2(2)
2   22(39)     2(2)      1(1)      71(123)
3   26(46)     1(1)      1(1)

1     (35)     2(2)      2(2)
2   23(42)     2(2)      1(1)      73(133)
3   27(51)     1(1)      1(1)

ls_n^G: GMRES steps in the n-th inexact Newton step; ls_n^in: maximum and ls_n^min: minimum number of inner iterations to evaluate F; LS_n: total number of subdomain solves.
34 Linear preconditioning means: solve the preconditioned linear system $M^{-1}Au = M^{-1}f$ using a Krylov method.
Non-linear preconditioning means: solve the preconditioned non-linear system $G(F(u)) = 0$ using Newton's method.
Preconditioners can be defined using
1. intuition and experience
2. a stationary fixed point iteration
What should a linear or non-linear preconditioner really do?
Preprints are available on Gander's web page.