Computational Linear Algebra
1 Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18
2 Part 3: Iterative Methods
3 overview
- definitions
- splitting methods
- projection and KRYLOV subspace methods
- multigrid methods
4 basic concept
we consider linear systems of type
Ax = b (3.2.1)
with regular matrix A and right-hand side b ∈ ℝⁿ
Definition 3.17 A projection method for solving (3.2.1) is a technique that computes approximate solutions x_m ∈ x_0 + K_m under consideration of
(b − Ax_m) ⊥ L_m , (3.2.2)
where x_0 ∈ ℝⁿ is arbitrary and K_m and L_m represent m-dimensional subspaces of ℝⁿ. Here, orthogonality is defined via the EUCLIDEAN dot product: x ⊥ y ⇔ (x, y)₂ = 0.
5 basic concept (cont'd)
observation
- in case K_m = L_m, the residual vector r_m = b − Ax_m is perpendicular to K_m; we obtain an orthogonal projection method and (3.2.2) is called GALERKIN condition
- in case K_m ≠ L_m, we obtain a skew projection and (3.2.2) is called PETROV-GALERKIN condition

comparison
- splitting methods: computation of approximate solutions x_m ∈ ℝⁿ; computation method x_m = M x_{m−1} + N b
- projection methods: computation of approximate solutions x_m ∈ x_0 + K_m ⊂ ℝⁿ with dim K_m = m ≤ n; computation method b − Ax_m ⊥ L_m ⊂ ℝⁿ with dim L_m = m ≤ n
6 basic concept (cont'd)
Definition 3.18 A KRYLOV subspace method is a projection method for solving (3.2.1), where K_m represents the KRYLOV subspace
K_m = K_m(A, r_0) = span{r_0, Ar_0, ..., A^{m−1} r_0}
with r_0 = b − Ax_0.
KRYLOV subspace methods are often described as a reformulation of a linear system into a minimisation problem; well-known methods are conjugate gradients (HESTENES & STIEFEL, 1952) and GMRES (SAAD & SCHULTZ, 1986); both methods compute the optimal approximation x_m ∈ x_0 + K_m w.r.t. (3.2.2), incrementing the subspace dimension by one in every iteration; neglecting round-off errors, both methods would compute the exact solution after at most n iterations
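A minimal numerical sketch of Definition 3.18: building the Krylov basis {r_0, Ar_0, A²r_0} for a small SPD system and checking that it spans ℝ³. The matrix, right-hand side, and starting vector are illustrative values, not taken from the slides.

```python
# Build the Krylov basis {r0, A r0, A^2 r0} for a small SPD system.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x0 = [0.0, 0.0, 0.0]

r0 = [bi - axi for bi, axi in zip(b, matvec(A, x0))]
basis = [r0]
for _ in range(2):
    basis.append(matvec(A, basis[-1]))  # next Krylov vector A^k r0

# The three vectors span R^3 iff the matrix with them as columns is regular.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

K = [[basis[j][i] for j in range(3)] for i in range(3)]  # columns r0, Ar0, A^2 r0
print(abs(det3(K)) > 1e-10)  # here dim K_3(A, r0) = 3
```

For this particular r_0 the Krylov vectors happen to be linearly independent; in general dim K_m(A, r_0) can be smaller than m, in which case the exact solution already lies in a lower-dimensional Krylov subspace.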
7 method of steepest descent
note: throughout this section, we assume the linear system (3.2.1) to exhibit a symmetric positive definite (SPD) matrix
we further consider functions F: ℝⁿ → ℝ,
F(x) = ½(Ax, x)₂ − (b, x)₂ , (3.2.3)
and will first study some of their properties in order to derive the method
Lemma 3.19 Let A be symmetric positive definite and b ∈ ℝⁿ be given; then for a function F defined via (3.2.3) applies
x̂ = arg min_x F(x) ⇔ Ax̂ = b.
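Lemma 3.19 can be checked numerically: for an SPD matrix, F attains its minimum exactly at x̂ = A⁻¹b. A small sketch with illustrative 2×2 data (not from the slides):

```python
# Check Lemma 3.19: F(x) = 1/2 (Ax, x)_2 - (b, x)_2 is minimal at xhat = A^{-1} b.
import itertools

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def F(A, b, x):
    return 0.5 * dot(matvec(A, x), x) - dot(b, x)

A = [[2.0, 0.0], [0.0, 10.0]]
b = [2.0, 10.0]
xhat = [1.0, 1.0]                 # solves A xhat = b
assert matvec(A, xhat) == b

# F does not decrease for any tested perturbation of xhat
for dx, dy in itertools.product([-0.5, 0.0, 0.5], repeat=2):
    x = [xhat[0] + dx, xhat[1] + dy]
    assert F(A, b, x) >= F(A, b, xhat)
```

The inequality holds because F(x̂ + d) − F(x̂) = ½(Ad, d)₂ ≥ 0 for SPD A, which is exactly the computation behind the lemma.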
8 method of steepest descent (cont'd)
9 method of steepest descent (cont'd)
idea: we want to achieve a successive minimisation of F, based on a point x, along particular directions p
hence, we define for x, p ∈ ℝⁿ a function f_{x,p}: ℝ → ℝ,
f_{x,p}(α) := F(x + αp)
Lemma and Definition 3.20 Let matrix A be symmetric positive definite and vectors x, p ∈ ℝⁿ with p ≠ 0 be given; then
α_opt = α_opt(x, p) := arg min_α f_{x,p}(α) = (r, p)₂ / (Ap, p)₂
applies with r := b − Ax. Vector r is denoted as residual vector and its EUCLIDEAN norm ‖r‖₂ as residual.
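The closed form α_opt = (r, p)₂ / (Ap, p)₂ from Lemma 3.20 can be verified directly: f_{x,p} is a one-dimensional quadratic, so its value at α_opt is not larger than at any nearby α. A sketch with illustrative data (A, b, x, p are not from the slides):

```python
# Verify that alpha_opt = (r,p)_2 / (Ap,p)_2 minimises f(alpha) = F(x + alpha p).
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def F(A, b, x):
    return 0.5 * dot(matvec(A, x), x) - dot(b, x)

A = [[2.0, 0.0], [0.0, 10.0]]
b = [2.0, 10.0]
x = [0.0, 0.0]
p = [1.0, 0.5]

r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
alpha_opt = dot(r, p) / dot(matvec(A, p), p)

def f(alpha):
    return F(A, b, [xi + alpha * pi for xi, pi in zip(x, p)])

# one-dimensional optimality: f cannot be improved near alpha_opt
for eps in (-1e-2, 1e-2):
    assert f(alpha_opt) <= f(alpha_opt + eps)
```

Since f(α_opt + ε) − f(α_opt) = ½ε²(Ap, p)₂ > 0 for p ≠ 0, the minimum is strict, matching the lemma.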
10 method of steepest descent (cont'd)
11 method of steepest descent (cont'd)
with a given sequence (p_m)_{m∈ℕ} of search directions out of ℝⁿ \ {0}, we can determine a first method

basic solver
choose x_0 ∈ ℝⁿ
for m = 0, 1, ...
  r_m = b − Ax_m
  α_m = (r_m, p_m)₂ / (Ap_m, p_m)₂
  x_{m+1} = x_m + α_m p_m

in order to complete our basic solver, we need a method to compute the search directions p_m
12 method of steepest descent (cont'd)
further (w/o loss of generality), we request ‖p_m‖₂ = 1
for x ≠ A⁻¹b we achieve a globally optimal choice via
p = (x̂ − x) / ‖x̂ − x‖₂ with x̂ = A⁻¹b, x̂ ≠ x,
as hereby follows Ap = r / ‖x̂ − x‖₂ and, with the definition of α_opt according to 3.20,
α_opt = (r, p)₂ / (Ap, p)₂ = ‖x̂ − x‖₂ , hence x + α_opt p = x + (x̂ − x) = x̂
however, this approach requires the knowledge of the exact solution x̂ for computing the search directions
13 method of steepest descent (cont'd)
restricting to local optimality, search directions can be computed via the negative gradient of F; here applies (A symmetric)
−∇F(x) = −(½(A + Aᵀ)x − b) = b − Ax = r,
hence
p := r / ‖r‖₂ for r ≠ 0, p := 0 for r = 0 (3.2.4)
yields the direction of steepest descent
function F is, due to ∇²F(x) = A and SPD matrix A, strictly convex; it is obvious that x̂ = A⁻¹b, due to ∇F(x̂) = 0, represents the only and global minimum of F
14 method of steepest descent (cont'd)
with (3.2.4) we obtain the method of steepest descent (a.k.a. gradient method)

choose x_0 ∈ ℝⁿ
for m = 0, 1, ...
  r_m = b − Ax_m
  if r_m ≠ 0
    α_m = ‖r_m‖₂² / (Ar_m, r_m)₂
  else
    α_m = 0 (stop)
  x_{m+1} = x_m + α_m r_m
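The gradient method above as a short, runnable sketch; the 2×2 SPD system, the tolerance, and the iteration cap are illustrative values, not from the slides:

```python
# Method of steepest descent for an SPD system Ax = b.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    x = list(x0)
    for m in range(max_iter):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
        if dot(r, r) ** 0.5 < tol:          # r_m = 0 (numerically) -> stop
            break
        alpha = dot(r, r) / dot(matvec(A, r), r)
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x, m

A = [[2.0, 0.0], [0.0, 10.0]]
b = [2.0, 10.0]
x, iters = steepest_descent(A, b, [0.0, 0.0])
assert all(abs(xi - 1.0) < 1e-8 for xi in x)   # exact solution is (1, 1)
```

Note that even for this tiny system many iterations are needed, because the error only contracts by roughly (κ−1)/(κ+1) per step with condition number κ = 5 here; this is exactly the zig-zag behaviour discussed on the following slides.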
15 method of steepest descent (cont'd)
example: consider Ax = b with given A, b, x_0; the slide's table listed the iterates x_{m,1}, x_{m,2} of the method of steepest descent together with the errors ε_m := ‖x_m − A⁻¹b‖_A for m = 0, ..., 15 (numeric values not recoverable from the transcription)
16 method of steepest descent (cont'd)
what's happening here? (figure: contour lines of F in the (x₁, x₂)-plane with the iterates x_0, x_1, x_2, x_3; eigenvalues λ₁ = 2, λ₂ = 10)
- contour lines of F illustrate the convergence process
- stretched ellipses due to differently large values of the diagonal entries of A
- the residual vector always points into the direction of the point of origin, but the approximated solution might change its sign in every single iteration
- motivates further considerations w.r.t. optimality of search directions
17 method of steepest descent (cont'd)
observation: the gradient method represents in every step a projection method with K = L = span{r_{m−1}}
obviously, optimality of the approximated solution concerning the entire subspace U = span{r_0, r_1, ..., r_{m−1}} would be preferable; for linearly independent residual vectors hereby follows x_n = A⁻¹b at the latest
for the method of steepest descent, all approximated solutions x_m are optimal concerning r_{m−1} only; due to the missing transitivity of the condition r ⊥ p, r_{m+2} ⊥ r_m does not (necessarily) follow from r_{m+2} ⊥ r_{m+1} and r_{m+1} ⊥ r_m
remedy: method of conjugate directions
18 method of conjugate directions
idea: extend optimality of the approximated solution x_m to the entire subspace U = span{p_0, ..., p_{m−1}} with linearly independent search directions p_i
the following theorem formulates a condition on the search directions that assures optimality w.r.t. U_m in the (m+1)st iteration step
Theorem 3.21 Let F according to (3.2.3) be given and x be optimal w.r.t. the subspace U = span{p_0, ..., p_{m−1}}; then x⁺ = x + αp is optimal w.r.t. U iff Ap ⊥ U applies.
19 method of conjugate directions (cont'd)
if for the search directions p_m either Ap_m ⊥ U_m = span{p_0, ..., p_{m−1}} or, equivalently, Ap_m ⊥ p_j for j = 0, ..., m−1 applies, then the approximated solution x_{m+1} = x_m + α_m p_m inherits, according to 3.21, optimality from x_m w.r.t. U_m, independent from the choice of the scalar weighting factor α_m
this degree of freedom α_m will be used further to extend optimality w.r.t. U_{m+1} = span{p_0, ..., p_m}
20 method of conjugate directions (cont'd)
Definition 3.22 Let A ∈ ℝⁿˣⁿ; then vectors p_0, ..., p_m ∈ ℝⁿ are called pairwise conjugate or A-orthogonal if
(p_i, p_j)_A := (Ap_i, p_j)₂ = 0 for i, j = 0, ..., m and i ≠ j
applies.
let pairwise conjugate search directions p_0, ..., p_m ∈ ℝⁿ \ {0} be given and x_m be optimal w.r.t. U_m = span{p_0, ..., p_{m−1}}; then we get optimality of x_{m+1} = x_m + α_m p_m w.r.t. U_{m+1} if
0 = (b − Ax_{m+1}, p_j)₂ = (b − Ax_m, p_j)₂ − α_m (Ap_m, p_j)₂ for j = 0, ..., m
applies; for j < m both terms vanish (optimality of x_m and conjugacy of the p_j), for j = m the condition determines α_m
21 method of conjugate directions (cont'd)
for α_m we yield the representation α_m = (r_m, p_m)₂ / (Ap_m, p_m)₂ and, thus, obtain the method of conjugate directions

choose x_0 ∈ ℝⁿ
r_0 = b − Ax_0
for m = 0, 1, ..., n−1
  α_m = (r_m, p_m)₂ / (Ap_m, p_m)₂
  x_{m+1} = x_m + α_m p_m
  r_{m+1} = r_m − α_m Ap_m

if the search directions are chosen inappropriately, only x_n might yield the exact solution while even x_{n−1} still has a large error; in general, this method is used as a direct method with given search directions only
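A minimal sketch of the method of conjugate directions with the search directions given in advance: for a diagonal (hence SPD) matrix, the unit vectors are pairwise A-orthogonal, so the method is exact after n = 2 steps. The data are illustrative values, not from the slides:

```python
# Conjugate directions with the unit vectors as given search directions.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[2.0, 0.0], [0.0, 10.0]]
b = [2.0, 10.0]
p = [[1.0, 0.0], [0.0, 1.0]]            # pairwise conjugate: (Ap0, p1)_2 = 0
assert dot(matvec(A, p[0]), p[1]) == 0.0

x = [0.0, 0.0]
r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
for m in range(2):
    alpha = dot(r, p[m]) / dot(matvec(A, p[m]), p[m])
    x = [xi + alpha * pi for xi, pi in zip(x, p[m])]
    Ap = matvec(A, p[m])
    r = [ri - alpha * api for ri, api in zip(r, Ap)]

assert x == [1.0, 1.0] and r == [0.0, 0.0]   # exact after n = 2 steps
```

For a diagonal matrix this choice simply solves one component per step; the point of CG on the following slides is to generate equally good conjugate directions for general SPD matrices without knowing them in advance.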
22 CG: method of conjugate gradients
combination of the methods of steepest descent and conjugate directions in order to obtain a problem-oriented approach w.r.t. the selection of search directions and optimality w.r.t. the orthogonality of the search directions
with residual vectors r_0, ..., r_m we successively determine search directions for m = 0, ..., n−1 according to
p_0 = r_0 , p_m = r_m + Σ_{j=0}^{m−1} β_j p_j (3.2.5)
with β_j ∈ ℝ (j = 0, ..., m−1)
we achieve an analogous selection of search directions according to the method of steepest descent; hence, under consideration of the already used search directions p_0, ..., p_{m−1} ∈ ℝⁿ \ {0}, there exist m degrees of freedom in choosing the β_j to assure the search directions to be conjugate
23 CG: method of conjugate gradients (cont'd)
from the required A-orthogonality constraint follows
0 = (Ap_m, p_i)₂ = (Ar_m, p_i)₂ + β_i (Ap_i, p_i)₂ for i = 0, ..., m−1,
hence, with (Ap_j, p_i)₂ = 0 for i, j = 0, ..., m−1 and i ≠ j, we obtain the wanted algorithm to compute the coefficients
β_i = − (Ar_m, p_i)₂ / (Ap_i, p_i)₂ (3.2.6)
24 CG: method of conjugate gradients (cont'd)
thus we obtain the preliminary method of conjugate gradients

choose x_0 ∈ ℝⁿ
p_0 = r_0 = b − Ax_0
for m = 0, 1, ..., n−1
  α_m = (r_m, p_m)₂ / (Ap_m, p_m)₂
  x_{m+1} = x_m + α_m p_m
  r_{m+1} = r_m − α_m Ap_m
  p_{m+1} = r_{m+1} − Σ_{j=0}^{m} (Ar_{m+1}, p_j)₂ / (Ap_j, p_j)₂ · p_j
25 CG: method of conjugate gradients (cont'd)
problem: for the computation of p_{m+1} all p_j (j = 0, ..., m) are necessary due to
p_{m+1} = r_{m+1} − Σ_{j=0}^{m} (Ar_{m+1}, p_j)₂ / (Ap_j, p_j)₂ · p_j (3.2.7)
observation
a) p_m is conjugate to all p_j with 0 ≤ j < m due to (3.2.5) and (3.2.6)
b) r_m ⊥ U_m = span{r_0, ..., r_{m−1}} = span{p_0, ..., p_{m−1}}
c) r_m is conjugate to all p_j with 0 ≤ j ≤ m−2
for (c) applies p_j ∈ U_{m−1} for 0 ≤ j ≤ m−2, hence Ap_j ∈ U_m, and with A symmetric and (b) we get
(Ar_m, p_j)₂ = (r_m, Ap_j)₂ = 0
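Observations (b) and (c) can be checked numerically during a CG run: after every step, the new residual is orthogonal to all search directions used so far, and A-orthogonal to all but the most recent one. A sketch on an illustrative 3×3 SPD system (data and tolerances are not from the slides):

```python
# Check orthogonality properties of the CG residuals during the iteration.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

x = [0.0, 0.0, 0.0]
r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
p = list(r)
P = []                                   # search directions used so far
for m in range(3):
    P.append(list(p))
    Ap = matvec(A, p)
    alpha = dot(r, p) / dot(Ap, p)
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r = [ri - alpha * api for ri, api in zip(r, Ap)]
    # (b): the new residual is orthogonal to all previous search directions
    assert all(abs(dot(r, pj)) < 1e-9 for pj in P)
    # (c): the new residual is conjugate to p_0, ..., p_{m-1}
    Ar = matvec(A, r)
    assert all(abs(dot(Ar, pj)) < 1e-9 for pj in P[:-1])
    beta = dot(Ar, p) / dot(Ap, p)       # only the last direction contributes
    p = [ri - beta * pi for ri, pi in zip(r, p)]
```

Because of (c), all terms of the sum in (3.2.7) except the last vanish, which is exactly what the next slide exploits to arrive at the short two-term recurrence.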
26 CG: method of conjugate gradients (cont'd)
from (c) and (3.2.7) follows
p_m = r_m − Σ_{j=0}^{m−1} (Ar_m, p_j)₂ / (Ap_j, p_j)₂ · p_j = r_m − (Ar_m, p_{m−1})₂ / (Ap_{m−1}, p_{m−1})₂ · p_{m−1}
further, the method can stop in the (k+1)st iteration if p_k = 0 (or ‖p_k‖₂ ≈ 0), i.e. the solution x_k = A⁻¹b has been found: as from r_k = b − Ax_k follows x_k = A⁻¹b ⇔ r_k = 0, substituting r_k into the above equation for p_k yields the wanted result
finally, we obtain the method of conjugate gradients
27 CG: method of conjugate gradients (cont'd)

choose x_0 ∈ ℝⁿ
p_0 = r_0 = b − Ax_0
for m = 0, 1, ..., n−1
  if p_m = 0: STOP
  α_m = (r_m, p_m)₂ / (Ap_m, p_m)₂
  x_{m+1} = x_m + α_m p_m
  r_{m+1} = r_m − α_m Ap_m
  p_{m+1} = r_{m+1} − (Ar_{m+1}, p_m)₂ / (Ap_m, p_m)₂ · p_m
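The complete method above as a compact, runnable sketch; the stopping tolerance and the 3×3 SPD test system are illustrative values, not from the slides:

```python
# Method of conjugate gradients for an SPD system Ax = b.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cg(A, b, x0, tol=1e-12):
    n = len(b)
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    p = list(r)
    for m in range(n):
        Ap = matvec(A, p)
        pAp = dot(Ap, p)
        if pAp < tol:                    # p_m = 0 (numerically) -> STOP
            break
        alpha = dot(r, p) / pAp
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        Ar = matvec(A, r)
        beta = dot(Ar, p) / pAp
        p = [ri - beta * pi for ri, pi in zip(r, p)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cg(A, b, [0.0, 0.0, 0.0])
res = [bi - axi for bi, axi in zip(b, matvec(A, x))]
assert all(abs(ri) < 1e-8 for ri in res)   # solved after at most n = 3 steps
```

Note that, as announced in the remarks, one matrix-vector product per iteration can still be saved here: the common textbook variant computes α_m and β_m from ‖r_m‖₂² and ‖r_{m+1}‖₂² and thereby reuses the single product Ap_m.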
28 CG: method of conjugate gradients (cont'd)
remarks
- CG can be further improved to require only a single matrix-vector multiplication Ap_m per iteration
- in the case of regular (not necessarily SPD) matrices, there exist several variants (not to be discussed here):
  - ARNOLDI algorithm
  - LANCZOS algorithm
  - GMRES method (generalised minimal residual)
  - BiCG method (bi-conjugate gradient)
  - CGS method (conjugate gradient squared)
  - BiCGSTAB method (BiCG stabilised)
  - TFQMR method (transpose-free quasi-minimal residual)
More informationOverview. Motivation for the inner product. Question. Definition
Overview Last time we studied the evolution of a discrete linear dynamical system, and today we begin the final topic of the course (loosely speaking) Today we ll recall the definition and properties of
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationIterative Methods and Multigrid
Iterative Methods and Multigrid Part 3: Preconditioning 2 Eric de Sturler Preconditioning The general idea behind preconditioning is that convergence of some method for the linear system Ax = b can be
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More information7.2 Steepest Descent and Preconditioning
7.2 Steepest Descent and Preconditioning Descent methods are a broad class of iterative methods for finding solutions of the linear system Ax = b for symmetric positive definite matrix A R n n. Consider
More informationS-Step and Communication-Avoiding Iterative Methods
S-Step and Communication-Avoiding Iterative Methods Maxim Naumov NVIDIA, 270 San Tomas Expressway, Santa Clara, CA 95050 Abstract In this paper we make an overview of s-step Conjugate Gradient and develop
More informationKrylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17
Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationConjugate Gradient Method
Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three
More informationarxiv: v2 [math.na] 1 Sep 2016
The structure of the polynomials in preconditioned BiCG algorithms and the switching direction of preconditioned systems Shoji Itoh and Masaai Sugihara arxiv:163.175v2 [math.na] 1 Sep 216 Abstract We present
More informationJanuary 29, Non-linear conjugate gradient method(s): Fletcher Reeves Polak Ribière January 29, 2014 Hestenes Stiefel 1 / 13
Non-linear conjugate gradient method(s): Fletcher Reeves Polak Ribière Hestenes Stiefel January 29, 2014 Non-linear conjugate gradient method(s): Fletcher Reeves Polak Ribière January 29, 2014 Hestenes
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationChapter 4. Unconstrained optimization
Chapter 4. Unconstrained optimization Version: 28-10-2012 Material: (for details see) Chapter 11 in [FKS] (pp.251-276) A reference e.g. L.11.2 refers to the corresponding Lemma in the book [FKS] PDF-file
More informationOn the choice of abstract projection vectors for second level preconditioners
On the choice of abstract projection vectors for second level preconditioners C. Vuik 1, J.M. Tang 1, and R. Nabben 2 1 Delft University of Technology 2 Technische Universität Berlin Institut für Mathematik
More informationDefinition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition
6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition
More informationx 1 x 2. x 1, x 2,..., x n R. x n
WEEK In general terms, our aim in this first part of the course is to use vector space theory to study the geometry of Euclidean space A good knowledge of the subject matter of the Matrix Applications
More informationConjugate Gradient Method
Conjugate Gradient Method Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 10, 2011 T.M. Huang (NTNU) Conjugate Gradient Method October 10, 2011 1 / 36 Outline 1 Steepest
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationResearch Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems
Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite
More information