MATH 450 / 550 Homework 1: Due Wednesday February 18, 2015


Answer the following questions to the best of your ability. Solutions should be typed. Any plots or graphs should be included with the question (please include the questions in your typed solutions).

General Problems

1. Consider the centered difference discretization of

    -u'' + u' + u = 1 on (0, 1),   u(0) = u(1) = 0.

Solve this problem using the GMRES solver you have been given. Look at the error at each iteration of the GMRES algorithm. Implement your solution on a sequence of successively refined meshes. How does the performance of the algorithm depend on the mesh given? It will help to look at a semi-log plot of the residual.

The following listing reflects the code needed to solve the stated problem.

    import sys
    import os
    import math
    from numpy import *
    from pylab import *
    import matplotlib.pyplot as plt
    from KSPSolvers import *

    # Script solves: -u_xx + u_x + u = 1 with u(0) = u(1) = 0.
    m = 10           # Discretization parameter; set to each mesh size in the study.
    x0 = 0.          # Domain left end point.
    xmmo = 1.        # Domain right end point.
    h = (xmmo - x0)/(m - 1)
    print "h =", h

    # Initialize the needed vectors.
    A = zeros(m*m, float); A.shape = (m, m)
    u = zeros(m, float);   u.shape = (m, 1)
    b = zeros(m, float);   b.shape = (m, 1)
    x = zeros(m, float);   x.shape = (m, 1)

    # Populate the system matrix and RHS.

    A[0, 0] = 1.;     b[0] = 0.        # LHS boundary condition.
    A[m-1, m-1] = 1.; b[m-1] = 0.      # RHS boundary condition.
    x[0] = x0;  x[m-1] = xmmo          # Boundary values.

    for i in range(1, len(b) - 1):     # The main matrix system.
        A[i, i-1] = (-1./(2.*h)) + (-1./(h*h))
        A[i, i]   = 1. + (2./(h*h))
        A[i, i+1] = (1./(2.*h)) + (-1./(h*h))
        b[i] = 1.                      # Right-hand side.
        x[i] = x0 + i*h                # The problem domain.

    # Check the system.
    print "A =", A
    print "b =", b

    # Our GMRES solver -- no preconditioning.
    System = gmres(Matrix=A, RHS=b, x=u, Tol=1e-6, maxits=len(b))
    u, error, totalIters = System.solve()

    # Print out some information.
    #print "Solution:"
    #print "u =", u
    #print "Check ="
    #print A.dot(u)

    # Plot the final profile.
    plt.plot(x, u, 'k')
    #plt.show()
    FileName = "Probuhm" + str(m) + ".pdf"
    savefig(str(FileName))

    # Plot the residual history.
    plt.figure()
    plt.plot(error, 'k')
    #plt.show()
    FileName = "Proberrorm" + str(m) + ".pdf"
    savefig(str(FileName))

    # Plot the residual history on a semi-log scale.
    plt.figure()
    plt.semilogy(error[:], 'k')
    #plt.show()
    FileName = "Problogerrm" + str(m) + ".pdf"
    savefig(str(FileName))

    #print "error =", error
    print "iterations NO PC =", totalIters
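As a quick sanity check (this sketch is not part of the assignment listing; A, b, and u are the arrays computed above), the converged GMRES iterate can be compared against a direct solve:

    import numpy as np

    # Compare the converged GMRES iterate with a direct dense solve of A x = b.
    u_direct = np.linalg.solve(A, b)
    print("max |u_gmres - u_direct| =", np.abs(u - u_direct).max())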

[Figure: for each mesh size n, the final solution, the plain residual plot, and the log of the residual vs. iteration.]

Notice that as the mesh is refined the solution looks smoother. Also note that the size of the residual is reduced with each iteration. It takes roughly n (the size of the system) iterations for the algorithm to fully converge to the desired tolerance.
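To see this scaling concretely, the following stand-alone sketch counts iterations on a sequence of refined meshes. It uses scipy's gmres as a stand-in for the course's KSPSolvers class, and the mesh sizes 10, 20, 40, 80 are illustrative choices:

    import numpy as np
    from scipy.sparse.linalg import gmres

    def assemble(m):
        # Centered-difference system for -u'' + u' + u = 1 on (0, 1), u(0) = u(1) = 0.
        h = 1.0/(m - 1)
        A = np.zeros((m, m)); b = np.ones(m)
        A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0
        for i in range(1, m - 1):
            A[i, i-1] = -1.0/(2.0*h) - 1.0/h**2
            A[i, i]   =  1.0 + 2.0/h**2
            A[i, i+1] =  1.0/(2.0*h) - 1.0/h**2
        return A, b

    for m in (10, 20, 40, 80):
        A, b = assemble(m)
        resnorms = []   # one residual norm recorded per inner iteration
        gmres(A, b, restart=m, maxiter=m, callback=resnorms.append, callback_type='pr_norm')
        print("m =", m, " GMRES iterations:", len(resnorms))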

2. Determine all iterates x_k that are generated when GMRES is used to solve the linear system Ax = b, where

    A = | 0 0 0 0 1 |
        | 1 0 0 0 0 |
        | 0 1 0 0 0 |     and     b = (1, 0, 0, 0, 0)^T,
        | 0 0 1 0 0 |
        | 0 0 0 1 0 |

with initial guess x_0 = 0 in R^5. Can you generalize this example to n × n linear systems of any dimension n > 1 for which GMRES exhibits an analogous behavior? What happens when the initial guess is changed?

Consider the matrix system Ax = b with the A and b above. Solving the system by hand we obtain the vector x = (0, 0, 0, 0, 1)^T as the solution to the equation. If we apply the Matlab version of GMRES to this system with the command

    x = gmres(A, b)

Matlab returns the following output:

    gmres stopped without converging to the desired tolerance 1e-6 because the method stagnated.

Thus, the GMRES algorithm that is built into Matlab checks whether the residual was reduced during the GMRES iteration; if the residual was not reduced, the method is considered to be stagnating, and GMRES will not continue to progress toward a solution. The gmres algorithm given in class does not stop when the residual fails to be reduced after an iteration. Matlab found the solution in five iterations. We can examine this

solution by looking at the history of residual norms after each iteration. Starting out the residual is 1; thus the GMRES algorithm is started, and after each of the first four iterations we have a residual norm of 1. Then on the fifth iteration we see the first reduction in the residual norm, where it drops to zero.

Why does GMRES take so many iterations to achieve a solution? To get a good feel for this we can work through the GMRES algorithm from class by hand.

Set up: Here we use the starting initial guess, in our case x_0 = (0, 0, 0, 0, 0)^T, to find the starting residual r_0 = b - A x_0 = b. Several other quantities are also computed:

    ρ = ||r_0|| = 1,   β = ρ = 1,   and   k = 0.

Since our residual norm is not zero we need to find a search direction:

    v_1 = r_0/||r_0|| = r_0 = e_1.

Iteration 1: Compute a second search direction:

    v_2 = A v_1 = e_2.

For j = 1: h_{1,1} = v_1^T v_2 = 0, and since h_{1,1} = 0, v_2 remains unchanged. Next, h_{2,1} = ||v_2|| = 1. Thus, the Hessenberg matrix is currently

    H_1 = | h_{1,1} |   | 0 |
          | h_{2,1} | = | 1 |,

and again v_2 remains unchanged, since normalizing by h_{2,1} = 1 has no effect. We now minimize the value of the quantity

    || β e_1 - H_1 y ||,   where   e_1 = (1, 0)^T.

This quantity is minimized when y is the zero vector, and hence x_1 = x_0 and ρ = 1 still. Hence we progress to iteration 2 without a reduction in the residual.

Continuing through the algorithm, the same pattern repeats: each new search direction is v_{k+1} = A v_k = e_{k+1}, and the Hessenberg matrix H_k is the (k+1) × k matrix whose only nonzero entries are the ones on its subdiagonal, so its first row is identically zero and the least-squares problem min_y || β e_1 - H_k y || is again solved by y = 0. The sequence of search directions through the iterations is therefore

    v_k = e_k   for k = 1, ..., 5,

until at iteration 5 we get A v_5 = e_1 = v_1, so h_{1,5} = 1 and the Hessenberg matrix finally has a nonzero entry in its first row.

After each iteration we make no progress in the reduction of the residual until iteration 5. There we have a search direction that allows a nonzero y vector to be found. We should note that as we progress through the algorithm, the columns of the Hessenberg matrix re-form our original A matrix. It takes five iterations to find the solution because of the order in which the algorithm progresses through search directions: the search direction we need just happens to be the last orthogonal search direction left for the algorithm to look in. If we pass the algorithm a different initial guess, we see that the algorithm may take fewer iterations. Note that we could generalize this to a system of any size following the pattern in the given system matrix; given a zero vector as the initial guess, the algorithm will continue to stall until it reaches the nth iterate, at which point it obtains the solution it is searching for.
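The stagnation can also be reproduced numerically without any GMRES library. The sketch below (plain numpy; the variable names are ours) computes the optimal residual over each Krylov space K_k(A, b) directly by least squares and prints the residual norms 1, 1, 1, 1, 0:

    import numpy as np

    n = 5
    A = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: A e_k = e_{k+1}, A e_n = e_1
    b = np.zeros(n); b[0] = 1.0         # b = e_1

    # With x_0 = 0, the k-th GMRES iterate minimizes ||b - A x|| over
    # x in K_k(A, b) = span{b, A b, ..., A^{k-1} b}.
    K = b.reshape(n, 1)                 # Krylov basis, built up column by column
    for k in range(1, n + 1):
        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
        print("iteration", k, "residual norm =", np.linalg.norm(b - A @ (K @ y)))
        K = np.column_stack([K, A @ K[:, -1]])   # append A^k b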

Graduate Students

1. Consider preconditioning the system given in the first general homework problem. Specifically, use the preconditioning matrix M on your system matrix in Ax = b, where the preconditioning map M is defined by

    -(Mf)'' = f,   (Mf)(0) = (Mf)(1) = 0.

That is, precondition with a solver for the high-order term in the differential equation using the correct boundary conditions. Implementation of the system will then look like

    MAx = Mb.

Complete the study using the same meshes as suggested above. Note that here we use a second-order discretization of the second-derivative operator.

Following the work done in problem one above, it can be seen that we obtain a matrix that can be populated using the following loop:

    M = zeros(m*m, float); M.shape = (m, m)
    M[0, 0] = 1.;  M[m-1, m-1] = 1.    # Preconditioner boundary rows.
    for i in range(1, len(b) - 1):     # Preconditioner.
        M[i, i-1] = -1./(h*h)
        M[i, i]   = 2./(h*h)
        M[i, i+1] = -1./(h*h)

In order to get the full effect of this preconditioning we consider the inverse of this matrix as a preconditioner. This is implemented in the code as:

    # =======================================================================
    # Our GMRES solver with preconditioning.
    # =======================================================================
    SystemP = gmres(Matrix=linalg.inv(M).dot(A), RHS=linalg.inv(M).dot(b),
                    x=up, Tol=1e-6, maxits=len(b))
    up, errorp, totalItersp = SystemP.solve()

The following table lists the number of iterations needed by the GMRES algorithm to achieve the required residual tolerance.

    Mesh       n =      n =      n =      n =
    No PC        9        9
    With PC

Note that with the preconditioning the algorithm converges in the same number of iterations each time for the meshes considered, in contrast to the unpreconditioned case, where the number of steps increases proportionally to the size of the system solved. The following figure showcases the final approximated solution on each of the different meshes considered, as well as the different ways of observing the size of the residual as the algorithm progresses. Note that the size of the residual is always going down.
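Forming linalg.inv(M) explicitly is workable for these small dense test systems, but a cheaper and numerically safer sketch of the same left preconditioning solves with M instead (assuming the M, A, b, up, and gmres class from the listings above):

    import numpy as np

    # Apply M^{-1} by solving with M rather than forming the explicit inverse.
    MA = np.linalg.solve(M, A)   # M^{-1} A (column-by-column solve)
    Mb = np.linalg.solve(M, b)   # M^{-1} b
    SystemP = gmres(Matrix=MA, RHS=Mb, x=up, Tol=1e-6, maxits=len(b))
    up, errorp, totalItersp = SystemP.solve()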

[Figure: for each mesh size n, the final solution, the plain residual plot, and the log of the residual vs. iteration, with preconditioning.]

In all cases it can be seen that the preconditioned runs achieve the required residual tolerance for algorithm termination much faster than when GMRES is run without the preconditioner.
