ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 1 : INTRODUCTION


Heinrich Voss (voss@tu-harburg.de)
Hamburg University of Technology, Institute of Numerical Simulation
TUHH, Summer School 2006

Introduction

Two fundamental tasks in numerical computing are the solution of a linear system of equations Ax = b, where A ∈ R^{n×n} and b ∈ R^n are given, and the solution of linear eigenvalue problems Ax = λx or Ax = λBx. Such systems arise very frequently from finite element, finite volume, or finite difference approximations to boundary value problems, and the dimension n of the problem can be very large; n = 10^5 to n = 10^6 is not unusual.

Example

Consider the boundary value problem

  −Δu = f  in Ω := {(x, y, z)^T ∈ R^3 : 0 ≤ x, y, z ≤ 1},
   u = g  on ∂Ω.

If we discretize the differential equation employing central differences,

  −Δu(x, y, z) ≈ (1/h^2) (6u(x, y, z) − u(x + h, y, z) − u(x − h, y, z) − u(x, y + h, z) − u(x, y − h, z) − u(x, y, z + h) − u(x, y, z − h)),

then we obtain for h := 1/(m + 1) the following system of linear equations.

Discretization by finite differences

  6U_{i,j,k} − U_{i−1,j,k} − U_{i+1,j,k} − U_{i,j−1,k} − U_{i,j+1,k} − U_{i,j,k−1} − U_{i,j,k+1} = h^2 f(ih, jh, kh),  i, j, k = 1, …, m,  (1)

where U_{i,j,k} denotes an approximation to u(ih, jh, kh) for i, j, k = 1, …, m, and U_{i,j,k} = g(ih, jh, kh) if i ∈ {0, m + 1} or j ∈ {0, m + 1} or k ∈ {0, m + 1}. For the moderate step size h = 1/101 the dimension of the system is already n = 10^6. Notice, however, that the matrix of the system is sparse, i.e. most of its entries are 0.
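As a concrete illustration (my own sketch, not part of the slides), the coefficient matrix of system (1) can be assembled for homogeneous Dirichlet data as a Kronecker sum of one-dimensional second-difference matrices, and its sparsity is easy to verify:

```python
import numpy as np
import scipy.sparse as sp

def poisson3d(m):
    """Assemble the m^3 x m^3 coefficient matrix of (1) as a Kronecker sum
    of 1D second-difference matrices (homogeneous Dirichlet conditions)."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))  # tridiag(-1, 2, -1)
    I = sp.identity(m)
    # T(x)I(x)I + I(x)T(x)I + I(x)I(x)T; the three diagonals add up to 6
    A = sp.kron(sp.kron(T, I), I) + sp.kron(I, sp.kron(T, I)) + sp.kron(I, sp.kron(I, T))
    return A.tocsr()

A = poisson3d(10)        # a 1000 x 1000 system
print(A.shape, A.nnz)    # every row holds at most 7 nonzero entries
```

For m = 100 the dense matrix would have 10^12 entries (about 8 terabytes in double precision), while the sparse format stores only the roughly 7·10^6 nonzeros.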

Solution of large linear systems

There are essentially two approaches to the numerical solution of large and sparse linear systems.

One is to adapt direct methods, such as Gaussian elimination (which obtain the solution of the system in a finite number of operations), to exploit the sparsity structure of the system matrix A. Typical adaptation strategies involve an intelligent choice of the numbering of the unknowns (to minimize the bandwidth of the matrix or to obtain special data structures) or special pivoting rules to minimize fill-in during the elimination process. We are not going to study these methods in this course.

Most of the methods in use today for large and sparse linear systems are iterative. These methods try to improve an initial approximation x_0 of the solution by generating a sequence of approximate solutions x_k, k = 1, 2, …, of (hopefully) increasing accuracy.
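The simplest concrete instance of such a stepwise improvement (a generic sketch for illustration, not a method advocated later in the course) is the Jacobi iteration, which solves the i-th equation for the i-th unknown using the current values of the other unknowns:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=500):
    """Improve x0 by the Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k),
    stopping when the relative residual drops below tol."""
    D = np.diag(A)              # diagonal of A (assumed nonzero)
    R = A - np.diag(D)          # off-diagonal part
    x = x0.copy()
    for k in range(maxit):
        x = (b - R @ x) / D
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k + 1     # converged after k + 1 sweeps
    return x, maxit

# strictly diagonally dominant example, so the iteration converges
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, its = jacobi(A, b, np.zeros(3))
```

Each sweep costs one matrix-vector product, so for a sparse A the work per step is proportional to the number of nonzeros, which is exactly the cost regime iterative methods aim for.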

Advantages of iterative solvers over direct methods are the following:

The system matrix A is not needed in explicit form, but only a procedure that computes the matrix-vector product Ax for a given vector x ∈ R^n. This is especially advantageous in finite element applications, where the product Ax can be evaluated elementwise.

Prior information about the solution, which is usually at hand in engineering applications, can be introduced into the solution process via the initial approximation x_0.

Usually, the linear system under consideration is the discrete version of a system with an infinite number of degrees of freedom (which itself is only a model of the real system). Hence it does not make sense to compute the solution of the linear system to higher accuracy than the discretization error. Iterative methods can be terminated as soon as the desired accuracy is reached; hence they require only those resources (time and memory) which are necessary to obtain the wanted accuracy.
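The first advantage can be demonstrated with a matrix-free solve (a sketch assuming scipy is available; the conjugate gradient routine used here is treated as a black box until Chapter 4). The solver is handed only a callback that applies the 1D second-difference operator, never the matrix itself:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 100

def apply_A(x):
    """Apply tridiag(-1, 2, -1) to x without ever forming the matrix."""
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

A = LinearOperator((n, n), matvec=apply_A)  # only the product Ax is supplied
b = np.ones(n)
x, info = cg(A, b)                          # info == 0 signals convergence
```

This is the finite element situation in miniature: the action of A is computed from local (stencil or element) contributions, and the global matrix never has to be stored.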

direct vs. iterative solution [two slides of comparison figures, not reproduced in the transcription]

Historical remark

The idea of solving linear systems by iterative methods dates back at least to Carl Friedrich Gauß roughly 180 years ago. The method of least squares yielded systems which were too large to be handled by elimination methods with only pencil and paper. In his paper Supplementum theoriae combinationis observationum erroribus minimae obnoxiae (1819-1822), Gauß described an iteration method which today is called the block Gauß-Seidel method. He was so fascinated by this method that he wrote in a letter to Gerling in 1823:

"Ich empfehle Ihnen diesen Modus zur Nachahmung. Schwerlich werden Sie je wieder direkt eliminieren, wenigstens nicht, wenn Sie mehr als 2 Unbekannte haben. Das indirekte Verfahren lässt sich halb im Schlafe ausführen, oder man kann während desselben an andere Dinge denken."

In English: "I recommend this method to you for imitation. You will hardly ever eliminate directly again, at least not when you have more than two unknowns. The indirect procedure can be carried out half asleep, or you can think about other things while doing it."

Historical remark ct.

Methods quite similar to that of Gauß were described by Carl Gustav Jacobi in 1845 and by Philipp Ludwig Seidel in 1874. Since the emergence of digital computers, the dimension of systems that can be treated numerically has grown enormously. It turned out that the convergence of the classical methods is far too slow for discrete versions of boundary value problems for partial differential equations. A substantial improvement was achieved at the end of the 1940s by relaxation methods, which were introduced by Southwell and Young.

In 1952 Hestenes and Stiefel independently developed the conjugate gradient method which, in exact arithmetic, is a direct method (i.e. it generates the exact solution in a finite and predetermined number of operations). It was soon realized that round-off errors destroy this finite convergence property and that the method is not an efficient alternative to factorization for dense systems.

Historical remark ct.

Stiefel, Rutishauser, and others already discussed the iterative aspects of the conjugate gradient method for discrete versions of boundary value problems at the end of the 1950s. A revival of the method as an iterative procedure was initiated by the work of Reid at the beginning of the 1970s, who observed that for certain large problems sufficiently good approximations were obtained in far fewer steps than are needed to get the exact solution.

Today, together with its numerous variants, it is the standard method for the numerical treatment of large and sparse linear systems, especially when preconditioning is used. In the first half of this summer school we survey this type of iterative methods for linear systems.

Eigenvalue problems

Generalizations of the conjugate gradient method are iterative projection methods, where the system under consideration is projected onto a space of small dimension, called the search space. If one is not yet satisfied with the accuracy of the approximate solution, the search space is expanded by some vector, and this expansion is repeated until (hopefully) convergence.

Solvers for sparse eigenvalue problems follow the same lines, and they have many properties in common with solvers for linear systems. Two types of iterative projection methods for eigenproblems are considered in this summer school: methods which project the underlying problem onto a sequence of Krylov spaces, and methods where the search space is expanded by a direction which has high approximation potential for the eigenvector wanted next.
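The projection idea can be sketched in a few lines (my own illustration using an orthonormalized Krylov basis; the methods developed in the course are more refined): project A onto the search space spanned by V, solve the small projected eigenproblem, and lift the results back as Ritz pairs:

```python
import numpy as np

def rayleigh_ritz_krylov(A, v0, k):
    """Build an orthonormal basis V of the Krylov space
    span{v0, A v0, ..., A^{k-1} v0} and return the Ritz pairs of the
    symmetric matrix A with respect to this search space."""
    n = v0.shape[0]
    V = np.zeros((n, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for _ in range(2):                    # Gram-Schmidt, repeated for stability
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    H = V.T @ A @ V                           # small projected problem (k x k)
    theta, S = np.linalg.eigh(H)              # Ritz values (ascending) ...
    return theta, V @ S                       # ... and Ritz vectors

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2                             # symmetric test matrix
v0 = rng.standard_normal(50)
theta, X = rayleigh_ritz_krylov(A, v0, 15)    # theta[-1] approximates lambda_max(A)
```

Expanding the search space by one vector per step and re-solving the small projected problem is exactly the pattern shared by the Krylov and Jacobi-Davidson type methods discussed later.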

Nonlinear eigenvalue problems

In the last section of the summer school we consider generalizations of iterative projection methods to nonlinear eigenvalue problems: if T(λ) ∈ C^{n×n}, λ ∈ D ⊂ C, is a family of matrices, then λ ∈ D is an eigenvalue and x ≠ 0 a corresponding eigenvector if T(λ)x = 0. Problems of this type arise, for example, in vibrations of conservative gyroscopic systems, damped vibrations of structures, problems with retarded argument, lateral buckling problems, fluid-solid vibrations, quantum dot heterostructures, and sandwich plates.
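As a small concrete instance (with toy matrices of my own choosing), the quadratic family T(λ) = λ²M + λC + K that models a damped vibration can be reduced to a linear eigenproblem of doubled dimension by companion linearization:

```python
import numpy as np

def qep_eigenvalues(M, C, K):
    """Eigenvalues of T(lam) = lam^2 M + lam C + K via companion
    linearization: L [x; lam x] = lam [x; lam x] with
    L = [[0, I], [-M^{-1} K, -M^{-1} C]]  (M assumed invertible)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    L = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(L)           # the 2n eigenvalues of the QEP

M = np.eye(2)                             # mass
C = np.array([[0.3, 0.0], [0.0, 0.1]])    # light damping
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness
lams = qep_eigenvalues(M, C, K)           # complex-conjugate pairs near the imaginary axis
```

Linearization is viable for small dense problems; for large sparse T(λ), the iterative projection methods considered in the last section work with T(λ) directly instead of doubling the dimension.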

References

Z. Bai, J. Demmel, J. Dongarra, A. Ruhe & H.A. van der Vorst (eds.): Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000.

R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo & H.A. van der Vorst: Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods. SIAM, Philadelphia, 1994.

G.H. Golub & C.F. Van Loan: Matrix Computations (3rd ed.). The Johns Hopkins University Press, Baltimore, 1996.

A. Greenbaum: Iterative Methods for Solving Linear Systems. SIAM, Philadelphia, 1997.

B.N. Parlett: The Symmetric Eigenvalue Problem. SIAM, Philadelphia, 1998.

References ct.

Y. Saad: Numerical Methods for Large Eigenvalue Problems. John Wiley & Sons, New York, 1992.

Y. Saad: Iterative Methods for Sparse Linear Systems (2nd ed.). SIAM, Philadelphia, 2003. First edition available from http://www-users.cs.umn.edu/~saad/books.html

H.A. van der Vorst: Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge, 2003.