Technische Universität München, Fakultät für Mathematik
Prof. Dr. M. Mehl, B. Gatzhammer
WT 2012/13, January 2013

Numerical Programming I (for CSE)
Tutorial 1: Iterative Methods

1) Relaxation Methods

a) Let C ∈ R^(n×n) be a diagonalizable matrix. Show: ρ(C) < 1 ⇔ lim_{i→∞} C^i = 0.
   Hint: Write down the powers of C by using its diagonalized representation.

b) Consider the SOR method, i.e., the Gauss-Seidel method with relaxation,

      x^(i+1) = x^(i) + α y^(i),

   where y^(i) is the update of the Gauss-Seidel iteration. Prove that α ∈ (0, 2) is a necessary (but not sufficient) condition for convergence.
   Hint: Use the determinant det(C) of the iteration matrix C of the SOR method and consider the relation between the eigenvalues λ_i of C and det(C). An intermediate result is det(C) = (1 - α)^n.

c) For which of the following matrices do the Jacobi method and the Gauss-Seidel method converge? (Entries marked with a dot could not be recovered from the source.)

      (i)  [ 3  1  1.5 ]      (ii) [ .  .  . ]      (iii) [ 1  2  -2 ]
           [ 1  3  1   ]           [ 0  1  1 ]            [ 1  1   1 ]
           [ .  1  3.5 ]           [ 0  0  . ]            [ 2  2   1 ]

Solution:

a) If C is diagonalizable, we can represent it as C = S^(-1) D S, where D is a diagonal matrix with the eigenvalues of C and S is a matrix made up of the eigenvectors of C. Computing C^i, we get

      C^i = (S^(-1) D S)^i = S^(-1) D S · S^(-1) D S · … · S^(-1) D S = S^(-1) D D … D S = S^(-1) D^i S.

   Since ρ(C) < 1, we have |λ_j| < 1 for all j, and therefore D^i = diag(λ_1^i, …, λ_n^i) → 0 for i → ∞. Thus,

      lim_{i→∞} C^i = lim_{i→∞} S^(-1) D^i S = S^(-1) · 0 · S = 0.

   Remark: This can also be shown for an arbitrary matrix C. In the general case, one has to prove for the Jordan matrix J that J^i → 0 for i → ∞.
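As a quick numerical illustration of a) (not part of the original worksheet), the following Matlab snippet takes an arbitrary matrix C with ρ(C) < 1 and prints the norms of its powers:

    % Illustration of 1a): for a matrix C with rho(C) < 1, the powers C^i decay to 0.
    % C is an arbitrary example; its row sums are below one, so rho(C) <= ||C||_inf < 1.
    C = [0.5 0.2 0.0;
         0.1 0.4 0.3;
         0.0 0.2 0.6];
    fprintf('rho(C) = %f\n', max(abs(eig(C))));
    for i = [1 5 10 50]
        fprintf('||C^%2d||_2 = %e\n', i, norm(C^i));
    end

The printed norms drop geometrically; for a matrix with ρ(C) ≥ 1 they would not tend to zero, in line with the equivalence shown in a).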

b) With M = (1/α) D_A + L_A (where A = D_A + L_A + U_A is the splitting of A into its diagonal, strictly lower and strictly upper triangular parts), we get

      det(C) = det(-M^(-1) (A - M)) = det(A - M) / det(-M)
             = det((1 - 1/α) D_A + U_A) / det(-(1/α) D_A - L_A)        (both factors are triangular matrices)
             = det((1 - 1/α) D_A) / det(-(1/α) D_A)
             = ((1 - 1/α)^n det(D_A)) / ((-1/α)^n det(D_A))
             = (-α (1 - 1/α))^n = (1 - α)^n.

   Thus we have: α ∉ (0, 2) ⇒ |det(C)| = |1 - α|^n ≥ 1 ⇒ ∃ i: |λ_i| ≥ 1 (since det(C) = λ_1 ⋯ λ_n) ⇒ (by a)) C^i does not tend to 0 for i → ∞ ⇒ the error x - x^(i) = C^i (x - x^(0)) does not tend to 0 for every initial guess, i.e., no convergence.

c) A matrix A is strictly diagonally dominant iff |a_ii| > Σ_{j≠i} |a_ij| for all i.

                Jacobi    Gauss-Seidel
      (i)         +            +          A is strictly diagonally dominant!
      (ii)        +            +          Consider the spectral radius of the iteration matrix!
      (iii)       +            -          Consider the spectral radius of the iteration matrix!

   (ii) Jacobi: M = D_A; since the strictly lower triangular part of (ii) vanishes (L_A = 0), the iteration matrix C = -M^(-1)(A - M) = -D_A^(-1) U_A is strictly upper triangular. Hence all eigenvalues of C are zero, λ_{1,2,3} = 0, so ρ(C) = 0 < 1 ⇒ convergence.

   (ii) Gauss-Seidel: M = D_A + L_A = D_A, i.e., the iteration is equivalent to the Jacobi iteration ⇒ convergence.

   Attention: Positive eigenvalues of A are not sufficient for convergence of Gauss-Seidel (or Jacobi), because in general positive eigenvalues do not imply positive definiteness. But for symmetric matrices it holds: symmetric + positive eigenvalues ⇒ positive definite. (Recall that for symmetric positive definite A the Gauss-Seidel method is guaranteed to converge.)
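Before case (iii) is worked out by hand below, the results of b) and c) can also be checked numerically. The following Matlab sketch (an illustration, not part of the worksheet; variable names are chosen here) assembles the Jacobi, Gauss-Seidel and SOR iteration matrices for a test matrix A, here matrix (iii) from c), compares det(C_SOR) with (1 - α)^n and prints the spectral radii:

    % Check of det(C_SOR) = (1 - alpha)^n (part b) and of the spectral radii
    % used in part c). A is matrix (iii); the value of alpha is arbitrary.
    A = [1 2 -2; 1 1 1; 2 2 1];
    n = size(A, 1);
    D = diag(diag(A));                 % diagonal part D_A
    L = tril(A, -1);                   % strictly lower triangular part L_A
    alpha = 1.5;                       % some relaxation parameter

    M_jac = D;                         % Jacobi splitting matrix
    M_gs  = D + L;                     % Gauss-Seidel splitting matrix
    M_sor = D / alpha + L;             % SOR splitting matrix

    C_jac = -(M_jac \ (A - M_jac));    % iteration matrices C = -M^(-1)(A - M)
    C_gs  = -(M_gs  \ (A - M_gs));
    C_sor = -(M_sor \ (A - M_sor));

    fprintf('det(C_sor) = %g,  (1 - alpha)^n = %g\n', det(C_sor), (1 - alpha)^n);
    fprintf('rho(Jacobi) = %g,  rho(Gauss-Seidel) = %g\n', ...
            max(abs(eig(C_jac))), max(abs(eig(C_gs))));

For this A the spectral radii reproduce, up to round-off, the values derived by hand below: 0 for Jacobi and 2 for Gauss-Seidel.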

   (iii) Jacobi: M = D_A = I, and

      C = -M^(-1)(A - M) = I - A = [  0  -2   2 ]
                                   [ -1   0  -1 ]
                                   [ -2  -2   0 ]

   From the characteristic polynomial det(C - λI) = -λ³ = 0 it follows that all eigenvalues of C are zero, λ_{1,2,3} = 0, so ρ(C) = 0 < 1 ⇒ convergence.

   (iii) Gauss-Seidel:

      M = D_A + L_A = [ 1  0  0 ]        C = -M^(-1)(A - M) = [ 0  -2   2 ]
                      [ 1  1  0 ],                            [ 0   2  -3 ]
                      [ 2  2  1 ]                             [ 0   0   2 ]

   Eigenvalues of C: λ_1 = 0, λ_{2,3} = 2 ⇒ ρ(C) = 2 ≥ 1 ⇒ no convergence.

2) Multilevel Methods

Given the one-dimensional stationary heat equation

      - ∂²T(x)/∂x² = s(x),                                   (1)

with x the spatial variable, T the temperature, and s a source term. We look for a solution T(x) of equation (1) in the spatial domain Ω = [0, π], with boundary conditions T(0) = T(π) = 0 and the source term s(x) = sin(x). To solve this problem numerically, a central finite difference scheme is applied, transforming equation (1) into the discrete algebraic form

      (-T_{i+1} + 2 T_i - T_{i-1}) / h² = s_i.               (2)

The subindex i = 0, …, n+1 denotes the discrete position x_i = i·h, where h = π/(n + 1) is the discretization interval and n is the number of internal grid points, as shown in Figure 1. The discrete unknowns T_i are located at the internal nodes; boundary values are prescribed, T_0 = T_{n+1} = 0. In the following, you will perform one coarse-grid correction of the fine-grid solution.

[Figure 1: Spatial grid of the discretized heat equation; internal nodes T_1, …, T_i = T(x_i), …, T_n with spacing h along the x axis from 0 to π ≈ 3.14.]

a) For n_f = 7, set up the fine-level system of equations A_f x_f = b_f for the discrete problem given by equation (2), the boundary conditions, and the source term. How can the resulting system matrix be classified?

b) We do not aim at using the matrix in explicit form, but at evaluating the equations individually per unknown T_i, i.e., node-wise, in an iterative manner. What advantages does this have? Using a Gauss-Seidel iterative scheme, write down the equation for updating one unknown, using the indices k and k+1 to denote the old and new iteration states of the involved variables.

c) Implement the Gauss-Seidel method for the given problem. Perform 2 iterations on the fine grid (so-called pre-smoothing iterations), starting from the initial guess T_i^0 = 0 for all i. For comparison, compute the numerical solution of the fine-grid system using Matlab's built-in direct equation solver and also derive the analytical solution of equation (1). Plot and explain the differences between the three solutions.
   Hint: The analytical solution is a simple trigonometric function. The direct solver solution is very close to it, while the smoothed solution (after two iterations) is about a quarter in magnitude of the others.

d) To obtain a coarse-grid correction for the fine-grid approximation computed by the Gauss-Seidel method, we need the residual r_f of the fine-grid approximation. Derive and implement a node-wise evaluation of the residual. After computing the residual, set up a coarser mesh with h_c = 2 h_f and restrict the computed residual to the coarse mesh by injection, i.e., the coarse-grid residual values are set by copying the values from the coinciding fine-grid nodes.
   Hint: The residuals should be approximately as large as the expected errors for this example.

e) On the coarse level, we now want to solve for the error, A_c e_c = r_c, to later use the coarse-grid error as a correction on the fine-grid level. Set up the coarse-grid equation system, assuming zero error at the boundaries. Solve the system using Matlab's built-in direct solver.

f) Prolongate the coarse-grid error e_c up to the fine grid, e_f, by linear interpolation, i.e., coinciding nodes get the same value and intermediate fine-grid nodes get the average of the left and right coarse-grid node values (remember the zero boundaries on the coarse grid). Correct the smoothed solution by x_f = x_f + e_f. Plot and compare the corrected solution to the other solutions obtained in c).
   Hint: After the correction, the solution should be at approximately 80% of the direct solution.

g) Perform 2 post-smoothing iterations with the Gauss-Seidel method implemented in c). Why do you think post-smoothing is useful? Plot and compare the post-smoothed solution to the other solutions obtained in c) and f).

Solution:

a) With h = π/8 and 1/h² = 64/π², the fine-level system A_f x_f = b_f reads

      64/π² · [  2 -1  0  0  0  0  0 ]   [ T_1 ]   [ sin(1π/8) ]
              [ -1  2 -1  0  0  0  0 ]   [ T_2 ]   [ sin(2π/8) ]
              [  0 -1  2 -1  0  0  0 ]   [ T_3 ]   [ sin(3π/8) ]
              [  0  0 -1  2 -1  0  0 ] · [ T_4 ] = [ sin(4π/8) ] = b_f.
              [  0  0  0 -1  2 -1  0 ]   [ T_5 ]   [ sin(5π/8) ]
              [  0  0  0  0 -1  2 -1 ]   [ T_6 ]   [ sin(6π/8) ]
              [  0  0  0  0  0 -1  2 ]   [ T_7 ]   [ sin(7π/8) ]

   The system matrix A_f is a tri-diagonal matrix, i.e., a band matrix with one main, one upper, and one lower diagonal. It is also sparse, i.e., most entries are zero, and symmetric.

b) The advantage of not setting up an explicit matrix is that no storage is needed for it. This allows for efficient dynamic grid adaptivity (e.g., grid changes and refinements) and possibly enables solving for a larger number of unknowns.

   Gauss-Seidel update rule:

      (-T_{i+1}^k + 2 T_i^{k+1} - T_{i-1}^{k+1}) / h² = s_i
      ⇒ T_i^{k+1} = (h² s_i + T_{i-1}^{k+1} + T_{i+1}^k) / 2 = (h² sin(x_i) + T_{i-1}^{k+1} + T_{i+1}^k) / 2.

   (A minimal Matlab sketch of parts a) to c) is given below.)
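The Matlab codes referred to in the solutions of c)-g) are not contained in this transcription. One possible realisation of the fine-grid part (setup of A_f and b_f, two node-wise Gauss-Seidel pre-smoothing sweeps, direct solve and comparison plot; all variable names are chosen here for illustration) is:

    % Fine-grid setup (n_f = 7) and two Gauss-Seidel pre-smoothing sweeps.
    nf = 7;                              % number of internal fine-grid nodes
    h  = pi / (nf + 1);                  % mesh width h_f
    x  = (1:nf)' * h;                    % internal node positions x_i = i*h
    b  = sin(x);                         % source term s(x_i)

    % Explicit matrix, used here only for the direct reference solution.
    Af = (1/h^2) * (2*eye(nf) - diag(ones(nf-1,1), 1) - diag(ones(nf-1,1), -1));
    T_direct = Af \ b;                   % Matlab's built-in direct solver

    % Matrix-free Gauss-Seidel: T_i <- (h^2*s_i + T_{i-1} + T_{i+1}) / 2,
    % with the boundary values T_0 = T_{n+1} = 0 kept in a padded vector.
    T = zeros(nf + 2, 1);                % initial guess T_i = 0, plus boundaries
    for sweep = 1:2                      % two pre-smoothing iterations
        for i = 2:nf+1                   % loop over the internal nodes
            T(i) = 0.5 * (h^2 * sin((i-1)*h) + T(i-1) + T(i+1));
        end
    end
    T_smooth = T(2:nf+1);

    plot(x, sin(x), 'k-', x, T_direct, 'bo-', x, T_smooth, 'rs-');
    legend('analytical T(x) = sin(x)', 'direct solver', 'after 2 GS sweeps');

According to the hint in c), the smoothed curve should appear at roughly a quarter of the magnitude of the other two.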

c) Show and execute the Matlab codes.

   Analytical solution: the exact solution of the original continuous problem (here T(x) = sin(x)).
   Direct solver solution: the numerical solution of the discretized problem, thus including a discretization and a rounding error.
   Smoothed solution: only the initial guess with smoothed error. The smoothing makes it possible to represent the residual on a coarser grid, which is necessary for the coarse-grid correction.

d) Residual in matrix notation: r_f = b_f - A_f x_f. Component-wise (omitting the subindex f):

      r_i = b_i - Σ_{j=1}^{n} a_ij x_j = sin(x_i) + (1/h²) (T_{i-1} - 2 T_i + T_{i+1}).

   Number of nodes on the coarse level:

      n_c = π/h_c - 1 = π/(2 h_f) - 1 = π/(2π/(n_f + 1)) - 1 = (n_f + 1)/2 - 1 = (7 + 1)/2 - 1 = 3.

   Injection of the residuals (draw a sketch of the injection):

      r_c = (r_{f,2}, r_{f,4}, r_{f,6})^T.

e) Coarse-grid correction system (with h_c = π/4, i.e., 1/h_c² = 16/π²):

      A_c e_c = 16/π² · [  2 -1  0 ]   [ e_{c,1} ]
                        [ -1  2 -1 ] · [ e_{c,2} ] = r_c = (r_{f,2}, r_{f,4}, r_{f,6})^T.
                        [  0 -1  2 ]   [ e_{c,3} ]

   Execute the Matlab codes.

f) Prolongation (draw a sketch of the linear interpolation):

      e_f = ( 0.5 e_{c,1},  e_{c,1},  0.5 (e_{c,1} + e_{c,2}),  e_{c,2},  0.5 (e_{c,2} + e_{c,3}),  e_{c,3},  0.5 e_{c,3} )^T.

   Execute the Matlab codes.

g) Execute the Matlab codes. Post-smoothing removes high-frequency errors potentially introduced by the prolongation.
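Continuing the sketch above, the remaining steps d)-g) (residual, injection, coarse-grid solve, prolongation, correction, post-smoothing) could be realised as follows. This is again only an illustrative sketch, reusing nf, h, Af, b and the padded vector T from the previous code and the sign convention r_f = b_f - A_f x_f with correction x_f = x_f + e_f:

    % d) fine-grid residual and restriction to the coarse grid by injection
    r  = b - Af * T(2:nf+1);             % r_f = b_f - A_f x_f
    nc = (nf + 1) / 2 - 1;               % 3 coarse-grid nodes for nf = 7
    hc = 2 * h;                          % coarse mesh width h_c = 2 h_f
    rc = r(2:2:nf);                      % injection: copy values at coinciding nodes

    % e) coarse-grid error equation A_c e_c = r_c, solved directly
    Ac = (1/hc^2) * (2*eye(nc) - diag(ones(nc-1,1), 1) - diag(ones(nc-1,1), -1));
    ec = Ac \ rc;

    % f) prolongation by linear interpolation (zero error on the coarse boundaries)
    ec_pad = [0; ec; 0];
    ef = zeros(nf, 1);
    ef(2:2:nf) = ec;                                        % coinciding nodes
    ef(1:2:nf) = 0.5 * (ec_pad(1:nc+1) + ec_pad(2:nc+2));   % averaged in between
    T(2:nf+1) = T(2:nf+1) + ef;                             % coarse-grid correction

    % g) two post-smoothing Gauss-Seidel sweeps (same kernel as before)
    for sweep = 1:2
        for i = 2:nf+1
            T(i) = 0.5 * (h^2 * sin((i-1)*h) + T(i-1) + T(i+1));
        end
    end
    T_final = T(2:nf+1);                 % compare with T_direct and sin(x)

The corrected and post-smoothed solution T_final can then be plotted against T_direct and the analytical sin(x), as asked for in f) and g).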