HOMEWORK 10 SOLUTIONS

MATH 170A

Problem 0.1. Watkins 8.3.10

Solution. The $k$-th error is $e^{(k)} = G^k e^{(0)}$. As discussed before, this means that $\|e^{(k+j)}\|/\|e^{(j)}\| \approx \rho(G)^k$, i.e., over $k$ further steps the norm of the error is multiplied by approximately $\rho(G)^k$. Thus, I want $\rho(G)^k = 1/e$. Solving this equation for $k$, the number of iterations, I get $k \approx -1/\ln(\rho(G)) = 1/R_\infty(G)$. Thus it takes about $1/R_\infty(G)$ iterations to reduce the error by a factor of $e$.

Problem 0.2. Watkins 8.3.12

Solution. We assume that $\rho(G_{GS}) = \rho(G_J)^2$. Then
$$R_\infty(G_{GS}) = -\ln(\rho(G_{GS})) = -\ln(\rho(G_J)^2) = -2\ln(\rho(G_J)) = 2R_\infty(G_J),$$
as desired.

Problem 0.3. Watkins 8.3.17a

Solution. For $h = 1/10$, $\rho(G) = \cos(\pi/10) \approx 0.951$ and $R_\infty(G) = -\ln(\cos(\pi/10)) \approx 0.0502$. Since the error after $k$ steps is multiplied by about $\rho(G)^k$, I want $\rho(G)^k = 1/100$. Thus I want $k = \ln(100)/R_\infty(G) \approx 91.8$, so it takes about 92 iterations to reduce my error by a factor of 100.

Problem 0.4. Watkins 8.3.19a,b

Solution. (a) If the eigenvalues of $A$ are $\lambda_1, \dots, \lambda_n$, then the eigenvalues of $I - \omega A$ are $1 - \omega\lambda_1, \dots, 1 - \omega\lambda_n$. (This is similar to what we talked about for the shift-and-invert method for finding eigenvalues.) Since all the $\lambda_i > 0$ and $\omega > 0$, clearly $1 - \omega\lambda_i < 1$, as desired.

(b) For Richardson's method, $G = I - \omega A$. Thus the eigenvalues we found above must all have absolute value less than 1 in order for Richardson's method to converge. Since we already know $1 - \omega\lambda_i < 1$, we just need to show that $1 - \omega\lambda_i > -1$ as well. Rearranging that inequality, we find that the method converges if and only if $\omega < 2/\lambda_i$ for every $i$. The tightest of these constraints comes from the largest eigenvalue, so we actually just need to make sure that $\omega < 2/\lambda_{\max}$.
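As a quick numerical illustration of part (b) (this is not part of the assigned solution, and the small SPD matrix below is made up), one can check in MATLAB that $\rho(I - \omega A) < 1$ exactly when $\omega$ stays below the bound $2/\lambda_{\max}$:

% Illustrative check of Problem 0.4(b): rho(I - omega*A) < 1 precisely when
% omega < 2/lambda_max.  The matrix A here is an arbitrary small SPD example.
A = [4 1; 1 3];
lambda_max = max(eig(A));
for omega = [0.9 1.1]*(2/lambda_max)     % just below and just above the bound
    G = eye(2) - omega*A;
    fprintf('omega = %.3f, rho(G) = %.3f\n', omega, max(abs(eig(G))))
end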

Problem 0.5. Watkins 8.3.20

Solution. Using the proposed iteration, we can calculate
$$x^{(k+1)} = x^{(k)} + M^{-1}r^{(k)}
= x^{(k)} + M^{-1}(b - Ax^{(k)})
= x^{(k)} + M^{-1}b - M^{-1}(M - N)x^{(k)}
= x^{(k)} + M^{-1}b - x^{(k)} + M^{-1}Nx^{(k)}
= M^{-1}Nx^{(k)} + M^{-1}b,$$
which is the standard iteration for any splitting $A = M - N$. Since $M = \frac{1}{\omega}I$ for Richardson's method, this last result is immediate.

Problem 0.6. Watkins 8.3.22

Solution. If we consider the preconditioned problem $M^{-1}Ax = M^{-1}b$, Richardson's method with $\omega = 1$ uses the splitting with $\tilde{M} = I$ and $\tilde{N} = I - M^{-1}A$, and right-hand side $\tilde{b} = M^{-1}b$. The iteration for Richardson's method is $\tilde{M}x^{(k+1)} = \tilde{N}x^{(k)} + \tilde{b}$. Using the calculated quantities,
$$x^{(k+1)} = (I - M^{-1}A)x^{(k)} + M^{-1}b
= x^{(k)} - M^{-1}Ax^{(k)} + M^{-1}b
= x^{(k)} - M^{-1}(M - N)x^{(k)} + M^{-1}b
= x^{(k)} - x^{(k)} + M^{-1}Nx^{(k)} + M^{-1}b
= M^{-1}Nx^{(k)} + M^{-1}b,$$
which is the standard iteration for the splitting $A = M - N$. (There are other ways to do this calculation.)

Problem 0.7. Watkins 8.4.7a

Solution. Consider the very first step. Our direction is $e_1$, i.e., the $x_1$-direction. The exact line search produces
$$\alpha_1 = \frac{e_1^T r^{(1)}}{e_1^T A e_1} = \frac{r_1^{(1)}}{a_{11}}.$$
Thus $x^{(2)}$ is the starting guess plus $r_1^{(1)}/a_{11}$ in the first entry only. But
$$x_1^{(1)} + \frac{r_1^{(1)}}{a_{11}} = \frac{1}{a_{11}}\Big(b_1 - \sum_{j \neq 1} a_{1j}x_j^{(1)}\Big),$$
which is the standard step for Gauss-Seidel. The second and subsequent steps proceed similarly. Thus each group of $n$ steps is one iteration of Gauss-Seidel.
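As a small sanity check of this equivalence (illustrative only; the 3-by-3 SPD system below is made up and not from the assignment), one can compare $n$ exact line searches along the coordinate directions against one Gauss-Seidel sweep:

% Illustrative check of Problem 0.7: n exact line searches along e_1, ..., e_n
% starting from zero reproduce one Gauss-Seidel sweep.
A = [4 1 0; 1 3 1; 0 1 5];  b = [1; 2; 3];  n = length(b);
x = zeros(n,1);
for i = 1:n
    r = b - A*x;                    % current residual
    alpha = r(i)/A(i,i);            % exact line search along e_i
    x(i) = x(i) + alpha;
end
xgs = zeros(n,1);                   % one Gauss-Seidel sweep for comparison
for i = 1:n
    xgs(i) = (b(i) - A(i,1:i-1)*xgs(1:i-1) - A(i,i+1:n)*xgs(i+1:n)) / A(i,i);
end
norm(x - xgs)                       % zero, up to rounding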

Problem 0.8. Watkins 8.4.12

Solution.
$$x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}$$
$$Ax^{(k+1)} = Ax^{(k)} + \alpha_k Ap^{(k)}$$
$$b - Ax^{(k+1)} = b - (Ax^{(k)} + \alpha_k Ap^{(k)})$$
$$r^{(k+1)} = r^{(k)} - \alpha_k Ap^{(k)}$$

Problem 0.9. Usually it doesn't matter which stopping criterion you use. Explain briefly why sometimes it does.

Solution. Though usually it doesn't matter, sometimes the least stringent condition $\|r^{(k+1)}\|_2 < \epsilon(\|b\|_2 + \|A\|_2\|x^{(k+1)}\|_2)$ will allow the iteration to stop sooner, while still showing that the iteration was backward stable. In fact, sometimes this condition will allow it to converge while the other standard choices are never satisfied at all!

Problem 0.10. Briefly and intuitively explain why preconditioners help descent methods converge more quickly.

Solution. Descent methods struggle when the level sets of $J(y)$ form long, narrow canyons, which happens when the matrix has a large condition number. A preconditioner effectively widens the canyons before the descent method is begun, which makes the descent method converge much more quickly.
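Returning to Problem 0.8 for a moment, here is a quick numerical illustration (the tiny system below is made up and not part of the assignment) that the recursively updated residual agrees with one computed from scratch:

% Illustrative check of Problem 0.8: the recursively updated residual
% equals b - A*x after one descent step.
A = [3 1; 1 2];  b = [1; 1];        % arbitrary small SPD system
x = [0; 0];
r = b - A*x;  p = r;                % steepest-descent style first direction
alpha = (p'*r)/(p'*A*p);            % exact line search
x = x + alpha*p;
r_recursive = r - alpha*(A*p);      % the update derived in Problem 0.8
r_direct = b - A*x;                 % residual computed from scratch
norm(r_recursive - r_direct)        % zero, up to rounding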

Problem 0.11. Watkins 8.4.8(a,b). Use a sketch like Figure 8.4 to explain why part (b) makes sense graphically, and why overrelaxation performs better than standard Gauss-Seidel, but only for a good choice of $\omega$. (This picture, to be clear, only applies when $A$ is positive definite.)

Solution. (a) This solution works essentially the same as for 8.4.7, which I did above, so I won't repeat it.

(b) Let's just consider $p = e_i$. (I'll drop the superscript $(k)$'s for simplicity.) Then
$$\alpha = \frac{e_i^T r}{e_i^T A e_i} = \frac{r_i}{a_{ii}}.$$
Note that $a_{ii} > 0$ since $A$ is positive definite. Next, consider $J(x + \omega\alpha e_i)$:
$$J(x + \omega\alpha e_i) = \frac{1}{2}(x + \omega\alpha e_i)^T A (x + \omega\alpha e_i) - (x + \omega\alpha e_i)^T b
= J(x) + \omega\alpha e_i^T A x + \frac{1}{2}\omega^2\alpha^2 a_{ii} - \omega\alpha e_i^T b
= J(x) + \frac{1}{2}\omega^2\alpha^2 a_{ii} + \omega\alpha e_i^T(Ax - b)
= J(x) + \frac{1}{2}\omega^2\alpha^2 a_{ii} - \omega\alpha r_i
= J(x) + \frac{1}{2}\omega^2\frac{r_i^2}{a_{ii}} - \omega\frac{r_i^2}{a_{ii}}
= J(x) + \frac{r_i^2}{a_{ii}}\Big(\frac{1}{2}\omega^2 - \omega\Big).$$
Thus $J(x^{(k+1)})$ is bigger than $J(x^{(k)})$ if $\frac{r_i^2}{a_{ii}}\big(\frac{1}{2}\omega^2 - \omega\big) > 0$ and smaller if it is less than zero. Since $r_i^2/a_{ii} > 0$, this depends only on the sign of $\omega^2/2 - \omega$, and so the desired results are easy to show.

A less technical way is this: Consider the line defined by your search direction. Along that line, $J(y)$ is a parabola. An exact line search will find the bottom of the parabola. If you double the distance to the bottom from where you started, you will end at the same height as you started, because of symmetry. Thus, $\omega = 2$ makes $J(y)$ the same as before. Between where you started ($\omega = 0$) and $\omega = 2$, you decrease $J$, thanks to the shape of the parabola. Similarly, outside that range, $J$ increases, thanks to the shape.

The following figure has three pictures. These are exact graphs for SOR with, respectively, $\omega = 1$, $1.5$, and $2$. The problem I was solving was
$$\begin{bmatrix} 1.1 & 1 \\ 1 & 1.05 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 5 \end{bmatrix},$$
with initial guess $[0; 0]$.

Figure 1. SOR with various $\omega$.
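The plotting code behind Figure 1 is not included in this write-up; purely as an illustration, the iteration behind those pictures could be sketched as coordinate relaxation with parameter $\omega$ (which, for SPD $A$, is exactly SOR). The iteration count of 50 here is an arbitrary choice.

% Sketch of the experiment behind Figure 1: SOR on the 2-by-2 system above,
% written as coordinate descent with relaxation parameter omega.
A = [1.1 1; 1 1.05];  b = [7; 5];
for omega = [1 1.5 2]
    x = [0; 0];                          % initial guess
    for k = 1:50
        for i = 1:2
            ri = b(i) - A(i,:)*x;            % i-th component of the residual
            x(i) = x(i) + omega*ri/A(i,i);   % omega = 1 is plain Gauss-Seidel
        end
    end
    fprintf('omega = %.1f: x = (%.3f, %.3f)\n', omega, x(1), x(2))
end
% omega = 2 never converges (each step lands at the same value of J), which
% is consistent with part (b).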

As you can see, part (b) makes sense because $\omega = 1$ goes straight to the lowest point along the search line, and $\omega = 2$ goes to the location with the same height as where you started. The reason overrelaxation performs better is that standard Gauss-Seidel gets stuck in the canyon very early and goes back and forth in it a lot, while overrelaxation allows you to run down the length of the canyon more quickly, so you don't take so many tiny steps.

For fun, I have included the equivalent graphs for steepest descent and conjugate gradient. Remember, conjugate gradient converges in two steps, so in its picture it has already converged.

Figure 2. Steepest descent and conjugate gradient methods.

Problem 0.12. Write a function in MATLAB that takes as input a symmetric positive definite matrix $A$, the right-hand-side vector $b$, a maximum number of iterations, and a tolerance for convergence, and returns as output the solution of $Ax = b$ found by performing the conjugate-gradient method given in (8.7.1). It will be easiest to program the exact algorithm found there, though you will have to change notation and names of variables so that MATLAB can understand it, and check the tolerance.

Solution. Here is my code, which is very, very similar to the algorithm in the book.

function x = cg(A, b, iter, tolerance)
n = length(A);
normA = normest(A);        % estimate the norm, since the full norm is too slow
normb = norm(b);
x = zeros(n,1);
r = b - A*x;
p = r;
v = r'*r;
for i = 1:iter
    q = A*p;
    mu = p'*q;
    alpha = v/mu;          % step length (called a in the book's algorithm)
    x = x + alpha*p;
    r = r - alpha*q;
    vp = r'*r;
    beta = vp/v;           % called b in the book; renamed so the right-hand side b is not overwritten
    p = r + beta*p;
    v = vp;
    % I'm using the stopping condition in 8.5.
    if norm(r)/(normb + normA*norm(x)) < tolerance
        i                  % display the number of iterations
        break
    end
end

Problem 0.13. Compare your SOR solver with your conjugate-gradient method. Pick a large matrix (such as the one for the Poisson equation), and compare the number of iterations each method uses to solve $Ax = b$ up to the desired precision. (Compare with several $\omega$.)

Solution. I used this code to generate the Laplacian matrix. It's not the most efficient, but it works.

function A = lap(m)
% This makes the Poisson equation matrix for an m-by-m grid
% of interior points (not counting any boundary).
r = zeros(m^2, 1);
r(1) = 4;  r(2) = -1;  r(m+1) = -1;
A = sparse(toeplitz(r));
% Making it sparse is super important.  It saves room
% and makes it much faster.
% This is NOT quite the right A yet:
% there need to be some extra zeros.
for i = 1:m-1
    A(i*m, i*m+1) = 0;
    A(i*m+1, i*m) = 0;
end
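For reference, a hypothetical driver tying the two functions together might look like the following (the iteration cap of 1000 is an arbitrary choice; the particular right-hand side matches the experiment described below):

% Hypothetical driver (not part of the assigned solution) combining lap and cg.
m = 100;
A = lap(m);                               % Poisson matrix for an m-by-m interior grid
f = zeros(m^2, 1);
f(123) = 1;  f(666) = 24;  f(3243) = 3;   % the right-hand side described below
x = cg(A, f, 1000, 1e-8);                 % at most 1000 iterations, 1e-8 tolerance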

I then ran the CG code above with the $m = 100$ version of the matrix and $f$ the vector of 10000 zeros, except $f(123) = 1$, $f(666) = 24$, and $f(3243) = 3$. (Any $f$ other than zero would have worked.) Using that code and stopping condition, it took 288 iterations to converge to $10^{-8}$ precision, and only 53 to converge to $10^{-2}$ precision. (I changed the stopping condition to norm(oldu-u)/norm(u) to be consistent with my SOR code.) I then reshaped $f$ using reshape(f,[100,100]) and used my efficient SOR code written specifically for the Poisson problem. For $\omega = 1$ and a $10^{-2}$ tolerance, it took 50 iterations, while with $\omega = 1.9$ it took only 35. If I tightened the tolerance to $10^{-8}$, the $\omega = 1.9$ run took a (very slow) 643 iterations. I didn't bother trying that with $\omega = 1$, since I didn't want to sit here all day...
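My SOR code is not reproduced here; the following is only a rough sketch of what such a grid-based solver for this Poisson matrix could look like (zero boundary values, stopping test norm(oldu-u)/norm(u) as above; the function name and details are illustrative, not the code used for the counts above).

function u = sor_poisson(F, omega, tol, maxiter)
% Rough sketch only: SOR sweeps for the m-by-m Poisson problem above,
% with the grid padded by the zero boundary.
m = size(F,1);
U = zeros(m+2, m+2);
for k = 1:maxiter
    Uold = U;
    for j = 2:m+1
        for i = 2:m+1
            gs = (F(i-1,j-1) + U(i-1,j) + U(i+1,j) + U(i,j-1) + U(i,j+1)) / 4;
            U(i,j) = (1-omega)*U(i,j) + omega*gs;   % SOR update at grid point (i,j)
        end
    end
    if norm(Uold(:) - U(:)) / norm(U(:)) < tol      % same stopping test as above
        break
    end
end
u = U(2:m+1, 2:m+1);    % return the interior values as an m-by-m array

A call along the lines of sor_poisson(reshape(f,[100,100]), 1.9, 1e-2, 1000) would correspond to the experiment described above.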