CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13


CME 302: NUMERICAL LINEAR ALGEBRA, FALL 2005/06, LECTURE 13. GENE H. GOLUB

1. Iterative Methods

Iterative methods are the natural choice for very large problems (typically sparse, arising from applications) and for structured matrices (sometimes even dense ones, for which iterative techniques can still be used). Given Ax = b, write A = M - N, so that Mx = Nx + b, and construct the iteration

    M x^{(k+1)} = N x^{(k)} + b.

Subtracting these equations, we obtain

    M(x - x^{(k+1)}) = N(x - x^{(k)}).

Therefore, if we denote the error in x^{(k)} by e^{(k)} = x - x^{(k)}, then

    e^{(k+1)} = M^{-1} N e^{(k)} ≡ B e^{(k)}.

Thus e^{(k)} = B^k e^{(0)}, which suggests the following theorem:

Theorem. e^{(k)} → 0 as k → ∞ for all e^{(0)} if and only if ρ(B) < 1.

Convergence can still occur if ρ(B) = 1, but in that case we must be careful in how we choose x^{(0)}. Note that from e^{(k)} = B^k e^{(0)} it follows that ||e^{(k)}|| ≤ ||B^k|| ||e^{(0)}|| ≤ ||B||^k ||e^{(0)}||.

2. The Jacobi Method

We now develop a simple iterative method. If we rewrite Ax = b as

    sum_{j=1}^{n} a_{ij} x_j = b_i,   i = 1, ..., n,

then a_{ii} x_i = b_i - sum_{j≠i} a_{ij} x_j, or

    x_i = (1/a_{ii}) [ b_i - sum_{j≠i} a_{ij} x_j ].
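The componentwise update just derived is only a few lines of code. A minimal sketch in plain Python (function names are my own, and dense lists of lists stand in for the sparse storage a real solver would use):

```python
def jacobi(A, b, x0, iters):
    """One-step stationary Jacobi iteration:
    x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        # Every component of the new iterate uses only the old iterate,
        # so two full vectors of storage are needed.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant example: A = tridiag(-1, 4, -1),
# with b chosen so the exact solution is (1, 1, 1).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = jacobi(A, b, [0.0, 0.0, 0.0], 50)
```

For this matrix the infinity norm of the iteration matrix is 1/2, so the error is at least halved on every sweep.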

In other words,

    M = D = diag(a_{11}, ..., a_{nn}),   N = -(A - D),

i.e. N is the negative of the off-diagonal part of A. Our iteration is therefore

    x_i^{(k+1)} = (1/a_{ii}) [ b_i - sum_{j≠i} a_{ij} x_j^{(k)} ],

known as the Jacobi method, with iteration matrix

    B_J = M^{-1} N,   (B_J)_{ij} = -a_{ij}/a_{ii} for j ≠ i,   (B_J)_{ii} = 0.

So, if

    ||B_J||_∞ = max_{1≤i≤n} sum_{j≠i} |a_{ij}|/|a_{ii}| < 1,

i.e. if A is strictly diagonally dominant, then the iteration converges. For example, suppose A is the tridiagonal matrix

    A = tridiag(-1, 4, -1).

Then ||B_J||_∞ = 1/2, so the Jacobi method converges rapidly. On the other hand, if

    A = tridiag(-1, 2, -1),

which arises from discretizing the Laplacian, then ||B_J||_∞ = 1. A more subtle analysis can be used to show convergence, but it is slow.

Note that for these two examples, x^{(0)} cannot be overwritten by x^{(1)} until all elements of x^{(1)} have been computed, so two vectors of storage are required. This is a waste of storage, and it shows that the ordering of the equations is very important. If we reorder the equations in such a way that odd-numbered equations and even-numbered equations are grouped separately, then we obtain, for the latter example, the 2 × 2 block form

    A = [ 2I   C  ]
        [ C^T  2I ],

where C collects the -1 entries coupling odd- and even-numbered unknowns. Then we can solve for all odd indices, then for all even indices, with the updates within each group independent of each other. Not only does this approach save storage but it also lends itself to parallelism.

3. The Gauss-Seidel Method

In the Jacobi method, we compute x_i^{(k+1)} using the elements of x^{(k)}, even though x_1^{(k+1)}, ..., x_{i-1}^{(k+1)} are already known. The Gauss-Seidel method is designed to take advantage of the latest information available about x:

    x_i^{(k+1)} = (1/a_{ii}) [ b_i - sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - sum_{j=i+1}^{n} a_{ij} x_j^{(k)} ].

To derive this method, we write A = L + D + U, where L is the strictly lower triangular part of A, D = diag(a_{11}, ..., a_{nn}), and U is the strictly upper triangular part. Thus the Gauss-Seidel iteration can be written as

    D x^{(k+1)} = b - L x^{(k+1)} - U x^{(k)},

or

    (D + L) x^{(k+1)} = b - U x^{(k)},

which yields

    x^{(k+1)} = -(D + L)^{-1} U x^{(k)} + (D + L)^{-1} b.

Thus the iteration matrix for the Gauss-Seidel method is B_GS = -(D + L)^{-1} U, as opposed to the iteration matrix for the Jacobi method, B_J = -D^{-1}(L + U). In some cases, ρ(B_GS) = (ρ(B_J))^2, so the Gauss-Seidel method converges twice as fast. On the other hand, note that Gauss-Seidel is very sequential; i.e. it does not lend itself to parallelism.

4. Poisson's Equation

Consider the standard problem of solving Poisson's equation on a domain R in two dimensions,

    -Δu = f,   (x, y) ∈ R,     Δu = u_xx + u_yy,
      u = g,   (x, y) ∈ ∂R.

We take R to be the unit square [0, 1] × [0, 1] and discretize the problem using a uniform grid with spacing h = 1/(N + 1) in the x and y directions, with gridpoints x_i = ih, i = 0, ..., N + 1, and y_j = jh, j = 0, ..., N + 1. Then, for i, j = 1, ..., N, we replace the differential equation by the difference approximation

    -(u_{i-1,j} - 2u_{ij} + u_{i+1,j})/h^2 - (u_{i,j-1} - 2u_{ij} + u_{i,j+1})/h^2 = f_{ij},

where u_{ij} ≈ u(x_i, y_j) and f_{ij} = f(x_i, y_j). From the boundary conditions, we have u_{0j} = g(x_0, y_j), j = 1, ..., N, and similar conditions for the other gridpoints along the boundary. Let u_j = [u_{1j}, ..., u_{Nj}]^T. Then

    -u_{j-1} + T u_j - u_{j+1} = f_j,

where T = tridiag(-1, 4, -1) and

    [f_j]_i = h^2 f_{1j} + g(x_0, y_j),       i = 1,
              h^2 f_{ij},                     i = 2, ..., N - 1,
              h^2 f_{Nj} + g(x_{N+1}, y_j),   i = N.

Thus we can solve the problem on the entire domain by solving Au = f, where

    A = [  T  -I              ]
        [ -I   T  -I          ]
        [       .   .   .     ]
        [           -I   T    ].

We say that A is a block tridiagonal matrix. A is also a band matrix, but the band is sparse, and Gaussian elimination may fill in the whole band. However, the equations can be reordered to avoid fill-in.

Department of Computer Science, Gates Building 2B, Room 280, Stanford, CA 94305-9025
E-mail address: golub@stanford.edu
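To make the discretization concrete, here is a small sketch in plain Python (helper names are my own; zero boundary data g = 0 is assumed) that assembles the block tridiagonal matrix for the 5-point stencil and solves Au = f with Gauss-Seidel sweeps:

```python
def poisson_matrix(N):
    """Dense N^2 x N^2 matrix for the 5-point stencil on an N-by-N interior
    grid. Unknown (i, j) gets index k = i*N + j, which produces diagonal
    blocks T = tridiag(-1, 4, -1) and off-diagonal blocks -I."""
    n = N * N
    A = [[0.0] * n for _ in range(n)]
    for i in range(N):
        for j in range(N):
            k = i * N + j
            A[k][k] = 4.0
            if i > 0:     A[k][k - N] = -1.0   # coupling to u_{i-1,j}
            if i < N - 1: A[k][k + N] = -1.0   # coupling to u_{i+1,j}
            if j > 0:     A[k][k - 1] = -1.0   # coupling to u_{i,j-1}
            if j < N - 1: A[k][k + 1] = -1.0   # coupling to u_{i,j+1}
    return A

def gauss_seidel(A, b, x0, iters):
    """Sweep in place: component i uses the already-updated components < i."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

N = 3
A = poisson_matrix(N)
b = [sum(row) for row in A]          # makes the all-ones vector the solution
u = gauss_seidel(A, b, [0.0] * (N * N), 100)
```

The in-place sweep needs only one vector of storage, in contrast to the Jacobi update, which is exactly the sequential character noted above.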

CME 302: NUMERICAL LINEAR ALGEBRA, FALL 2005/06, LECTURE 15. GENE H. GOLUB

1. Convergence of Iterative Methods

Recall the basic iterative methods based on the splitting A = L + D + U: the Jacobi method

    D x^{(k+1)} = -(L + U) x^{(k)} + b

and the Gauss-Seidel method

    (D + L) x^{(k+1)} = -U x^{(k)} + b.

These are examples of a one-step stationary method, which is an iteration of the form

    M x^{(k+1)} = N x^{(k)} + b,   where A = M - N.

Let B = M^{-1} N, and define e^{(k)} = x - x^{(k)}. Then

    e^{(k+1)} = B e^{(k)} = B^{k+1} e^{(0)}.

Recall that if ρ(B) < 1 then e^{(k)} → 0 for all choices of x^{(0)}. Also, recall that for all consistent norms, ρ(B) ≤ ||B||. Therefore, a sufficient condition for convergence of the Jacobi method is ||B_J||_∞ < 1, where

    ||B_J||_∞ = max_{1≤i≤n} sum_{j≠i} |a_{ij}|/|a_{ii}|.

Note that ||B_J||_∞ < 1 if A is strictly diagonally dominant. Now, define

    r_i = sum_{j≠i} |a_{ij}|/|a_{ii}|,   r = max_i r_i.

Then we have the following result:

Theorem. If r < 1, then ρ(B_GS) < 1. In other words, the Gauss-Seidel iteration converges if A is strictly diagonally dominant.

Proof. The proof proceeds by induction on the components of e^{(k+1)}. We have

    (D + L) e^{(k+1)} = -U e^{(k)},

which can be written componentwise as

    sum_{j=1}^{i} a_{ij} e_j^{(k+1)} = - sum_{j=i+1}^{n} a_{ij} e_j^{(k)},   i = 1, ..., n.

Thus

    e_i^{(k+1)} = -(1/a_{ii}) [ sum_{j=1}^{i-1} a_{ij} e_j^{(k+1)} + sum_{j=i+1}^{n} a_{ij} e_j^{(k)} ],   i = 1, ..., n.

For i = 1, we have

    |e_1^{(k+1)}| ≤ sum_{j=2}^{n} (|a_{1j}|/|a_{11}|) |e_j^{(k)}| ≤ r_1 ||e^{(k)}||_∞ ≤ r ||e^{(k)}||_∞.

Assume that for p = 1, ..., i - 1,

    |e_p^{(k+1)}| ≤ r ||e^{(k)}||_∞.

Then, since r < 1,

    |e_i^{(k+1)}| ≤ sum_{j=1}^{i-1} (|a_{ij}|/|a_{ii}|) |e_j^{(k+1)}| + sum_{j=i+1}^{n} (|a_{ij}|/|a_{ii}|) |e_j^{(k)}|
                 ≤ sum_{j=1}^{i-1} (|a_{ij}|/|a_{ii}|) r ||e^{(k)}||_∞ + sum_{j=i+1}^{n} (|a_{ij}|/|a_{ii}|) ||e^{(k)}||_∞
                 ≤ r_i ||e^{(k)}||_∞ ≤ r ||e^{(k)}||_∞.

Therefore

    ||e^{(k+1)}||_∞ ≤ r ||e^{(k)}||_∞ ≤ r^{k+1} ||e^{(0)}||_∞,

from which it follows that lim_{k→∞} e^{(k)} = 0, since r < 1. ∎

We see that the Jacobi method and the Gauss-Seidel method both converge if A is strictly diagonally dominant, but convergence can be slow in some cases. For example, if

    A = tridiag(-1, 2, -1)

is of size N × N, then

    B_J = -D^{-1}(L + U) = tridiag(1/2, 0, 1/2),

and therefore

    ρ(B_J) = cos(π/(N + 1)) = cos(πh) ≈ 1 - π^2 h^2 / 2,

which is approximately 1 for small h = 1/(N + 1). We would like to develop a method with ρ(B) ≈ 1 - ch for some constant c independent of h. Now, suppose B = B^T. Then ||B^k||_2 = ρ(B)^k, so

    ||e^{(k)}||_2 ≤ ρ(B)^k ||e^{(0)}||_2.

We want ||e^{(k)}||/||e^{(0)}|| ≤ ε, so if we let ρ^k = ε, then

    k = log ε / log ρ

is the number of iterations necessary for convergence. The quantity -log ρ is called the rate of convergence.

2. The SOR Method

The method of successive overrelaxation (SOR) is the iteration

    x_i^{(k+1)} = ω (1/a_{ii}) [ b_i - sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - sum_{j=i+1}^{n} a_{ij} x_j^{(k)} ] + (1 - ω) x_i^{(k)}.

The quantity ω is called the relaxation parameter. If ω = 1, then the SOR method reduces to the Gauss-Seidel method. In matrix form, the iteration can be written as

    D x^{(k+1)} = ω (b - L x^{(k+1)} - U x^{(k)}) + (1 - ω) D x^{(k)},

which can be rearranged to obtain

    (D + ωL) x^{(k+1)} = ωb + [(1 - ω)D - ωU] x^{(k)},

or

    x^{(k+1)} = ((1/ω)D + L)^{-1} [((1/ω) - 1)D - U] x^{(k)} + ((1/ω)D + L)^{-1} b.

Define

    L_ω = ((1/ω)D + L)^{-1} [((1/ω) - 1)D - U].

Then, since the determinant of a triangular matrix is the product of its diagonal entries,

    det(L_ω) = det(((1/ω)D + L)^{-1}) det(((1/ω) - 1)D - U)
             = (ω^n / prod_{i=1}^{n} a_{ii}) · ((1 - ω)/ω)^n prod_{i=1}^{n} a_{ii}
             = (1 - ω)^n.

Therefore,

    prod_{i=1}^{n} λ_i = (1 - ω)^n,

where λ_1, ..., λ_n are the eigenvalues of L_ω, with |λ_1| ≥ ... ≥ |λ_n|. Therefore

    |λ_1|^n ≥ |1 - ω|^n.

Since we must have |λ_1| < 1 for convergence, it follows that a necessary condition for convergence of SOR is 0 < ω < 2.

3. Block Methods

Recall that in solving Poisson's equation on a rectangle, we needed to solve systems of the form

    -v_{j-1} + T v_j - v_{j+1} = g_j.

This can be accomplished using the iteration

    T v_j^{(k+1)} = g_j + v_{j-1}^{(k)} + v_{j+1}^{(k)},

which is an example of a block Jacobi iteration, since it amounts to solving the system Au = g by applying the Jacobi method to A, except that each block of size N × N is treated as a single element. Similarly, we can use the block Gauss-Seidel iteration

    T v_j^{(k+1)} = g_j + v_{j-1}^{(k+1)} + v_{j+1}^{(k)}.

4. Richardson's Method

Consider the iteration

    x^{(k+1)} = (I - αA) x^{(k)} + αb = x^{(k)} + α(b - A x^{(k)}) = x^{(k)} + α r^{(k)}.

This is known as Richardson's method. If we define the error e^{(k)} = x - x^{(k)}, then e^{(k+1)} = B_α e^{(k)}, where B_α = I - αA; we want to choose the parameter α a priori so as to minimize ||B_α||_2. Suppose A is symmetric positive definite, with eigenvalues μ_1 ≥ μ_2 ≥ ... ≥ μ_n > 0. Since B_α = I - αA, the eigenvalues of B_α are λ_i = 1 - αμ_i. We want to choose α so that ||B_α||_2 is minimized, i.e.

    min_α max_{1≤i≤n} |λ_i(α)| = min_α max_{1≤i≤n} |1 - αμ_i|.

The optimal parameter α̂ is found by solving

    1 - α̂μ_n = -(1 - α̂μ_1),

which yields

    α̂ = 2/(μ_1 + μ_n).

Note that when |1 - αμ_1| ≥ 1 the iteration diverges, from which it follows that the method converges for 0 < α < 2/μ_1. However, this iteration is sensitive to perturbations in α, and therefore can be bad numerically. For example, if μ_1 = 10 and μ_n = 10^{-4}, then the optimal α is 2/(10 + 10^{-4}), but this is close to a value of α for which the iteration diverges, α = 2/10. Also, note that

    λ_1(α̂) = 1 - 2μ_1/(μ_1 + μ_n) = (μ_n - μ_1)/(μ_1 + μ_n),

and similarly,

    λ_n(α̂) = (μ_1 - μ_n)/(μ_1 + μ_n) = (μ_1/μ_n - 1)/(μ_1/μ_n + 1) = (κ(A) - 1)/(κ(A) + 1).

Therefore the convergence rate depends on κ(A) = μ_1/μ_n. For example, consider the Helmholtz equation on a rectangle R,

    -Δu + σ(x, y) u = f,   (x, y) ∈ R,
               u = g,      (x, y) ∈ ∂R,

solved by the iteration -Δu^{(k+1)} + σ u^{(k)} = f. Using the finite difference approximation for -Δ gives

    A = [  T  -I              ]
        [ -I   T  -I          ]
        [       .   .   .     ]
        [           -I   T    ],

and thus the iteration has the form

    A u^{(k+1)} + h^2 Σ u^{(k)} = f,

where Σ = diag(σ_{11}, ..., σ_{NN}), σ_{ij} = σ(x_i, y_j). We wish to determine the rate of convergence. The error satisfies

    e^{(k+1)} = -(h^2 A^{-1} Σ) e^{(k)}.

Therefore

    ||e^{(k+1)}||_2 ≤ h^2 ||A^{-1}||_2 ||Σ||_2 ||e^{(k)}||_2.

But

    ||Σ||_2 = max_{i,j} |σ_{ij}|,

and the smallest eigenvalue of A is

    λ_min = 4 - 4 cos(πh) = 4(1 - cos πh) = 8 sin^2(πh/2).

Therefore

    ||e^{(k+1)}||_2 ≤ (h^2 max_{i,j} |σ_{ij}| / (8 sin^2(πh/2))) ||e^{(k)}||_2 ≈ (max_{i,j} |σ_{ij}| / (2π^2)) ||e^{(k)}||_2,

since sin(πh/2) ≈ πh/2 for small h. Thus the mesh size h has disappeared from the bound, and the method converges if max_{i,j} |σ_{ij}| < 2π^2. The rate of convergence is essentially independent of h, which is very desirable.
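Richardson's method of Section 4 is easy to try out. A minimal sketch in plain Python (function names are my own; for a 2 × 2 symmetric positive definite A, μ_1 + μ_n equals the trace, so α̂ = 2/trace(A)):

```python
def richardson(A, b, x0, alpha, iters):
    """x^(k+1) = x^(k) + alpha * r^(k), with residual r^(k) = b - A x^(k)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

# A is symmetric positive definite; b is chosen so the solution is (1, 1).
A = [[3.0, 1.0], [1.0, 2.0]]
b = [4.0, 3.0]
alpha_opt = 2.0 / (A[0][0] + A[1][1])   # 2/(mu_1 + mu_n) = 2/trace for 2x2
x = richardson(A, b, [0.0, 0.0], alpha_opt, 80)
```

With the optimal α the 2-norm of the error contracts by (κ(A) - 1)/(κ(A) + 1) per step, roughly 0.45 for this matrix.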

Problem 1. A = gallery('poisson', n) returns the block tridiagonal matrix A of order n^2 resulting from discretizing Poisson's equation with the 5-point operator on an n-by-n mesh. Choose b so that Ax = b has the solution vector x of all 1's. For n = 5, 10, approximate x by using the Jacobi method, the Gauss-Seidel method, and SOR with optimal ω.

Problem 2. Do the same procedure for the Hilbert matrix H of size n = 50, 100.
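As a possible starting point for these problems in plain Python rather than MATLAB, here is an SOR sketch (helper names are my own; for simplicity the matrix below is the 1D analogue tridiag(-1, 2, -1) rather than gallery('poisson', n), and ω_opt = 2/(1 + sin πh) is the classical optimal parameter for this model problem, for which ρ(B_J) = cos πh):

```python
import math

def sor(A, b, x0, omega, iters):
    """SOR sweep: x_i <- omega * (Gauss-Seidel update) + (1 - omega) * x_i.
    omega = 1 recovers the Gauss-Seidel method."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = omega * (b[i] - s) / A[i][i] + (1.0 - omega) * x[i]
    return x

# 1D model problem: A = tridiag(-1, 2, -1) of order m, exact solution all 1's.
m = 10
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(m)]
     for i in range(m)]
b = [sum(row) for row in A]              # b = A * ones
h = 1.0 / (m + 1)
omega_opt = 2.0 / (1.0 + math.sin(math.pi * h))
x = sor(A, b, [0.0] * m, omega_opt, 200)
```

With ω_opt the spectral radius drops from cos(πh) ≈ 1 - O(h^2) for Jacobi to ω_opt - 1 = (1 - sin πh)/(1 + sin πh) ≈ 1 - O(h), which is the point of overrelaxation.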