Waveform Relaxation Method with Toeplitz Operator Splitting. Sigitas Keras. August 1995. Department of Applied Mathematics and Theoretical Physics.


UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports. Waveform Relaxation Method with Toeplitz Operator Splitting. Sigitas Keras. DAMTP 1995/NA4, August 1995. Department of Applied Mathematics and Theoretical Physics, Silver Street, Cambridge CB3 9EW, England.

Waveform Relaxation Method with Toeplitz Operator Splitting

Sigitas Keras*

Abstract

In this paper we consider a waveform relaxation method of the form

du^{n+1}/dt + P u^{n+1} = Q u^n + f,  u^{n+1}(0) = u_0,  A = P − Q,

where P is a Toeplitz or block Toeplitz matrix, for the numerical solution of the equation

du/dt + Au = f,  u(0) = u_0.

We show that under suitable conditions this method converges, and we apply this result to linear parabolic equations with variable coefficients.

1 Introduction

In this paper we consider a waveform relaxation method for the initial value problem

du/dt + Au = f,  u(0) = u_0,

where A is a positive definite Hermitian matrix, i.e. A = A^H and the Euclidean inner product satisfies ⟨Au, u⟩ > 0 whenever u ≠ 0. The waveform relaxation method (also known as a dynamic iteration method) is an iterative method of the form

du^{n+1}/dt + F(u^{n+1}, u^n) = f,  (1.1)
u^{n+1}(0) = u_0,  (1.2)

where the function F satisfies the identity Au = F(u, u)

* Department of Applied Mathematics and Theoretical Physics, University of Cambridge, England (email S.Keras@damtp.cam.ac.uk)

and u^0 is chosen arbitrarily, provided it satisfies the initial condition u^0(0) = u_0. The method was known already more than one hundred years ago (see [5], [1]); however, it did not attract attention as a practical method for solving ODEs until Lelarasmee et al. [2] published a paper on its application to the simulation of large circuits. It is especially efficient when applied to stiff equations, since decoupling makes it possible to solve the equations separately, applying different time scales to different parts of the system and possibly using parallel computers. Since semidiscretization of PDEs produces large stiff systems of ODEs, the waveform relaxation technique has lately been adopted for solving PDEs, most notably parabolic equations. In this case A originates in a semidiscretization of a linear elliptic operator, and we restrict ourselves to a linear decoupling of the system:

du^{n+1}/dt + P u^{n+1} = Q u^n + f,  (1.3)
u^{n+1}(0) = u_0,  (1.4)

where A = P − Q. The splittings typically used in this case are Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR), where the matrices P and Q are chosen as in the corresponding static iterative methods. In other words, writing A = D − L − U, where D, L and U are diagonal, strictly lower triangular and strictly upper triangular matrices respectively, we have P = D for the Jacobi splitting, P = D − L for the Gauss-Seidel splitting and P = (1/ω)D − L for the SOR splitting. An extensive analysis of the waveform relaxation method with these splittings has been carried out in [3] and [4]; in particular, necessary and sufficient conditions for convergence have been established. While each iterative step is computationally cheap and easy to implement in parallel, the main drawback of these methods is a slow rate of convergence. As proved in [3], the rate of convergence is approximately 1 − 2π²h² for the SOR splitting and 1 − π²h² for the Gauss-Seidel splitting.
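The classical splittings above can be assembled mechanically from A. The following minimal sketch (our own illustration, not code from the paper; the function name `splitting` and the value of ω are assumptions) builds P and Q = P − A for a dense NumPy matrix:

```python
import numpy as np

def splitting(A, method="jacobi", omega=1.2):
    """Return (P, Q) with A = P - Q for the classical splittings.

    Writing A = D - L - U (D diagonal, L/U strictly lower/upper):
      Jacobi:       P = D
      Gauss-Seidel: P = D - L
      SOR:          P = D/omega - L
    """
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # strictly lower part, with the sign convention A = D - L - U
    if method == "jacobi":
        P = D
    elif method == "gauss-seidel":
        P = D - L
    elif method == "sor":
        P = D / omega - L
    else:
        raise ValueError(f"unknown splitting: {method}")
    return P, P - A       # Q = P - A, so that A = P - Q

# Example: 1D Laplacian, A = tridiag(-1, 2, -1)
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P, Q = splitting(A, "gauss-seidel")
assert np.allclose(A, P - Q)
```

In each case the system with P solved at every step is diagonal or triangular, which is what makes a single iteration cheap.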
Different methods have been suggested to overcome this difficulty, most notably multigrid waveform relaxation [6]. In this paper we suggest a different splitting, in which the matrix P is a Toeplitz or block Toeplitz approximation of A. In this case the rate of convergence does not depend on h and, in the case of linear parabolic equations, is significantly less than 1. Since the matrix P is Toeplitz, it is possible to apply fast Helmholtz-type solvers to equation (1.3), which guarantees that each iteration remains cheap.

2 Toeplitz Waveform Relaxation

We commence this section by stating the problem. Consider the equation

du/dt + Au = f,  (2.1)
u(0) = u_0,  (2.2)

where A is a Hermitian positive definite matrix, i.e. A = A^H and ⟨Au, u⟩ > 0 whenever u ≠ 0. We solve this equation by the waveform relaxation method

du^{n+1}/dt + P u^{n+1} = Q u^n + f,  (2.3)
u^{n+1}(0) = u_0,  (2.4)

where A = P − Q. Solving (2.3)-(2.4) explicitly by variation of constants, u^{n+1} can formally be written as

u^{n+1} = K u^n + φ,

where

K u(t) = ∫_0^t e^{(s−t)P} Q u(s) ds,
φ(t) = e^{−tP} u_0 + ∫_0^t e^{(s−t)P} f(s) ds,

and it follows from the Banach fixed point theorem that the method (2.3)-(2.4) converges for all f and u_0 if and only if ρ(K) < 1, where ρ denotes the spectral radius of the operator. The following theorem is crucial to our analysis.

Theorem 2.1 Let A be a Hermitian matrix. Then the method (2.3)-(2.4) converges if and only if for any u ≠ 0

2|⟨Qu, u⟩| < ⟨(P + P^H)u, u⟩.  (2.5)

Proof. As proved in [3], as long as all the eigenvalues of A and P have positive real parts, the spectral radius of K can be represented by means of the formula

ρ(K) = max_{ξ∈R} ρ((iξI + P)^{−1} Q).  (2.6)

Let λ = x + iy be an eigenvalue of (iξI + P)^{−1} Q. Then for some u ≠ 0

Qu = λ(iξI + P)u  (2.7)

and hence

⟨Qu, u⟩ = λ⟨(iξI + P)u, u⟩.  (2.8)

Without loss of generality we may assume ⟨u, u⟩ = 1 and write ⟨Pu, u⟩ = r + ip, ⟨Qu, u⟩ = s + it, where r, p, s, t ∈ R. Comparing real and imaginary parts of (2.8) we obtain

s = xr − yp − yξ,
t = xp + yr + xξ.

Solving these equations with respect to x and y yields

x = (t(p + ξ) + sr) / ((p + ξ)² + r²),
y = (tr − s(p + ξ)) / ((p + ξ)² + r²),

and

|λ|² = (t² + s²) / ((p + ξ)² + r²) ≤ (t² + s²)/r².  (2.9)

Hence, the method converges if (t² + s²)/r² < 1. However, |⟨(P + P^H)u, u⟩|² = 4r² and |⟨Qu, u⟩|² = t² + s², which proves the "if" part of the theorem. For the second part, let us assume that

2|⟨Qu, u⟩| ≥ ⟨(P + P^H)u, u⟩  (2.10)

for some u. Without loss of generality we may assume ⟨u, u⟩ = 1 and, as above, write ⟨Qu, u⟩ = s + it and ⟨Pu, u⟩ = r + ip. Let ξ = −p. Then

⟨(iξI + P)u, u⟩ = r = ½⟨(P + P^H)u, u⟩.  (2.11)

Combining (2.11) with assumption (2.10) yields ρ((iξI + P)^{−1}Q) ≥ 1 for ξ = −p. □

Remark. If P = P^H, then condition (2.5) reduces to the inequality

|⟨Qu, u⟩| < ⟨Pu, u⟩,

or, after the substitution Q = P − A,

|⟨(P − A)u, u⟩| < ⟨Pu, u⟩.

This is equivalent to Theorem 3.3 in [3], which states that the method (2.3)-(2.4) converges if and only if 2P − A is positive definite.

Next we apply the above result to linear parabolic equations.

Theorem 2.2 Consider the parabolic equation

u_t − ∇·(a(x)∇u) = f,  (x, t) ∈ Ω × (0, ∞),  (2.12)
u(0, x) = u_0(x),  x ∈ Ω,  (2.13)
u(t, x) = 0,  (x, t) ∈ ∂Ω × [0, ∞),  (2.14)

where Ω is a rectangular domain in R^d. Let A be a discretization of the elliptic operator −∇·(a(x)∇), with 0 < a_− ≤ a(x) ≤ a_+ < ∞, let P be a discretization of the operator −∇·(b(x)∇) for some function b with 0 < b_− ≤ b(x) ≤ b_+ < ∞, and let A and P satisfy the following assumptions:

1. A and P are positive definite;

2. c⟨Pu, u⟩ < ⟨Au, u⟩ < C⟨Pu, u⟩ for any vector u ≠ 0 whenever cb(x) < a(x) < Cb(x) for all x ∈ Ω, where c, C ∈ R.

Then the method (2.3)-(2.4) converges if max_x |a(x) − b(x)|/b(x) < 1.

Proof. Since both A and P are positive definite, we can again use the result from [3], which says that ρ(K) = max_{ξ∈R} ρ((iξI + P)^{−1}Q), so it suffices to estimate the spectral radius of the latter operator. Again, let λ be an eigenvalue of (iξI + P)^{−1}Q. Then we can write

Qu = λ(iξI + P)u.

Taking the inner product with u we obtain

⟨Qu, u⟩ = λ⟨(iξI + P)u, u⟩.  (2.15)

As in the previous theorem, let λ = x + iy, ⟨u, u⟩ = 1, ⟨Pu, u⟩ = p, ⟨Qu, u⟩ = s (both real, since P and Q are Hermitian). Comparing real and imaginary parts of (2.15) yields

s = −yξ + xp,
0 = xξ + yp.

Multiplying the first equation by x and the second by y results in

xs = |λ|²p.

However, under our assumptions on P and A,

|s| = |⟨(P − A)u, u⟩| ≤ max_x [|a(x) − b(x)|/b(x)] ⟨Pu, u⟩ = max_x [|a(x) − b(x)|/b(x)] p,

and from xs = |λ|²p together with |x| ≤ |λ| it follows that |λ| ≤ |s|/p < p/p = 1. According to the remark to the previous theorem, this ensures the convergence of the method. Moreover, we can also estimate where exactly the eigenvalues of the operator (iξI + P)^{−1}Q lie: they are contained in two symmetric circles, both with radius r = max_x |a(x) − b(x)|/(2b(x)) and with centres at ±max_x (a(x) − b(x))/(2b(x)), which completes the proof. Figure 2 shows the distribution of the eigenvalues of the matrix (iξI + P)^{−1}Q for a(x) = 1 + 0.5 sin x and b(x) = 1. □
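The convergence criterion of Theorem 2.1 and the bound of Theorem 2.2 can be checked numerically. The sketch below is our own illustration (the coefficient a(x), the grid size and the scan range for ξ are assumed values, not taken from the paper): it builds one-dimensional three-point discretizations of −(a(x)u′)′ and −(b(x)u′)′ with b ≡ 1, and scans ρ((iξI + P)⁻¹Q) over a grid of real ξ:

```python
import numpy as np

def stiffness_1d(coeff, n):
    """Three-point FD matrix for -(c(x) u')' on (0, 1) with zero Dirichlet BCs,
    using the coefficient at the half-points x_{i +/- 1/2}."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    cm = coeff(x - h / 2)
    cp = coeff(x + h / 2)
    return (np.diag(cm + cp) - np.diag(cp[:-1], 1) - np.diag(cm[1:], -1)) / h**2

n = 31
a = lambda x: 1 + 0.5 * np.sin(np.pi * x)   # variable coefficient (assumed example)
b = lambda x: np.ones_like(x)               # constant coefficient -> P is Toeplitz
A = stiffness_1d(a, n)
P = stiffness_1d(b, n)
Q = P - A

# rho(K) = max over real xi of rho((i*xi*I + P)^{-1} Q); scan a coarse grid of xi
rho = max(
    np.abs(np.linalg.eigvals(np.linalg.solve(1j * xi * np.eye(n) + P, Q))).max()
    for xi in np.linspace(-50.0, 50.0, 21)
)
bound = np.max(np.abs(a(np.linspace(0, 1, 1001)) - 1))   # max |a - b| / b
assert rho < bound   # converges, with a rate that does not depend on h
```

For a Hermitian P the maximum over ξ is attained at ξ = 0 (see (2.9)), so the coarse scan is enough for this check.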

Remark. The method also converges for nonrectangular domains; however, in this case the matrix P need not be Toeplitz or block Toeplitz, and it is no longer possible to apply fast solvers based on Fast Fourier Transform techniques.

Corollary 2.1 If b(x) ≡ C then

ρ(K) ≤ max_x |a(x) − C|/C  (2.16)

and

min_{C∈R_+} ρ(K) ≤ (a_+ − a_−)/(a_+ + a_−).  (2.17)

Proof. The first statement is a direct consequence of the theorem. It is easy to check that (2.16) is minimised when C = (a_max + a_min)/2, where a_max = max_{x∈Ω} a(x) and a_min = min_{x∈Ω} a(x). In this case

min_{C∈R_+} ρ(K) ≤ (a_max − a_min)/(a_max + a_min) ≤ (a_+ − a_−)/(a_+ + a_−). □

Corollary 2.2 A standard (2d+1)-point finite difference discretization in a rectangular domain satisfies the assumptions of the theorem.

Proof. The positive definiteness of the matrices A and P follows from the fact that both A and P are symmetric and diagonally dominant with positive entries along their diagonals. If a(x) > cb(x) then A − cP is symmetric and diagonally dominant with positive entries along the diagonal, hence positive definite. Similarly, if a(x) < Cb(x) then CP − A is symmetric and diagonally dominant with positive entries along the diagonal, hence positive definite. □

3 The algorithm

The last corollary of the previous section allows us immediately to construct a method for solving equation (2.12)-(2.14). We approximate the coefficient a(x) by the constant C = (a_max + a_min)/2 and use its discretization as the operator P in equation (2.3). Since P in this case is a Toeplitz or block Toeplitz matrix, we call this method Toeplitz waveform relaxation. The rate of convergence in this case is not greater than (a_max − a_min)/(a_max + a_min), which, in the case of a mildly varying coefficient a(x), is close to 0 and does not depend on h. As proved in [4], any A-stable method produces a convergent discrete waveform relaxation method, so for the discretization in time one can choose, for instance, the backward Euler method or the trapezoidal rule.
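To make the construction concrete, here is a minimal sketch of the resulting iteration for a one-dimensional model problem. It is our own illustration: the coefficient, grid, time step and stopping test are assumed values, and a dense solve stands in for the fast Toeplitz solver the paper has in mind. Each sweep integrates (2.3)-(2.4) by backward Euler, so at time step m it solves (I/Δt + P)u_new[m] = u_new[m−1]/Δt + Q u_old[m] + f[m]:

```python
import numpy as np

# Model problem: u_t - (a(x) u_x)_x = 0 on (0, 1), zero Dirichlet BCs
n, M, dt = 15, 20, 0.01
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
a = lambda s: 1 + 0.25 * np.sin(np.pi * s)          # mildly varying coefficient
cm, cp = a(x - h / 2), a(x + h / 2)
A = (np.diag(cm + cp) - np.diag(cp[:-1], 1) - np.diag(cm[1:], -1)) / h**2
C = (a(x).max() + a(x).min()) / 2                   # constant coefficient, Corollary 2.1
P = C * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # Toeplitz
Q = P - A

def wr_sweep(u_old, u0, f):
    """One waveform relaxation sweep, backward Euler in time:
    (I/dt + P) u_new[m] = u_new[m-1]/dt + Q u_old[m] + f[m]."""
    u_new = np.empty_like(u_old)
    u_new[0] = u0
    B = np.eye(n) / dt + P      # Toeplitz; this is where a fast solver would be used
    for m in range(1, M + 1):
        u_new[m] = np.linalg.solve(B, u_new[m - 1] / dt + Q @ u_old[m] + f[m])
    return u_new

u0 = np.sin(np.pi * x)
f = np.zeros((M + 1, n))
u = np.tile(u0, (M + 1, 1))     # initial guess: constant in time, satisfies u(0) = u0
for it in range(30):
    u_next = wr_sweep(u, u0, f)
    diff = np.abs(u_next - u).max()
    u = u_next
    if diff < 1e-10:
        break
```

With this mildly varying coefficient the sweeps contract at a rate of roughly (a_max − a_min)/(a_max + a_min) ≈ 0.1, independent of h, so only a handful of sweeps are needed.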
At each time step the new iterate can be calculated using its own value at the previous time step (or, in the case of multistep methods, at several previous steps) and the value of the old iterate at the present time step. We describe this in Figure 1, where the arrows show the values needed for the calculation of the next iterate. It is easy to

[Figure 1: A graphical description of the iterative procedure for parabolic equations when using the waveform relaxation method; arrows indicate, for each value u^n_m, the previously computed values it depends on, with one processor per iterate.]

see that, for instance, u^1_2 and u^2_1 (where the superscript denotes the iteration number and the subscript denotes the time step) can be calculated simultaneously and independently of each other. The same is true for u^1_3, u^2_2 and u^3_1. It is trivial to show by induction that this holds for any u^n_m along the line m + n = const. This means that the algorithm can be efficiently implemented in parallel, each new iterate being calculated on a separate processor, immediately using the values obtained by another processor for the previous iterate.

4 Numerical Experiments

In this section we present the results of numerical experiments with the Toeplitz operator splitting. As a test equation we chose the following parabolic problem:

∂u/∂t = ∂/∂x[(1 + α sin 4πx sin 4πy) ∂u/∂x] + ∂/∂y[(1 + α sin 4πx sin 4πy) ∂u/∂y],  (4.1)
(x, y, t) ∈ Ω × (0, ∞),  Ω = (0, 1) × (0, 1),
u(x, y, t) = 0,  (x, y, t) ∈ ∂Ω × (0, ∞),  (4.2)
u(x, y, 0) = sin πx sin πy.  (4.3)

Here 0 < α < 1, which ensures that the problem is parabolic. The equation was discretized using the five-point approximation for the space variables and the backward Euler approximation for the time variable, and was solved using the SOR and Toeplitz methods with different grid sizes and different values of α. The value of ω for the SOR method was chosen so as to minimise the total number of iterations. Since for the SOR method the calculation time is virtually independent of α, we only present the results for α = 0.5. For both methods the stopping criterion was ||u^{n+1} − u^n|| < TOL, where u^n, u^{n+1} denote two successive

iterates and TOL = 10^{−6}. For the Toeplitz method we have chosen b(x, y) ≡ 1, in which case the spectral radius of the operator K is at most α. All calculations were carried out on a Sun SPARC-1 workstation. Figures 3 to 5 present plots of the execution times and of the number of iterations for different values of α and different grid sizes. As expected, the SOR algorithm shows no sensitivity to the value of α, while the Toeplitz algorithm performs considerably better when α is close to 0 (the case of mildly varying coefficients), and its performance deteriorates as α approaches 1 (the case of highly varying coefficients). Since the convergence of the waveform relaxation methods is linear, in both cases the number of iterations L can be estimated by means of the inequality ρ^L ||e^0|| < TOL, where ||e^0|| = ||u^0 − u|| is the norm of the initial error and ρ is the spectral radius of the operator K. This yields

L ≈ ln(TOL/||e^0||)/ln ρ.  (4.4)

In our example ρ ≈ α and, as α → 1, ln α ≈ α − 1, so the number of iterations grows inversely proportionally to 1 − α as α approaches 1.

Next we compare asymptotically the number of operations required by the two methods to achieve a given tolerance TOL as the space discretization step h tends to 0. In both cases it equals the number of iterations L times the number of operations per iteration M. L can be estimated as in (4.4). It is known (see [3]) that for the SOR method ρ ≈ 1 − 2π²h², so that

L_SOR ≈ ln(TOL/||e^0||)/ln(1 − 2π²h²) ≈ −ln(TOL/||e^0||)/(2π²h²) = O(n²),

where n = 1/h is the number of grid points on the unit interval. On the other hand, at each step the resulting linear systems are lower triangular with O(n²) nonzero entries, so solving them requires O(n²) operations. Thus the total cost of the SOR method can be estimated as

L_SOR M_SOR = O(n⁴).

For the Toeplitz method we know that ρ is independent of h, which means that

L_T ≈ ln(TOL/||e^0||)/ln ρ_T = O(1)

as n grows to infinity.
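The estimate (4.4) is easy to tabulate. The sketch below is our own illustration (TOL, the initial error norm and the grid sizes are assumed values): it compares the predicted iteration counts for the SOR splitting, with ρ ≈ 1 − 2π²h², against the Toeplitz splitting, with ρ ≈ α independent of h:

```python
import math

def iterations(rho, tol=1e-6, e0=1.0):
    """Smallest L with rho**L * e0 < tol, from estimate (4.4)."""
    return math.ceil(math.log(tol / e0) / math.log(rho))

alpha = 0.5
for n in (16, 32, 64):
    h = 1.0 / n
    L_sor = iterations(1 - 2 * math.pi**2 * h**2)   # grows like O(n^2)
    L_toeplitz = iterations(alpha)                  # independent of n
    print(f"n = {n:3d}: L_SOR = {L_sor:5d}, L_Toeplitz = {L_toeplitz}")
```

Here L_Toeplitz stays constant while L_SOR grows roughly fourfold each time n doubles, matching the O(n²) estimate.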
The number of operations required to solve a system with a block tridiagonal Toeplitz matrix (which is the case if the domain is rectangular) is M_T = O(n² ln n²) = O(n² ln n), provided that n can be factorised into a product of small primes. This allows us to write

L_SOR M_SOR / (L_T M_T) = O(n²/ln n).

Figure 6 depicts the results obtained in the numerical experiments. In this case α was fixed and we chose different values of n. The first graph shows the ratio of the times

required by the two methods for different values of n, and the second graph shows the same ratio multiplied by ln n/n². One can see that the results of the experiment closely agree with the theoretical predictions.

The results of the experiments allow us to conclude that even for moderate grid sizes (n = m = 16) the Toeplitz method outperforms the SOR method if the coefficients are mildly varying. The advantage of the Toeplitz method becomes even greater as the number of grid points increases.

[Figure 2: The eigenvalues of K(ξ), where A discretizes −d/dx((1 + 0.5 sin x) d/dx) and P discretizes −d²/dx², with mesh size h = 0.2, a_+ = 1.5, a_− = 0.5.]

5 Acknowledgments

The author would like to thank Dr Arieh Iserles for many useful comments on this work. The author was supported by a Leslie Wilson Scholarship from Magdalene College, Cambridge, and an ORS award.

References

[1] E. Lindelöf. Sur l'application des méthodes d'approximations successives à l'étude des intégrales réelles des équations différentielles ordinaires. Journal de Mathématiques

Figure 3: The execution time (sec) and the number of iterations for the Toeplitz method, plotted against α, with the number of grid points m = n = 16. The dashed line represents the execution time for the SOR method.

Figure 4: The execution time (sec) and the number of iterations for the Toeplitz method, plotted against α, with the number of grid points m = n = 32. The dashed line represents the execution time for the SOR method.

Figure 5: The execution time (sec) and the number of iterations for the Toeplitz method, plotted against α, with the number of grid points m = n = 64. The dashed line represents the execution time for the SOR method.

Figure 6: The ratio of the execution times required for the SOR and Toeplitz methods, plotted against the number of points on the unit interval n = 1/h; the second panel shows the same ratio multiplied by ln n/n².

Pures et Appliquées, 1:117-128, 1894.

[2] E. Lelarasmee, A. E. Ruehli, and A. L. Sangiovanni-Vincentelli. The waveform relaxation method for time-domain analysis of large scale integrated circuits. IEEE Trans. on CAD of IC and Syst., 1:131-145, 1982.

[3] U. Miekkala and O. Nevanlinna. Convergence of dynamic iteration methods for initial value problems. SIAM J. Sci. Stat. Comput., 8(4):459-482, 1987.

[4] U. Miekkala and O. Nevanlinna. Sets of convergence and stability regions. BIT, 27:554-584, 1987.

[5] E. Picard. Sur l'application des méthodes d'approximations successives à l'étude de certaines équations différentielles ordinaires. Journal de Mathématiques Pures et Appliquées, 9:217-271, 1893.

[6] S. Vandewalle. Parallel Multigrid Waveform Relaxation for Parabolic Problems. B. G. Teubner, Stuttgart, 1993.