ON SOR WAVEFORM RELAXATION METHODS

JAN JANSSEN* AND STEFAN VANDEWALLE†

Abstract. Waveform relaxation is a numerical method for solving large-scale systems of ordinary differential equations on parallel computers. It differs from standard iterative methods in that it computes the solution on many time levels, or along a continuous time interval, simultaneously. This paper deals with the acceleration of the standard waveform relaxation method by successive overrelaxation (SOR) techniques. In particular, different SOR acceleration schemes, based on multiplication with a scalar parameter or convolution with a time-dependent function, are described and theoretically analysed. The theory is applied to a one-dimensional and a two-dimensional model problem and checked against results obtained by numerical experiments.

Key words. waveform relaxation, successive overrelaxation, convolution

AMS subject classifications. 65F10, 65L05

1. Introduction. Waveform relaxation, also called dynamic iteration or Picard–Lindelöf iteration, is a highly parallel iterative method for numerically solving large-scale systems of ordinary differential equations (ODEs). It is the natural extension to systems of differential equations of the relaxation methods for solving systems of algebraic equations. In the present paper we concentrate on linear ODE systems of the form

(1.1)  $B\dot{u}(t) + Au(t) = f(t), \quad u(0) = u_0,$

where the matrices B and A belong to $\mathbb{C}^{d \times d}$, and B is assumed to be non-singular. Such a system is found, for example, after spatial discretisation of a constant-coefficient parabolic partial differential equation (PDE) using a conforming Galerkin finite-element method [23]. Matrix B is then a symmetric positive definite mass matrix and A is a stiffness matrix. A spatial discretisation based on finite volumes or finite differences leads to a similar system with B respectively a diagonal matrix or the identity matrix. Waveform relaxation has been applied successfully to more general, time-dependent coefficient problems and to non-linear
problems. Yet, only problems of the form (1.1) seem to allow precise quantitative convergence estimates of the waveform iteration.

Waveform relaxation methods for linear problems (1.1) are usually defined by splittings of the coefficient matrices of the ODE, i.e., $B = M_B - N_B$ and $A = M_A - N_A$, and correspond to an iteration of the following form:

(1.2)  $\left(M_B \frac{d}{dt} + M_A\right) u^{(\nu)}(t) = \left(N_B \frac{d}{dt} + N_A\right) u^{(\nu-1)}(t) + f(t), \quad u^{(\nu)}(0) = u_0.$

In [8], we have investigated the convergence of the above iteration when $(M_B, N_B)$ and $(M_A, N_A)$ correspond to a standard Jacobi or Gauss–Seidel matrix splitting. In

* Katholieke Universiteit Leuven, Department of Computer Science, Celestijnenlaan 200A, B-3001 Heverlee, Belgium (jan.janssen@cs.kuleuven.ac.be). This text presents research results of the Belgian Incentive Program "Information Technology" – Computer Science of the Future, initiated by the Belgian State – Prime Minister's Service – Federal Office for Scientific, Technical and Cultural Affairs. The scientific responsibility is assumed by its authors.
† California Institute of Technology, Applied Mathematics 217-50, Pasadena, CA 91125 (vandewalle@na-net.ornl.gov). This work was supported in part by the NSF under Cooperative Agreement No. CCR

that paper we assumed the resulting ODEs were solved exactly, i.e., the iteration is continuous in time. In [9] a similar iteration was studied for the discrete-time case, defined by discretising (1.2) with a linear multistep method. The Jacobi and Gauss–Seidel waveform methods proved to be convergent, with convergence rates very similar to the convergence rates obtained with the corresponding relaxation methods for algebraic systems.

In order to improve the convergence of the waveform relaxation methods, several acceleration techniques have been described: successive overrelaxation ([1], [2], [17], [18], [21]), Chebyshev iteration ([3], [22]), Krylov subspace acceleration ([5]), and multigrid acceleration ([8], [9], [4], [24]). This paper deals with acceleration methods based on the successive overrelaxation (SOR) idea.

A first SOR waveform relaxation method, based on matrix splitting, was introduced by Miekkala and Nevanlinna in [17], [18] for problems of the form (1.1) with B = I. They showed that SOR leads to some acceleration, although a much smaller one than the acceleration by the standard SOR method for algebraic equations. A second method, briefly discussed in [21], is based on multiplying the Gauss–Seidel correction by an overrelaxation parameter. As in the case of the previous method, numerical experiments indicate that a careful selection of the overrelaxation parameter leads to some acceleration, but again only a marginal one. These somewhat disappointing results led Reichelt, White and Allen to define a third method, which they named the convolution SOR waveform relaxation method [21]. They changed the second SOR algorithm by replacing the multiplication with an SOR parameter by a convolution with a time-dependent SOR kernel. This fix appeared to be a very effective one. Application to a non-linear semiconductor device simulation problem showed the new method to be by far superior to the standard SOR waveform relaxation method.

In this paper, we extend the SOR
waveform relaxation techniques to general problems of the form (1.1). Using the theoretical framework developed in [8], [9], the convergence properties of these methods are investigated. Particular attention is paid to the convolution SOR method, the discrete-time convergence properties of which were outlined for the B = I case in [21]. We complete this discrete-time analysis and also treat the continuous-time case. We identify the nature of the different continuous-time and discrete-time SOR iteration operators, and derive the corresponding spectral radii and norm expressions. Application of these results to some model problems yields explicit formulae for the optimal convergence factors as a function of the mesh size h. These results are verified by numerical experiments. In particular, we show that for these model problems the convolution SOR waveform method attains an acceleration identical to that of the standard SOR method for the linear system $Au = f$. This is not so for the other SOR waveform methods. We also illustrate by numerical experiments that the convolution SOR algorithm can attain excellent convergence rates, even in cases where the current theory does not apply.

The structure of the paper is as follows. We start in §2 by describing the different types of SOR waveform algorithms. Their convergence as continuous-time methods is presented in §3. In §4, we briefly point out the corresponding discrete-time convergence results, and we show how the latter relate to the continuous-time ones. A model problem analysis and numerical results for the one- and two-dimensional heat equation are given for point relaxation in §5 and for line relaxation in §6. Finally, we end in §7 with some concluding remarks.

2. SOR waveform relaxation methods.

2.1. Continuous-time methods. The most obvious way to define an SOR waveform method for systems of differential equations is based on the natural extension of the standard SOR procedure for algebraic equations [25], [27]. Let the elements of the matrices B and A be denoted by $b_{ij}$ and $a_{ij}$, $1 \le i, j \le d$. First, a function $\hat{u}_i^{(\nu)}(t)$ is computed with a Gauss–Seidel waveform relaxation scheme,

(2.1)  $\left(b_{ii}\frac{d}{dt} + a_{ii}\right)\hat{u}_i^{(\nu)}(t) = -\sum_{j=1}^{i-1}\left(b_{ij}\frac{d}{dt} + a_{ij}\right)u_j^{(\nu)}(t) - \sum_{j=i+1}^{d}\left(b_{ij}\frac{d}{dt} + a_{ij}\right)u_j^{(\nu-1)}(t) + f_i(t),$

with $\hat{u}_i^{(\nu)}(0) = (u_0)_i$. Next, the old approximation $u_i^{(\nu-1)}(t)$ is updated by multiplying the correction $\hat{u}_i^{(\nu)}(t) - u_i^{(\nu-1)}(t)$ by an overrelaxation parameter $\omega$,

(2.2)  $u_i^{(\nu)}(t) = u_i^{(\nu-1)}(t) + \omega\left(\hat{u}_i^{(\nu)}(t) - u_i^{(\nu-1)}(t)\right).$

Elimination of the intermediate approximation $\hat{u}_i^{(\nu)}(t)$ from (2.1) and (2.2) leads to an iterative scheme of the form (1.2), corresponding to the matrix splittings

(2.3)  $M_B = \frac{1}{\omega}D_B - L_B, \quad N_B = \frac{1-\omega}{\omega}D_B + U_B$

and

(2.4)  $M_A = \frac{1}{\omega}D_A - L_A, \quad N_A = \frac{1-\omega}{\omega}D_A + U_A,$

where $B = D_B - L_B - U_B$ and $A = D_A - L_A - U_A$ are the standard splittings of B and A into diagonal, lower and upper triangular parts [8, Rem. 3]. This method was briefly considered for ODE systems (1.1) with B = I in [21]. Its non-linear variant was studied in [1], [21].

The first step of the convolution SOR waveform relaxation (CSOR) method is similar to the first step of the previous scheme, and consists of the computation of a Gauss–Seidel iterate $\hat{u}_i^{(\nu)}(t)$ using (2.1). Instead of multiplying the resulting correction by a scalar $\omega$, as in (2.2), the correction is convolved with a function $\Omega(t)$ [21],

(2.5)  $u_i^{(\nu)}(t) = u_i^{(\nu-1)}(t) + \int_0^t \Omega(t-s)\left(\hat{u}_i^{(\nu)}(s) - u_i^{(\nu-1)}(s)\right)ds.$

We will allow for fairly general convolution kernels of the form

(2.6)  $\Omega(t) = \omega\,\delta(t) + \omega_c(t),$

with $\omega$ a scalar parameter, $\delta(t)$ the delta function, and $\omega_c(t)$ a function in $L^1$. In that case, (2.5) can be rewritten as

(2.7)  $u_i^{(\nu)}(t) = u_i^{(\nu-1)}(t) + \omega\left(\hat{u}_i^{(\nu)}(t) - u_i^{(\nu-1)}(t)\right) + \int_0^t \omega_c(t-s)\left(\hat{u}_i^{(\nu)}(s) - u_i^{(\nu-1)}(s)\right)ds.$

The latter equation corresponds to (2.2) with an additional correction based on a Volterra convolution. As such, the standard SOR waveform relaxation method can be treated as a special case of the CSOR method by setting $\omega_c(t) \equiv 0$.

Remark 2.1. The original SOR waveform method, developed and analysed by Miekkala and Nevanlinna for systems of ODEs (1.1) with B = I in [17], [18], differs from the standard SOR waveform method described above in that only the coefficient matrix A is split in an SOR manner. To distinguish between both SOR methods, we will further refer to the Miekkala–Nevanlinna method as the single-splitting SOR waveform relaxation (SSSOR) method, whereas the standard SOR method will be referred to as the double-splitting SOR (DSSOR) method. The SSSOR method, which can be cast into the double-splitting framework of (1.2) by setting $M_B = I$, $N_B = 0$ and (2.4), has not yet found any use for general ODE systems (1.1) with $B \ne I$. Hence, in that case we will distinguish only between the CSOR method and its variant for $\omega_c(t) \equiv 0$
, i.e., the standard SOR or DSSOR method.

2.2. Discrete-time methods. In an actual implementation, the continuous-time methods are replaced by their discrete-time variants. As in [9], we shall concentrate in this paper on the use of (irreducible, consistent, zero-stable) linear multistep formulae for time discretisation. For the reader's convenience, we recall the general linear multistep formula for calculating the solution to the ODE $\dot{y}(t) = f(t, y)$ with $y(0) = y[0]$, see e.g. [2],

$\sum_{l=0}^{k} \alpha_l\, y[n+l] = \tau \sum_{l=0}^{k} \beta_l\, f[n+l].$

In this formula, $\alpha_l$ and $\beta_l$ are real constants, $\tau$ denotes a constant step size and $y[n]$ denotes the discrete approximation of $y(t)$ at $t = n\tau$. We shall assume that k starting values $y[0], y[1], \ldots, y[k-1]$ are given. The characteristic polynomials of the linear multistep method are given by

$a(z) = \sum_{l=0}^{k} \alpha_l z^l \quad \text{and} \quad b(z) = \sum_{l=0}^{k} \beta_l z^l.$

The stability region S consists of those $\lambda \in \mathbb{C}$ for which the polynomial $a(z) - \lambda b(z)$ (around $\lambda = \infty$: $\frac{1}{\lambda}a(z) - b(z)$) satisfies the root condition: all roots satisfy $|z_l| \le 1$ and those of modulus 1 are simple.

The first step of the discrete-time convolution SOR waveform relaxation algorithm is obtained by discretising (2.1). The second step approximates the convolution integral in (2.5) by a convolution sum with kernel $\Omega_\tau = \{\Omega[n]\}_{n=0}^{N}$, where N denotes the (possibly infinite) number of time steps,

(2.8)  $u_i^{(\nu)}[n] = u_i^{(\nu-1)}[n] + \sum_{l=0}^{n} \Omega[n-l]\left(\hat{u}_i^{(\nu)}[l] - u_i^{(\nu-1)}[l]\right).$

We obtain from this a discrete-time analogue of (2.7) if we set $\Omega_\tau = \omega\,\delta_\tau + \tau(\omega_c)_\tau$, with $\delta_\tau$ the discrete delta function [20, p. 409]. The right-hand side of (2.8) then becomes

$u_i^{(\nu-1)}[n] + \omega\left(\hat{u}_i^{(\nu)}[n] - u_i^{(\nu-1)}[n]\right) + \tau\sum_{l=0}^{n} \omega_c[n-l]\left(\hat{u}_i^{(\nu)}[l] - u_i^{(\nu-1)}[l]\right).$
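The root condition that defines the stability region S can be tested directly by computing the roots of $a(z) - \lambda b(z)$. A minimal sketch, assuming the trapezoidal rule as the linear multistep method (the helper name is an assumption, not from the paper):

```python
import numpy as np

# sketch: membership test for the stability region S of a linear multistep
# method via the root condition on a(z) - lambda*b(z).
# Trapezoidal rule: a(z) = z - 1, b(z) = (z + 1)/2 (an assumed example).
def in_stability_region(lam, alpha, beta, tol=1e-12):
    coeffs = np.array(alpha) - lam * np.array(beta)   # highest degree first
    roots = np.roots(coeffs)
    if np.any(np.abs(roots) > 1 + tol):
        return False
    # roots of modulus one must be simple
    on_circle = roots[np.abs(np.abs(roots) - 1) <= tol]
    return len(on_circle) == len(set(np.round(on_circle, 8)))

alpha = [1.0, -1.0]        # a(z) = z - 1
beta = [0.5, 0.5]          # b(z) = (z + 1)/2
print(in_stability_region(-2.0 + 1.0j, alpha, beta))   # True: left half-plane
print(in_stability_region(1.0, alpha, beta))           # False
```

For the trapezoidal rule the single root is $z = (1 + \lambda/2)/(1 - \lambda/2)$, so S is exactly the closed left half-plane, which the two probes above reflect.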

Note that the convolution sum is a bounded operator if $\Omega_\tau$ (or $(\omega_c)_\tau$) is an $l^1$-sequence. Clearly, the discrete-time analogue of the standard SOR waveform relaxation method is obtained by setting $\omega_c[n] \equiv 0$. The latter can also be derived by discretising (1.2) and using the splittings (2.3) and (2.4). We shall always assume that we do not iterate on the k given starting values, i.e., $\hat{u}_i^{(\nu)}[n] = u_i^{(\nu)}[n] = u_i^{(\nu-1)}[n] = u_i[n]$, $n < k$.

For implicit multistep methods, i.e., methods with $\beta_k \ne 0$, the linear systems arising from discretising (2.1) can be solved uniquely for every n if and only if the discrete solvability condition

$\frac{\alpha_k}{\tau\beta_k} \notin \sigma\!\left(-M_B^{-1} M_A\right)$

is satisfied, where $\sigma(\cdot)$ denotes the spectrum [9, eq. (4.5)]. In the case of (2.1), this simplifies to

(2.9)  $\frac{\alpha_k}{\tau\beta_k} \notin \sigma\!\left(-D_B^{-1} D_A\right).$

2.3. Block-wise methods. The point-wise SOR waveform relaxation methods described above can be adapted easily to the block-wise relaxation case. Matrices B and A are then partitioned into similar systems of $d_b \times d_b$ rectangular blocks $b_{ij}$ and $a_{ij}$. Correspondingly, $D_B$, $L_B$ and $U_B$ (and $D_A$, $L_A$ and $U_A$) are block diagonal, block lower triangular and block upper triangular matrices.

3. Continuous-time convergence analysis. We will analyse the convergence properties of the continuous-time SOR waveform relaxation methods outlined in the previous section. We will consider the general case of block-wise relaxation, which includes point-wise relaxation as a limiting case. This analysis follows the framework of [8] and extends and completes the results of [21] to systems of the form (1.1). We will concentrate on deriving the nature and the properties of the iteration operator of the CSOR waveform relaxation method. The results for the standard SOR waveform method then follow immediately by setting $\omega_c(t) \equiv 0$.
For the SOR waveform method with single splitting (which finds use only in the B = I case) we refer to [17], [18].

3.1. Nature of the operator. In order to identify the nature of the CSOR waveform iteration operator we need the following elementary result, which we formulate as a lemma.

Lemma 3.1. The solution to the ODE $b\dot{u}(t) + au(t) = q\dot{v}(t) + pv(t) + w(t)$ with constants $b, a, q, p \in \mathbb{C}^{m \times m}$ and b non-singular is given by $u(t) = \mathcal{K}v(t) + \varphi(t)$, with

$\mathcal{K}x(t) = b^{-1}q\,x(t) + \int_0^t k_c(t-s)\,x(s)\,ds \quad \text{with} \quad k_c(t) = e^{-b^{-1}a\,t}\,b^{-1}\!\left(p - a\,b^{-1}q\right),$

$\varphi(t) = e^{-b^{-1}a\,t}\left(u(0) - b^{-1}q\,v(0)\right) + \int_0^t e^{-b^{-1}a(t-s)}\,b^{-1}w(s)\,ds.$

The function $k_c(t) \in L^1(0, \infty)$ if $\operatorname{Re}(\lambda) > 0$ for all $\lambda \in \sigma(b^{-1}a)$. If, in addition, $w(t) \in L^p(0, \infty)$, then $\varphi(t) \in L^p(0, \infty)$.

Proof. The lemma is an immediate consequence of the well-known solution formula for the ODE $\dot{u} + au = f$, see e.g. [4, p. 9].

The CSOR waveform relaxation algorithm implicitly defines a classical successive iteration scheme of the form $u^{(\nu)}(t) = \mathcal{K}^{CSOR} u^{(\nu-1)}(t) + \varphi^{CSOR}(t)$, where $\varphi^{CSOR}(t)$

depends on the ODE's right-hand side f(t) and the initial condition, and where $\mathcal{K}^{CSOR}$ denotes a linear operator, which we name the continuous-time CSOR waveform relaxation operator. The nature of this operator is identified in the next theorem.

Theorem 3.2. The continuous-time convolution SOR waveform relaxation operator is of the form

(3.1)  $\mathcal{K}^{CSOR} = K_\infty^{CSOR} + \mathcal{K}_c^{CSOR},$

with $K_\infty^{CSOR} \in \mathbb{C}^{d \times d}$ and with $\mathcal{K}_c^{CSOR}$ a linear Volterra convolution operator, whose matrix-valued kernel $k_c^{CSOR}(t) \in L^1(0, \infty)$ if all eigenvalues of $D_B^{-1} D_A$ have positive real parts and if $\Omega(t)$ is of the form (2.6) with $\omega_c(t) \in L^1(0, \infty)$.

Proof. Lemma 3.1 can be applied to equation (2.1) to give

(3.2)  $\hat{u}_i^{(\nu)}(t) = \sum_{j=1}^{i-1}\left(H_{ij} + (h_c)_{ij}\,*\right)u_j^{(\nu)}(t) + \sum_{j=i+1}^{d_b}\left(H_{ij} + (h_c)_{ij}\,*\right)u_j^{(\nu-1)}(t) + \psi_i(t).$

Here, we have used the symbol "$*$" to denote convolution. The (matrix) constants $H_{ij}$ and (matrix) functions $(h_c)_{ij}(t)$ can be derived from the result of Lemma 3.1 with $b = b_{ii}$, $a = a_{ii}$, $q = -b_{ij}$ and $p = -a_{ij}$. Note that $(h_c)_{ij}(t) \in L^1(0, \infty)$ if $\operatorname{Re}(\lambda)$ is positive for all $\lambda \in \sigma(b_{ii}^{-1} a_{ii})$, while $\psi_i(t) \in L^p(0, \infty)$ if, in addition, $f_i(t) \in L^p(0, \infty)$.

We shall prove the existence of constants $K_{ij}^{CSOR}$ and $L^1$-functions $(k_c^{CSOR})_{ij}(t)$, $i, j = 1, \ldots, d_b$, such that

(3.3)  $u_i^{(\nu)}(t) = \sum_{j=1}^{d_b} K_{ij}^{CSOR}\, u_j^{(\nu-1)}(t) + \sum_{j=1}^{d_b} (k_c^{CSOR})_{ij} * u_j^{(\nu-1)}(t) + \varphi_i(t).$

The case i = 1 follows immediately from the combination of (3.2) with (2.7). More precisely, $K_{11}^{CSOR} = (1-\omega)I$, $K_{1j}^{CSOR} = \omega H_{1j}$, $(k_c^{CSOR})_{11}(t) = -\omega_c(t)I$, $(k_c^{CSOR})_{1j}(t) = \omega\,(h_c)_{1j}(t) + \omega_c * (H_{1j} + (h_c)_{1j})(t)$, $2 \le j \le d_b$, and $\varphi_1(t) = (\Omega * \psi_1)(t)$
(t))i with I the identity matrix whose dimension equals the dimension of b and a The general case follows by induction on i This step involves computing u () i (t) from (32) and (27) and using (33) to substitute u () j (t), j < i The result then follows from the knowledge that linear combinations and convolutions of L -functions are in L 32 Symbol The symbol K CSOR (z) of the continuous-time CSOR waveform relaxation operator is obtained after Laplace-transforming the iterative scheme (2){ (25) This gives the equation ~u () (z) = K CSOR (z)~u ( ) (z) + ~'(z) in Laplacetransform space, where we have used the \~"-notation to denote a Laplace-transformed variable, as, for instance, in ~u () (z) = L u () (t) = R 0 u () (t)e zt dt In particular, we obtain (34) K CSOR (z) = z ~(z) D B L B + ~(z) D A L A z! ~(z) D B + U B ~(z)!! + ~(z) D A + U A ~(z) where ~(z) =!+~! c (z) With reference to (3), we can also write (34) as K CSOR (z) = K CSOR + K CSOR c (z), with K CSOR c (z) = L kc CSOR (t) ;

3.3. Spectral radii and norm. The convergence properties of the continuous-time CSOR waveform relaxation operator can be expressed in terms of its symbol, as explained in our general theoretical framework for operators consisting of a matrix multiplication and a convolution part [8, §2].

3.3.1. Finite time intervals. The following theorem follows immediately from [8, Lemma 2.1], applied to $\mathcal{K}^{CSOR}$, taking into account that $\omega_c(t)$ is an $L^1$-function and, consequently, $\lim_{z\to\infty} \tilde{\omega}_c(z) = 0$.

Theorem 3.3. Consider $\mathcal{K}^{CSOR}$ as an operator in $C[0, T]$. Then $\mathcal{K}^{CSOR}$ is a bounded operator and

(3.5)  $\rho\left(\mathcal{K}^{CSOR}\right) = \rho\left(K^{CSOR}(\infty)\right) = \rho\left(K_\infty^{CSOR}\right).$

If $\Omega(t)$ is of the form (2.6) with $\omega_c(t) \in L^1(0, T)$, we have

(3.6)  $\rho\left(\mathcal{K}^{CSOR}\right) = \rho\left(\left(\frac{1}{\omega}D_B - L_B\right)^{-1}\left(\frac{1-\omega}{\omega}D_B + U_B\right)\right).$

As a result, we have shown that the asymptotic convergence behaviour of the iteration on a finite time window is independent of the $L^1$-function $\omega_c(t)$.

3.3.2. Infinite time intervals. If all eigenvalues of $D_B^{-1}D_A$ have positive real parts, Theorem 3.2 implies that the convolution kernel of $\mathcal{K}^{CSOR}$ belongs to $L^1(0, \infty)$. Consequently, we can apply [8, Lemmas 2.3 and 2.4] in order to obtain the following convergence theorem.

Theorem 3.4. Assume all eigenvalues of $D_B^{-1}D_A$ have positive real parts, and let $\Omega(t)$ be of the form (2.6) with $\omega_c(t) \in L^1(0, \infty)$. Consider $\mathcal{K}^{CSOR}$ as an operator in $L^p(0, \infty)$, $1 \le p \le \infty$. Then $\mathcal{K}^{CSOR}$ is a bounded operator and

(3.7)  $\rho\left(\mathcal{K}^{CSOR}\right) = \sup_{\operatorname{Re}(z) \ge 0} \rho\left(K^{CSOR}(z)\right) = \sup_{\xi \in \mathbb{R}} \rho\left(K^{CSOR}(i\xi)\right).$

Denote by $\|\cdot\|_2$ the $L^2$-norm and by $\|\cdot\|$ the standard Euclidean vector norm. Then,

(3.8)  $\left\|\mathcal{K}^{CSOR}\right\|_2 = \sup_{\operatorname{Re}(z) \ge 0} \left\|K^{CSOR}(z)\right\| = \sup_{\xi \in \mathbb{R}} \left\|K^{CSOR}(i\xi)\right\|.$

Remark 3.1. In Theorem 3.4, we require $\tilde{\Omega}(z)$ to be the Laplace transform of a function of the form (2.6) with $\omega_c(t) \in L^1(0, \infty)$.
For this, a sufficient (but not necessary) condition is that $\tilde{\Omega}(z)$ is a bounded and analytic function in an open domain containing the closed right half of the complex plane [10, Prop. 2.3].

3.4. On the relation between the Jacobi and CSOR symbols. At the heart of classical SOR theories for determining the optimal overrelaxation parameter usually lies the existence of a Young relation, i.e., a relation between the eigenvalues of the Jacobi and the SOR iteration matrices. The following lemma reveals the existence of such a relation between the eigenvalues of the CSOR symbol $K^{CSOR}(z)$ and the eigenvalues of the Jacobi symbol

(3.9)  $K^{JAC}(z) = (zD_B + D_A)^{-1}\left(z(L_B + U_B) + (L_A + U_A)\right).$

It uses the notion of a block-consistent ordering, for which we refer to [27, p. 445, Def. 3.2].
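The eigenvalue correspondence established by the upcoming lemma can be observed numerically. A minimal sketch, for an assumed consistently ordered example with $\tilde{\Omega}(z) \equiv \omega$ (the matrices and parameter values are illustrative assumptions):

```python
import numpy as np

# sketch: for the consistently ordered matrix z*B + A, every eigenvalue
# lambda of the SOR symbol pairs with a Jacobi-symbol eigenvalue mu via
# (lambda + omega - 1)^2 = lambda * (omega * mu)^2.
n, omega, z = 6, 1.3, 0.7 + 0.4j
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.eye(n)
C = z * B + A
D = np.diag(np.diag(C)); L = -np.tril(C, -1); U = -np.triu(C, 1)

KJ = np.linalg.solve(D, L + U)                                    # Jacobi symbol (3.9)
KS = np.linalg.solve(D / omega - L, (1 - omega) / omega * D + U)  # SOR symbol
mus = np.linalg.eigvals(KJ)
resid = max(min(abs((lam + omega - 1)**2 - lam * (omega * mu)**2) for mu in mus)
            for lam in np.linalg.eigvals(KS))
print(resid < 1e-8)   # True
```

Here z*B + A is tridiagonal with a constant diagonal, hence consistently ordered, so the classical relation holds to rounding error for every eigenvalue of the SOR symbol.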

Lemma 3.5. Assume the matrices B and A are such that zB + A is a block-consistently ordered matrix with non-singular diagonal blocks, and $\tilde{\Omega}(z) \ne 0$. If $\mu(z)$ is an eigenvalue of $K^{JAC}(z)$ and $\lambda(z)$ satisfies

(3.10)  $\left(\lambda(z) + \tilde{\Omega}(z) - 1\right)^2 = \lambda(z)\left(\tilde{\Omega}(z)\,\mu(z)\right)^2,$

then $\lambda(z)$ is an eigenvalue of $K^{CSOR}(z)$. Conversely, if $\lambda(z) \ne 0$ is an eigenvalue of $K^{CSOR}(z)$ which satisfies (3.10), then $\mu(z)$ is an eigenvalue of $K^{JAC}(z)$.

Proof. Using the shorthands $D_z = zD_B + D_A$, $L_z = zL_B + L_A$ and $U_z = zU_B + U_A$, we can write (3.9) and (3.4) as

$K^{JAC}(z) = D_z^{-1}(L_z + U_z),$

$K^{CSOR}(z) = \left(D_z - \tilde{\Omega}(z)L_z\right)^{-1}\left(\left(1 - \tilde{\Omega}(z)\right)D_z + \tilde{\Omega}(z)U_z\right).$

Thus, if $\tilde{\Omega}(z)$ is viewed as a complex overrelaxation parameter, then $K^{JAC}(z)$ and $K^{CSOR}(z)$ are the standard Jacobi and SOR iteration matrices for the matrix $zB + A = D_z - L_z - U_z$. The result of the lemma then follows immediately from classical SOR theory, [25, Thm. 4.3], [27, p. 45, Thm. 3.4].

Under the assumption of a block-consistent ordering of zB + A, we have that if $\mu(z)$ is an eigenvalue of $K^{JAC}(z)$ then $-\mu(z)$ is also an eigenvalue of $K^{JAC}(z)$, see e.g. [27, p. 45, Thm. 3.4]. Therefore, if $\mu(z)$ is an eigenvalue of $K^{JAC}(z)$ and $\lambda(z)$ satisfies (3.10), we can choose

(3.11)  $\lambda(z) + \tilde{\Omega}(z) - 1 = \sqrt{\lambda(z)}\,\tilde{\Omega}(z)\,\mu(z).$

The next lemma determines the optimal (complex) value $\tilde{\Omega}(z)$ which minimises the spectral radius of $K^{CSOR}(z)$ for a given value of z. The result follows immediately from complex SOR theory, see e.g. [11, Thm. 4] or [19, eq. (9.9)]. The result was rediscovered in [21, Thm. 5.2], and presented in a waveform relaxation context for the B = I case.

Lemma 3.6. Assume the matrices B and A are such that zB + A is a block-consistently ordered matrix with non-singular diagonal blocks. Assume the spectrum of $K^{JAC}(z)$ lies on a line segment $[-\mu_1(z), \mu_1(z)]$ with $\mu_1(z) \in \mathbb{C}\setminus\{(-\infty, -1] \cup [1, \infty)\}$. The spectral radius of $K^{CSOR}(z)$ is then minimised for a given value of z by the unique optimum $\tilde{\Omega}_{opt}(z)$, given by

(3.12)  $\tilde{\Omega}_{opt}(z) = \frac{2}{1 + \sqrt{1 - \mu_1^2(z)}},$

where $\sqrt{\cdot}$ denotes the root with the positive real part. In particular,

(3.13)  $\rho\left(K^{CSOR,opt}(z)\right) = \left|\tilde{\Omega}_{opt}(z) - 1\right| < 1.$
Remark 3.2. Following Kredell [11, Lemma 4], the condition on the collinearity of the eigenvalues of the Jacobi symbol can be weakened. For (3.12) to hold, it is sufficient that there exists a critical eigenvalue pair $\pm\mu_1(z)$ of $K^{JAC}(z)$, i.e., a pair that corresponds to the dominant eigenvalue of $K^{CSOR}(z)$ for all values of the overrelaxation parameter. The existence of such a pair may be difficult to verify in practice, though.

Remark 3.3. The results of the above lemmas rely on the assumption of a block-consistent ordering of the matrix zB + A. In the case of semi-discrete parabolic PDEs

in two dimensions, this assumption is, for example, satisfied if B and A correspond to a five-point star and the relaxation is point-wise with lexicographic, diagonal or red/black ordering of the grid points. It is also satisfied if B and A correspond to a $3 \times 3$-stencil (a discretisation with linear or bilinear finite elements on a regular triangular or rectangular mesh, for example) and the relaxation is line-wise.

Remark 3.4. The assumption that the eigenvalues of the Jacobi symbol lie on a line is rather severe. Yet, it is satisfied for certain important classes of problems. For example, this is so when B = I and A is a symmetric positive definite consistently ordered matrix with constant positive diagonal $D_A = d_a I$ ($d_a > 0$). In that case the spectrum of $K^{JAC}(z)$ equals $\frac{d_a}{z + d_a}\,\sigma\!\left(K^{JAC}(0)\right)$. It is the spectrum of the standard Jacobi iteration matrix, which is real and has maximum smaller than 1 [27, p. 47, Thm. 3.5], scaled and rotated around the origin. If the assumptions of the theorem are violated, a more general complex SOR theory, allowing the eigenvalues of the Jacobi symbol to lie in a certain ellipse, should be used, see e.g. [6], [19], [27]. Alternatively, one could decide to continue to use (3.12). Although one then gives up optimality, this practice may lead to a good enough convergence, as is illustrated in §6.

Remark 3.5. If $\mu_1(z) \in \mathbb{C}\setminus\{(-\infty, -1] \cup [1, \infty)\}$ and $\mu_1(z)$ is analytic for $\operatorname{Re}(z) \ge 0$, then $\tilde{\Omega}_{opt}(z)$ is bounded and analytic for $\operatorname{Re}(z) \ge 0$. According to Remark 3.1, Theorem 3.4 may then be applied to calculate the spectral radius of the CSOR operator with optimal kernel, $\rho(\mathcal{K}^{CSOR,opt})$. In general, however, $\mu_1(z)$ is only known to be piecewise analytic, with possible discontinuities or lack of smoothness where the maximum switches from one eigenvalue branch to another. In that case, one might opt for using (3.12) with an analytic function $\mu_1(z)$ that approximates the "correct" one at certain values of z (e.g., near the values that correspond to the slowest converging
error components).

3.5. Optimal point-wise overrelaxation in the B = I case. In this section, we study the optimal variants of the three different SOR methods for ODE systems of the form (1.1) with B = I. The resulting formulae will be applied to a model problem in §5.

We first recall the analytical expression for the spectral radius of the single-splitting SOR waveform relaxation operator $\mathcal{K}^{SSSOR}$, as presented by Miekkala and Nevanlinna in [17, Thm. 4.2].

Theorem 3.7. Consider (1.1) with B = I. Assume A is a consistently ordered matrix with constant positive diagonal $D_A = d_a I$ ($d_a > 0$), the eigenvalues of $K^{JAC}(0)$ are real with $\mu = \rho(K^{JAC}(0)) < 1$, and $0 < \omega < 2$. Then, if we consider $\mathcal{K}^{SSSOR}$ as an operator in $L^p(0, \infty)$, $1 \le p \le \infty$, we have

(3.14)  $\rho\left(\mathcal{K}^{SSSOR}\right) = \begin{cases} \dfrac{\omega^2\mu^2}{2} - (\omega - 1) + \omega\mu\sqrt{\dfrac{\omega^2\mu^2}{4} - (\omega - 1)}, & \omega \le \omega_d, \\[2mm] \dfrac{8(\omega - 1)^2}{8(\omega - 1) - \omega^2\mu^2}, & \omega > \omega_d, \end{cases}$

where $\omega_d = \frac{4}{3}\left(2 - \sqrt{4 - 3\mu^2}\right)/\mu^2$. Furthermore, we have $\omega_{opt} = 4/(4 - \mu^2) > \omega_d$.

Based on Lemma 3.5 we can derive an analogous expression for the spectral radius of the double-splitting SOR waveform relaxation operator $\mathcal{K}^{DSSOR}$.

Theorem 3.8. Consider (1.1) with B = I. Assume A is a consistently ordered matrix with constant positive diagonal $D_A = d_a I$ ($d_a > 0$), the eigenvalues of $K^{JAC}(0)$ are real with $\mu = \rho(K^{JAC}(0)) < 1$, and $0 < \omega < 2$. Then, if we consider $\mathcal{K}^{DSSOR}$ as

an operator in $L^p(0, \infty)$, $1 \le p \le \infty$, we have

(3.15)  $\rho\left(\mathcal{K}^{DSSOR}\right) = \begin{cases} \dfrac{\omega^2\mu^2}{2} - (\omega - 1) + \omega\mu\sqrt{\dfrac{\omega^2\mu^2}{4} - (\omega - 1)}, & \omega \le \omega_d, \\[2mm] (\omega - 1)\,\dfrac{\omega\mu + 4\sqrt{\omega - 1}}{4\sqrt{\omega - 1} - \omega\mu}, & \omega > \omega_d, \end{cases}$

where $\omega_d = \left(4 - 2\sqrt{4 - 2\mu^2}\right)/\mu^2$. Furthermore, we have that $\omega_{opt} > \omega_d$.

Proof. Since B = I and $D_A = d_a I$, we have that

(3.16)  $\sigma\left(K^{JAC}(z)\right) = \frac{d_a}{z + d_a}\,\sigma\left(K^{JAC}(0)\right).$

In order to apply Lemma 3.5 to the double-splitting case, we note that here $\tilde{\Omega}(z) = \omega$ and that $\lambda(z)$ is an eigenvalue of the DSSOR symbol $K^{DSSOR}(z)$, which is given by

$K^{DSSOR}(z) = \left(\frac{z}{\omega} I + \frac{1}{\omega}D_A - L_A\right)^{-1}\left(z\,\frac{1-\omega}{\omega} I + \frac{1-\omega}{\omega}D_A + U_A\right).$

With $\mu(0)$ denoting an arbitrary eigenvalue of $K^{JAC}(0)$, we can rewrite (3.11) as

$\sqrt{\lambda(z)}\,\omega\mu(0) = \left(\frac{z}{d_a} + 1\right)\left(\lambda(z) + \omega - 1\right).$

We get equilibrium lines for $|\lambda(z)|$,

(3.17)  $\frac{z}{d_a} = -1 + \frac{\sqrt{|\lambda(z)|}\,\omega\mu(0)}{|\lambda(z)|\,e^{it/2} + (\omega - 1)\,e^{-it/2}},$

by setting $\lambda(z) = |\lambda(z)|\,e^{it}$ with t varying from 0 to $4\pi$. The supremum of $|\lambda(z)|$ along the imaginary axis is attained at a point where such an equilibrium line osculates the imaginary axis, i.e., when $\operatorname{Re} z(t) = 0$ and $\operatorname{Re} z'(t) = 0$. In addition, we note that the eigenvalues of $K^{JAC}(z)$ are collinear, which implies that $K^{JAC}(z)$ has a critical eigenvalue pair [11, Lemma 4]. According to Remark 3.2, the dominant eigenvalue of $K^{DSSOR}(z)$ is then obtained by replacing $\mu(0)$ by $\mu$ in (3.17). This yields for $\operatorname{Re} z(t) = 0$ the condition

(3.18)  $4|\lambda(z)|(\omega - 1)\cos^2\frac{t}{2} - \sqrt{|\lambda(z)|}\,\omega\mu\left(|\lambda(z)| + \omega - 1\right)\cos\frac{t}{2} + \left(|\lambda(z)| - \omega + 1\right)^2 = 0,$

while $\operatorname{Re} z'(t) = 0$ gives

(3.19)  $\left(8|\lambda(z)|(\omega - 1)\cos\frac{t}{2} - \sqrt{|\lambda(z)|}\,\omega\mu\left(|\lambda(z)| + \omega - 1\right)\right)\sin\frac{t}{2} = 0.$

If $\sin(t/2) = 0$, the osculation with the imaginary axis occurs at the origin, and the corresponding largest value of $|\lambda(0)|$ equals the spectral radius of the algebraic SOR method [25], [27], which, for $1 \le \omega \le \omega_{opt}^{alg} = 2/\left(1 + \sqrt{1 - \mu^2}\right)$, is given by

(3.20)  $|\lambda(0)| = \frac{\omega^2\mu^2}{2} - (\omega - 1) + \omega\mu\sqrt{\frac{\omega^2\mu^2}{4} - (\omega - 1)}.$

If $\sin(t/2) \ne 0$, the osculation is at a complex point $z = i\xi$, $\xi \ne 0$ (and, by symmetry, at $z = -i\xi$). The corresponding value of $|\lambda(z)|$ is obtained by eliminating $\cos(t/2)$ from (3.18) and (3.19). This gives the equation

$\left(16(\omega - 1) - \omega^2\mu^2\right)|\lambda(z)|^2 - 2(\omega - 1)\left(16(\omega - 1) + \omega^2\mu^2\right)|\lambda(z)| + (\omega - 1)^2\left(16(\omega - 1) - \omega^2\mu^2\right) = 0,$

whose largest solution for $\omega > 1$ equals

(3.21)  $|\lambda(z)| = (\omega - 1)\,\frac{\omega\mu + 4\sqrt{\omega - 1}}{4\sqrt{\omega - 1} - \omega\mu}.$

In order to determine the range of validity of this result, we need to impose in (3.19) the condition $-1 < \cos(t/2) < 1$. This is a condition on $\omega$ which, when combined with (3.21), leads to $\omega > \omega_d$ with $\omega_d$ as given in the formulation of the theorem. It turns out that $1 < \omega_d < \omega_{opt}^{alg}$ and that (3.21) is larger than (3.20) for $\omega_d < \omega \le \omega_{opt}^{alg}$. Hence, the proof is completed by combining the latter two expressions.

Finally, we investigate the spectral radius of $\mathcal{K}^{CSOR,opt}$, the convolution SOR waveform operator with optimal kernel.

Theorem 3.9. Consider (1.1) with B = I. Assume A is a consistently ordered matrix with constant positive diagonal $D_A = d_a I$ ($d_a > 0$) and the eigenvalues of $K^{JAC}(0)$ are real with $\mu = \rho(K^{JAC}(0)) < 1$. Then, if we consider $\mathcal{K}^{CSOR,opt}$ as an operator in $L^p(0, \infty)$, $1 \le p \le \infty$, we have

(3.22)  $\rho\left(\mathcal{K}^{CSOR,opt}\right) = \frac{\mu^2}{\left(1 + \sqrt{1 - \mu^2}\right)^2}.$

Proof. Under the assumptions of the theorem we have (3.16). Hence, we may apply Lemma 3.6 with

$\mu_1(z) = \frac{d_a}{z + d_a}\,\mu,$

in order to derive the optimum complex overrelaxation parameter $\tilde{\Omega}_{opt}(z)$. Since $\tilde{\Omega}_{opt}(z)$ is a bounded analytic function in the complex right-half plane, including the imaginary axis, we know from Remark 3.1 that it is the Laplace transform of a function of the form (2.6), with
$\omega_c(t)$ in $L^1(0, \infty)$. Thus, we may apply Theorem 3.4, which, when combined with (3.13), yields

$\rho\left(\mathcal{K}^{CSOR,opt}\right) = \sup_{\xi \in \mathbb{R}} \rho\left(K^{CSOR,opt}(i\xi)\right) = \sup_{\xi \in \mathbb{R}} \left|\tilde{\Omega}_{opt}(i\xi) - 1\right| = \sup_{\xi \in \mathbb{R}} \frac{|\mu_1(i\xi)|^2}{\left|1 + \sqrt{1 - \mu_1^2(i\xi)}\right|^2}.$

Since the numerator is maximal for $\xi = 0$, and the denominator is minimal for $\xi = 0$, the latter supremum is attained at $\xi = 0$. This completes the proof.

Since the maximum of $\rho\left(K^{CSOR,opt}(z)\right)$ is found at the complex origin, we can easily state and prove the following properties.

Property 3.1. Under the assumptions of Theorem 3.9, point-wise optimal convolution SOR waveform relaxation for the ODE system $\dot{u} + Au = f$ attains the same asymptotic convergence rate as optimal algebraic SOR for the linear system $Au = f$.

Proof. This follows from $\tilde{\Omega}_{opt}(0) = \omega_{opt}^{alg}$.
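As a numerical check, the closed-form spectral radii (3.14), (3.15) and (3.22) can be evaluated directly. The following sketch (the helper names and the one-dimensional Laplacian test matrix are assumptions, not the paper's code) also illustrates Property 3.1 by comparing (3.22) against the spectral radius of an explicitly assembled optimal algebraic SOR iteration matrix:

```python
import numpy as np

# spectral radius of SSSOR, formula (3.14)
def rho_sssor(omega, mu):
    wd = (4.0 / 3.0) * (2.0 - np.sqrt(4.0 - 3.0 * mu**2)) / mu**2
    if omega <= wd:
        return (omega**2 * mu**2 / 2 - (omega - 1)
                + omega * mu * np.sqrt(omega**2 * mu**2 / 4 - (omega - 1)))
    return 8 * (omega - 1)**2 / (8 * (omega - 1) - omega**2 * mu**2)

# spectral radius of DSSOR, formula (3.15)
def rho_dssor(omega, mu):
    wd = (4.0 - 2.0 * np.sqrt(4.0 - 2.0 * mu**2)) / mu**2
    if omega <= wd:
        return (omega**2 * mu**2 / 2 - (omega - 1)
                + omega * mu * np.sqrt(omega**2 * mu**2 / 4 - (omega - 1)))
    s = np.sqrt(omega - 1)
    return (omega - 1) * (omega * mu + 4 * s) / (4 * s - omega * mu)

# spectral radius of optimal CSOR, formula (3.22)
def rho_csor(mu):
    return mu**2 / (1 + np.sqrt(1 - mu**2))**2

mu = 0.9
print(round(rho_sssor(4.0 / (4.0 - mu**2), mu), 3))    # 0.681 at omega_opt
grid = np.linspace(1.001, 1.5, 20000)
print(round(min(rho_dssor(w, mu) for w in grid), 3))   # 0.75
print(round(rho_csor(mu), 3))                          # 0.393

# Property 3.1: rho_csor equals optimal algebraic SOR for A u = f
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
mu1 = np.cos(np.pi / (n + 1))                 # rho of the Jacobi matrix
w = 2.0 / (1.0 + np.sqrt(1.0 - mu1**2))       # optimal algebraic omega
D = np.diag(np.diag(A)); L = -np.tril(A, -1); U = -np.triu(A, 1)
S = np.linalg.solve(D / w - L, (1 - w) / w * D + U)
print(abs(max(abs(np.linalg.eigvals(S))) - rho_csor(mu1)) < 1e-5)   # True
```

The computed optima also confirm that the convolution method is markedly faster than both splitting-based SOR variants for the same $\mu$.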

Table 3.1
Spectral radii of optimal single-splitting (SSSOR), double-splitting (DSSOR) and convolution (CSOR) SOR waveform relaxation for problem (1.1) with B = I and $D_A = d_a I$. The value of the optimal parameter $\omega_{opt}$ is given in parentheses.

  $\mu$                                         0.9             0.95            0.975           0.9875
  $\rho(\mathcal{K}^{SSSOR,\omega_{opt}})$      0.681 (1.2539)  0.822 (1.2913)  0.906 (1.3117)  0.952 (1.3224)
  $\rho(\mathcal{K}^{DSSOR,\omega_{opt}})$      0.750 (1.1374)  0.867 (1.1539)  0.932 (1.1626)  0.965 (1.1670)
  $\rho(\mathcal{K}^{CSOR,opt})$                0.393           0.524           0.636           0.728

Property 3.2. Under the assumptions of Theorem 3.9, we have that $\Omega_{opt}(t) = \delta(t) + (\omega_c)_{opt}(t)$. Also, $\omega_{opt}^{alg}$, the optimal overrelaxation parameter for the system $Au = f$, satisfies $\omega_{opt}^{alg} = 1 + \int_0^\infty (\omega_c)_{opt}(t)\,dt$.

Proof. We have that $\lim_{z\to\infty} \mu_1(z) = 0$. Hence, $\lim_{z\to\infty} \tilde{\Omega}_{opt}(z) = 1$ and the first result follows. The second result follows from $\tilde{\Omega}_{opt}(0) = \omega_{opt}^{alg}$ and the definition of the Laplace transform.

Property 3.3. Under the assumptions of Theorem 3.9, we have that $\mathcal{K}^{CSOR,opt}$ is a convolution operator.

Proof. The matrix multiplication part of $\mathcal{K}^{CSOR,opt}$ satisfies

$K_\infty^{CSOR,opt} = \lim_{z\to\infty} K^{CSOR,opt}(z) = 0.$

Hence, by Theorem 3.2, $\mathcal{K}^{CSOR,opt}$ is a convolution operator with an $L^1$-kernel.

Using formulae (3.14), (3.15) and (3.22), we can compute the spectral radii of the optimal single-splitting, double-splitting and convolution SOR waveform relaxation methods as a function of $\mu$. These values are presented in Table 3.1, together with the values of the optimal parameter $\omega_{opt}$ for the SSSOR and DSSOR methods.

4. Discrete-time convergence analysis. In this section we will analyse the discrete-time SOR waveform relaxation methods outlined in §2.2, following the theoretical framework developed in [9]. We will identify the nature and the convergence properties of the discrete-time CSOR iteration operator. The results for the discrete-time standard SOR waveform method are then obtained by setting
$\omega_c[n] \equiv 0$. The corresponding results for the SOR method with single splitting can be found in [18].

4.1. Nature of the operator. Since we do not iterate on the k given starting values, we use a shifted subscript $\tau$-notation for sequences $u_\tau$ of which the initial values $u[n]$, $n < k$, are known, i.e., $u_\tau = \{u[k + n]\}_{n=0}^{N}$. The discrete-time version of Lemma 3.1 reads as follows.

Lemma 4.1. The solution to the difference equation

$\sum_{l=0}^{k}\left(\frac{\alpha_l}{\tau}\,b + \beta_l\, a\right)u[n+l] = \sum_{l=0}^{k}\left(\frac{\alpha_l}{\tau}\,q + \beta_l\, p\right)v[n+l] + \sum_{l=0}^{k}\beta_l\, w[n+l], \quad n \ge 0,$

with $b, a, q, p \in \mathbb{C}^{m \times m}$ and b non-singular, is given by $u_\tau = \mathcal{K}_\tau v_\tau + \varphi_\tau$, with $\mathcal{K}_\tau$ a discrete convolution operator:

$(\mathcal{K}_\tau x_\tau)[n] = (k_\tau * x_\tau)[n] = \sum_{l=0}^{n} k_\tau[n-l]\,x_\tau[l], \quad n \ge 0,$

and φ_τ depending on w[l], l ≥ 0, and the initial values u[l], v[l], l = 0, …, k−1. The sequence k_τ ∈ l_1(∞) if τλ(−b⁻¹a) ⊂ int S, where S is the stability region of the linear multistep method. If, in addition, w_τ ∈ l_p(∞), then φ_τ ∈ l_p(∞).

Proof. The proof of the lemma is based on a Z-transform argument and the use of Wiener's inversion theorem for discrete l_1-sequences. It is analogous to the proof of [9, Lemma 4.3].

The discrete-time CSOR scheme can be written as a classical successive approximation method, u_τ^(ν) = K_τ^CSOR u_τ^(ν−1) + φ_τ. Here, φ_τ is a sequence which depends on the difference equation's right-hand side f and the initial conditions, while the nature of the discrete-time CSOR operator K_τ^CSOR is identified below.

Theorem 4.2. The discrete-time CSOR waveform relaxation operator K_τ^CSOR is a discrete convolution operator, whose matrix-valued kernel k_τ^CSOR ∈ l_1(∞) if τλ(−D_B⁻¹D_A) ⊂ int S and Ω_τ ∈ l_1(∞).

Proof. Discretisation of (2.1) and (2.5) leads to the discrete-time CSOR scheme, given by

(4.1)  Σ_{l=0}^{k} (α_l b_ii + τβ_l a_ii) û_i^(ν)[n+l] = − Σ_{j=1}^{i−1} Σ_{l=0}^{k} (α_l b_ij + τβ_l a_ij) u_j^(ν)[n+l] − Σ_{j=i+1}^{d_b} Σ_{l=0}^{k} (α_l b_ij + τβ_l a_ij) u_j^(ν−1)[n+l] + Σ_{l=0}^{k} τβ_l f_i[n+l],  n ≥ 0,

and (2.8). Application of Lemma 4.1 to (4.1) gives

(4.2)  (û_i^(ν))_τ = Σ_{j=1}^{i−1} (h_ij)_τ ∗ (u_j^(ν))_τ + Σ_{j=i+1}^{d_b} (h_ij)_τ ∗ (u_j^(ν−1))_τ + (φ_i)_τ,

with (h_ij)_τ ∈ l_1(∞) if τλ(−b_ii⁻¹ a_ii) ⊂ int S and (φ_i)_τ an l_p(∞)-sequence. It is now easy to prove that there exist l_1-sequences (k_ij^CSOR)_τ such that

(4.3)  (u_i^(ν))_τ = Σ_{j=1}^{d_b} (k_ij^CSOR)_τ ∗ (u_j^(ν−1))_τ + (φ'_i)_τ.

Indeed, the combination of (4.2) and (2.8) for i = 1 gives (k_11^CSOR)_τ = (δ_τ − Ω_τ) I, (k_1j^CSOR)_τ = Ω_τ ∗ (h_1j)_τ, 2 ≤ j ≤ d_b, and (φ'_1)_τ = Ω_τ ∗ (φ_1)_τ, with I the identity matrix of the appropriate dimension in the case of block relaxation. The general case involves the computation of (u_i^(ν))_τ from (4.2) and (2.8). It follows by induction on i and is based on the elimination of the sequences (u_j^(ν))_τ, j < i, from (4.2) using (4.3). Consequently, the resulting (k_ij^CSOR)_τ, which consist of linear combinations and convolutions of l_1-sequences, belong to l_1(∞).

4.2. Symbol. Discrete Laplace or Z-transformation of the iterative scheme of the discrete-time CSOR waveform relaxation method yields an iteration of the form ũ^(ν)(z) = K_τ^CSOR(z) ũ^(ν−1)(z) + φ̃_τ(z), with ũ^(ν)(z) = Z(u_τ^(ν)) = Σ_{i=0}^∞ u^(ν)[i] z^{−i} and

K_τ^CSOR(z) the discrete-time CSOR symbol. This symbol is given by

K_τ^CSOR(z) = ( (1/Ω̃_τ(z)) ((1/τ)(a(z)/b(z)) D_B + D_A) + ((1/τ)(a(z)/b(z)) L_B + L_A) )⁻¹ ( ((1/Ω̃_τ(z)) − 1) ((1/τ)(a(z)/b(z)) D_B + D_A) − ((1/τ)(a(z)/b(z)) U_B + U_A) ),

with Ω̃_τ(z) = ω + (ω̃_c)_τ(z).

4.3. Spectral radii and norm. The theorems in this section follow immediately from the general theory in [9, §2], where we analysed the properties of discrete convolution operators. For the iteration on finite intervals we have the following result from [9, Lemma 2.1].

Theorem 4.3. Assume that the discrete solvability condition (2.9) is satisfied, and consider the discrete-time CSOR operator K_τ^CSOR as an operator in l_p(N) with 1 ≤ p ≤ ∞ and N finite. Then, K_τ^CSOR is a bounded operator and

(4.4)  ρ(K_τ^CSOR) = ρ(K_τ^CSOR(∞)).

If τλ(−D_B⁻¹D_A) ⊂ int S, Theorem 4.2 implies that the kernel of the CSOR operator belongs to l_1(∞). Hence, we may apply [9, Lemmas 2.2 and 2.3] in order to derive the following infinite-interval result.

Theorem 4.4. Assume τλ(−D_B⁻¹D_A) ⊂ int S and Ω_τ ∈ l_1(∞). Consider the discrete-time CSOR iteration operator K_τ^CSOR as an operator in l_p(∞), 1 ≤ p ≤ ∞. Then, K_τ^CSOR is a bounded operator and

(4.5)  ρ(K_τ^CSOR) = max_{|z|≥1} ρ(K_τ^CSOR(z)) = max_{|z|=1} ρ(K_τ^CSOR(z)).

If we denote by ||·||_2 the l_2-norm and by ||·|| the Euclidean vector norm, we have

(4.6)  ||K_τ^CSOR||_2 = max_{|z|≥1} ||K_τ^CSOR(z)|| = max_{|z|=1} ||K_τ^CSOR(z)||.

Remark 4.1. In Theorem 4.4, we require Ω̃_τ(z) to be the Z-transform of an l_1-kernel. For this, a sufficient (but not necessary) condition is that Ω̃_τ(z) is a bounded and analytic function in an open domain containing {z ∈ C : |z| ≥ 1}. A tighter set of conditions can be found in [5, p. 7].

The following lemma is the discrete-time equivalent of Lemma 3.6. It involves the eigenvalue distribution of the discrete-time Jacobi symbol, which is related to its continuous-time equivalent by [9, eq. (4.10)],

(4.7)  K_τ^JAC(z) = K^JAC((1/τ)(a(z)/b(z))).

Lemma 4.5. Assume the matrices B and A are such that (1/τ)(a(z)/b(z)) B + A is a block-consistently ordered matrix with non-singular diagonal blocks. Assume the spectrum of K_τ^JAC(z) lies on a line segment [−μ_τ^(1)(z), μ_τ^(1)(z)] with μ_τ^(1)(z) ∈ C \ {(−∞,−1] ∪ [1,∞)}. The spectral radius of K_τ^CSOR(z) is then minimised for a given value of z by the unique optimum (Ω̃_opt)_τ(z), given by

(4.8)  (Ω̃_opt)_τ(z) = 2 / (1 + √(1 − (μ_τ^(1))²(z))),

where √ denotes the root with the positive real part. In particular,

(4.9)  ρ(K_τ^CSOR,opt(z)) = |(Ω̃_opt)_τ(z) − 1| < 1.

Proof. In analogy with Lemma 3.6, the result follows from standard complex SOR theory applied to the complex matrix (1/τ)(a(z)/b(z)) B + A.

4.4. Continuous-time versus discrete-time results. Under the assumption

(4.10)  Ω̃_τ(z) = Ω̃((1/τ)(a(z)/b(z))),

the discrete-time and continuous-time CSOR symbols are related by a formula similar to (4.7), i.e., K_τ^CSOR(z) = K^CSOR((1/τ)(a(z)/b(z))). As a result, we have the following two theorems, which provide the spectral radius of the discrete-time operator in terms of the symbol of the continuous-time operator. They correspond to Theorems 4.1 and 4.4 in [9], so we can omit their proofs.

Theorem 4.6. Assume that both (4.10) and the discrete solvability condition (2.9) are satisfied, and consider K_τ^CSOR as an operator in l_p(N) with 1 ≤ p ≤ ∞ and N finite. Then,

(4.11)  ρ(K_τ^CSOR) = ρ(K^CSOR(α_k/(τβ_k))).

Theorem 4.7. Assume (4.10) and τλ(−D_B⁻¹D_A) ⊂ int S, and consider K_τ^CSOR as an operator in l_p(∞), 1 ≤ p ≤ ∞. Then,

(4.12)  ρ(K_τ^CSOR) = sup{ ρ(K^CSOR(z)) : z ∈ (1/τ)(C \ int S) } = sup_{z ∈ (1/τ)∂S} ρ(K^CSOR(z)).

Also, in complete analogy to the result in [9, §4.3], (4.10) implies that

(4.13)  lim_{τ→0} ρ(K_τ^CSOR) = ρ(K^CSOR),

for both the finite and infinite time-interval computation. Observe that equality (4.10), and, hence, Theorems 4.6 and 4.7 and equality (4.13), are not necessarily satisfied. They do hold, however, in the two important cases explained below. The first case concerns the standard SOR waveform method with double splitting. Indeed, if ω_c(t) ≡ 0 and ω_c[n] ≡ 0, we have Ω̃_τ(z) ≡ ω ≡ Ω̃((1/τ)(a(z)/b(z))). Equality (4.10) is also satisfied for the optimal CSOR method. As the discrete-time and continuous-time Jacobi symbols are related by (4.7), a similar relation holds for their respective eigenvalues with largest modulus: μ_τ^(1)(z) = μ^(1)((1/τ)(a(z)/b(z))). Hence, by comparing the formulae for the optimal convolution kernels in Lemmas 3.6 and 4.5, we find (Ω̃_opt)_τ(z) = Ω̃_opt((1/τ)(a(z)/b(z))).
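To make the successive-approximation form concrete, here is a small self-contained sketch (our own construction, not code from the paper) of the discrete-time SOR waveform relaxation iteration for B = I, f = 0, backward Euler time-stepping and a scalar overrelaxation parameter ω, i.e. ω_c[n] ≡ 0. All names and parameter values are illustrative.

```python
import numpy as np

def sor_wr(A, u0, tau, N, omega, sweeps):
    # Discrete-time SOR waveform relaxation for u' + A u = 0 (B = I, f = 0),
    # time-stepped with backward Euler; returns the successive iterate waveforms.
    d = len(u0)
    u = np.zeros((d, N + 1))          # initial guess: zero, apart from the initial condition
    u[:, 0] = u0
    iterates = [u.copy()]
    for _ in range(sweeps):
        unew = u.copy()
        for i in range(d):            # relax component waveforms in Gauss-Seidel order
            uhat = np.zeros(N + 1)
            uhat[0] = u0[i]
            for n in range(1, N + 1):
                rhs = uhat[n - 1] / tau
                rhs -= A[i, :i] @ unew[:i, n]        # already-updated waveforms, j < i
                rhs -= A[i, i + 1:] @ u[i + 1:, n]   # previous-iterate waveforms, j > i
                uhat[n] = rhs / (1.0 / tau + A[i, i])
            unew[i] = u[i] + omega * (uhat - u[i])   # SOR update of the whole waveform
        u = unew
        iterates.append(u.copy())
    return iterates

# 1D Laplacian on 4 interior points, unit initial data
h = 1.0 / 5
A = (np.diag([2.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)) / h**2
its = sor_wr(A, np.ones(4), tau=0.1, N=50, omega=1.0, sweeps=30)
ref = its[-1]
errs = [np.abs(it - ref).max() for it in its[:-1]]
# successive error-reduction factors errs[k+1]/errs[k] settle near a constant
```

With these illustrative parameters the per-sweep error-reduction factors settle to a roughly constant value, mirroring the spectral-radius results above.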

5. Point-wise SOR waveform relaxation: model problem analysis and numerical results. We consider the m-dimensional heat equation

(5.1)  ∂u/∂t − Δu = 0,  x ∈ [0,1]^m,  t > 0,

with Dirichlet boundary conditions and a given initial condition. We discretise using finite differences or finite elements on a regular grid with mesh-size h.

5.1. Finite-difference discretisation. A finite-difference discretisation of (5.1) with central differences leads to a system of ODEs of the form (1.1) with mass matrix B = I and stiffness matrix A, which, in the one-dimensional and two-dimensional case, is given respectively by the stencils

A = (1/h²) [ −1  2  −1 ]   and   A = (1/h²) [ 0 −1 0 ; −1 4 −1 ; 0 −1 0 ].

Matrix A is consistently ordered for an iteration with point-wise relaxation in a lexicographic, red/black or diagonal ordering. The eigenvalues of the Jacobi iteration matrix corresponding to matrix A are well known. They are real, and, independent of the spatial dimension, one has

(5.2)  μ = ρ(K_JAC(0)) = cos(πh).

Hence, the assumptions of Theorems 3.7, 3.8 and 3.9 are satisfied. In Figure 5.1 we illustrate formulae (3.14) and (3.15) by depicting the spectral radii of the single-splitting and double-splitting operators ρ(K_SSSOR) and ρ(K_DSSOR), together with ρ(K_alg^SOR), the spectral radius of the standard SOR iteration matrix for the linear system Au = f, as a function of ω. In [7, p. 473], Miekkala and Nevanlinna derived the following result for the optimal single-splitting SOR waveform method from Theorem 3.7.

Property 5.1. Consider model problem (5.1), discretised with finite differences. Then, if we consider K_SSSOR,ωopt as an operator in L_p(0,∞), 1 ≤ p ≤ ∞, we have for small h that

(5.3)  ρ(K_SSSOR,ωopt) ≈ 1 − 2π²h²,  ω_opt = 4/3 + O(h²).

A similar result can be proven for the optimal DSSOR waveform relaxation method. The calculation is standard, but rather lengthy and tedious. It was performed using the formula manipulator Mathematica [26]. The computation is based on first differentiating (3.15) with respect to ω and then finding the zeros of the resulting expression. The formula for ω_opt is then substituted back into (3.15). Finally, entering (5.2) and calculating a series expansion for small h leads to the desired result.

Property 5.2. Consider model problem (5.1), discretised with finite differences. Then, if we consider K_DSSOR,ωopt as an operator in L_p(0,∞), 1 ≤ p ≤ ∞, we have for small h that

(5.4)  ρ(K_DSSOR,ωopt) ≈ 1 − √2 π²h²,  ω_opt ≈ (4 − 2√2) + (3 − (9/4)√2) π²h².
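The algebraic quantities behind these asymptotics are easy to evaluate numerically. A short sketch (our own, using only the classical optimal-SOR formulas for a consistently ordered matrix with Jacobi radius μ = cos(πh)); the last line exhibits the 1 − 2πh behaviour that reappears in Property 5.3:

```python
import numpy as np

# Optimal algebraic SOR quantities for the model problem, from mu = cos(pi*h):
# omega_alg = 2/(1 + sqrt(1 - mu^2)) and rho_alg = omega_alg - 1 ~ 1 - 2*pi*h.
h = np.array([1/8, 1/16, 1/32, 1/64])
mu = np.cos(np.pi * h)                        # eq. (5.2)
omega_alg = 2 / (1 + np.sqrt(1 - mu**2))      # = 2/(1 + sin(pi*h))
rho_alg = omega_alg - 1
print(np.round(rho_alg, 3))                   # [0.446 0.674 0.821 0.906]
print(np.round(1 - 2 * np.pi * h, 3))         # approaches rho_alg as h -> 0
```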

Fig. 5.1. ρ(K_SSSOR) (solid), ρ(K_DSSOR) (dashed) and ρ(K_alg^SOR) (dots) versus ω for model problem (5.1) with finite-difference discretisation. The '+'- and '×'-symbols indicate measured values from numerical experiments (§5.4).

Table 5.1
Spectral radii of optimal single-splitting (SSSOR), double-splitting (DSSOR) and convolution (CSOR) SOR waveform relaxation for model problem (5.1) with finite-difference discretisation. The value of the optimal parameter ω_opt is given in parentheses.

h:                1/8             1/16            1/32            1/64
ρ(K_SSSOR,ωopt):  0.745 (1.273)   0.927 (1.3166)  0.981 (1.329)   0.995 (1.3323)
ρ(K_DSSOR,ωopt):  0.804 (1.1452)  0.947 (1.1647)  0.986 (1.1698)  0.997 (1.17)
ρ(K_CSOR,Ωopt):

For the CSOR waveform relaxation method with optimal overrelaxation kernel we may invoke Property 3.1. The operator's spectral radius equals that of the optimal SOR iteration matrix for the discrete Laplace operator.

Property 5.3. Consider model problem (5.1), discretised with finite differences. Then, if we consider K_CSOR,Ωopt as an operator in L_p(0,∞), 1 ≤ p ≤ ∞, we have for small h that

(5.5)  ρ(K_CSOR,Ωopt) ≈ 1 − 2πh.

Numerical values of these spectral radii, together with the corresponding ω_opt, are presented in Table 5.1 as a function of the mesh-size h. They are computed from (3.14), (3.15) and (3.22).

5.2. Finite-element discretisation. A finite-element discretisation of (5.1) does not in general lead to a matrix zB + A that is consistently ordered for point relaxation. This precludes the use of Lemma 3.5. An exception is the one-dimensional model problem (5.1) discretised with linear finite elements. In that case one obtains a system of ODEs of the form (1.1) where B and A are given by the stencils

(5.6)  B = (h/6) [ 1  4  1 ]   and   A = (1/h) [ −1  2  −1 ].

In the following theorem, we derive the spectral radius of the double-splitting SOR waveform relaxation operator, based on the DSSOR versions of Theorem 3.4

18 8 J JANSSEN and S VANDEWALLE and Lemma 35 In particular, we set ~(z) =! so that the right-hand side of (34) becomes K DSSOR (z) Theorem 5 Consider () with B and A given by (56) Assume 0 <! < 2 Then, if we consider K DSSOR as an operator in L p (0; ), p, we have with = cos(h) that (57) (K DSSOR ) = 8 >< >:! + (! 2 ) 2 +! q! + (! 4 ) 2!! d + 3 p! 8! + (! ) 8! 2 2 3!! >! d ; 8 p! + 8! 2 2 p with! d = ( )=2 Furthermore, we have! opt >! d Proof The spectrum of the Jacobi symbol is given by (K JAC 2zh2 + 2 (58) (z)) = 4zh j j h with j = cos(jh) : Since the conditions of Lemma 35 are satised, (3) can be written as or, after setting (z) = j(z)je it, (z) +! = p (z)! 2zh zh j ; (59) z = 3 h 2 p j(z)je i t 2 + (! )e i t 2 j(z)j! p j j(z)je i t 2 + (! )e i t : j(z)j!j In complete analogy with the proof of Theorem 38, we replace j by and impose the conditions Re z(t) = 0 and Re z 0 (t) = 0 on the equilibrium curve (59) to determine the supremum of j(z)j along the imaginary axis This gives 4j(z)j(! ) cos 2 t 2 +(j(z)j +! ) 2 2 j(z)j!2 2 j = 0 2p j(z)j!j (j(z)j +! ) cos t 2 and 4j(z)j(! ) cos t 2 4p j(z)j!j (j(z)j +! ) sin t = 0 : 2 We deduce that either the supremum is attained at the origin giving (320), or the supremum is found at a certain point z = i ( 6= 0) giving (50) j(z)j = (! ) 3 8 p!! + 8! 2 2 p!! + 8! 2 2 : The proof is completed by combining (320) and (50) The value of! d is derived by determining the least value of! for which the supremum is not attained at the origin It turns out that <! d <! opt alg

Fig. 5.2. ρ(K_DSSOR) (dashed) and ρ(K_alg^SOR) (dots) versus ω for model problem (5.1) with m = 1 and linear finite-element discretisation. The '+'-symbols indicate measured values from numerical experiments (§5.4).

Table 5.2
Parameters ω_d and ω_opt, together with spectral radii of optimal double-splitting (DSSOR) and convolution (CSOR) SOR waveform relaxation for model problem (5.1) with m = 1 and linear finite-element discretisation.

h:        1/8    1/16    1/32    1/64
ω_d:
ω_opt:
ρ(K_DSSOR,ωopt):
ρ(K_CSOR,Ωopt):

Equation (5.7) is illustrated in Figure 5.2, where the theoretical values of the spectral radius are plotted against ω. The spectral radius of the CSOR waveform relaxation operator with optimal kernel is calculated in the following theorem. As in the finite-difference case, it equals the spectral radius of the optimal standard SOR method for the system Au = f, where A is the discrete Laplacian.

Theorem 5.2. Consider (1.1) with B and A given by (5.6). Then, if we consider K_CSOR,Ωopt as an operator in L_p(0,∞), 1 ≤ p ≤ ∞, we have for small h that

(5.11)  ρ(K_CSOR,Ωopt) ≈ 1 − 2πh.

Proof. Because of (5.8), we have that the eigenvalues of K_JAC(z) lie on the line segment [−μ_1(z), μ_1(z)] with |μ_1(z)| < 1. Therefore, the conditions of Lemma 3.6 are satisfied. Since μ_1(z) is an analytic function for Re(z) ≥ 0, so is Ω̃_opt(z). Hence, we may apply Theorem 3.4 and Lemma 3.6 to find

ρ(K_CSOR,Ωopt) = sup_{ξ∈R} |Ω̃_opt(iξ) − 1| = |Ω̃_opt(0) − 1|,

from which the result follows.

Table 5.2 shows some values of ω_d, ω_opt and ρ(K_DSSOR,ωopt), calculated by means of (5.7). The latter spectral radii obviously satisfy a relation of the form 1 − O(h²). For comparison purposes we have also added the spectral radii of optimal convolution SOR waveform relaxation, which are identical to the ones in Table 5.1.

Table 5.3
Spectral radii of discrete-time optimal double-splitting (DSSOR) and convolution (CSOR) SOR waveform relaxation for model problem (5.1) with m = 1 and linear finite-element discretisation (h = 1/16, τ = 1/100).

multistep method:   CN   BDF(1)   BDF(2)   BDF(3)   BDF(4)   BDF(5)
ρ(K_τ^DSSOR,ω*):
ρ(K_τ^CSOR,opt):

Fig. 5.3. Spectral pictures of optimal double-splitting (left) and convolution (right) SOR waveform relaxation for model problem (5.1) with m = 1 and linear finite-element discretisation (h = 1/16, τ = 1/100).

5.3. Effect of time discretisation. We analyse the use of the Crank–Nicolson (CN) method and the backward differentiation (BDF) formulae of order up to 5 for the one-dimensional model problem (5.1) with linear finite-element discretisation on a mesh with mesh-size h = 1/16. The results for finite differences or higher-dimensional problems are qualitatively similar. We computed ρ(K_τ^DSSOR,ω*) by direct numerical evaluation of (4.5), with τ = 1/100 and with ω = ω*, where ω* equals the optimal ω for the continuous-time iteration. Since μ_τ^(1)(z) ∈ C \ {(−∞,−1] ∪ [1,∞)} and μ_τ^(1)(z) is analytic for |z| ≥ 1, we have that (Ω̃_opt)_τ(z), given by (4.8), is also analytic. Hence, we can also compute ρ(K_τ^CSOR,opt) by evaluation of (4.5). The results for the DSSOR and CSOR iteration are reported in Table 5.3. They can be illustrated by a so-called spectral picture, see [9, §6], in which the scaled stability region boundaries of the linear multistep methods are plotted on top of the contour lines of the spectral radius of the continuous-time waveform relaxation symbol. Two such pictures are given in Figure 5.3, with contour lines drawn for the values 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0 and 2.2 in the DSSOR case and 0.6, 0.7, 0.8 and 0.9 in the CSOR case. According to Theorem 4.7, the values of Table 5.3 can be verified visually by looking for the supremum of the symbol's spectral radius along the plotted scaled stability region boundaries.

5.4. Numerical results. In this section we will present the results of some numerical experiments.
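Before reporting the experiments, we note that the scaled stability-region boundaries used in the spectral pictures above can be generated with the standard boundary-locus method. A brief sketch (our own, with an illustrative step size), for the BDF(1) and BDF(2) coefficients:

```python
import numpy as np

# Boundary locus of the tau-scaled stability regions: the points
# (1/tau) * rho(w)/sigma(w) for w = e^{i*theta} on the unit circle.
# BDF(1): rho(w) = w - 1, sigma(w) = w
# BDF(2): rho(w) = (3/2)w^2 - 2w + 1/2, sigma(w) = w^2
tau = 1/100
theta = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
w = np.exp(1j * theta)
bdf1 = (1 - 1/w) / tau
bdf2 = (1.5 - 2/w + 0.5/w**2) / tau
# both boundary curves pass through the origin (theta = 0), and the BDF(1)
# boundary lies entirely in the closed right half-plane
```

Plotting these curves on top of contour lines of ρ(K^CSOR(z)) or ρ(K^DSSOR(z)) reproduces the structure of Figure 5.3.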
We will show that the observed convergence behaviour agrees very well with the theory. For long enough time windows the averaged convergence factors closely match the theoretical spectral radii on infinite time intervals. For an

explanation of why they do not agree with the finite-interval ones, we refer to [9, §7] and to the pseudo-spectral analysis of [7, 6]. To determine the averaged convergence factor, we first computed the ν-th iteration convergence factor by calculating the l_2-norm of the discrete error of the ν-th approximation u^(ν), and by dividing the result for successive iterates. This factor takes a nearly constant value after a sufficiently large number of iterations. The averaged convergence factor is then defined as the geometric average of these iteration convergence factors in the stable regime.

For the problems already discussed in this section, the Z-transform of the optimal convolution kernel is known to be analytic for |z| ≥ 1 and is given by (4.8). The computation of the components Ω_opt[n] of the optimal discrete convolution kernel, which satisfy

(5.12)  (Ω̃_opt)_τ(z) = Σ_{n=0}^∞ Ω_opt[n] z^{−n},

involves the use of an inverse Z-transform technique. The method we used is based on a Fourier-transform method, and is justified by the following observation. Setting φ(t) = (Ω̃_opt)_τ(e^{it}), (5.12) becomes

φ(t) = Σ_{n=0}^∞ Ω_opt[n] e^{−int}.

Thus, Ω_opt[n] is the n-th Fourier coefficient of the 2π-periodic function φ(t). More precisely,

Ω_opt[n] = (1/2π) ∫_0^{2π} φ(t) e^{int} dt,

which can be approximated numerically by the finite sum

Ω_num[n] = (1/M) Σ_{k=0}^{M−1} φ(k 2π/M) e^{ink 2π/M}.

Consequently, numerical approximations to the M values {Ω_opt[n]}_{n=0}^{M−1} can be found by computing the discrete Fourier transform of the sequence {(Ω̃_opt)_τ(e^{ik2π/M})}_{k=0}^{M−1}. This can be performed very efficiently by using the FFT algorithm. In the numerical experiments the number of time steps N is always finite. Hence, we only use the numerical approximations of {Ω_opt[n]}_{n=0}^N. To compute the discrete kernel, we took M ≥ N and large enough to anticipate possible aliasing effects.

The correctness of formulae (3.14) and (3.15) is illustrated in Figure 5.1 by the '+'- and the '×'-symbols. They correspond to the measured averaged convergence factors of the single-splitting and double-splitting SOR waveform relaxation method respectively, applied to the one-dimensional model problem (5.1). In this computation the time window equals [0,1], the time step is τ = 1/1000, and the time discretisation is by the Crank–Nicolson method. Averaged convergence factors as a function of h are given in Tables 5.4 and 5.5 for the one- and two-dimensional model problem (5.1). They agree very well with the theoretical values given in Table 5.1, and they illustrate the correctness of formulae (5.3), (5.4) and (5.5). Note that we take the overrelaxation parameters ω in the
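The Fourier-sum approximation above is exactly an inverse FFT of samples of the symbol on the unit circle. A self-contained sketch (our own), applied to a hypothetical transfer function Φ(z) = 1/(1 − z⁻¹/2), whose kernel coefficients 2⁻ⁿ are known in closed form so the result can be checked; it is a stand-in for (Ω̃_opt)_τ, not the CSOR kernel itself:

```python
import numpy as np

# Kernel coefficients by sampling a symbol on the unit circle and applying an
# inverse FFT (matching the finite-sum approximation in the text).
M = 64
z = np.exp(2j * np.pi * np.arange(M) / M)   # sample points e^{i*2*pi*k/M}
phi = 1.0 / (1.0 - 0.5 / z)                 # Phi(z) = sum_n (1/2)^n z^{-n}
kernel = np.fft.ifft(phi).real              # ifft: (1/M) sum_k phi[k] e^{+2*pi*i*k*n/M}
exact = 0.5 ** np.arange(M)
# agreement up to an aliasing error of order 2^{-M}
print(np.abs(kernel - exact).max())
```

NumPy's `ifft` uses exactly the 1/M normalisation and positive-exponent convention of the finite sum above, so no extra scaling is needed.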


More information

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems

Index. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems Index A-conjugate directions, 83 A-stability, 171 A( )-stability, 171 absolute error, 243 absolute stability, 149 for systems of equations, 154 absorbing boundary conditions, 228 Adams Bashforth methods,

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

1 Holomorphic functions

1 Holomorphic functions Robert Oeckl CA NOTES 1 15/09/2009 1 1 Holomorphic functions 11 The complex derivative The basic objects of complex analysis are the holomorphic functions These are functions that posses a complex derivative

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

. Consider the linear system dx= =! = " a b # x y! : (a) For what values of a and b do solutions oscillate (i.e., do both x(t) and y(t) pass through z

. Consider the linear system dx= =! =  a b # x y! : (a) For what values of a and b do solutions oscillate (i.e., do both x(t) and y(t) pass through z Preliminary Exam { 1999 Morning Part Instructions: No calculators or crib sheets are allowed. Do as many problems as you can. Justify your answers as much as you can but very briey. 1. For positive real

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f 1 Bernstein's analytic continuation of complex powers c1995, Paul Garrett, garrettmath.umn.edu version January 27, 1998 Analytic continuation of distributions Statement of the theorems on analytic continuation

More information

Polynomial functions over nite commutative rings

Polynomial functions over nite commutative rings Polynomial functions over nite commutative rings Balázs Bulyovszky a, Gábor Horváth a, a Institute of Mathematics, University of Debrecen, Pf. 400, Debrecen, 4002, Hungary Abstract We prove a necessary

More information

using the Hamiltonian constellations from the packing theory, i.e., the optimal sphere packing points. However, in [11] it is shown that the upper bou

using the Hamiltonian constellations from the packing theory, i.e., the optimal sphere packing points. However, in [11] it is shown that the upper bou Some 2 2 Unitary Space-Time Codes from Sphere Packing Theory with Optimal Diversity Product of Code Size 6 Haiquan Wang Genyuan Wang Xiang-Gen Xia Abstract In this correspondence, we propose some new designs

More information

MATH 5640: Functions of Diagonalizable Matrices

MATH 5640: Functions of Diagonalizable Matrices MATH 5640: Functions of Diagonalizable Matrices Hung Phan, UMass Lowell November 27, 208 Spectral theorem for diagonalizable matrices Definition Let V = X Y Every v V is uniquely decomposed as u = x +

More information

A Hybrid Method for the Wave Equation. beilina

A Hybrid Method for the Wave Equation.   beilina A Hybrid Method for the Wave Equation http://www.math.unibas.ch/ beilina 1 The mathematical model The model problem is the wave equation 2 u t 2 = (a 2 u) + f, x Ω R 3, t > 0, (1) u(x, 0) = 0, x Ω, (2)

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Robust multigrid methods for nonsmooth coecient elliptic linear systems

Robust multigrid methods for nonsmooth coecient elliptic linear systems Journal of Computational and Applied Mathematics 123 (2000) 323 352 www.elsevier.nl/locate/cam Robust multigrid methods for nonsmooth coecient elliptic linear systems Tony F. Chan a; ; 1, W.L. Wan b; 2

More information

of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 1996 Many mathematical constants are expressed as slowly convergent sums

of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 1996 Many mathematical constants are expressed as slowly convergent sums Zeta Function Expansions of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 996 Many mathematical constants are expressed as slowly convergent sums of the form C = f( ) () n n2a for some

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Math 314 Lecture Notes Section 006 Fall 2006

Math 314 Lecture Notes Section 006 Fall 2006 Math 314 Lecture Notes Section 006 Fall 2006 CHAPTER 1 Linear Systems of Equations First Day: (1) Welcome (2) Pass out information sheets (3) Take roll (4) Open up home page and have students do same

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Iterative Methods for Ax=b

Iterative Methods for Ax=b 1 FUNDAMENTALS 1 Iterative Methods for Ax=b 1 Fundamentals consider the solution of the set of simultaneous equations Ax = b where A is a square matrix, n n and b is a right hand vector. We write the iterative

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

MULTIGRID METHODS FOR TIME-DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS

MULTIGRID METHODS FOR TIME-DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS s KATHOLIEKE UNIVERSITEIT LEUVEN FACULTEIT INGENIEURSWETENSCHAPPEN DEPARTEMENT COMPUTERWETENSCHAPPEN AFDELING NUMERIEKE ANALYSE EN TOEGEPASTE WISKUNDE Celestijnenlaan 200A B-3001 Heverlee MULTIGRID METHODS

More information

A mixed nite volume element method based on rectangular mesh for biharmonic equations

A mixed nite volume element method based on rectangular mesh for biharmonic equations Journal of Computational and Applied Mathematics 7 () 7 3 www.elsevier.com/locate/cam A mixed nite volume element method based on rectangular mesh for biharmonic equations Tongke Wang College of Mathematical

More information

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University

PowerPoints organized by Dr. Michael R. Gustafson II, Duke University Part 3 Chapter 10 LU Factorization PowerPoints organized by Dr. Michael R. Gustafson II, Duke University All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

More information

20 ZIAD ZAHREDDINE For some interesting relationships between these two types of stability: Routh- Hurwitz for continuous-time and Schur-Cohn for disc

20 ZIAD ZAHREDDINE For some interesting relationships between these two types of stability: Routh- Hurwitz for continuous-time and Schur-Cohn for disc SOOCHOW JOURNAL OF MATHEMATICS Volume 25, No. 1, pp. 19-28, January 1999 ON SOME PROPERTIES OF HURWITZ POLYNOMIALS WITH APPLICATION TO STABILITY THEORY BY ZIAD ZAHREDDINE Abstract. Necessary as well as

More information

Part 1: Overview of Ordinary Dierential Equations 1 Chapter 1 Basic Concepts and Problems 1.1 Problems Leading to Ordinary Dierential Equations Many scientic and engineering problems are modeled by systems

More information

quantum semiconductor devices. The model is valid to all orders of h and to rst order in the classical potential energy. The smooth QHD equations have

quantum semiconductor devices. The model is valid to all orders of h and to rst order in the classical potential energy. The smooth QHD equations have Numerical Simulation of the Smooth Quantum Hydrodynamic Model for Semiconductor Devices Carl L. Gardner and Christian Ringhofer y Department of Mathematics Arizona State University Tempe, AZ 8587-84 Abstract

More information

REPORTRAPPORT. Centrum voor Wiskunde en Informatica. Asymptotics of zeros of incomplete gamma functions. N.M. Temme

REPORTRAPPORT. Centrum voor Wiskunde en Informatica. Asymptotics of zeros of incomplete gamma functions. N.M. Temme Centrum voor Wiskunde en Informatica REPORTRAPPORT Asymptotics of zeros of incomplete gamma functions N.M. Temme Department of Analysis, Algebra and Geometry AM-R9402 994 Asymptotics of Zeros of Incomplete

More information

with spectral methods, for the time integration of spatially discretized PDEs of diusion-convection

with spectral methods, for the time integration of spatially discretized PDEs of diusion-convection IMPLICIT-EPLICIT METHODS FOR TIME-DEPENDENT PDE'S URI M. ASCHER, STEVEN J. RUUTH y, AND BRIAN T.R. WETTON z Abstract. Implicit-explicit (IME) schemes have been widely used, especially in conjunction with

More information

Linear Algebra, 4th day, Thursday 7/1/04 REU Info:

Linear Algebra, 4th day, Thursday 7/1/04 REU Info: Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector

More information

Constrained Leja points and the numerical solution of the constrained energy problem

Constrained Leja points and the numerical solution of the constrained energy problem Journal of Computational and Applied Mathematics 131 (2001) 427 444 www.elsevier.nl/locate/cam Constrained Leja points and the numerical solution of the constrained energy problem Dan I. Coroian, Peter

More information

Numerical Solution Techniques in Mechanical and Aerospace Engineering

Numerical Solution Techniques in Mechanical and Aerospace Engineering Numerical Solution Techniques in Mechanical and Aerospace Engineering Chunlei Liang LECTURE 3 Solvers of linear algebraic equations 3.1. Outline of Lecture Finite-difference method for a 2D elliptic PDE

More information

A general theory of discrete ltering. for LES in complex geometry. By Oleg V. Vasilyev AND Thomas S. Lund

A general theory of discrete ltering. for LES in complex geometry. By Oleg V. Vasilyev AND Thomas S. Lund Center for Turbulence Research Annual Research Briefs 997 67 A general theory of discrete ltering for ES in complex geometry By Oleg V. Vasilyev AND Thomas S. und. Motivation and objectives In large eddy

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

MORE CONSEQUENCES OF CAUCHY S THEOREM

MORE CONSEQUENCES OF CAUCHY S THEOREM MOE CONSEQUENCES OF CAUCHY S THEOEM Contents. The Mean Value Property and the Maximum-Modulus Principle 2. Morera s Theorem and some applications 3 3. The Schwarz eflection Principle 6 We have stated Cauchy

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

Linearly-solvable Markov decision problems

Linearly-solvable Markov decision problems Advances in Neural Information Processing Systems 2 Linearly-solvable Markov decision problems Emanuel Todorov Department of Cognitive Science University of California San Diego todorov@cogsci.ucsd.edu

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

A brief introduction to ordinary differential equations

A brief introduction to ordinary differential equations Chapter 1 A brief introduction to ordinary differential equations 1.1 Introduction An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), with its derivative(s)

More information

Stability of implicit extrapolation methods. Abstract. Multilevel methods are generally based on a splitting of the solution

Stability of implicit extrapolation methods. Abstract. Multilevel methods are generally based on a splitting of the solution Contemporary Mathematics Volume 00, 0000 Stability of implicit extrapolation methods Abstract. Multilevel methods are generally based on a splitting of the solution space associated with a nested sequence

More information

Departement Elektrotechniek ESAT-SISTA/TR Dynamical System Prediction: a Lie algebraic approach for a novel. neural architecture 1

Departement Elektrotechniek ESAT-SISTA/TR Dynamical System Prediction: a Lie algebraic approach for a novel. neural architecture 1 Katholieke Universiteit Leuven Departement Elektrotechniek ESAT-SISTA/TR 1995-47 Dynamical System Prediction: a Lie algebraic approach for a novel neural architecture 1 Yves Moreau and Joos Vandewalle

More information

Solution of the Two-Dimensional Steady State Heat Conduction using the Finite Volume Method

Solution of the Two-Dimensional Steady State Heat Conduction using the Finite Volume Method Ninth International Conference on Computational Fluid Dynamics (ICCFD9), Istanbul, Turkey, July 11-15, 2016 ICCFD9-0113 Solution of the Two-Dimensional Steady State Heat Conduction using the Finite Volume

More information

The Erwin Schrodinger International Boltzmanngasse 9. Institute for Mathematical Physics A-1090 Wien, Austria

The Erwin Schrodinger International Boltzmanngasse 9. Institute for Mathematical Physics A-1090 Wien, Austria ESI The Erwin Schrodinger International Boltzmanngasse 9 Institute for Mathematical Physics A-19 Wien, Austria The Negative Discrete Spectrum of a Class of Two{Dimentional Schrodinger Operators with Magnetic

More information

DYNAMIC WEIGHT FUNCTIONS FOR A MOVING CRACK II. SHEAR LOADING. University of Bath. Bath BA2 7AY, U.K. University of Cambridge

DYNAMIC WEIGHT FUNCTIONS FOR A MOVING CRACK II. SHEAR LOADING. University of Bath. Bath BA2 7AY, U.K. University of Cambridge DYNAMIC WEIGHT FUNCTIONS FOR A MOVING CRACK II. SHEAR LOADING A.B. Movchan and J.R. Willis 2 School of Mathematical Sciences University of Bath Bath BA2 7AY, U.K. 2 University of Cambridge Department of

More information

Our rst result is the direct analogue of the main theorem of [5], and can be roughly stated as follows: Theorem. Let R be a complex reection of order

Our rst result is the direct analogue of the main theorem of [5], and can be roughly stated as follows: Theorem. Let R be a complex reection of order Unfaithful complex hyperbolic triangle groups II: Higher order reections John R. Parker Department of Mathematical Sciences, University of Durham, South Road, Durham DH LE, England. e-mail: j.r.parker@durham.ac.uk

More information

Theory of Iterative Methods

Theory of Iterative Methods Based on Strang s Introduction to Applied Mathematics Theory of Iterative Methods The Iterative Idea To solve Ax = b, write Mx (k+1) = (M A)x (k) + b, k = 0, 1,,... Then the error e (k) x (k) x satisfies

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

McGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA

McGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA McGill University Department of Mathematics and Statistics Ph.D. preliminary examination, PART A PURE AND APPLIED MATHEMATICS Paper BETA 17 August, 2018 1:00 p.m. - 5:00 p.m. INSTRUCTIONS: (i) This paper

More information

vestigated. It is well known that the initial transition to convection is accompanied by the appearance of (almost) regular patterns. s the temperatur

vestigated. It is well known that the initial transition to convection is accompanied by the appearance of (almost) regular patterns. s the temperatur Symmetry of ttractors and the Karhunen-Loeve Decomposition Michael Dellnitz Department of Mathematics University of Hamburg D-2046 Hamburg Martin Golubitsky Department of Mathematics University of Houston

More information

Index. C 2 ( ), 447 C k [a,b], 37 C0 ( ), 618 ( ), 447 CD 2 CN 2

Index. C 2 ( ), 447 C k [a,b], 37 C0 ( ), 618 ( ), 447 CD 2 CN 2 Index advection equation, 29 in three dimensions, 446 advection-diffusion equation, 31 aluminum, 200 angle between two vectors, 58 area integral, 439 automatic step control, 119 back substitution, 604

More information

BASIC EXAM ADVANCED CALCULUS/LINEAR ALGEBRA

BASIC EXAM ADVANCED CALCULUS/LINEAR ALGEBRA 1 BASIC EXAM ADVANCED CALCULUS/LINEAR ALGEBRA This part of the Basic Exam covers topics at the undergraduate level, most of which might be encountered in courses here such as Math 233, 235, 425, 523, 545.

More information

Solutions Preliminary Examination in Numerical Analysis January, 2017

Solutions Preliminary Examination in Numerical Analysis January, 2017 Solutions Preliminary Examination in Numerical Analysis January, 07 Root Finding The roots are -,0, a) First consider x 0 > Let x n+ = + ε and x n = + δ with δ > 0 The iteration gives 0 < ε δ < 3, which

More information

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2 1 A Good Spectral Theorem c1996, Paul Garrett, garrett@math.umn.edu version February 12, 1996 1 Measurable Hilbert bundles Measurable Banach bundles Direct integrals of Hilbert spaces Trivializing Hilbert

More information

2 JOSE BURILLO It was proved by Thurston [2, Ch.8], using geometric methods, and by Gersten [3], using combinatorial methods, that the integral 3-dime

2 JOSE BURILLO It was proved by Thurston [2, Ch.8], using geometric methods, and by Gersten [3], using combinatorial methods, that the integral 3-dime DIMACS Series in Discrete Mathematics and Theoretical Computer Science Volume 00, 1997 Lower Bounds of Isoperimetric Functions for Nilpotent Groups Jose Burillo Abstract. In this paper we prove that Heisenberg

More information