Strang-type Preconditioners for Solving Systems of Delay Differential Equations by Boundary Value Methods — Raymond H. Chan, Xiao-Qing Jin, Zheng-Jian Bai
Strang-type Preconditioners for Solving Systems of Delay Differential Equations by Boundary Value Methods

Raymond H. Chan*, Xiao-Qing Jin†, Zheng-Jian Bai‡

August 27, 2002

Abstract. In this paper, we survey some of the latest developments in using boundary value methods (BVMs) for solving systems of delay differential equations (DDEs). These methods require the solution of nonsymmetric, large and sparse linear systems. The GMRES method with the Strang-type preconditioner is proposed for solving these systems. One of the main results is that if an A_{ν1,ν2}-stable BVM is used for a system of DDEs, then the preconditioner is invertible and the preconditioned matrix can be decomposed as I + L, where I is the identity matrix and L is a low-rank matrix. It follows that when the GMRES method is applied to the preconditioned systems, the method converges fast.

1 Introduction

Let us begin with the initial value problem (IVP) of ordinary differential equations

    dy(t)/dt = J_m y(t) + g(t),  t ∈ (t_0, T],
    y(t_0) = z,                                                        (1)

where y(t), g(t): R → R^m, z ∈ R^m, and J_m is the stiffness matrix in R^{m×m}. Initial value methods (IVMs), such as the Runge-Kutta methods, are well-known methods for solving (1). Recently, another class of methods called boundary value methods (BVMs) has been proposed; see [5]. Using BVMs to discretize (1), we get a linear system Mu = b, where u and b are vectors and M is a matrix depending on the multistep rule used. The advantage of BVMs is that they are more stable, and the resulting linear system is hence better conditioned. However, the system is in general large and sparse (with band structure), and solving it is a major problem in the application of BVMs. We use the GMRES method, one of the Krylov subspace methods, for solving the discrete system.
In order to speed up the convergence of the GMRES iterations, the Strang-type preconditioner S is proposed to precondition the discrete

*Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong. The research was partially supported by the Hong Kong Research Grants Council grant CUHK4243/01P and a CUHK DAG grant.
†Faculty of Science and Technology, The University of Macau, Macau. The research was partially supported by the research grant RG024/01-02S/JQX/FST from the University of Macau.
‡Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China.
system. The advantage of the Strang-type preconditioner is that if an A_{ν1,ν2}-stable BVM is used in (1), then S is invertible and the preconditioned matrix can be decomposed as S^{-1}M = I + L, where the rank of L is at most 2m(ν1 + ν2), which is independent of the integration step size. It follows that the GMRES method applied to the preconditioned system will converge in at most 2m(ν1 + ν2) + 1 iterations in exact arithmetic. BVMs for the solution of ordinary differential equations of the form (1) have been studied in [6]. In this paper, we discuss the use of BVMs to solve three different types of delay differential equations: the neutral delay differential equations, the differential equations with multi-delay, and the singular perturbation delay differential equations. The GMRES method with the Strang-type preconditioners is proposed to solve the resulting linear systems.

The outline of the paper is as follows. In §2, we give some background knowledge about linear multistep formulae. Then we investigate the properties of the Strang-type block-circulant preconditioner for the neutral delay differential equations, the differential equations with multi-delay, and the singular perturbation delay differential equations in §3–§5, respectively. The convergence analysis of the method is given together with some illustrative numerical examples.

2 Linear Multistep Formulae

Consider a single one-dimensional IVP

    y' = f(t, y),  y(t_0) = y_0,  t ∈ (t_0, T].                        (2)

Over a uniform mesh t_j = t_0 + jh, j = 0, …, r, with step size h = (T − t_0)/r, the ν-step linear multistep formula (LMF) is defined as follows:

    Σ_{i=0}^{ν} α_i y_{n+i} = h Σ_{i=0}^{ν} β_i f_{n+i},  n = 0, …, r − ν.    (3)

Here, y_n is the discrete approximation to y(t_n) and f_n = f(t_n, y_n). To get the solution by (3), we need the initial conditions y_0, y_1, …, y_{ν−1}. Since only y_0 is provided by the original problem, we have to find additional conditions for the remaining values y_1, y_2, …, y_{ν−1}. The method in (3) with these (ν − 1) additional conditions is called an initial value method (IVM). An IVM is called implicit if β_ν ≠ 0 and explicit if β_ν = 0.
If an IVM is applied to the IVP (2) on the interval [t_0, t_{r+ν−1}] with uniform step size h, we have the following discrete problem:

    A_r y = h B_r f + g,                                               (4)

where y = (y_ν, y_{ν+1}, …, y_{r+ν−1})^T, f = (f_ν, f_{ν+1}, …, f_{r+ν−1})^T, and

    g = ( −Σ_{i=0}^{ν−1}(α_i y_i − h β_i f_i), …, −(α_0 y_{ν−1} − h β_0 f_{ν−1}), 0, …, 0 )^T
collects the contributions of the given initial values. Here A_r and B_r are the r×r lower triangular band Toeplitz matrices with first columns

    (α_ν, α_{ν−1}, …, α_0, 0, …, 0)^T  and  (β_ν, β_{ν−1}, …, β_0, 0, …, 0)^T,

respectively. We recall that a matrix is said to be Toeplitz if its entries are constant along its diagonals. Moreover, the linear system (4) can be solved easily by forward recursion. A classical example of an IVM is the second-order backward differentiation formula:

    3y_{n+2} − 4y_{n+1} + y_n = 2h f_{n+2},

which is a two-step method with α_0 = 1, α_1 = −4, α_2 = 3 and β_2 = 2.

Instead of using an IVM, which sets initial conditions for (3), we can also use the so-called boundary value methods (BVMs). Given ν1, ν2 ≥ 0 such that ν1 + ν2 = ν, the corresponding BVM requires ν1 initial additional conditions y_0, y_1, …, y_{ν1−1} and ν2 final additional conditions y_r, y_{r+1}, …, y_{r+ν2−1}, which are called (ν1, ν2)-boundary conditions. Note that the class of BVMs contains the class of IVMs (i.e. ν1 = ν, ν2 = 0). The discrete problem generated by a ν-step BVM with (ν1, ν2)-boundary conditions can be written in the following matrix form:

    A y = h B f + g,

where y = (y_{ν1}, y_{ν1+1}, …, y_{r−1})^T, f = (f_{ν1}, f_{ν1+1}, …, f_{r−1})^T, the vector g collects the contributions of the ν1 initial and ν2 final values (and the corresponding f_i), and A, B ∈ R^{(r−ν1)×(r−ν1)} are the band Toeplitz matrices with

    first column (α_{ν1}, α_{ν1−1}, …, α_0, 0, …, 0)^T and first row (α_{ν1}, α_{ν1+1}, …, α_ν, 0, …, 0)    (5)

for A, and similarly for B with the β_i. Note that the coefficient matrices are Toeplitz with lower bandwidth ν1 and upper bandwidth ν2. An example of a BVM is the third-order generalized backward differentiation formula (GBDF),

    2y_{n+1} + 3y_n − 6y_{n−1} + y_{n−2} = 6h f_n,

which is a three-step method with (2,1)-boundary conditions, and α_0 = 1, α_1 = −6, α_2 = 3, α_3 = 2 and β_2 = 6.
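Since A_r and B_r are lower triangular, the IVM system (4) unrolls into a forward recursion. As a minimal illustration (not code from the paper), the second-order BDF above can be applied to the scalar test equation y'(t) = λy(t); the hypothetical helper below advances the recursion directly:

```python
import math

# Forward recursion for the IVM system (4) in the scalar case, using the
# two-step BDF  3*y[n+2] - 4*y[n+1] + y[n] = 2*h*f[n+2]  on the test
# equation y'(t) = lam*y(t), y(0) = 1.  Because A_r and B_r are lower
# triangular, each new value follows from the two previous ones.
def bdf2_forward(lam, h, r):
    # y0 plus one extra starter value (here taken from the exact solution)
    y = [1.0, math.exp(lam * h)]
    for n in range(r - 1):
        # implicit step: (3 - 2*h*lam) * y[n+2] = 4*y[n+1] - y[n]
        y.append((4.0 * y[n + 1] - y[n]) / (3.0 - 2.0 * h * lam))
    return y

approx = bdf2_forward(-1.0, 0.01, 1000)[-1]   # approximates y(10) = exp(-10)
```

The BVM systems of §3–§5, by contrast, are not lower triangular, and are instead solved by the preconditioned GMRES method.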
In order to study the stability properties of the BVM, we introduce the characteristic polynomials ρ(z) and σ(z) of the BVM, defined by

    ρ(z) = Σ_{j=0}^{ν} α_j z^j   and   σ(z) = Σ_{j=0}^{ν} β_j z^j,    (6)

where the {α_i} and {β_i} are given by (3). The stability polynomial is defined by

    π(z, q) ≡ ρ(z) − q σ(z),                                          (7)

where z, q ∈ C. Let C⁻ ≡ {q ∈ C : Re(q) < 0}.

Definition 1 [5] The region

    D_{ν1,ν2} = {q ∈ C : π(z, q) has ν1 zeros inside |z| = 1 and ν2 zeros outside |z| = 1}

is called the region of A_{ν1,ν2}-stability of a given BVM with (ν1, ν2)-boundary conditions. Moreover, the BVM is said to be A_{ν1,ν2}-stable if C⁻ ⊆ D_{ν1,ν2}.

Although IVMs are more efficient than BVMs (whose discrete systems cannot be solved by forward recursion), the advantage of BVMs over IVMs comes from their stability properties. For example, the usual backward differentiation formulae are not A-stable for ν > 2, but the GBDF are A_{ν1,ν2}-stable for any ν; see for instance [1, 3] and [5, p. 79 and Figures 5.1–5.3].

3 Neutral Delay Differential Equations

In this section, we consider the solution of neutral delay differential equations:

    y'(t) = L_n y'(t − τ) + M_n y(t) + N_n y(t − τ),  t ≥ t_0,
    y(t) = φ(t),  t ≤ t_0,                                            (8)

where y(t), φ(t): R → R^n; L_n, M_n, N_n ∈ R^{n×n}; and τ > 0 is a constant. By applying a BVM, the discrete solution of (8) is given by the solution of a linear system

    Hy = b,

where H depends on the LMF used.

3.1 BVMs and Their Stability Properties

In order to find a reasonable numerical solution, we require that the solution of (8) be asymptotically stable. Let λ(·) denote the spectrum of a matrix, I_n the n-by-n identity matrix, and ‖·‖₂ the 2-norm. We have the following lemma; see [9, 13].

Lemma 1 Let L_n, M_n and N_n be any matrices with ‖L_n‖₂ < 1. Then the solution of (8) is asymptotically stable if Re(λ_i) < 0 for any λ_i ∈ λ((I_n − cL_n)^{-1}(M_n + cN_n)) with |c| = 1.
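For intuition, the condition in Lemma 1 can be checked numerically. The sketch below uses made-up scalar coefficients (the case n = 1, where the spectrum is a single point); it samples c on the unit circle and verifies that the real part stays negative:

```python
import cmath

# Scalar (n = 1) sanity check of the condition in Lemma 1, with made-up
# coefficients L, M, N:  |L| < 1  and
#     Re( (M + c*N) / (1 - c*L) ) < 0   for every c on the unit circle.
L, M, N = 0.3, -2.0, 0.5

def max_real_part(samples=3600):
    worst = -float("inf")
    for s in range(samples):
        c = cmath.exp(2j * cmath.pi * s / samples)   # a point with |c| = 1
        worst = max(worst, ((M + c * N) / (1 - c * L)).real)
    return worst

stable = abs(L) < 1 and max_real_part() < 0
```

In the matrix case, one would instead check the real parts of the eigenvalues of (I_n − cL_n)^{-1}(M_n + cN_n) as c ranges over the unit circle.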
Let h = τ/k be the step size, where k is a positive integer. For (8), by using a BVM with (ν1, ν2)-boundary conditions over a uniform mesh t_j = t_0 + jh, j = 0, …, r, on the interval [t_0, t_0 + rh], we have

    Σ_{i=0}^{ν} α_i y_{p+i} = L_n Σ_{i=0}^{ν} α_i y_{p+i−k} + h Σ_{i=0}^{ν} β_i (M_n y_{p+i} + N_n y_{p+i−k})    (9)

for p = 0, …, r − ν1 − 1, where ν = ν1 + ν2 and {α_i}, {β_i} are the coefficients of the given BVM; see [2]. By providing the values

    y_{−k}, …, y_0, y_1, …, y_{ν1−1}, y_r, …, y_{r+ν2−1},             (10)

(9) can be written in a matrix form as

    Hy = b,

where

    H ≡ A ⊗ I_n − A^{(1)} ⊗ L_n − h B ⊗ M_n − h B^{(1)} ⊗ N_n,        (11)

y^T = (y_{ν1}^T, y_{ν1+1}^T, …, y_{r−1}^T) ∈ R^{n(r−ν1)}, and b ∈ R^{n(r−ν1)} is a vector that depends on the boundary values and the coefficients of the method. In (11), the matrices A, B ∈ R^{(r−ν1)×(r−ν1)} are defined as in (5), and A^{(1)}, B^{(1)} ∈ R^{(r−ν1)×(r−ν1)} are the band Toeplitz matrices whose diagonals carry the same coefficients as those of A and B but shifted down by k rows, so that they pick out the delayed values y_{p+i−k}; their first columns are

    (0, …, 0, α_ν, α_{ν−1}, …, α_0, 0, …, 0)^T  and  (0, …, 0, β_ν, β_{ν−1}, …, β_0, 0, …, 0)^T,    (12)

each with k − ν2 leading zeros; see [2].
3.2 Strang-type Preconditioner

We first define Strang's preconditioner for a Toeplitz matrix. For any given Toeplitz matrix T_l = [t_{i−j}]_{i,j=1}^{l}, Strang's circulant preconditioner s(T_l) is the circulant matrix with diagonals given by

    [s(T_l)]_q = t_q,              0 ≤ q ≤ ⌊l/2⌋,
    [s(T_l)]_q = t_{q−l},          ⌊l/2⌋ < q < l,
    [s(T_l)]_q = [s(T_l)]_{l+q},   0 < −q < l,

see [7, 10]. The Strang-type block-circulant preconditioner for (11) is defined as follows:

    S ≡ s(A) ⊗ I_n − s(A^{(1)}) ⊗ L_n − h s(B) ⊗ M_n − h s(B^{(1)}) ⊗ N_n    (13)
      = (F* ⊗ I_n)(Λ_A ⊗ I_n − Λ_{A^{(1)}} ⊗ L_n − h Λ_B ⊗ M_n − h Λ_{B^{(1)}} ⊗ N_n)(F ⊗ I_n),    (14)

where s(E) is Strang's preconditioner of the Toeplitz matrix E, Λ_E is the diagonal matrix given by Λ_E = F s(E) F* for E = A, B, A^{(1)}, B^{(1)}, respectively, and F is the Fourier matrix.

Now we discuss the invertibility of the Strang-type preconditioner. Let w = e^{2πi/(r−ν1)}, i ≡ √−1. We have

    [Λ_A]_{j,j} = ρ(w^j)/w^{jν1},          [Λ_B]_{j,j} = σ(w^j)/w^{jν1},
    [Λ_{A^{(1)}}]_{j,j} = ρ(w^j)/w^{j(k+ν1)},   [Λ_{B^{(1)}}]_{j,j} = σ(w^j)/w^{j(k+ν1)},

where ρ(z) and σ(z) are defined as in (6). Thus the j-th block of

    Λ_A ⊗ I_n − Λ_{A^{(1)}} ⊗ L_n − h Λ_B ⊗ M_n − h Λ_{B^{(1)}} ⊗ N_n

in (14) is given by

    S_j = [Λ_A]_{j,j} I_n − [Λ_{A^{(1)}}]_{j,j} L_n − h [Λ_B]_{j,j} M_n − h [Λ_{B^{(1)}}]_{j,j} N_n
        = [ w^{jk}(ρ(w^j) I_n − h σ(w^j) M_n) − ρ(w^j) L_n − h σ(w^j) N_n ] / w^{j(k+ν1)}

for j = 0, 1, …, r − ν1 − 1. In order to prove that S is invertible, we need to establish that S_j is invertible for j = 0, 1, …, r − ν1 − 1. For θ ∈ R, let

    e^{ikθ}(ρ(e^{iθ}) I_n − h σ(e^{iθ}) M_n) − ρ(e^{iθ}) L_n − h σ(e^{iθ}) N_n = e^{ikθ}(I_n − e^{−ikθ} L_n) D_θ,

where

    D_θ ≡ ρ(e^{iθ}) I_n − h (I_n − e^{−ikθ} L_n)^{−1}(M_n + e^{−ikθ} N_n) σ(e^{iθ}).    (15)

So, to prove that S is invertible, we only need to show that D_θ is invertible for any θ ∈ R. Since ‖L_n‖₂ < 1 by assumption, I_n − e^{−ikθ} L_n is nonsingular for any θ ∈ R. Therefore, we only need to show that D_θ is invertible for any θ ∈ R. We have the following theorem.
Theorem 1 [2] If the BVM with (ν1, ν2)-boundary conditions is A_{ν1,ν2}-stable and the conditions in Lemma 1 hold, then for any θ ∈ R, the matrix D_θ defined by (15) is invertible. It follows that the Strang-type preconditioner S defined as in (13) is also invertible.

3.3 Spectral Analysis

In this section, we discuss the convergence rate of the preconditioned GMRES method with the Strang-type preconditioner. First of all, we recall the following well-known result.

Lemma 2 [10] Let W be invertible and admit the decomposition W = I + L. If the GMRES method is applied to solving the linear system Wx = b, then the method will converge in at most rank(L) + 1 iterations in exact arithmetic.

Using Lemma 2, we have the following result for the spectra of the preconditioned matrices and for the convergence rate of our method.

Theorem 2 [2] Let H be given by (11) and S be given by (13). Then we have

    S^{−1}H = I_{n(r−ν1)} + L,

where I_{n(r−ν1)} ∈ R^{n(r−ν1)×n(r−ν1)} is the identity matrix and L is a low-rank matrix with

    rank(L) ≤ 2(ν1 + k + ν2 + 1)n.

Therefore, when the GMRES method is applied to solving the preconditioned system

    S^{−1}Hy = S^{−1}b,

the method will converge in at most 2(ν1 + k + ν2 + 1)n + 1 iterations in exact arithmetic.

We observe from Theorem 2 that if the step size h = τ/k is fixed, the number of iterations for convergence of the GMRES method, when applied to solving S^{−1}Hy = S^{−1}b, will be independent of r, that is to say, independent of the length of the interval that we considered. Regarding the cost per iteration, the main work in each GMRES iteration is the matrix-vector multiplication S^{−1}Hy. Since A, B, A^{(1)}, B^{(1)} are band matrices and L_n, M_n, N_n are assumed to be sparse, the matrix-vector multiplication Hy can be done very fast. To compute S^{−1}z for any vector z, by (14), it follows that

    S^{−1}z = (F* ⊗ I_n)(Λ_A ⊗ I_n − Λ_{A^{(1)}} ⊗ L_n − h Λ_B ⊗ M_n − h Λ_{B^{(1)}} ⊗ N_n)^{−1}(F ⊗ I_n)z.

This product can be obtained by using Fast Fourier Transforms and solving r − ν1 linear systems of order n; see [4].
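In the scalar case (n = 1) this collapses to a single circulant system, which already shows the mechanics: transform the right-hand side, divide by the eigenvalues of the circulant (the DFT of its first column), and transform back. The sketch below is illustrative only, with hypothetical data; a naive O(l²) DFT stands in for the FFT used in practice:

```python
import cmath

# Scalar illustration (n = 1, made-up data) of applying the inverse of a
# circulant preconditioner: a circulant C = F* diag(lam) F, so C^{-1} z is
# "transform, divide by the eigenvalues, transform back".
def dft(v, sign):
    l = len(v)
    return [sum(v[k] * cmath.exp(sign * 2j * cmath.pi * m * k / l)
                for k in range(l)) for m in range(l)]

def circulant_solve(c, z):
    # c is the first column of the circulant; its DFT gives the eigenvalues
    l = len(c)
    lam = dft(c, -1)
    zhat = dft(z, -1)
    xhat = [zhat[m] / lam[m] for m in range(l)]
    return [w / l for w in dft(xhat, +1)]   # inverse transform

c = [3.0, 1.0, 0.0, 1.0]   # first column of a symmetric, invertible circulant
z = [1.0, 2.0, 3.0, 4.0]   # right-hand side
x = circulant_solve(c, z)
```

In the block case, dividing by lam[m] becomes solving the m-th n-by-n system with coefficient matrix S_m, as described above.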
Since L_n, M_n, N_n are sparse, the coefficient matrices of the n-by-n linear systems will also be sparse. Thus S^{−1}z can be obtained by solving r − ν1 sparse n-by-n linear systems.
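The circulant factors s(A), s(B), s(A^{(1)}), s(B^{(1)}) themselves come from the diagonal rule of §3.2. A minimal sketch (not from the paper's software) that builds the first column of s(T_l) from the central diagonals of T_l and assembles the circulant, shown on the tridiagonal Toeplitz matrix tridiag(1, −2, 1):

```python
# Sketch of Strang's circulant preconditioner s(T) for an l-by-l Toeplitz
# matrix T = [t_{i-j}]: keep the central diagonals of T and wrap them
# around periodically, as in the diagonal rule of Section 3.2.
def strang_first_column(first_col, first_row):
    # first_col[i] = t_i for i >= 0, first_row[j] = t_{-j} for j >= 0
    l = len(first_col)
    return [first_col[q] if q <= l // 2 else first_row[l - q]
            for q in range(l)]

def circulant(c):
    # full circulant matrix whose first column is c
    l = len(c)
    return [[c[(i - j) % l] for j in range(l)] for i in range(l)]

# Example: T = tridiag(1, -2, 1) of order 6; s(T) is the periodic stencil.
col = [-2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
c = strang_first_column(col, col)   # -> [-2, 1, 0, 0, 0, 1]
S = circulant(c)
```

Each of the circulant blocks in (13) is built this way from the corresponding band Toeplitz matrix.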
3.4 Numerical Result

We illustrate the efficiency of our preconditioner by solving Example 1 below. We used the MATLAB-provided M-file "gmres" (see the MATLAB on-line documentation) to solve the preconditioned systems. In all of our tests, the zero vector is the initial guess and the stopping criterion is

    ‖r_q‖₂ / ‖r_0‖₂ < 10^{−6},

where r_q is the residual after the q-th iteration.

Example 1. Consider

    y'(t) = L_n y'(t − 1) + M_n y(t) + N_n y(t − 1),  t ≥ 0,
    y(t) = (1, …, 1)^T,  t ≤ 0,

where L_n, M_n and N_n are sparse n-by-n matrices satisfying the conditions of Lemma 1; see [2] for their precise entries.

Example 1 is solved by using the fifth-order generalized Adams method for t ∈ [0, 4]. Table 1 lists the number of iterations required for convergence of the GMRES method for different n and k. In the table, I means no preconditioner is used and S denotes the Strang-type preconditioner defined in (13). We see that the number of iterations required for convergence when the circulant preconditioner is used is always less than that when no preconditioner is used. We should emphasize that our numerical example shows a much faster convergence rate than that predicted by the estimate provided by Theorem 2.
Table 1: Number of iterations for convergence of GMRES for various n and k, without preconditioning (I) and with the Strang-type preconditioner (S); '*' means out of memory.

4 Differential Equations with Multi-delays

Consider the solution of differential equations with multi-delays:

    dy(t)/dt = J_n y(t) + D_n^{(1)} y(t − τ_1) + ⋯ + D_n^{(s)} y(t − τ_s) + f(t),  t ≥ t_0,
    y(t) = φ(t),  t ≤ t_0,                                            (16)

where y(t), f(t), φ(t): R → R^n; J_n, D_n^{(1)}, …, D_n^{(s)} ∈ R^{n×n}; and τ_1, …, τ_s > 0 are some rational numbers.

4.1 BVMs and Strang-type Preconditioner

As in §3.1, we want to find an asymptotically stable solution for (16). We have the following lemma; see [15, 19].

Lemma 3 For any τ_1, …, τ_s, if

    μ(J_n) ≡ (1/2) λ_max(J_n + J_n^T) < 0   and   μ(J_n) + Σ_{j=1}^{s} ‖D_n^{(j)}‖₂ < 0,    (17)

then the solution of (16) is asymptotically stable.

In the following, for simplicity, we only consider the case of s = 2 in (16). The generalization to arbitrary s is straightforward. Let h = τ_1/m_1 = τ_2/m_2 be the step size, where m_1 and m_2 are positive integers with m_2 > m_1 (τ_2 > τ_1). For (16), by using a BVM with (ν1, ν2)-boundary conditions over a uniform mesh t_j = t_0 + jh, j = 0, …, r_2, on the interval [t_0, t_0 + r_2 h], we have

    Σ_{i=0}^{ν} α_i y_{p+i} = h Σ_{i=0}^{ν} β_i (J_n y_{p+i} + D_n^{(1)} y_{p+i−m_1} + D_n^{(2)} y_{p+i−m_2} + f_{p+i})    (18)

for p = 0, …, r_2 − ν1 − 1, where ν = ν1 + ν2 and {α_i}, {β_i} are the coefficients of the given BVM. By providing the values

    y_{−m_2}, …, y_{−m_1}, …, y_0, y_1, …, y_{ν1−1}, y_{r_2}, …, y_{r_2+ν2−1},    (19)

(18) can be written in a matrix form as

    Ry = b,                                                           (20)

where

    R ≡ A ⊗ I_n − h B ⊗ J_n − h B^{(1)} ⊗ D_n^{(1)} − h B^{(2)} ⊗ D_n^{(2)},    (21)
y^T = (y_{ν1}^T, y_{ν1+1}^T, …, y_{r_2−1}^T) ∈ R^{n(r_2−ν1)}, and b ∈ R^{n(r_2−ν1)} depends on f, the boundary values, and the coefficients of the method. The matrices A, B ∈ R^{(r_2−ν1)×(r_2−ν1)} are defined as in (5), and B^{(1)}, B^{(2)} ∈ R^{(r_2−ν1)×(r_2−ν1)} in (21) are defined like the matrix B^{(1)} in §3: they carry the coefficients β_i shifted down by m_1 and m_2 rows, respectively, so that they act on the delayed values y_{p+i−m_1} and y_{p+i−m_2}; their first columns have m_1 − ν2 and m_2 − ν2 leading zeros, respectively.

The Strang-type preconditioner for (20) is defined as follows:

    S̃ ≡ s(A) ⊗ I_n − h s(B) ⊗ J_n − h s(B^{(1)}) ⊗ D_n^{(1)} − h s(B^{(2)}) ⊗ D_n^{(2)},    (22)

where s(E) is Strang's circulant preconditioner of the matrix E, for E = A, B, B^{(1)} and B^{(2)}. We have the following theorem for the invertibility of our preconditioner S̃ and for the convergence rate of our method.

Theorem 3 [11] If the BVM for (16) is A_{ν1,ν2}-stable and (17) holds, then the Strang-type preconditioner S̃ defined as in (22) is invertible. Moreover, when the GMRES method is applied to solving the preconditioned system

    S̃^{−1}Ry = S̃^{−1}b,

the method will converge in at most (2ν + m_1 + m_2 + 1)n = O(n) iterations in exact arithmetic.

We know from Theorem 3 that if the step size h = τ_1/m_1 = τ_2/m_2 is fixed, the number of iterations for convergence of the GMRES method, when applied to the preconditioned system S̃^{−1}Ry = S̃^{−1}b, will be independent of r_2 and therefore independent of the length of the interval that we considered. For the operation cost of our algorithm, we refer to [11, 14].

4.2 Numerical Test

We illustrate the efficiency of our preconditioner by solving the following problem. In the example, the BVM we used is the third-order GBDF for t ∈ [0, 4].

Example 2. Consider

    y'(t) = J_n y(t) + D_n^{(1)} y(t − 0.5) + D_n^{(2)} y(t − 1),  t ≥ 0,
    y(t) = (sin t, …, sin t)^T,  t ≤ 0,

where J_n, D_n^{(1)}
and D_n^{(2)} are sparse n-by-n matrices satisfying the conditions of Lemma 3; see [11] for their precise entries.

Table 2 shows the number of iterations required for convergence of the GMRES method with different combinations of matrix sizes n and u = 1/h. In the table, I means no preconditioner is used and S̃ denotes the Strang-type preconditioner defined in (22). We see that the numbers of iterations required for convergence increase only slowly for increasing n and u under the column S̃.

Table 2: Number of iterations for convergence of GMRES for various n and u = 1/h, without preconditioning (I) and with the Strang-type preconditioner (S̃).

5 Singular Perturbation Delay Differential Equations

In this section, we study the solution of singular perturbation delay differential equations:

    x'(t) = V^{(1)} x(t) + V^{(2)} x(t − τ) + B^{(1)} y(t) + B^{(2)} y(t − τ),  t ≥ t_0,
    ε y'(t) = F^{(1)} x(t) + F^{(2)} x(t − τ) + G^{(1)} y(t) + G^{(2)} y(t − τ),  t ≥ t_0,
    x(t) = φ(t),  y(t) = ψ(t),  t ≤ t_0,                              (23)

where x(t), φ(t): R → R^m; y(t), ψ(t): R → R^n; V^{(1)}, V^{(2)} ∈ R^{m×m}; B^{(1)}, B^{(2)} ∈ R^{m×n}; F^{(1)}, F^{(2)} ∈ R^{n×m}; G^{(1)}, G^{(2)} ∈ R^{n×n}; and ε > 0 is a constant. We can rewrite (23) as the following IVP:

    z'(t) = P z(t) + Q z(t − τ),  t ≥ t_0,
    z(t) = (φ(t)^T, ψ(t)^T)^T,  t ≤ t_0,

where P, Q ∈ R^{(m+n)×(m+n)} are defined as follows:

    P = [ V^{(1)}         B^{(1)}        ]      Q = [ V^{(2)}         B^{(2)}        ]
        [ ε^{−1} F^{(1)}  ε^{−1} G^{(1)} ],         [ ε^{−1} F^{(2)}  ε^{−1} G^{(2)} ],    (24)

see [12]. Let h = τ/k_2
be the step size, where k_2 is a positive integer. For (24), by using a BVM with (ν1, ν2)-boundary conditions over a uniform mesh t_j = t_0 + jh, j = 0, …, v, on the interval [t_0, t_0 + vh], we have

    Σ_{i=0}^{ν} α_i z_{p+i} = h Σ_{i=0}^{ν} β_i (P z_{p+i} + Q z_{p+i−k_2})    (25)

for p = 0, …, v − ν1 − 1, where ν = ν1 + ν2 and {α_i}, {β_i} are the coefficients of the given BVM; see [5]. By providing the values

    z_{−k_2}, …, z_0, z_1, …, z_{ν1−1}, z_v, …, z_{v+ν2−1},           (26)

(25) can be written in a matrix form as

    Kv = b,                                                           (27)

where

    K ≡ A ⊗ I_{m+n} − h B ⊗ P − h U ⊗ Q.                              (28)

The vector v in (27) is defined by

    v^T = (z_{ν1}^T, z_{ν1+1}^T, …, z_{v−1}^T) ∈ R^{(m+n)(v−ν1)}.

The right-hand side b ∈ R^{(m+n)(v−ν1)} of (27) depends on the boundary values and the coefficients of the method. The matrices A, B ∈ R^{(v−ν1)×(v−ν1)} in (28) are defined as in (5), and U ∈ R^{(v−ν1)×(v−ν1)} in (28) is defined in the same way as the matrix B^{(1)} of §3, with the coefficients β_i shifted down by k_2 rows; see [5, 4, 6, 10].

The Strang-type preconditioner can be constructed for solving (27):

    Ŝ ≡ s(A) ⊗ I_{m+n} − h s(B) ⊗ P − h s(U) ⊗ Q,                     (29)

where s(E) is Strang's circulant preconditioner of the matrix E, for E = A, B, and U. We have the following theorem for the invertibility of our preconditioner Ŝ and for the convergence rate of our method.

Theorem 4 [12] If the BVM with (ν1, ν2)-boundary conditions is A_{ν1,ν2}-stable and the conditions in Lemma 3 hold, i.e.,

    μ(P) ≡ (1/2) λ_max(P + P^T) < 0   and   μ(P) + ‖Q‖₂ < 0,

then the Strang-type preconditioner Ŝ defined by (29) is invertible. Moreover, when the GMRES method is applied to solving the preconditioned system

    Ŝ^{−1}Kv = Ŝ^{−1}b,

the method will converge in at most O(m + n) iterations in exact arithmetic.

We observe from Theorem 4 that if the step size h = τ/k_2 is fixed, the number of iterations for convergence of the GMRES method applied to the system Ŝ^{−1}Kv = Ŝ^{−1}b will be independent of v, i.e., independent of the length of the interval we considered. For numerical examples, we refer to [12].
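The hypotheses of Theorem 4 are easy to test numerically for given P and Q. The 2-by-2 sketch below (made-up matrices, not taken from [12]) computes μ(P) = ½ λ_max(P + P^T) and ‖Q‖₂ in closed form and checks μ(P) + ‖Q‖₂ < 0:

```python
import math

# Hypothetical 2-by-2 illustration of the stability test in Theorem 4:
#   mu(P) = lambda_max((P + P^T)/2) < 0   and   mu(P) + ||Q||_2 < 0.
def sym_eig_max(a, b, c):
    # largest eigenvalue of the symmetric matrix [[a, b], [b, c]]
    return (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b * b)

def mu(P):
    # logarithmic 2-norm of a 2x2 matrix P
    off = (P[0][1] + P[1][0]) / 2
    return sym_eig_max(P[0][0], off, P[1][1])

def norm2(Q):
    # spectral norm via the largest eigenvalue of Q^T Q
    a = Q[0][0] ** 2 + Q[1][0] ** 2
    b = Q[0][0] * Q[0][1] + Q[1][0] * Q[1][1]
    c = Q[0][1] ** 2 + Q[1][1] ** 2
    return math.sqrt(sym_eig_max(a, b, c))

P = [[-5.0, 1.0], [1.0, -5.0]]   # made-up coefficient matrix
Q = [[1.0, 0.0], [0.0, 2.0]]     # made-up delay matrix
stable = mu(P) < 0 and mu(P) + norm2(Q) < 0
```

For larger m + n one would of course use an eigenvalue routine instead of the closed 2-by-2 formulas.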
References

[1] P. Amodio, F. Mazzia, and D. Trigiante, Stability of Some Boundary Value Methods for the Solution of Initial Value Problems, BIT, vol. 33 (1993), pp. 434–451.

[2] Z. Bai, X. Jin, and L. Song, Strang-type Preconditioners for Solving Linear Systems from Neutral Delay Differential Equations, submitted to Calcolo.

[3] L. Brugnano and D. Trigiante, Stability Properties of Some BVM Methods, Appl. Numer. Math., vol. 13 (1993), pp. 291–304.

[4] D. Bertaccini, A Circulant Preconditioner for the Systems of LMF-Based ODE Codes, SIAM J. Sci. Comput., vol. 22 (2000), pp. 767–786.

[5] L. Brugnano and D. Trigiante, Solving Differential Problems by Multistep Initial and Boundary Value Methods, Gordon and Breach Science Publishers, Amsterdam, 1998.

[6] R. Chan, X. Jin, and Y. Tam, Strang-type Preconditioners for Solving Systems of ODEs by Boundary Value Methods, Electron. J. Math. Phys. Sci., vol. 1 (2002), pp. 14–46.

[7] R. Chan and M. Ng, Conjugate Gradient Methods for Toeplitz Systems, SIAM Review, vol. 38 (1996), pp. 427–482.

[8] R. Chan, M. Ng, and X. Jin, Strang-type Preconditioners for Systems of LMF-Based ODE Codes, IMA J. Numer. Anal., vol. 21 (2001), pp. 451–462.

[9] G. Hu and T. Mitsui, Stability Analysis of Numerical Methods for Systems of Neutral Delay-Differential Equations, BIT, vol. 35 (1995), pp. 504–515.

[10] X. Jin, Developments and Applications of Block Toeplitz Iterative Solvers, Kluwer Academic Publishers & Science Press, Beijing, 2002.

[11] X. Jin, S. Lei, and Y. Wei, Circulant Preconditioners for Solving Differential Equations with Multi-Delays, submitted to Comput. Math. Appl.

[12] X. Jin, S. Lei, and Y. Wei, Circulant Preconditioners for Solving Singular Perturbation Delay Differential Equations, submitted to Numer. Linear Algebra Appl.

[13] J. Kuang, J. Xiang, and H. Tian, The Asymptotic Stability of One-Parameter Methods for Neutral Differential Equations, BIT, vol. 34 (1994), pp. 400–408.

[14] F. Lin, X. Jin, and S. Lei, Strang-type Preconditioners for Solving Linear Systems from Delay Differential Equations, BIT, to appear.

[15] T. Mori, N. Fukuma, and M.
Kuwahara, Simple Stability Criteria for Single and Composite Linear Systems with Time Delays, Int. J. Control, vol. 34 (1981), pp. 1175–1184.

[16] T. Mori, E. Noldus, and M. Kuwahara, A Way to Stabilize Linear Systems with Delayed State, Automatica, vol. 19 (1983), pp. 571–573.

[17] Y. Saad and M. Schultz, GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems, SIAM J. Sci. Stat. Comput., vol. 7 (1986), pp. 856–869.
[18] S. Vandewalle and R. Piessens, On Dynamic Iteration Methods for Solving Time-Periodic Differential Equations, SIAM J. Numer. Anal., vol. 30 (1993), pp. 286–303.

[19] S. Wang, Further Results on Stability of Ẋ(t) = AX(t) + BX(t − τ), Syst. Control Lett., vol. 19 (1992), pp. 165–168.
More informationIterative methods for positive definite linear systems with a complex shift
Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution
More informationON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS
ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,
More information19.2 Mathematical description of the problem. = f(p; _p; q; _q) G(p; q) T ; (II.19.1) g(p; q) + r(t) _p _q. f(p; v. a p ; q; v q ) + G(p; q) T ; a q
II-9-9 Slider rank 9. General Information This problem was contributed by Bernd Simeon, March 998. The slider crank shows some typical properties of simulation problems in exible multibody systems, i.e.,
More informationPositive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract
Computing the Logarithm of a Symmetric Positive Denite Matrix Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A numerical method for computing the logarithm
More informationResearch Article Diagonally Implicit Block Backward Differentiation Formulas for Solving Ordinary Differential Equations
International Mathematics and Mathematical Sciences Volume 212, Article ID 767328, 8 pages doi:1.1155/212/767328 Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving
More informationTHE solution of the absolute value equation (AVE) of
The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationTopics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems
Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate
More information1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r
DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization
More informationCourse Notes: Week 1
Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues
More informationConjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)
Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps
More informationPermutation transformations of tensors with an application
DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn
More informationWHEN studying distributed simulations of power systems,
1096 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 21, NO 3, AUGUST 2006 A Jacobian-Free Newton-GMRES(m) Method with Adaptive Preconditioner and Its Application for Power Flow Calculations Ying Chen and Chen
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationAlgorithms that use the Arnoldi Basis
AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to
More informationIterative Methods for Sparse Linear Systems
Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University
More informationNumerical Methods - Initial Value Problems for ODEs
Numerical Methods - Initial Value Problems for ODEs Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Initial Value Problems for ODEs 2013 1 / 43 Outline 1 Initial Value
More informationNumerical Methods I Non-Square and Sparse Linear Systems
Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant
More informationReduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves
Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract
More information10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections )
c Dr. Igor Zelenko, Fall 2017 1 10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections 7.2-7.4) 1. When each of the functions F 1, F 2,..., F n in right-hand side
More informationNumerical behavior of inexact linear solvers
Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth
More informationA Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation
A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition
More informationBlock Bidiagonal Decomposition and Least Squares Problems
Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition
More informationCS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3
CS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3 Felix Kwok February 27, 2004 Written Problems 1. (Heath E3.10) Let B be an n n matrix, and assume that B is both
More informationAugmented GMRES-type methods
Augmented GMRES-type methods James Baglama 1 and Lothar Reichel 2, 1 Department of Mathematics, University of Rhode Island, Kingston, RI 02881. E-mail: jbaglama@math.uri.edu. Home page: http://hypatia.math.uri.edu/
More informationMath 504 (Fall 2011) 1. (*) Consider the matrices
Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture
More informationIterative methods for Linear System
Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and
More informationResidual iterative schemes for largescale linear systems
Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz
More informationApproximate Inverse-Free Preconditioners for Toeplitz Matrices
Approximate Inverse-Free Preconditioners for Toeplitz Matrices You-Wei Wen Wai-Ki Ching Michael K. Ng Abstract In this paper, we propose approximate inverse-free preconditioners for solving Toeplitz systems.
More informationIntroduction to the Numerical Solution of IVP for ODE
Introduction to the Numerical Solution of IVP for ODE 45 Introduction to the Numerical Solution of IVP for ODE Consider the IVP: DE x = f(t, x), IC x(a) = x a. For simplicity, we will assume here that
More informationR. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the
A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationPRECONDITIONING AND PARALLEL IMPLEMENTATION OF IMPLICIT RUNGE-KUTTA METHODS.
PRECONDITIONING AND PARALLEL IMPLEMENTATION OF IMPLICIT RUNGE-KUTTA METHODS. LAURENT O. JAY Abstract. A major problem in obtaining an efficient implementation of fully implicit Runge- Kutta IRK methods
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More information4.6 Iterative Solvers for Linear Systems
4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often
More informationGMRESR: A family of nested GMRES methods
GMRESR: A family of nested GMRES methods Report 91-80 H.A. van der Vorst C. Vui Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wisunde en Informatica Faculty of Technical
More information1 Introduction The conjugate gradient (CG) method is an iterative method for solving Hermitian positive denite systems Ax = b, see for instance Golub
Circulant Preconditioned Toeplitz Least Squares Iterations Raymond H. Chan, James G. Nagy y and Robert J. Plemmons z November 1991, Revised April 1992 Abstract We consider the solution of least squares
More informationLinear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)
Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation
More informationVector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)
Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational
More informationQR VERSUS CHOLESKY: A PROBABILISTIC ANALYSIS
NTRNTON OURN O NUMR NYSS N MON Volume 13, Number 1, Pages 114 121 2016 nstitute for Scientific omputing and nformation QR VRSUS OSY: PROST NYSS MR R, RM YR,. XNR TRN, N VTOR OW bstract. east squares solutions
More informationDomain decomposition on different levels of the Jacobi-Davidson method
hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional
More informationA parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations
A parameter tuning technique of a weighted Jacobi-type preconditioner and its application to supernova simulations Akira IMAKURA Center for Computational Sciences, University of Tsukuba Joint work with
More informationIndex. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems
Index A-conjugate directions, 83 A-stability, 171 A( )-stability, 171 absolute error, 243 absolute stability, 149 for systems of equations, 154 absorbing boundary conditions, 228 Adams Bashforth methods,
More informationPreconditioning of elliptic problems by approximation in the transform domain
TR-CS-97-2 Preconditioning of elliptic problems by approximation in the transform domain Michael K. Ng July 997 Joint Computer Science Technical Report Series Department of Computer Science Faculty of
More informationChapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.
MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,
More informationA SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS
INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER
More informationFourth Order RK-Method
Fourth Order RK-Method The most commonly used method is Runge-Kutta fourth order method. The fourth order RK-method is y i+1 = y i + 1 6 (k 1 + 2k 2 + 2k 3 + k 4 ), Ordinary Differential Equations (ODE)
More informationU.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Last revised. Lecture 21
U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Scribe: Anupam Last revised Lecture 21 1 Laplacian systems in nearly linear time Building upon the ideas introduced in the
More informationAN ALGORITHM FOR COMPUTING FUNDAMENTAL SOLUTIONS OF DIFFERENCE OPERATORS
AN ALGORITHM FOR COMPUTING FUNDAMENTAL SOLUTIONS OF DIFFERENCE OPERATORS HENRIK BRANDÉN, AND PER SUNDQVIST Abstract We propose an FFT-based algorithm for computing fundamental solutions of difference operators
More informationLanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact
Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/ strakos Hong Kong, February
More informationBoundary Value Problems - Solving 3-D Finite-Difference problems Jacob White
Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about
More informationPROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS
PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix
More informationThe rate of convergence of the GMRES method
The rate of convergence of the GMRES method Report 90-77 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica Faculty of Technical Mathematics
More informationJos L.M. van Dorsselaer. February Abstract. Continuation methods are a well-known technique for computing several stationary
Computing eigenvalues occurring in continuation methods with the Jacobi-Davidson QZ method Jos L.M. van Dorsselaer February 1997 Abstract. Continuation methods are a well-known technique for computing
More informationLinear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space
Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................
More informationInexact inverse iteration with preconditioning
Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned
More informationLab 1: Iterative Methods for Solving Linear Systems
Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as
More information