Relaxation Newton Iteration for A Class of Algebraic Nonlinear Systems

ISSN 1749-3889 (print), 1749-3897 (online)
International Journal of Nonlinear Science, Vol. 8 (2009)

Relaxation Newton Iteration for A Class of Algebraic Nonlinear Systems

Shulin Wu, Baochang Shi, Chengming Huang
School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, P. R. China
(Received 8 November 2008, accepted 7 January 2009)

Abstract. The relaxation Newton algorithm, introduced in [Shulin Wu, Chengming Huang, Yong Liu, Newton waveform relaxation method for solving algebraic nonlinear equations, Applied Mathematics and Computation, 201 (2008), pp. 553-560], is derived by combining the classical Newton's method with the waveform relaxation iteration. It has been shown that, with a special choice of the so-called splitting function, this algorithm enjoys global convergence, low storage requirements and absolute stability, and can be implemented in parallel. In this paper, we investigate a class of nonlinear equations which are well suited to being solved by this algorithm. These nonlinear equations arise from the implicit discretization of nonlinear ordinary differential equations and of nonlinear reaction-diffusion equations. Several examples are tested to illustrate our theoretical analysis, and the results clearly show the advantages of this algorithm in terms of iteration count and CPU time.

Keywords: relaxation Newton algorithm; waveform relaxation methods; Newton's method; nonlinear algebraic equations; reaction-diffusion equations

AMS (MOS) subject classifications: 68Q60

1 Introduction

Consider the system of equations

    f(x) = 0,    (1.1)

where f : D ⊆ R^n → R^n. It is well known that solving (1.1) efficiently is an important problem in many fields, such as management science, industrial and financial research, data mining and the numerical simulation of nonlinear systems. There are numerous methods for solving (1.1); the fundamental one is the classical Newton's method together with its modifications.
The classical Newton's method is the iterative method

    f'(x_k) Δx_k = -f(x_k),    x_{k+1} = x_k + Δx_k,    k = 0, 1, ...,    (1.2)

where x_0 is an initial approximation of the solution x*. It has long been known that method (1.2) converges quadratically, but only locally. Two severe drawbacks hinder the direct application of method (1.2) in practice. One is that the algorithm converges only locally, which means the initial approximation x_0 must be chosen sufficiently close to the unknown solution x*; the other is that the Jacobian matrix f'(x)

This work was supported by the NSF of China and by the Program for NCET of the State Education Ministry of China. Corresponding author: Shulin Wu. E-mail addresses: wushulin_ylp@163.com (Shulin Wu), sbchust@126.com (Baochang Shi), chengming_huang@hotmail.com (Chengming Huang). Copyright © World Academic Press, World Academic Union.

must be nonsingular in D, and a linear system with f'(x) must be solved at every iteration. The latter brings an unacceptable burden in both storage and computation time. To overcome these drawbacks, many modifications of the classical Newton's method have been investigated and many excellent results have been obtained. For example, the linear problem Ax = b that yields Δx_k at each step can be treated by many methods, such as Jacobi, Gauss-Seidel, Conjugate Gradient [26, 27], GMRES [23], AOR iteration [30], etc. There are so many prominent results in this field that we cannot recount them in detail; for a description of the state of the art, we refer the reader to the classical books [3, 19, 21] and papers [7, 20, 25, 29, 31, 32], etc.

In [24] the authors introduced another variant of the classical Newton's method, the relaxation Newton algorithm, to solve equations (1.1). This method is called Newton waveform relaxation in [24], but here we call it relaxation Newton, since each iteration produces a set of discrete values rather than a set of continuous functions, the latter being an important characteristic of the waveform relaxation methods [10-13, 17, 28]. The key idea of the relaxation Newton algorithm is to choose a splitting function F : D × D → R^n which is minimally assumed to satisfy the consistency condition

    F(x, x) = f(x)    (1.3)

for any x ∈ D. Then, with an initial guess x_0 of the unknown solution x* at hand, we start from the previous approximation x_k and compute the next approximation x_{k+1} by solving the problem

    F(x_k, x_{k+1}) = 0,    k = 0, 1, ...,    (1.4)

with some conventional method, such as the classical Newton's method, quasi-Newton methods, the Conjugate Gradient method, etc. In [24] and in this paper, we adopt the classical Newton's method to solve (1.4), which explains the name relaxation Newton.
IJNS for contribution: editor@nonlinearscience.org.uk
Combining the notion of the waveform relaxation iteration with other solvers for (1.4) will lead to new algorithms; this is one direction for future work. The resulting algorithm, written compactly, is shown in Figure 1.1, where here and hereafter

    for k = 0, 1, 2, ... with a given initial approximation x^0 of x_{k+1}:
        for m = 0, 1, ..., M - 1:
            F_y(x_k, x^m) Δx^m = -F(x_k, x^m),    x^{m+1} = x^m + Δx^m,
        end
        x_{k+1} = x^M,
    end

    Figure 1.1: The relaxation Newton method

    F_y(x, y) = ∂F(x, z)/∂z |_{z=y}.    (1.5)

If we set x^0 = x_k and M = 1 in Figure 1.1, then by the consistency condition (1.3) the iterative scheme is equivalent to

    F_y(x_k, x_k) Δx_k = -f(x_k),    x_{k+1} = x_k + Δx_k,    k = 0, 1, ....    (1.6)

With a special choice of F, the Jacobi matrix F_y(x, x) is a diagonal or block-diagonal matrix that is invertible in R^n, and thus the iterative method (1.6) can be carried out stably and in parallel, with less storage than the classical Newton's method.
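A minimal Python sketch of the one-inner-step scheme (1.6) may help to fix ideas (this is our own illustration, not code from the paper; the function names, the toy test system and the tolerances are assumptions):

```python
import numpy as np

def relaxation_newton(f, Fy, x0, tol=1e-10, kmax=500):
    """Scheme (1.6): solve Fy(x_k) dx_k = -f(x_k), then x_{k+1} = x_k + dx_k.
    Fy(x) returns the Jacobi matrix of the splitting function F(x, y) with
    respect to y, evaluated at y = x; for a splitting like (2.10) it is diagonal."""
    x = np.asarray(x0, dtype=float)
    for k in range(kmax):
        dx = np.linalg.solve(Fy(x), -f(x))
        x = x + dx
        if np.max(np.abs(f(x))) < tol:
            return x, k + 1
    return x, kmax

# Toy system of the form f(x) = x + g(x) + h(x): g = arctan componentwise,
# h a weak linear coupling; the diagonal Fy uses 1 + g'(x_i).
f = lambda x: x + np.arctan(x) + 0.1 * (np.sum(x) - x)
Fy = lambda x: np.diag(1.0 + 1.0 / (1.0 + x**2))
x, iters = relaxation_newton(f, Fy, [5.0, -3.0, 2.0])  # exact solution is x* = 0
```

Each step costs only a diagonal solve, which is the point of the method: no full Jacobian is ever stored or factored, and the n scalar updates could run in parallel.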

In [24], an affine covariant Lipschitz condition imposed on the splitting function F(x, y) was given to guarantee the global convergence of method (1.6) for general nonlinear equations (1.1). However, the authors said nothing about which class of nonlinear equations is well suited to being solved by the relaxation Newton method; this is the topic of the present paper. We will see that such nonlinear equations play an important role in the implicit discretization of nonlinear ordinary differential equations (ODEs) and of reaction-diffusion equations with a nonlinear reaction term. We show in this paper that, with a special choice of the splitting function F(x, y), these nonlinear equations can be solved efficiently, with much less storage and CPU time and with fewer iterations.

The remainder of this paper is organized as follows. In Section 2, we recall the affine covariant convergence lemma proved in [24] and then introduce the splitting function F(x, y) and the class of nonlinear equations discussed in this paper. In Section 3, we apply the algorithm to reaction-diffusion equations with a nonlinear reaction term. In Section 4, we test a set of examples to illustrate our theoretical analysis and the efficiency of the algorithm.

2 The nonlinear equations and convergence analysis

For nonlinear equations (1.1), the global convergence of the relaxation Newton algorithm was proven in [24], provided the splitting function F(x, y) satisfies an affine covariant Lipschitz condition.

Lemma 2.1. Let F : D × D → R^n be a continuous mapping, with D ⊆ R^n open and convex, satisfying the consistency condition (1.3). Suppose F(x, y) is Fréchet differentiable with respect to the second variable y and the Jacobi matrix F_y(x, y) is invertible for any x, y ∈ D. Assume that

    ‖F_y^{-1}(x, x) (F(y, y) - F(x, y))‖ ≤ α ‖x - y‖  with α < 1,    (2.1)

and

    F_y^{-1}(x, x) (F_y(x, x) - F_y(x, y)) = 0    (2.2)

for any x, y ∈ D.
Then, for an arbitrary starting approximation x_0, the sequence {x_k} generated by (1.6) is well defined, remains in the open ball B(x*, ‖x_0 - x*‖) ⊆ D and converges to x* with F(x*, x*) = 0 (i.e., f(x*) = 0). Moreover, the error x_k - x* satisfies

    ‖x_k - x*‖ ≤ α^k ‖x_0 - x*‖.    (2.3)

Note that condition (2.2) is satisfied if the matrix F_y(x, y) is independent of the second variable y. In the present paper, we assume that the function f in (1.1) satisfies the following condition.

Condition 1. Suppose that the function f in (1.1) has the form

    f(x) = x + g(x) + h(x),    (2.4)

where g(x) = (g_1(x_1), g_2(x_2), ..., g_n(x_n))^T; moreover, the components of the functions g(x), h(x) satisfy

    g'_{i,max} < +∞,    ‖h_i(x) - h_i(y)‖ ≤ c_i ‖x - y‖,    and    c_i < 1 + g'_{i,min},    (2.5)

for any x, y ∈ D and x_i ∈ R, where here and in what follows

    g'_{i,min} = inf_{x_i ∈ R} dg_i(x_i)/dx_i,    g'_{i,max} = sup_{x_i ∈ R} dg_i(x_i)/dx_i,    i = 1, 2, ..., n.    (2.6)

Remark 1. Consider the differential system

    y'(t) + G(t, y(t)) + H(t, y(t)) = 0,  t > 0;    y(0) = y_0,    (2.7)

where G(t, y(t)) = (G_1(t, y_1(t)), ..., G_n(t, y_n(t)))^T.

Applying the backward Euler method to (2.7), we arrive at

    y_{n+1} = y_n - τ G(t_{n+1}, y_{n+1}) - τ H(t_{n+1}, y_{n+1}),    n = 0, 1, ..., N,    (2.8)

where τ is the discretization step-size. Clearly, equations (2.8) can be written in the general form

    x + g(x) + h(x) = 0.    (2.9)

In fact, many implicit numerical methods applied to the differential system (2.7) lead to nonlinear equations of the form (2.9).

Under Condition 1, we consider the following splitting function:

    F(x, y) = ( a_1 y_1 + (1 - a_1) x_1 + [b_1(x_1)(y_1 - x_1) + 1] g_1(x_1) + h_1(x),
                a_2 y_2 + (1 - a_2) x_2 + [b_2(x_2)(y_2 - x_2) + 1] g_2(x_2) + h_2(x),
                ...,
                a_n y_n + (1 - a_n) x_n + [b_n(x_n)(y_n - x_n) + 1] g_n(x_n) + h_n(x) )^T,    (2.10)

where a_i, b_i(x_i) ∈ R. It is clear that the splitting function F(x, y) satisfies the consistency condition F(x, x) = f(x) for any a_i and b_i(x_i), i = 1, 2, ..., n. Moreover,

    F_y(x, y) = diag( a_1 + b_1(x_1) g_1(x_1), ..., a_n + b_n(x_n) g_n(x_n) ),

which implies that condition (2.2) in Lemma 2.1 holds and that the Jacobi matrix F_y is nonsingular when a_i and b_i(x_i) are chosen properly.

Theorem 2.1. Assume the function f in (1.1) satisfies conditions (2.4) and (2.5). Then the relaxation Newton method (1.6) with splitting function (2.10) converges globally, and the Jacobi matrix F_y is nonsingular, provided a_i, b_i(x_i) satisfy

    a_i + b_i(x_i) g_i(x_i) ≥ 1 + g'_{i,max},    i = 1, 2, ..., n.    (2.11)

Proof. The Jacobi matrix F_y is clearly nonsingular, since a_i + b_i(x_i) g_i(x_i) ≥ 1 + g'_{i,max} ≥ 1 + g'_{i,min} > c_i > 0. Moreover, since the Jacobi matrix F_y(x, y) is independent of the second variable y, we only need to prove that condition (2.1) in Lemma 2.1 holds for any x, y ∈ D.
By routine calculation, we have

    ‖F_y^{-1}(x, x) (F(x, y) - F(y, y))‖
      = max_{1≤i≤n} |(a_i - 1 + b_i(x_i) g_i(x_i) - g_i'(ξ_i))(y_i - x_i) + h_i(x) - h_i(y)| / (a_i + b_i(x_i) g_i(x_i))
      ≤ max_{1≤i≤n} { (a_i - 1 + b_i(x_i) g_i(x_i) - g_i'(ξ_i) + c_i) / (a_i + b_i(x_i) g_i(x_i)) } ‖y - x‖
      = max_{1≤i≤n} { 1 + (c_i - 1 - g_i'(ξ_i)) / (a_i + b_i(x_i) g_i(x_i)) } ‖y - x‖,

where in the first equality we used Lagrange's mean value theorem, g_i'(ξ_i)(x_i - y_i) = g_i(x_i) - g_i(y_i) with some ξ_i between x_i and y_i; in the inequality we used condition (2.5) together with hypothesis (2.11), which ensures a_i - 1 + b_i(x_i) g_i(x_i) - g_i'(ξ_i) ≥ g'_{i,max} - g_i'(ξ_i) ≥ 0. Since c_i < 1 + g'_{i,min} and a_i + b_i(x_i) g_i(x_i) ≥ 1 + g'_{i,max} > 0, we have

    (c_i - 1 - g_i'(ξ_i)) / (a_i + b_i(x_i) g_i(x_i)) ≤ (c_i - 1 - g'_{i,min}) / (a_i + b_i(x_i) g_i(x_i)) < 0    (2.12)

for any ξ_i, x_i ∈ R; this implies

    max_{1≤i≤n} { 1 + (c_i - 1 - g_i'(ξ_i)) / (a_i + b_i(x_i) g_i(x_i)) } ≤ 1 + (c_i - 1 - g'_{i,min}) / (1 + g'_{i,max}) = (c_i + g'_{i,max} - g'_{i,min}) / (1 + g'_{i,max}) < 1.

This completes the proof.
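To make Theorem 2.1 concrete, here is a small self-contained check (our own construction, not the paper's): take g_i = arctan (so g'_{i,min} = 0, g'_{i,max} = 1) and h_i(x) = 0.1 Σ_{j≠i} x_j (so c_i = 0.2 in the infinity norm). The optimal constant diagonal a_i + b_i(x_i) g_i(x_i) ≡ 1 + g'_{i,max} = 2 then gives the contraction factor α = (c_i + g'_{i,max} - g'_{i,min}) / (1 + g'_{i,max}) = 0.6, and the iterates of (1.6) obey the error bound (2.3) with x* = 0:

```python
import numpy as np

# f(x) = x + g(x) + h(x) with the data described above; the solution is x* = 0.
f = lambda x: x + np.arctan(x) + 0.1 * (np.sum(x) - x)

alpha = (0.2 + 1.0 - 0.0) / (1.0 + 1.0)    # contraction factor alpha = 0.6
x = np.array([5.0, -3.0, 2.0])             # arbitrary, deliberately far start
err0 = np.max(np.abs(x))
bound_ok = True
for k in range(1, 40):
    x = x - f(x) / 2.0                     # scheme (1.6) with F_y = 2 I
    bound_ok = bound_ok and (np.max(np.abs(x)) <= alpha**k * err0 + 1e-12)
```

The bound holds from every starting point, illustrating the global (if only linear) convergence that distinguishes the method from classical Newton.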

From (2.3) and Theorem 2.1 we have

    ‖x_k - x*‖ ≤ ( max_{1≤i≤n} { 1 + (c_i - 1 - g'_{i,min}) / (a_i + b_i(x_i) g_i(x_i)) } )^k ‖x_0 - x*‖.    (2.13)

Note that in a practical implementation of the relaxation Newton algorithm (1.6), the Jacobi matrix F_y(x_k, x_k) is a diagonal matrix with elements a_i + b_i(x_i) g_i(x_i), i = 1, 2, ..., n. Thus it is unnecessary to determine the parameters a_i and b_i(x_i) separately; one only needs to determine the quantities a_i + b_i(x_i) g_i(x_i). Furthermore, according to (2.13), the optimal diagonal elements of the matrix F_y are 1 + g'_{i,max}, i = 1, 2, ..., n. However, in many cases we cannot compute each quantity g'_{i,max} accurately, or even a reliable upper bound; for such nonlinear equations, deeper research into the convergence conditions of the relaxation Newton algorithm is needed, and this is left for future work. For the present, we simply set a_i + b_i(x_i) g_i(x_i) = φ_i^1 + φ_i^2 g_i'(x_i) with some φ_i^1, φ_i^2, i = 1, 2, ..., n. Fortunately, for many such nonlinear equations this simple strategy still leads to convergence of the algorithm, as shown in Section 4.

3 Application to nonlinear reaction-diffusion equations

In this section, we apply the relaxation Newton algorithm to the implicit discretization of reaction-diffusion equations with a nonlinear reaction term:

    u_t - ν u_xx + R(u, x, t) = 0,    (x, t) ∈ (0, L) × (0, T),
    u(x, 0) = ψ_0(x),    x ∈ [0, L],
    u(0, t) = ψ_1(t),    u(L, t) = ψ_2(t),    t ∈ [0, T],

where ν > 0. We apply the method of lines, discretizing the diffusion term u_xx by the central difference at the grid points x_j = j Δx, j = 1, ..., M - 1, Δx = L/M. We then obtain the system of M - 1 differential equations

    u_j'(t) - (ν/Δx²) [ u_{j+1}(t) - 2 u_j(t) + u_{j-1}(t) ] + R(u_j(t), x_j, t) = 0,    t ∈ (0, T),    (3.1)
    u_j(0) = ψ_0(x_j),    j = 1, 2, ..., M - 1,

where u_j(t) ≈ u(x_j, t), u_0(t) = ψ_1(t), u_M(t) = ψ_2(t).
Define

    U(t) = (u_1(t), u_2(t), ..., u_{M-1}(t))^T,
    Ũ(t) = ( (ν/Δx²) ψ_1(t), 0, ..., 0, (ν/Δx²) ψ_2(t) )^T,
    G(Δx, U) = ( (2ν/Δx²) u_1 + R(u_1, x_1, t), ..., (2ν/Δx²) u_{M-1} + R(u_{M-1}, x_{M-1}, t) )^T,
    H = -(ν/Δx²) tridiag(1, 0, 1),
    U_0 = (ψ_0(x_1), ..., ψ_0(x_{M-1}))^T,    (3.2)

where tridiag(1, 0, 1) denotes the (M - 1) × (M - 1) matrix with zeros on the diagonal and ones on the sub- and superdiagonals. Then we have

    U'(t) + G(Δx, U(t)) + H U(t) - Ũ(t) = 0,    U(0) = U_0.    (3.3)

Applying some implicit method, for example the backward Euler method, to discretize the system (3.3) in time, we obtain

    U_{n+1} + Δt G(Δx, U_{n+1}) + Δt H U_{n+1} - Δt Ũ(t_{n+1}) - U_n = 0.    (3.4)

Let

    g(U_{n+1}) = Δt G(Δx, U_{n+1}),    h(U_{n+1}) = Δt H U_{n+1} - Δt Ũ(t_{n+1}) - U_n.    (3.5)

Then equations (3.4) can be rewritten as

    U_{n+1} + g(U_{n+1}) + h(U_{n+1}) = 0.    (3.6)

Clearly, the nonlinear equations (3.6) take the form (2.4). Therefore, we may apply the relaxation Newton method with the splitting function given in (2.10) to solve (3.6) from t_n to t_{n+1}. The following result guarantees the convergence of the relaxation Newton method for equations (3.6).

Theorem 3.1. Let γ(x_i, t_{n+1}) = inf_{u ∈ R} ∂R(u, x_i, t_{n+1})/∂u. Then for the nonlinear equations (3.4) the relaxation Newton method with the splitting function defined in (2.10) is convergent from t_n to t_{n+1}, provided Δt γ(x_i, t_{n+1}) + 1 > 0 holds for i = 1, 2, ..., M - 1.

Proof. With the definitions (3.5), it is easy to get

    g'_{i,min} = inf_{u ∈ R} dg_i(u)/du = 2νΔt/Δx² + Δt inf_{u ∈ R} ∂R(u, x_i, t_{n+1})/∂u = 2νΔt/Δx² + Δt γ(x_i, t_{n+1})

and ‖h(x) - h(y)‖ ≤ (2νΔt/Δx²) ‖x - y‖. Thus, by Theorem 2.1, the relaxation Newton method applied to the nonlinear equations (3.4) is convergent if Δt γ(x_i, t_{n+1}) + 1 > 0 holds for i = 1, 2, ..., M - 1.

Similarly to the analysis at the end of Section 2, the optimal Jacobi matrix F_y of the relaxation Newton algorithm is the diagonal matrix with elements 1 + Δt( 2ν/Δx² + sup_{u ∈ R} ∂R(u, x_i, t_{n+1})/∂u ), i = 1, 2, ..., M - 1. In a practical implementation, if sup_{u ∈ R} ∂R(u, x_i, t_{n+1})/∂u cannot be obtained accurately, or no reliable upper bound is available, we may roughly replace it by φ_i^1 + φ_i^2 ∂R(u_{i,n+1}, x_i, t_{n+1})/∂u with some φ_i^1, φ_i^2, where u_{i,n+1} is the i-th component of the vector U_{n+1}. In the numerical experiments presented in the next section, we choose φ_i^1 = φ_i^2 = 1, i = 1, 2, ..., M - 1.

4 Numerical results

In this section, we test several problems to illustrate the efficiency of the relaxation Newton algorithm in terms of iteration count and CPU time.
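The procedure of Section 3 can be sketched for one backward Euler step (a toy setup of our own, not the paper's code: reaction R(u) = sin u, so that γ = -1 and sup_u ∂R/∂u = 1, homogeneous boundary values, and illustrative grid parameters):

```python
import numpy as np

nu, L, M, dt = 0.5, 1.0, 32, 0.01          # dt*gamma + 1 = 0.99 > 0 (Theorem 3.1)
dx = L / M
xg = dx * np.arange(1, M)                  # interior grid points x_1 .. x_{M-1}
Un = np.sin(np.pi * xg)                    # previous time level (initial data)

def residual(U):
    """Left-hand side of (3.4): U + dt*(-nu/dx^2 * (U_{j+1} - 2U_j + U_{j-1})
    + sin(U)) - Un, with homogeneous Dirichlet boundary values."""
    lap = -2.0 * U
    lap[1:] += U[:-1]
    lap[:-1] += U[1:]
    return U + dt * (-nu / dx**2 * lap + np.sin(U)) - Un

d = 1.0 + dt * (2.0 * nu / dx**2 + 1.0)    # optimal diagonal: 1 + dt(2nu/dx^2 + sup R')
U = Un.copy()                              # initial guess: previous time level
iters = 0
while np.max(np.abs(residual(U))) > 1e-10 and iters < 2000:
    U = U - residual(U) / d                # diagonal relaxation Newton step (1.6)
    iters += 1
```

Note that the step-size restriction Δt γ + 1 > 0 is checked only through the parameter choice; nothing in the loop needs the tridiagonal matrix itself, only the constant diagonal d.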
Relaxation Newton algorithm for nonlinear ODEs u R Consider the following system consisting of M differential equations y (t) + G(t, y(t)) + H(t, y(t)) = 0, t 0, y(0) = 0, t = 0, (4.) with and H j (t, y(t)) = cos t + G j (t, y(t)) = e M+ j arctan M i=,i =j y i (t) (e M+ M j y j (t)) IJNS for contribution: editor@nonlinearscience.org.uk

j = 1, 2, ..., M. Applying the implicit Euler method to (4.1), we get

    y_{n+1} = y_n - τ G(t_{n+1}, y_{n+1}) - τ H(t_{n+1}, y_{n+1}),    n = 0, 1, ..., N.    (4.2)

Therefore, from t_n to t_{n+1} we need to solve the nonlinear equations

    x + g(x) + h(x) = 0,    (4.3)

with

    g_j(x_j) = τ e^{(M+1-j)/M} arctan( e^{(M+1-j)/M} x_j )

and

    h_j(x) = τ cos t_{n+1} + τ Σ_{i=1, i≠j}^{M} x_i - y_{n,j},    j = 1, ..., M.

Routine calculation yields

    dg_j/dx_j = τ e^{2(M+1-j)/M} / ( 1 + e^{2(M+1-j)/M} x_j² ),

from which g'_{j,min} = 0 and g'_{j,max} = τ e^{2(M+1-j)/M}. With Lagrange's mean value theorem, it is easy to get

    ‖h_i(x) - h_i(y)‖ ≤ τ (M - 1) ‖x - y‖,    i = 1, ..., M,    (4.4)

and thus, by Theorem 2.1, τ < 1/(M - 1) is sufficient to guarantee the global convergence of the relaxation Newton algorithm.

Experiment A. We first take a moderate dimension M. To solve the nonlinear equations (4.3), the fixed-point iteration is also a natural choice, provided the functions g, h satisfy the contraction Lipschitz condition

    ‖g_i(x_i) - g_i(y_i)‖ + ‖h_i(x) - h_i(y)‖ ≤ η_i ‖x - y‖,    η_i < 1,    i = 1, ..., M.    (4.5)

This contraction condition indicates that a much more restrictive step-size τ is necessary to guarantee the convergence of the fixed-point method at each time point t_n. Therefore, to solve system (4.1) on the interval I = [0, 1], far fewer time steps suffice when the resulting nonlinear equations are solved with the relaxation Newton method than with the fixed-point iteration. By the analysis at the end of Section 2, the Jacobi matrix F_y(x_k, x_k) in (1.6) should be the diagonal matrix with elements 1 + τ e^{2(M+1-j)/M}, j = 1, 2, ..., M. In this experiment, we solve system (4.1) numerically on the interval [0, 10] by the backward Euler method, and the resulting nonlinear equations (4.2) are solved by the relaxation Newton method and the fixed-point method, respectively.
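The contrast examined in Experiment A can be reproduced qualitatively with a much simpler toy system of the same shape f(x) = x + g(x) + h(x) (our own numbers, not the paper's): when the diagonal part g has a large slope, the fixed-point iteration x ← -g(x) - h(x) diverges, while the relaxation Newton step with diagonal 1 + g'_max still contracts:

```python
import numpy as np

# g_i(x_i) = 5 x_i (so g'_max = 5), h_i(x) = 0.1 * sum_{j != i} x_j - 1;
# the exact solution is x_i = 1/6.2 for every component.
f = lambda x: x + 5.0 * x + 0.1 * (np.sum(x) - x) - 1.0

x_fp = np.zeros(3)
for _ in range(10):                        # fixed point: x <- -g(x) - h(x)
    x_fp = -5.0 * x_fp - (0.1 * (np.sum(x_fp) - x_fp) - 1.0)
fp_diverged = np.max(np.abs(f(x_fp))) > 1.0

x_rn = np.zeros(3)
for _ in range(60):                        # relaxation Newton with F_y = (1 + 5) I
    x_rn = x_rn - f(x_rn) / 6.0
```

The fixed-point residual grows by roughly a factor of five per sweep, whereas the relaxation Newton error shrinks by about a factor of thirty, mirroring the behavior reported for Experiment A.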
For these two iterative methods, the initial approximation of y_{n+1} is y_n, and the residual tolerance is 10^{-4}; i.e., for k from 0 to k_max, if the stopping criterion

    RES_n = ‖y^k_{n+1} + τ G(t_{n+1}, y^k_{n+1}) + τ H(t_{n+1}, y^k_{n+1}) - y_n‖ ≤ 10^{-4}

holds for some k, we set y_{n+1} = y^k_{n+1}, where k_max is the maximal iteration number; the same k_max is used in all experiments. We first take a relatively large step size τ; the required iteration number and the final residual at every time point t_n for the two methods are plotted in Figures 4.1 and 4.2. We see from the left panel of Figure 4.1 that, to reach the tolerance 10^{-4}, the relaxation Newton method needs only very few iterations at every step, whereas the fixed-point method fails to converge at every time point t_n. The left panel of Figure 4.2 shows that, for the much smaller step size τ = … × 10^{-4}, the residual of the relaxation Newton algorithm is already far below the tolerance after only one iteration, while the right panel shows that the fixed-point method is still not convergent.

Experiment B. Next we compare the computational efficiency of the relaxation Newton method with the classical Newton's method coupled with inner linear solvers, the stabilized bi-conjugate gradient

[Figure 4.1: residual and iteration number at every time point for the relaxation Newton method (left) and the fixed-point method (right), at the larger step size.]

[Figure 4.2: the same quantities at the smaller step size τ = … × 10^{-4}.]

method (BICGSTAB) [26, 27] and the generalized minimal residual method (GMRES) [23]; for convenience, we denote the methods obtained by combining the classical Newton's method with the inner solvers BICGSTAB and GMRES by Newton-BICGSTAB and Newton-GMRES, respectively. In this and the next three experiments, for the classical Newton's method we report as the iteration number at every time point the total number of inner linear iterations; moreover, for Newton's method coupled with a linear solver and for the relaxation Newton method, the tolerance TOL_outer for the outer nonlinear iteration is 10^{-4}, with a fixed tolerance TOL_inner for the inner linear iterations. In this experiment, we consider system (4.1) with M = 100. Similarly to the analysis in Experiment A, τ < 1/(M - 1) ≈ 0.01 is sufficient to guarantee the convergence of the relaxation Newton method. We choose such a τ and solve system (4.1) numerically on the interval [0, 10]. The resulting nonlinear equations (4.2) are solved by the relaxation Newton method and by the Newton-BICGSTAB and Newton-GMRES methods. The residual and iteration number at every time point t_n for these three methods are plotted in Figure 4.3, the left panel for the relaxation Newton method and the right panel for Newton-BICGSTAB and Newton-GMRES. The left panel of Figure 4.3 shows that the residual of the relaxation Newton method at every time point is below 10^{-6} after only one iteration, while Newton-BICGSTAB and Newton-GMRES need several inner linear iterations to reach a comparable residual.
Moreover, the computational time spent at every time point by these three methods differs significantly, as shown in Figure 4.4. In fact, the average computational time per time point of the Newton-BICGSTAB and Newton-GMRES methods is about 68 times that of the relaxation Newton method.

[Figure 4.3: residual and iteration number at every time point; left panel, the relaxation Newton method; right panel, the Newton-BICGSTAB and Newton-GMRES methods.]

[Figure 4.4: CPU time spent at every time point by the Newton-BICGSTAB/GMRES methods and the relaxation Newton method.]

4.2 Relaxation Newton algorithm for nonlinear reaction-diffusion equations

Next, we test some reaction-diffusion equations with nonlinear reaction terms to compare the efficiency of the relaxation Newton algorithm with that of the classical Newton's method; the comparison focuses on two aspects: the iteration number and the CPU time at every time point. For the classical Newton's method, we consider the iterative methods GMRES, SYMMLQ [18], MINRES [5, 18], BICG, BICGSTAB, CGS and LSQR as inner linear solvers. For the nonlinear system (3.4), each Newton iteration actually requires the solution of a tridiagonal linear system, so a direct solver, the cyclic reduction algorithm proposed by Hockney [8], is also a good choice. We denote the methods obtained by combining the classical Newton's method with the inner solvers GMRES, SYMMLQ, MINRES, BICG, BICGSTAB, CGS, LSQR and Hockney's direct method by Newton-GMRES, Newton-SYMMLQ, Newton-MINRES, Newton-BICG, Newton-BICGSTAB, Newton-CGS, Newton-LSQR and Newton-DIRECT, respectively. Moreover, to show explicitly the advantages of the relaxation Newton method in terms of CPU time and iteration number, we define the ratios of the average CPU time of the Newton-type methods to that of the relaxation Newton method as

    γ_GMRES = T_GMRES / T_RN,    γ_SYMMLQ = T_SYMMLQ / T_RN,    γ_MINRES = T_MINRES / T_RN,    γ_BICG = T_BICG / T_RN,
    γ_BICGSTAB = T_BICGSTAB / T_RN,    γ_CGS = T_CGS / T_RN,    γ_LSQR = T_LSQR / T_RN,    γ_DIRECT = T_DIRECT / T_RN,

and the corresponding ratios of the average iteration numbers as

    κ_GMRES = K_GMRES / K_RN,    κ_SYMMLQ = K_SYMMLQ / K_RN,    κ_MINRES = K_MINRES / K_RN,    κ_BICG = K_BICG / K_RN,
    κ_BICGSTAB = K_BICGSTAB / K_RN,    κ_CGS = K_CGS / K_RN,    κ_LSQR = K_LSQR / K_RN,    κ_DIRECT = K_DIRECT / K_RN,

respectively, where T_RN and K_RN stand for the average CPU time and average iteration number spent at every time point by the relaxation Newton method (with a similar interpretation for the other symbols).

Experiment C. Consider the following reaction-diffusion problem [6, 22]:

    u_t = ε u_xx + u(1 - u)(u - c(x)),    (x, t) ∈ (0, 180) × (0, 10),
    u(x, 0) = sin(πx/L),    x ∈ [0, 180],
    u(0, t) = 0.2,    u(L, t) = 0.4,    t ∈ [0, 10],    (4.6)

where

    c(x) = 0.7 for x < 90,    c(x) = 0.2 for x ≥ 90.

Let ε = 4. We choose mesh parameters Δx = 0.0… and Δt = 0.0… to solve (4.6) numerically. In Figure 4.6 we plot the CPU time and the iteration number of the Newton-type methods and the relaxation Newton method at every time point, in the left and right panels respectively. In this and the following experiment, the correspondence between line types and methods is specified in Figure 4.5.

[Figure 4.5: correspondence of line types to the methods: relaxation Newton, Newton-GMRES, Newton-SYMMLQ, Newton-MINRES, Newton-BICG, Newton-BICGSTAB, Newton-CGS, Newton-LSQR, Newton-DIRECT.]

[Figure 4.6: CPU time (left panel) and iteration number (right panel) of the methods at every time point.]

We see in Figure 4.6 that the advantages of the relaxation Newton method in terms of CPU time and iteration number are obvious. In particular, we list in Tables 4.1 and 4.2 the ratios of the average CPU time and of the average iteration number of the Newton-type methods to the relaxation Newton method, respectively.

[Table 4.1: ratios of average CPU time: γ_GMRES, γ_SYMMLQ, γ_MINRES, γ_BICG, γ_BICGSTAB, γ_CGS, γ_LSQR, γ_DIRECT.]

[Table 4.2: ratios of average iteration number: κ_GMRES, κ_SYMMLQ, κ_MINRES, κ_BICG, κ_BICGSTAB, κ_CGS, κ_LSQR, κ_DIRECT.]

From the results listed in Table 4.1, one may see that the relaxation Newton method outperforms the Newton-type methods even though, compared with some of them, such as Newton-SYMMLQ, Newton-MINRES, Newton-BICGSTAB and Newton-DIRECT, more iterations are required (see Table 4.2).

Experiment D. Our last experiment tests the well-known Fisher reaction-diffusion equation [1, 2, 4, 9, 14]:

    u_t = D u_xx + k u(c - u),    (x, t) ∈ (0, 30) × (0, 10),
    u(x, 0) = u_0(x),    x ∈ [0, 30],    (4.7)
    u(0, t) = φ_1(t),    u(L, t) = φ_2(t),    t ∈ [0, 10],

where D, k, c are positive parameters. We complete (4.7) with the initial and boundary conditions

    u_0(x) = 0.8 sin(πx/10),    φ_1(t) = …,    φ_2(t) = 0.8 sin(…).    (4.8)

We consider the parameters

    D = 0.0…,    k = 0.78,    c = 0.9 if u ≤ 1,    c = 4.… if u > 1.    (4.9)

We choose mesh parameters Δx = 0.08 and Δt = 0.0… to solve (4.7)-(4.9) numerically.
In Figure 4.7 we plot the profiles of the solution u(x, t) computed by the finite difference method coupled with the relaxation Newton iteration, from which one can see that (4.7)-(4.9) is really a challenging problem.

[Figure 4.7: profiles of the solution of problem (4.7)-(4.9) computed by the relaxation Newton method (axes: space x, time t).]

We compare the computational efficiency of the relaxation Newton method with the Newton-type methods by showing the CPU time and the iteration number at every time point in the left and right panels of

Figure 4.8, respectively. In Tables 4.3 and 4.4 we list the ratios of the average CPU time and of the average iteration number of the Newton-type methods to the relaxation Newton method, respectively. Figure 4.8 and these two tables clearly show that the relaxation Newton algorithm has significant advantages in terms of CPU time and iteration number.

[Figure 4.8: CPU time (left panel) and iteration number (right panel) of the methods at every time point.]

[Table 4.3: ratios of average CPU time: γ_GMRES, γ_SYMMLQ, γ_MINRES, γ_BICG, γ_BICGSTAB, γ_CGS, γ_LSQR, γ_DIRECT.]

[Table 4.4: ratios of average iteration number: κ_GMRES, κ_SYMMLQ, κ_MINRES, κ_BICG, κ_BICGSTAB, κ_CGS, κ_LSQR, κ_DIRECT.]

Acknowledgements

The authors are grateful to the anonymous referee for the careful reading of a preliminary version of the manuscript and for the valuable suggestions and comments, which really improved the quality of this paper.

References

[1] M. Ablowitz, A. Zepetella: Explicit solution of Fisher's equation for a special wave speed. Bull. Math. Biol. 41 (1979)
[2] N. F. Britton: Reaction-Diffusion Equations and Their Applications to Biology. Academic Press, New York (1986)
[3] P. Deuflhard: Newton Methods for Nonlinear Problems: Affine Invariant and Adaptive Algorithms. Springer, Berlin (2004)
[4] P. C. Fife: Mathematical Aspects of Reacting and Diffusing Systems. Lecture Notes in Biomathematics, vol. 28. Springer, Berlin (1979)
[5] A. Greenbaum, M. Rozložník, Z. Strakoš: Numerical behavior of the modified Gram-Schmidt GMRES implementation. BIT 37 (1997)

[6] J. K. Hale, J. D. Salazar González: Attractors of some reaction diffusion problems. SIAM J. Math. Anal. 30 (1999)
[7] S. Hakkaev, K. Kirchev: On the well-posedness and stability of peakons for a generalized Camassa-Holm equation. International Journal of Nonlinear Science (2006)
[8] R. W. Hockney: A fast direct solution of Poisson's equation using Fourier analysis. Journal of the ACM 12: 95-113 (1965)
[9] A. N. Kolmogorov, I. G. Petrovskii, N. S. Piskunov: A study of the diffusion equation with increase in the quantity of matter and its application to a biological problem. Bull. Moscow State Univ. (1937)
[10] E. Lelarasmee, A. E. Ruehli, A. L. Sangiovanni-Vincentelli: The waveform relaxation method for time-domain analysis of large scale integrated circuits. IEEE Trans. Computer-Aided Design 1: 131-145 (1982)
[11] U. Miekkala, O. Nevanlinna: Convergence of dynamic iteration methods for initial value problems. SIAM J. Sci. Statist. Comput. 8: 459-482 (1987)
[12] U. Miekkala, O. Nevanlinna: Sets of convergence and stability regions. BIT 27 (1987)
[13] U. Miekkala: Dynamic iteration methods applied to linear DAE systems. J. Comput. Appl. Math. 25 (1989)
[14] J. D. Murray: Mathematical Biology. Springer, New York (1993)
[15] O. Nevanlinna: Remarks on Picard-Lindelöf iteration, Part I. BIT 29 (1989)
[16] O. Nevanlinna: Remarks on Picard-Lindelöf iteration, Part II. BIT 29 (1989)
[17] O. Nevanlinna: Linear acceleration of Picard-Lindelöf iteration. Numer. Math. 57: 147-156 (1990)
[18] C. C. Paige, M. A. Saunders: Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 12: 617-629 (1975)
[19] H.-O. Peitgen (Ed.): Newton's Method and Complex Dynamical Systems. Springer, Berlin (1989)
[20] B. T. Polyak: Newton's method and its use in optimization. European Journal of Operational Research 181 (2007)
[21] W. C. Rheinboldt: Methods for Solving Systems of Nonlinear Equations.
SIAM, Philadelphia (1998)
[22] C. Rocha: Generic properties of equilibria of reaction-diffusion equations. Proc. Roy. Soc. Edinburgh Sect. A (1985)
[23] Y. Saad, M. H. Schultz: GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 7: 856-869 (1986)
[24] S. L. Wu, C. M. Huang, Y. Liu: Newton waveform relaxation method for solving algebraic nonlinear equations. Appl. Math. Comput. 201: 553-560 (2008)
[25] A. R. Soheili, S. A. Ahmadian, J. Naghipoor: A family of predictor-corrector methods based on weight combination of quadratures for solving nonlinear equations. International Journal of Nonlinear Science 6 (2008)
[26] H. A. van der Vorst: Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 13: 631-644 (1992)
[27] H. A. van der Vorst: Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge (2003)

[28] S. Vandewalle: Parallel Multigrid Waveform Relaxation for Parabolic Problems. B. G. Teubner, Stuttgart (1993)
[29] Y. Wang, L. Wang, W. Zhang: Application of the Adomian decomposition method to fully nonlinear sine-Gordon equation. International Journal of Nonlinear Science (2006)
[30] L. Wang: Comparison results for AOR iterative method with a new preconditioner. International Journal of Nonlinear Science (2006)
[31] T. J. Ypma: Historical development of the Newton-Raphson method. SIAM Rev. 37: 531-551 (1995)
[32] H. Zhu, S. Wen: A class of generalized quasi-Newton algorithms with superlinear convergence. International Journal of Nonlinear Science (2006)


More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

2. The generalized Benjamin- Bona-Mahony (BBM) equation with variable coefficients [30]

2. The generalized Benjamin- Bona-Mahony (BBM) equation with variable coefficients [30] ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.12(2011) No.1,pp.95-99 The Modified Sine-Cosine Method and Its Applications to the Generalized K(n,n) and BBM Equations

More information

DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL DIFFUSION EQUATION

DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL DIFFUSION EQUATION Journal of Fractional Calculus and Applications, Vol. 6(1) Jan. 2015, pp. 83-90. ISSN: 2090-5858. http://fcag-egypt.com/journals/jfca/ DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL

More information

On the Local Convergence of Regula-falsi-type Method for Generalized Equations

On the Local Convergence of Regula-falsi-type Method for Generalized Equations Journal of Advances in Applied Mathematics, Vol., No. 3, July 017 https://dx.doi.org/10.606/jaam.017.300 115 On the Local Convergence of Regula-falsi-type Method for Generalized Equations Farhana Alam

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

MULTIGRID-CONJUGATE GRADIENT TYPE METHODS FOR REACTION DIFFUSION SYSTEMS *

MULTIGRID-CONJUGATE GRADIENT TYPE METHODS FOR REACTION DIFFUSION SYSTEMS * International Journal of Bifurcation and Chaos, Vol 14, No 1 (24) 3587 365 c World Scientific Publishing Company MULTIGRID-CONJUGATE GRADIENT TYPE METHODS FOR REACTION DIFFUSION SYSTEMS * S-L CHANG Center

More information

CONSEQUENCES OF TALENTI S INEQUALITY BECOMING EQUALITY. 1. Introduction

CONSEQUENCES OF TALENTI S INEQUALITY BECOMING EQUALITY. 1. Introduction Electronic Journal of ifferential Equations, Vol. 2011 (2011), No. 165, pp. 1 8. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu CONSEQUENCES OF

More information

Research Article Residual Iterative Method for Solving Absolute Value Equations

Research Article Residual Iterative Method for Solving Absolute Value Equations Abstract and Applied Analysis Volume 2012, Article ID 406232, 9 pages doi:10.1155/2012/406232 Research Article Residual Iterative Method for Solving Absolute Value Equations Muhammad Aslam Noor, 1 Javed

More information