Investigation on the Most Efficient Ways to Solve the Implicit Equations for Gauss Methods in the Constant Stepsize Setting


Applied Mathematical Sciences, Vol. 12, 2018, no. 2, 93-103
HIKARI Ltd, www.m-hikari.com
https://doi.org/10.12988/ams.2018.711340

Investigation on the Most Efficient Ways to Solve the Implicit Equations for Gauss Methods in the Constant Stepsize Setting

Mohd Hafizul Muhammad
Mathematics Department, Faculty of Science and Mathematics
Universiti Pendidikan Sultan Idris, Sultan Azlan Shah Campus, Proton City
Tanjong Malim 35900 Perak, Malaysia

Annie Gorgey
Mathematics Department, Faculty of Science and Mathematics
Universiti Pendidikan Sultan Idris, Sultan Azlan Shah Campus, Proton City
Tanjong Malim 35900 Perak, Malaysia

Copyright © 2018 Mohd Hafizul Muhammad and Annie Gorgey. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This article focuses on the implementation of the implicit Runge-Kutta (RK) Gauss methods. Evaluating the Jacobian at every iteration, as the full Newton (FN) scheme does, is laborious and expensive, and the accumulated iteration errors make the implementation less efficient. The use of a very small stepsize can also accumulate round-off errors and destroy the solutions, so compensated summation (CS) is required to minimize these round-off errors. This research investigates the best implementation technique for Gauss methods using simplified Newton (SN) and FN together with compensated summation. Four implementation techniques are compared: the combinations of FN and SN, each with and without CS. The numerical results on the Brusselator and Oregonator problems show that implementations of G2 and G3 using SN and FN are more efficient with compensated summation, especially when the stepsize used is very small. The article also shows that the most efficient way to implement G2 and G3 is SN for stiff problems and FN for nonstiff problems.

Keywords: Gauss methods, simplified Newton, implicit equations, compensated summation

1 Introduction

Runge-Kutta (RK) methods form one large family of numerical methods. They fall into two classes, explicit and implicit, which differ in the number of stages, stability, order and the structure of the coefficient matrix A. If A is strictly lower triangular, as in the explicit methods, the implementation is cheap; if A is a full matrix, as in the Gauss methods, the implementation becomes costly. Implicit RK methods are nevertheless preferred over explicit ones because they are A-stable and therefore suitable for stiff problems [1]. The central issue in implementing an implicit RK method is solving the nonlinear stage equations, which is what makes the implementation expensive; several Newton-based approaches to lowering this cost have been studied since 1970 [4]. This article focuses on the implementation strategy of the 2-stage Gauss (G2) and 3-stage Gauss (G3) methods for stiff problems.

For an initial value problem $y'(x) = f(x, y)$, $y(x_0) = y_0$, the s-stage Runge-Kutta method is defined by

$$Y^{[n]} = e \otimes y_{n-1} + h(A \otimes I_N)\, F\big(x_{n-1} + ch,\, Y^{[n]}\big),$$
$$y_n = y_{n-1} + h(b^T \otimes I_N)\, F\big(x_{n-1} + ch,\, Y^{[n]}\big), \qquad (1)$$

where $\otimes$ denotes the Kronecker product, $e = (1, \ldots, 1)^T$ and $I_N$ is the identity matrix of order N. The update is $y_n$, while $Y^{[n]}$ is the vector of internal stages with

$$Y^{[n]} = \begin{pmatrix} Y_1^{[n]} \\ \vdots \\ Y_s^{[n]} \end{pmatrix}, \qquad F = \begin{pmatrix} f\big(x_{n-1} + c_1 h,\, Y_1^{[n]}\big) \\ \vdots \\ f\big(x_{n-1} + c_s h,\, Y_s^{[n]}\big) \end{pmatrix}. \qquad (2)$$
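To make the Kronecker structure of (1)-(2) concrete, here is a minimal Python sketch, not from the paper: it builds the standard Butcher tableau of the 2-stage Gauss method and evaluates the stage defect $Y - e \otimes y_{n-1} - h(A \otimes I_N)F$, which is named $G(Y)$ in (3) below. The function name `stage_residual` and the helper `f` are illustrative.

```python
import numpy as np

# Butcher tableau of the 2-stage Gauss method (order 4); standard coefficients.
s3 = np.sqrt(3.0)
A = np.array([[1/4,        1/4 - s3/6],
              [1/4 + s3/6, 1/4       ]])
b = np.array([1/2, 1/2])
c = np.array([1/2 - s3/6, 1/2 + s3/6])

def stage_residual(Y, y_prev, x_prev, h, f, A, c):
    """Defect G(Y) = Y - e (x) y_{n-1} - h (A (x) I_N) F(Y) of the stage system (1).

    Y is the stacked stage vector of length s*N; f(x, y) is the ODE right-hand side.
    """
    s, N = len(c), len(y_prev)
    Ys = Y.reshape(s, N)                              # unstack the internal stages
    F = np.array([f(x_prev + c[i] * h, Ys[i]) for i in range(s)])
    return Y - np.kron(np.ones(s), y_prev) - h * (np.kron(A, np.eye(N)) @ F.ravel())
```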

The matrix A determines the internal stages of the RK method. Let

$$G(Y) = Y - e \otimes y_{n-1} - h(A \otimes I_N)\, F(Y). \qquad (3)$$

Solving the implicit equations (2) by the Newton method yields

$$\Delta Y = -\,[D_G(Y)]^{-1} G(Y), \qquad (4)$$

where $\Delta Y = Y^{[k+1]} - Y^{[k]}$ and $D_G(Y)$ is the iteration matrix built from the block diagonal of the Jacobian,

$$D_G(Y) = I_{sN} - h(A \otimes J). \qquad (5)$$

The Jacobian matrix $J \approx f_y(x_{n-1} + c_i h,\, Y)$ is given by

$$J = \begin{pmatrix} J\big(x_{n-1} + c_1 h,\, Y_1\big) & & \\ & \ddots & \\ & & J\big(x_{n-1} + c_s h,\, Y_s\big) \end{pmatrix}. \qquad (6)$$

Equation (4) represents a linear system of dimension sN (s stages and N differential equations). Implicit RK methods are expensive because this nonlinear algebraic system of order sN must be solved at every step. The computational cost involves the following factors:

- evaluation of F and G,
- evaluation of the Jacobian J,
- evaluation of D_G,
- LU factorization of the iteration matrix D_G, and
- back substitution to obtain the Newton increment vector ΔY.
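These cost factors can be illustrated by continuing the earlier sketch: one Newton iteration assembled with scipy's LU routines, under the simplifying assumption that the Jacobian is evaluated at a single point (as in the simplified Newton scheme below). The name `newton_increment` is illustrative; it reuses the hypothetical `stage_residual` helper above.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_increment(Y, y_prev, x_prev, h, f, jac, A, c):
    """One Newton iteration for the stage system: solve D_G * dY = -G(Y).

    jac(x, y) returns the N x N Jacobian f_y, evaluated here at the step start.
    """
    s, N = len(c), len(y_prev)
    J = jac(x_prev, y_prev)                       # Jacobian evaluation
    D_G = np.eye(s * N) - h * np.kron(A, J)       # iteration matrix (5)
    lu, piv = lu_factor(D_G)                      # LU factorization
    dY = lu_solve((lu, piv),                      # back substitution gives dY
                  -stage_residual(Y, y_prev, x_prev, h, f, A, c))
    return Y + dY, dY
```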

2 Implementation Technique

The computational cost of solving the nonlinear stage equations of an implicit Runge-Kutta method is high [3] because operations such as LU factorization and back substitution are involved. Before 1967, fixed-point iteration was the common method for solving the nonlinear system of equations. This approach is no longer popular, however, because it destroys the stability of the implicit Runge-Kutta method [5]; fixed-point iteration remains popular only for nonstiff problems.

In 1970, the advantages of the Newton-Raphson iteration were revealed by Liniger and Willoughby [9]. Solving the nonlinear system by the Newton-Raphson approach requires evaluating the Jacobian matrix, and forming and factorizing the associated iteration matrix is very expensive. If the Jacobian is evaluated at every iteration, the approach is known as full Newton (FN). Evaluating the Jacobian of a large-scale system of ordinary differential equations carries a high computational cost, and the technique also amplifies the iteration error [10]. In contrast, if the Jacobian is evaluated only once at the start of the iteration, the approach is known as simplified Newton (SN). Hairer and Wanner implemented the simplified Newton iteration for the Radau IIA method of order 5, and the approach has been popular for solving stiff problems ever since [5]. This has also led many researchers to seek the implementation strategy that minimizes the computational cost of solving the nonlinear system of equations for implicit methods [3, 7]. Although SN is generally more efficient than FN, FN still gives efficient results for certain types of problems. This paper therefore gives a detailed account of the implementation techniques using simplified Newton and full Newton for the solution of stiff problems.

Algorithm 1 and Algorithm 2 below give the pseudocode for the full Newton and simplified Newton methods, respectively, as implemented in our code. The tolerance τ = 1 × 10^-10 is chosen according to the problem to be solved and also depends on the desired overall accuracy. The starting value is the initial estimate given for the problem. Algorithm 1 and Algorithm 2 are efficient enough to solve certain problems: numerical experiments on many types of problems lead to the conclusion that Algorithm 1 is efficient for nonstiff problems, whereas for stiff problems Algorithm 2 is advantageous over Algorithm 1. To solve stiff problems, however, a very small stepsize is required, especially in the constant stepsize setting. Decreasing the stepsize increases the round-off errors and thus destroys the solutions. To control these round-off errors, compensated summation is introduced in the next section.

Algorithm 1 FULL NEWTON ITERATION
  Set Y = y, τ = 1 × 10^-10, and let N_max be any positive integer.
  for i = 1 to N_max do
    Evaluate F
    Evaluate G
    Evaluate J
    Evaluate D_G
    Solve for ΔY
    Y ← Y + ΔY
    if ||ΔY|| < τ · max(1, ||Y||) then break
  end for
  y_new ← updated y_n
  x_new ← x + h
  return (y_new, x_new)

Algorithm 2 SIMPLIFIED NEWTON ITERATION
  Set Y = y, τ = 1 × 10^-10, and let N_max be any positive integer.
  Evaluate J
  Evaluate D_G
  for i = 1 to N_max do
    Evaluate F
    Evaluate G
    Solve for ΔY
    Y ← Y + ΔY
    if ||ΔY|| < τ · max(1, ||Y||) then break
  end for
  y_new ← updated y_n
  x_new ← x + h
  return (y_new, x_new)
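A minimal Python sketch of one integration step following Algorithms 1 and 2 is given below; it reuses the illustrative `stage_residual` helper from Section 1, and the flag `full_newton` switches between the two schemes. It is an illustration of the iteration structure under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def irk_step(y, x, h, f, jac, A, b, c, full_newton=False, tol=1e-10, n_max=50):
    """One step of an implicit RK method by full or simplified Newton iteration."""
    s, N = len(c), len(y)
    Y = np.kron(np.ones(s), y)                 # trivial starting values for the stages
    if not full_newton:                        # SN: build and factor D_G once per step
        lu, piv = lu_factor(np.eye(s * N) - h * np.kron(A, jac(x, y)))
    for _ in range(n_max):
        if full_newton:                        # FN: rebuild and refactor every iteration
            Ys = Y.reshape(s, N)
            Jblk = np.zeros((s * N, s * N))    # block diagonal of stage Jacobians (6)
            for i in range(s):
                Jblk[i*N:(i+1)*N, i*N:(i+1)*N] = jac(x + c[i] * h, Ys[i])
            lu, piv = lu_factor(np.eye(s * N) - h * np.kron(A, np.eye(N)) @ Jblk)
        dY = lu_solve((lu, piv), -stage_residual(Y, y, x, h, f, A, c))
        Y += dY
        if np.linalg.norm(dY) < tol * max(1.0, np.linalg.norm(Y)):
            break
    F = np.array([f(x + c[i] * h, Y.reshape(s, N)[i]) for i in range(s)])
    return y + h * (b @ F), x + h              # update y_n as in (1)
```

The only structural difference between the two schemes is where the factorization of D_G happens: once per step for SN, once per iteration for FN, and that is precisely the cost difference the experiments below measure.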

3 Enhancement by compensated summation

The solution produced by computer arithmetic is not exactly the same as the exact arithmetic because of the rounding that occurs in the computation [2]. Finite precision arithmetic always rounds values to a certain number of digits, and the accumulated round-off errors may produce a solution far from the exact one. In computational practice, the numerical error is the difference between the numerical solution and the exact solution. An efficiency graph reveals whether the implementation of an IRK method is affected by significant round-off errors: they cause a pronounced change in the slope of the graph, typically from a negative slope to a zero or positive slope. Not every computation is significantly affected by round-off error; it is the use of an extremely small stepsize that causes significant round-off error and corrupts the computation. A small stepsize is needed to obtain highly accurate numerical solutions for some difficult problems, especially in the constant stepsize setting, yet the implementation becomes less efficient when the stepsize used is very small. In 1951 the Gill-Moller algorithm was introduced to reduce the growth of numerical error contributed by round-off [6]. The term compensated summation was used later by N. Higham [8], and Butcher gives a detailed explanation of the mechanism of this approach for the implicit Euler method [2]. Compensated summation is therefore introduced into the algorithms here to examine its effect on the solutions.

The modification to Algorithm 1 and Algorithm 2 introduces the quantities r, r_x and Z, where $Z = Y - e \otimes y_{n-1}$. It is suggested in [5] as well as in [2] that the influence of round-off errors can be reduced by working with the smaller quantities Z. The aim of compensated summation is to capture the round-off error of each individual step: r stores the round-off errors of the y-values and r_x those of the x-values. With $Z = Y - e \otimes y_{n-1}$, the stage equation (1) becomes

$$Z = h(A \otimes I_N)\, F\big(Z + e \otimes y_{n-1}\big),$$

and the update y_n becomes

$$y_n = y_{n-1} + (b^T A^{-1} \otimes I_N)\, Z.$$

The solution of the implicit equations by the Newton method with compensated summation now yields

$$D_G(Z)\, \Delta Z = -G(Z), \qquad (7)$$

where $D_G(Z) = (I_s \otimes I_N) - h(A \otimes J)$, $\Delta Z = Z^{[k+1]} - Z^{[k]}$, and $G(Z) = Z - h(A \otimes I_N)\, F\big(Z + e \otimes y_{n-1}\big)$.
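For a fixed method the update row $d^T = b^T A^{-1}$ can be computed once and for all. A small numerical check for G2 (standard tableau; the code is illustrative, not from the paper):

```python
import numpy as np

# Update row d^T = b^T A^{-1} for the Z-formulation: y_n = y_{n-1} + (d^T (x) I_N) Z.
s3 = np.sqrt(3.0)
A = np.array([[1/4,        1/4 - s3/6],
              [1/4 + s3/6, 1/4       ]])
b = np.array([1/2, 1/2])
d = np.linalg.solve(A.T, b)    # solves A^T d = b, i.e. d^T = b^T A^{-1}
print(d)                       # [-sqrt(3), sqrt(3)] for the 2-stage Gauss method
```

Working with Z keeps the quantities added to $y_{n-1}$ small, which is exactly what compensated summation exploits.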

Several tests of the Jacobian were carried out, and for simplified Newton the choice $J \approx f_y(x_0, y_0)$ was proven to give the best approximations. Since the Jacobian is evaluated only once at the beginning of the iteration, the increments in Z add little to the computational cost. Algorithm 3 gives the pseudocode for the simplified Newton iteration combined with compensated summation.

Algorithm 3 SIMPLIFIED NEWTON WITH COMPENSATED SUMMATION
  Set Z = 0 (where Z = Y − e ⊗ y), r = 0, r_x = 0, τ = 1 × 10^-10, and let N_max be any positive integer.
  Evaluate the Jacobian J
  Evaluate D_G
  for i = 1 to N_max do
    Evaluate F
    Evaluate G
    Solve for ΔZ
    Z ← Z + ΔZ
    if ||ΔZ|| < τ · max(1, ||Z||) then break
  end for
  y_new ← updated y_n (compensated via r)
  x_new ← x + h (compensated via r_x)
  return (y_new, x_new)
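The quantities r and r_x implement a Gill-Moller/Kahan-style compensated update of y and x. Here is a minimal sketch of that update; the helper name and the increment $dy = (b^T A^{-1} \otimes I_N) Z$ are illustrative assumptions, not the authors' code.

```python
def compensated_update(y, r, dy):
    """Compensated summation: y_new = y + dy, with the rounding error of the
    addition captured in the carry r (works componentwise for numpy arrays
    and likewise for the scalar x with increment h)."""
    a = r + dy                  # add the increment to the carried-over error
    y_new = y + a               # rounded sum
    r_new = a - (y_new - y)     # recover the part of a lost to rounding
    return y_new, r_new
```

Each accepted step then calls `y, r = compensated_update(y, r, dy)` and `x, r_x = compensated_update(x, r_x, h)`, so the rounding error of one step is carried into the next instead of being discarded.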

4 Numerical experiments

The numerical experiments were carried out for the 2-stage Gauss (G2) and 3-stage Gauss (G3) methods using full Newton (FN) and simplified Newton (SN), with and without compensated summation (CS), as listed in Table 1. All plots are efficiency graphs measuring computational time (CPU): the efficiency of SN and FN with and without CS is obtained from a loglog plot of error versus CPU time. The stepsize is halved after every run, and the round-off errors are minimized by the compensated summation technique.

  FNWCS   full Newton without compensated summation
  FNCS    full Newton with compensated summation
  SNWCS   simplified Newton without compensated summation
  SNCS    simplified Newton with compensated summation

Table 1: Implementation techniques

Although only the constant stepsize setting is considered, it is important to observe the behaviour of G2 and G3 using SN and FN with and without CS: the results indicate the best implementation technique to use in the future with variable stepsizes. Compensated summation may not be required for variable stepsizes, since the stepsize is adjusted automatically; however, it is not yet known how badly the solutions can be affected, and we wish to extend the results to the variable stepsize setting in the future. The numerical results are given for the following problems, taken from [5].

Oregonator problem. The Oregonator model was proposed for the Belousov-Zhabotinskii (BZ) reaction. The BZ reaction belongs to a class of reactions that serve as a classical example of non-equilibrium thermodynamics, resulting in the establishment of a nonlinear chemical oscillator. OREGO is a nonlinear stiff system of dimension 3 [5]:

$$y_1' = 77.27\big(y_2 + y_1(1 - 8.375 \times 10^{-6}\, y_1 - y_2)\big),$$
$$y_2' = \frac{1}{77.27}\big(y_3 - (1 + y_1)\, y_2\big),$$
$$y_3' = 0.161\,(y_1 - y_3), \qquad x \in [0, 3600], \qquad (8)$$

with $y_1(0) = 1$, $y_2(0) = 2$ and $y_3(0) = 3$. The problem is integrated to $x_n = 6$ using initial stepsize $h = 0.03$ for G2 and G3.

Brusselator problem. The Brusselator describes an oscillating autocatalytic chemical reaction and was modelled by Ilya Prigogine and his collaborators at the Free University of Brussels [5]. The problem is defined by the two ODEs

$$y_1' = 1 + y_1^2 y_2 - 4y_1,$$
$$y_2' = 3y_1 - y_1^2 y_2,$$

with $y_1(0) = 1.5$ and $y_2(0) = 3$. The problem is integrated to $x_n = 5$ using $h = 0.05$ for G2 and G3.
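For reference, the two test problems written as right-hand sides with their analytic Jacobians, in the same illustrative sketch style as the earlier listings:

```python
import numpy as np

def orego_f(x, y):
    """Oregonator right-hand side (8)."""
    return np.array([77.27 * (y[1] + y[0] * (1 - 8.375e-6 * y[0] - y[1])),
                     (y[2] - (1 + y[0]) * y[1]) / 77.27,
                     0.161 * (y[0] - y[2])])

def orego_jac(x, y):
    """Analytic Jacobian of the Oregonator."""
    return np.array([[77.27 * (1 - 2 * 8.375e-6 * y[0] - y[1]), 77.27 * (1 - y[0]), 0.0],
                     [-y[1] / 77.27, -(1 + y[0]) / 77.27, 1 / 77.27],
                     [0.161, 0.0, -0.161]])

def bruss_f(x, y):
    """Brusselator right-hand side."""
    return np.array([1 + y[0]**2 * y[1] - 4 * y[0],
                     3 * y[0] - y[0]**2 * y[1]])

def bruss_jac(x, y):
    """Analytic Jacobian of the Brusselator."""
    return np.array([[2 * y[0] * y[1] - 4, y[0]**2],
                     [3 - 2 * y[0] * y[1], -y[0]**2]])
```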

  Problems/Methods   G2     G3
  Oregonator         SNCS   SNCS
  Brusselator        SNCS   FNCS

Table 2: Efficiency of G2 and G3 for the Oregonator and Brusselator problems

Figure 1: The efficiency of G2 (panel a) and G3 (panel b) applied to the Oregonator problem; loglog plots of error versus CPU time for FNWCS, FNCS, SNWCS and SNCS.

Figure 2: The efficiency of G2 (panel a) and G3 (panel b) applied to the Brusselator problem; loglog plots of error versus CPU time for FNWCS, FNCS, SNWCS and SNCS.
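A sketch of how efficiency data of this kind can be generated, reusing the illustrative `irk_step` from Section 2 and assuming a reference solution `y_ref` at the endpoint (e.g. from a run with a much smaller stepsize); this is not the authors' experimental code.

```python
import time
import numpy as np

def efficiency_data(f, jac, y0, x_end, h0, n_halvings, y_ref, A, b, c, full_newton):
    """Integrate with constant stepsize, halving h each run; record error vs CPU time."""
    data, h = [], h0
    for _ in range(n_halvings):
        y, x = y0.copy(), 0.0
        t0 = time.process_time()
        while x < x_end - 1e-14:
            y, x = irk_step(y, x, h, f, jac, A, b, c, full_newton=full_newton)
        cpu = time.process_time() - t0
        data.append((cpu, np.linalg.norm(y - y_ref)))
        h /= 2
    return data   # loglog-plot error against CPU time, as in Figures 1-2
```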

5 Conclusion

In conclusion, based on the numerical results and on other problems not reported in this article, SNCS should be implemented when solving stiff problems, especially when a very small stepsize is required, while FNCS is sufficient to give efficient results for nonstiff and mildly stiff problems. In both cases compensated summation is a must and should be implemented in all the methods so that round-off errors do not destroy the solutions.

Acknowledgments. The authors would like to thank the Ministry of Higher Education in Malaysia for providing the funding. This research was supported by the research grant FRGS, Vote No: 2015-0158-104-02.

References

[1] J.C. Butcher, On the implementation of implicit Runge-Kutta methods, BIT Numerical Mathematics, 16 (1976), no. 3, 237-240. https://doi.org/10.1007/bf01932265

[2] J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, New Zealand, 2016. https://doi.org/10.1002/9781119121534

[3] G.J. Cooper and J.C. Butcher, An iteration scheme for implicit Runge-Kutta methods, IMA Journal of Numerical Analysis, 3 (1983), no. 2, 127-140. https://doi.org/10.1093/imanum/3.2.127

[4] X. Dexuan, An improved approximate Newton method for implicit Runge-Kutta formulas, Journal of Computational and Applied Mathematics, 235 (2011), no. 17, 5249-5258. https://doi.org/10.1016/j.cam.2011.05.027

[5] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, Springer Series in Computational Mathematics, Vol. 14, Springer-Verlag, Berlin, 1991. https://doi.org/10.1007/978-3-662-09947-6

[6] S. Gill, A process for the step-by-step integration of differential equations in an automatic digital computing machine, Mathematical Proceedings of the Cambridge Philosophical Society, 47 (1951), no. 1, 96-108. https://doi.org/10.1017/s0305004100026414

[7] S. González-Pinto, J.I. Montijano and L. Rández, Iterative schemes for three-stage implicit Runge-Kutta methods, Applied Numerical Mathematics, 17 (1995), no. 4, 363-382. https://doi.org/10.1016/0168-9274(95)00070-b

[8] N.J. Higham, The accuracy of floating point summation, SIAM Journal on Scientific Computing, 14 (1993), no. 4, 783-799. https://doi.org/10.1137/0914050

[9] W. Liniger and R.A. Willoughby, Efficient integration methods for stiff systems of ordinary differential equations, SIAM Journal on Numerical Analysis, 7 (1970), no. 1, 47-66. https://doi.org/10.1137/0707002

[10] L.F. Shampine, Implementation of implicit formulas for the solution of ODEs, SIAM Journal on Scientific and Statistical Computing, 1 (1980), no. 1, 103-118. https://doi.org/10.1137/0901005

Received: December 1, 2017; Published: January 17, 2018