
Study the Numerical Methods for Solving System of Equation

Ravi Kumar 1, Mr. Raj Kumar Duhan 2
1 M.Tech. (M.E.), 4th Semester, UIET, MDU Rohtak
2 Assistant Professor, Dept. of Mechanical Engg., UIET, MDU Rohtak

ABSTRACT
This paper concentrates on numerical methods for solving ordinary differential equations. First, we discuss the solution of systems of equations using the Jacobi and Gauss-Seidel methods. We also discuss methods for solving ordinary differential equations, namely the Euler method and the fourth-order Runge-Kutta method. The given ordinary differential equation is analysed with the Euler and Runge-Kutta methods to find approximate solutions under the given initial conditions, and the stability of each method is then considered. We also focus on numerical methods for systems. After investigating the numerical methods, we give the advantages and disadvantages of the Euler method and the fourth-order Runge-Kutta method. The approximate solutions with different step sizes, along with the analytical solutions, are computed in the C language, and the approximate solutions of the methods are compared with the analytical solutions. The Runge-Kutta method is more accurate than the explicit Euler method.

Keywords: Ordinary Differential Equations, Numerical Solutions, Euler's Method, Runge-Kutta Method, Jacobi Method, Gauss-Seidel Method.

INTRODUCTION
The first type of first-order differential equation that we will be looking at is the exact differential equation. Before we get into the full details behind solving exact differential equations, it is probably best to work an example that will help to show us just what an exact differential equation is. It will also show some of the behind-the-scenes details that we usually do not bother with in the solution process. The vast majority of the work in the following example will not be done in any of the remaining examples, and the work that we will put into the remaining examples will not be shown in this example.
The whole point behind this example is to show you just what an exact differential equation is, how we use this fact to arrive at a solution, and why the process works as it does. The majority of the actual solution details will be shown in a later example. So what did we learn from the last example? Let us look at things a little more generally. Suppose that we have a differential equation of the form

M(x, y) + N(x, y) dy/dx = 0.     (1)

Note that it is important that it be in this form: there must be an "= 0" on one side, and the sign separating the two terms must be a "+". Now, if there is a function somewhere out there in the world, ψ(x, y), so that

ψ_x = M(x, y)  and  ψ_y = N(x, y),     (2)

then we call the differential equation exact. In these cases we can write the differential equation as d/dx ψ(x, y(x)) = 0.
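The exactness test just described can be made concrete with a small worked example (our own illustration, not one of the paper's examples):

```latex
2xy + x^{2}\,\frac{dy}{dx} = 0, \qquad
M(x,y) = 2xy, \quad N(x,y) = x^{2}. \\[4pt]
\text{The potential } \psi(x,y) = x^{2}y \text{ satisfies }
\psi_x = 2xy = M, \qquad \psi_y = x^{2} = N, \\[4pt]
\text{so by the chain rule }\;
\frac{d}{dx}\,\psi\bigl(x, y(x)\bigr)
  = \psi_x + \psi_y\,\frac{dy}{dx} = 0
\quad\Longrightarrow\quad x^{2}\,y = c .
```

The implicit relation x²y = c is the general solution, which is exactly the mechanism the passage above describes.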

LITERATURE REVIEW
The Runge-Kutta formula is among the oldest and best understood schemes in numerical analysis. Owing to the evolution of a vast and comprehensive body of knowledge, Runge-Kutta still continues to be a source of active research. The most suitable way of solving most initial value problems for a system of ordinary differential equations is provided by Runge-Kutta methods, sometimes referred to as RK methods. This is based on two main reasons: first, Runge-Kutta methods are convergent, in that the approximate solution approaches the exact solution; second, they are accurate, owing to the closeness between the approximate solution and the exact solution. The literature surveyed includes Anidu, Arekete, Adedayo and Adekoya (2015); Butcher's survey of general linear methods, Acta Numerica 15, Cambridge University Press (2006); Jay's specialized Runge-Kutta methods for index-2 differential-algebraic equations, Mathematics of Computation 75 (2006); Hindmarsh, Brown, Grant, Lee, Serban, Shumaker and Woodward's SUNDIALS suite of nonlinear and differential/algebraic equation solvers, ACM Transactions on Mathematical Software 31 (2005); Atkinson and Han's Elementary Numerical Analysis, 3rd ed., John Wiley, New York (2004); Brunner's Collocation Methods for Volterra Integral and Related Functional Equations, Cambridge Univ. Press (2004); Hairer, Lubich and Wanner's geometric numerical integration illustrated by the Störmer-Verlet method, Acta Numerica 12, Cambridge University Press (2003); Boyce and DiPrima's Elementary Differential Equations, 7th edition, John Wiley & Sons (2003); Kelley and Peterson's Difference Equations, 2nd ed., Academic Press, Burlington, Massachusetts (2001); and R. Kress's
Numerical Analysis.

OBJECTIVE OF THE WORK
From the literature review it is found that iterative methods such as Jacobi and Gauss-Seidel are very important for solving systems of equations, and, as we know, the Gauss-Seidel method converges faster than the Jacobi iterative method. On the other hand, the Runge-Kutta fourth-order method for solving ordinary differential equations gives better accuracy. These numerical methods for solving systems of equations and ordinary differential equations can be implemented in C, MATLAB, and also C++ programs. It is very difficult to find the answer when there is a mistake in some value in the steps of the solution.

NUMERICAL METHODS DISCUSSION

Jacobi Iterative Method
Perhaps the simplest iterative method for solving Ax = b is the Jacobi method. Note that the simplicity of this method is both good and bad: good, because it is relatively easy to understand and thus is a good first taste of iterative methods; bad, because it is not typically used in practice (although its potential usefulness has been reconsidered with the advent of parallel computing). Still, it is a good starting point for learning about more useful, but more complicated, iterative methods. Given a current approximation x^(k) = (x1^(k), x2^(k), x3^(k), ..., xn^(k)) for x, the strategy of the Jacobi method is to use the first equation and the current values of x2^(k), x3^(k), ..., xn^(k) to find a new value x1^(k+1), and similarly to find a new value xi^(k+1) using the i-th equation and the old values of the other variables. That is, given current values x^(k) = (x1^(k), x2^(k), ..., xn^(k)), find new values by solving for

x^(k+1) = (x1^(k+1), x2^(k+1), ..., xn^(k+1)).

Gauss-Seidel Method
Let us take the Jacobi method one step further. Where the true solution is x = (x1, x2, ..., xn), if x1^(k+1) is a better approximation to the true value of x1 than x1^(k) is, then it would make sense that, once we have found the new value x1^(k+1), we use it (rather than the old value x1^(k)) in finding x2^(k+1), ..., xn^(k+1). So x1^(k+1) is found as in the Jacobi method, but in finding x2^(k+1), instead of using the old value x1^(k) together with the old values x3^(k), ..., xn^(k), we now use the new value x1^(k+1) with the old values x3^(k), ..., xn^(k); and similarly for finding x3^(k+1), ..., xn^(k+1). Let us apply the Gauss-Seidel method to the system from Example 1. At each step, given the current values x1^(k), x2^(k), x3^(k), we solve for x1^(k+1), x2^(k+1), x3^(k+1). To compare our results with those of the Jacobi method, we again choose x^(0) = (0, 0, 0). We then find x^(1) = (x1^(1), x2^(1), x3^(1)) by solving the system. Let us be clear about how we solve it. We first solve for x1^(1) in the first equation and find that x1^(1) = 3/4 = 0.750. We then solve for x2^(1) in the second equation, using the new value x1^(1) = 0.750, and find that x2^(1) = [9 + 2(0.750)] / 6 = 1.750. Finally, we solve for x3^(1) in the third equation, using the new values x1^(1) = 0.750 and x2^(1) = 1.750, and find that x3^(1) = [-6 + 0.750 - 1.750] / 7 = -1.000. The result of this first iteration of the Gauss-Seidel method is x^(1) = (x1^(1), x2^(1), x3^(1)) = (0.750, 1.750, -1.000). We iterate this process to generate a sequence of increasingly better approximations x^(0), x^(1), x^(2), ...

EULER METHOD
Since we are after a set of points which lie along the true solution, as stated above, we must now derive a way of generating more solution points in addition to the solitary initial condition point. How could we get more points? Look back at the original initial value problem. So far we have only used the initial condition, which gave us our single point. Maybe we should consider the possibility of utilizing the other part of the initial value problem, the differential equation itself:

y' = f(x, y).

Remember that one interpretation of the quantity y' appearing in this expression is as the slope of the tangent line to the function y. But the function y is exactly what we are seeking as a solution to the problem. This means that we not only know a point which lies on our elusive solution, but we also know a formula for its slope:

slope of the solution = f(x, y).

All we have to do now is think of a way of using this slope to get those "other points" that we have been after. Look at the right-hand side of the last formula: it looks like you can get the slope by substituting values of x and y into the function f. These values should, of course, be the coordinates of a point lying on the solution's graph; they cannot just be the coordinates of any point anywhere in the plane. Do we know of any such points, points lying on the solution curve? Of course we do: the initial condition point is exactly such a point. We could use it to find the slope of the solution at the initial condition.
We would get:

slope of the solution at (x0, y0) = f(x0, y0).

Remembering that this gives us the slope of the function's tangent line at the initial point, we can put this together with the initial point itself to build the tangent line at the initial point (Fig. 1: Euler method). Once again, let us remind ourselves of our goal of finding more points which lie on the true solution.

Runge-Kutta 4th-Order Method
By using a strategy similar to the trapezoidal rule, which yields a better approximation to an IVP in Heun's method, consider now Simpson's rule, where not only the end points but also the interior points of the interval are sampled. The fourth-order Runge-Kutta method is similar to Simpson's rule: a sample of the slope is made at the mid-point of the interval as well as at the end points, and a weighted average is taken, placing more weight on the slope at the mid-point. It should be noted that Runge-Kutta refers to an entire class of IVP solvers, which includes Euler's method and Heun's method; we are looking at one particularly effective, yet simple, case. Given the IVP

y'(t) = f(t, y(t)),  y(t0) = y0,

if we want to estimate y(t1), we set h = t1 - t0. Remember that f(t, y) gives the slope at the point (t, y). Thus, we can find the slope at (t0, y0):

K0 = f(t0, y0).

Next, we use this slope to estimate y(t0 + h/2) ≈ y0 + ½hK0 and sample the slope at this intermediate point:

K1 = f(t0 + ½h, y0 + ½hK0).

Using this new slope, we estimate y(t0 + h/2) ≈ y0 + ½hK1 and sample the slope at this new point:

K2 = f(t0 + ½h, y0 + ½hK1).

Finally, we use this last approximation of the slope to estimate y(t1) = y(t0 + h) ≈ y0 + hK2 and sample the slope at this point:

K3 = f(t0 + h, y0 + hK2).

All four of these slopes, K0, K1, K2, and K3, approximate the slope of the solution on the interval [t0, t1], and therefore we take the weighted average

K = (K0 + 2K1 + 2K2 + K3) / 6.

Therefore, we approximate y(t1) by

y(t1) ≈ y0 + hK.

Advantages and Disadvantages of the Euler Method
Advantages:
1. Euler's method is simple and direct.
2. It can be used for nonlinear initial value problems.
Disadvantages:
1. It is less accurate and numerically unstable.
2. The approximation error is proportional to the step size h; hence a good approximation is obtained only with a very small value of h. This requires a large number of time steps.

Advantages and Disadvantages of the Simple Runge-Kutta Method
Advantages:
1. They are easy to implement.
2. They are stable.
Disadvantages:
1. They require relatively large computing time.
2. Error estimation is not easy to do.
3. The simple Runge-Kutta methods do not work for stiff differential equations (linear differential equations with widely spread eigenvalues).

CONCLUSION
In this paper, we have discussed numerical methods for solving systems of equations and ordinary differential equations. Some necessary conditions and definitions are given to examine the numerical methods. Considering these definitions, the Jacobi iteration method, the Gauss-Seidel method, the Euler method and the Runge-Kutta fourth-order method are developed and their basic features discussed. The Jacobi and Gauss-Seidel methods are used for systems of equations in three or four variables; Euler's method and the Runge-Kutta fourth-order method are used for ordinary differential equations. We also see that Euler's method requires excessively small step sizes, so a large number of computations is needed. In contrast, the Runge-Kutta method gives better results: it converges faster to the analytical solution and needs fewer iterations to reach an accurate solution.

REFERENCES
[1]. Adesola O. Anidu, Samson A. Arekete, Ayomide O. Adedayo and Adekunle O. Adekoya, Department of Computer Science (2015).
[2]. J.C. Butcher. General linear methods, Acta Numerica 15, Cambridge University Press (2006).
[3]. L. Jay. Specialized Runge-Kutta methods for index 2 differential-algebraic equations, Mathematics of Computation 75 (2006).
[4]. A. Hindmarsh, P. Brown, K. Grant, S. Lee, R. Serban, D. Shumaker, and C. Woodward.
SUNDIALS: Suite of Nonlinear and Differential/Algebraic Equation Solvers, ACM Transactions on Mathematical Software 31 (2005).
[5]. K. Atkinson and W. Han. Elementary Numerical Analysis, 3rd ed., John Wiley, New York (2004).
[6]. H. Brunner. Collocation Methods for Volterra Integral and Related Functional Equations, Cambridge Univ. Press (2004).
[7]. E. Hairer, C. Lubich, and G. Wanner. Geometric numerical integration illustrated by the Störmer-Verlet method, Acta Numerica 12, Cambridge University Press (2003).
[8]. W. Boyce and R. DiPrima. Elementary Differential Equations, 7th edition, John Wiley & Sons (2003).
[9]. W. Kelley and A. Peterson. Difference Equations, 2nd ed., Academic Press, Burlington, Massachusetts (2001).
[10]. A. Quarteroni, R. Sacco, and F. Saleri. Numerical Mathematics, Springer-Verlag, New York (2000).
[11]. E. Platen. An introduction to numerical methods for stochastic differential equations, Acta Numerica 8, Cambridge University Press (1999).
[12]. R. Kress. Numerical Analysis, Springer-Verlag, New York (1998).

[13]. L. Petzold, L. Jay, and J. Yen. Numerical solution of highly oscillatory ordinary differential equations, Acta Numerica 6, Cambridge University Press (1997).
[14]. L. Jay. Symplectic partitioned Runge-Kutta methods for constrained Hamiltonian systems, SIAM Journal on Numerical Analysis 33 (1996).
[15]. L. Jay. Convergence of Runge-Kutta methods for differential-algebraic systems of index 3, Applied Numerical Mathematics 17 (1995).
[16]. P.N. Brown, A.C. Hindmarsh, and L.R. Petzold. Using Krylov methods in the solution of large-scale differential-algebraic systems, SIAM J. Scientific Computing 15 (1994).
[17]. J. Sanz-Serna. Symplectic integrators for Hamiltonian problems: an overview, Acta Numerica 1, Cambridge University Press (1992).
[18]. J. Cash. On the numerical integration of nonlinear two-point boundary value problems using iterated deferred corrections. II. The development and analysis of highly stable deferred correction formulae, SIAM J. Numer. Anal. 25 (1988).
[19]. P. Lötstedt and L. Petzold. Numerical solution of nonlinear differential equations with algebraic constraints. I. Convergence results for backward differentiation formulas, Mathematics of Computation (1986).
[20]. C.W. Gear, B. Leimkuhler, and G.K. Gupta. Automatic integration of Euler-Lagrange equations with constraints, in Proceedings of the International Conference on Computational and Applied Mathematics (Leuven), Vol. 12/13 (1985).
[21]. R. Aiken (editor). Stiff Computation, Oxford University Press, Oxford (1985).
[22]. I. Gladwell and D. Sayers. Computational Techniques for Ordinary Differential Equations, Academic Press, New York (1980).
[23]. J. Dormand and P. Prince. A family of embedded Runge-Kutta formulae, J. Comp. Appl. Math. 6 (1980).