Modified Picard Iteration Applied to Boundary Value Problems and Volterra Integral Equations.


Modified Picard Iteration Applied to Boundary Value Problems and Volterra Integral Equations. Mathematical Association of America, MD-DC-VA Section, November 7 & 8, 2014, Bowie State University, Bowie, Maryland. Hamid Semiyari, November 7, 2014.

TABLE OF CONTENTS
- Introduction
- Algorithms and theorems for approximating solutions of two-point boundary value problems
- An algorithm for approximating solutions on long intervals
- The Volterra equation using auxiliary variables according to Parker-Sochacki

INTRODUCTION In this talk we present a new algorithm for approximating solutions of two-point boundary value problems and provide a theorem giving conditions under which it is guaranteed to succeed. We show how to make the algorithm computationally efficient and demonstrate that the full method works both when convergence is guaranteed and more broadly. In the first section the original idea and its application are presented. We also show how to modify the same basic idea and procedure in order to apply it to problems with boundary conditions of mixed type. In the second section we introduce a new algorithm for cases in which the original algorithm fails to converge on a long interval: we split the long interval into subintervals and show that the new algorithm converges to the solution. Finally, we re-pose a Volterra equation using auxiliary variables, following Parker-Sochacki, in such a way that its solution can be approximated by the modified Picard iteration scheme.

The algorithm developed here for the solution of certain boundary value problems is based on a kind of Picard iteration, as just described. It is well known that in many instances Picard iteration performs poorly in actual practice, even for relatively simple differential equations. A great improvement can sometimes be had by exploiting ideas originated by G. E. Parker and J. Sochacki [7]. The Parker-Sochacki method (PSM) is an algorithm for solving systems of ordinary differential equations (ODEs). The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. It converts the initial value problem into a system whose right-hand sides are polynomials; consequently the right-hand sides are continuous and satisfy a Lipschitz condition on an open region containing the initial point t = 0. Moreover, the polynomial form guarantees that each iterated integral is easy to evaluate. We begin with the following example, the initial value problem

y' = \sin y, \qquad y(0) = \pi/2. \quad (1)

To approximate its solution by Picard iteration we start from the integral equation equivalent to (1), namely

y(t) = \pi/2 + \int_0^t \sin y(s) \, ds,

and define the Picard iterates

y_0(t) \equiv \pi/2, \qquad y_{k+1}(t) = \pi/2 + \int_0^t \sin y_k(s) \, ds \quad \text{for } k \ge 0.

The first few iterates are

y_0(t) = \pi/2
y_1(t) = \pi/2 + t
y_2(t) = \pi/2 - \cos(\pi/2 + t)
y_3(t) = \pi/2 + \int_0^t \cos(\sin s) \, ds,

the last of which cannot be expressed in closed form. We define variables u and v by u = \sin y and v = \cos y, so that y' = u, u' = (\cos y) y' = uv, and v' = (-\sin y) y' = -u^2; hence the original problem is embedded as the first component in the three-dimensional problem

y' = u,      y(0) = \pi/2
u' = uv,     u(0) = 1
v' = -u^2,   v(0) = 0

Although the dimension has increased, the right-hand sides are now all polynomial functions, so the quadratures can be done easily.

In particular, the Picard iterates are now

(y_{k+1}, u_{k+1}, v_{k+1})(t) = (\pi/2, 1, 0) + \int_0^t \big( u_k(s), \; u_k(s) v_k(s), \; -u_k^2(s) \big) \, ds,

with (y_0, u_0, v_0)(t) \equiv (\pi/2, 1, 0). The first component of the first few iterates is

y_0(t) = \pi/2
y_1(t) = \pi/2 + t
y_2(t) = \pi/2 + t
y_3(t) = \pi/2 + t - \tfrac{1}{6} t^3
y_4(t) = \pi/2 + t - \tfrac{1}{6} t^3 + \tfrac{1}{24} t^5
y_5(t) = \pi/2 + t - \tfrac{1}{6} t^3 + \tfrac{1}{24} t^5
y_6(t) = \pi/2 + t - \tfrac{1}{6} t^3 + \tfrac{1}{24} t^5 - \tfrac{61}{5040} t^7

The exact solution of problem (1) is y(t) = 2 \arctan(e^t); the first terms of its Maclaurin series are

\pi/2 + t - \tfrac{1}{6} t^3 + \tfrac{1}{24} t^5 - \tfrac{61}{5040} t^7 + O(t^9).

As we can see, the system of ODEs proposed by Parker and Sochacki generates the Maclaurin series solution of problem (1).
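The polynomial system makes Picard iteration purely mechanical: every step is an integration of polynomials. The following sketch (our own illustration, not code from the talk; the helper names poly_mul and poly_int and the truncation degree are our choices) carries truncated Maclaurin coefficients through the iteration y' = u, u' = uv, v' = -u^2 and recovers the series of 2 arctan(e^t).

```python
from math import pi

def poly_mul(p, q, n):
    """Product of coefficient lists (p[i] is the coefficient of t^i), truncated to degree < n."""
    r = [0.0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < n:
                r[i + j] += a * b
    return r

def poly_int(p, c0, n):
    """Antiderivative of p with constant term c0, truncated to degree < n."""
    r = [0.0] * n
    r[0] = c0
    for i, a in enumerate(p):
        if i + 1 < n:
            r[i + 1] = a / (i + 1)
    return r

N = 10                                  # truncation degree
y, u, v = [pi / 2], [1.0], [0.0]        # y(0), u = sin y(0), v = cos y(0)
for _ in range(N + 2):                  # each sweep fixes at least one more coefficient
    y, u, v = (poly_int(u, pi / 2, N),
               poly_int(poly_mul(u, v, N), 1.0, N),
               poly_int([-c for c in poly_mul(u, u, N)], 0.0, N))

# y now holds pi/2 + t - t^3/6 + t^5/24 - 61 t^7/5040 + ..., the Maclaurin
# coefficients of the exact solution 2*arctan(e^t).
```

Each Picard sweep integrates only polynomials, which is the computational point of the Parker-Sochacki projection.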

AN EFFICIENT ALGORITHM FOR APPROXIMATING SOLUTIONS OF TWO-POINT BOUNDARY VALUE PROBLEMS Consider a two-point boundary value problem of the form

y'' = f(t, y, y'), \qquad y(a) = \alpha, \; y(b) = \beta. \quad (2)

If f is locally Lipschitz in the last two variables then, by the Picard-Lindelöf Theorem, for any \gamma \in R the initial value problem

y'' = f(t, y, y'), \qquad y(a) = \alpha, \; y'(a) = \gamma \quad (3)

has a unique solution on some interval about t = a. Introducing the variable u = y', we obtain the first-order system equivalent to (3),

y' = u,             y(a) = \alpha,
u' = f(t, y, u),    u(a) = \gamma.  (4)

The integral equation equivalent to (3) is

y(t) = \alpha + \gamma (t - a) + \int_a^t (t - s) f(s, y(s), y'(s)) \, ds, \quad (5)

which, evaluated at t = b, we solve for \gamma:

\gamma = (b - a)^{-1} \Big( \beta - \alpha - \int_a^b (b - s) f(s, y(s), y'(s)) \, ds \Big). \quad (6)

The key idea is to use Picard iteration to obtain successive approximations to the value of \gamma. Thus the iterates are

y^{[0]}(t) \equiv \alpha, \qquad u^{[0]}(t) \equiv \frac{\beta - \alpha}{b - a}, \qquad \gamma^{[0]} = \frac{\beta - \alpha}{b - a}, \quad (7a)

and

y^{[k+1]}(t) = \alpha + \int_a^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \gamma^{[k]} + \int_a^t f(s, y^{[k]}(s), u^{[k]}(s)) \, ds \quad (7b)
\gamma^{[k+1]} = (b - a)^{-1} \Big( \beta - \alpha - \int_a^b (b - s) f(s, y^{[k]}(s), u^{[k]}(s)) \, ds \Big).
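The iterates (7a)-(7b) can be sketched numerically on a uniform grid with trapezoidal quadrature. Everything below (function names, grid size, iteration count, and the test problem y'' = 6t, y(0) = 0, y(1) = 1 with exact solution y = t^3 and gamma = 0) is our own hedged illustration, not the speaker's code.

```python
def cumtrapz(vals, h, c0):
    """Running trapezoidal antiderivative of sampled values, starting at c0."""
    out = [c0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

def trapz(vals, h):
    """Trapezoidal rule over the whole sample."""
    return sum(h * (vals[i - 1] + vals[i]) / 2 for i in range(1, len(vals)))

def modified_picard_bvp(f, a, b, alpha, beta, n=200, iters=25):
    """Iterates (7a)-(7b) for y'' = f(t, y, y'), y(a) = alpha, y(b) = beta."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    gamma = (beta - alpha) / (b - a)      # gamma^[0]
    y = [alpha] * (n + 1)                 # y^[0]
    u = [gamma] * (n + 1)                 # u^[0]
    for _ in range(iters):
        fv = [f(t[i], y[i], u[i]) for i in range(n + 1)]
        y_new = cumtrapz(u, h, alpha)     # y^[k+1] from u^[k]
        u_new = cumtrapz(fv, h, gamma)    # u^[k+1] anchored at gamma^[k]
        wmom = trapz([(b - t[i]) * fv[i] for i in range(n + 1)], h)
        gamma = (beta - alpha - wmom) / (b - a)   # gamma^[k+1]
        y, u = y_new, u_new
    return t, y, gamma

# Test problem: y'' = 6 t, y(0) = 0, y(1) = 1; exact y = t^3, gamma = y'(0) = 0.
t, y, gamma = modified_picard_bvp(lambda t, y, u: 6 * t, 0.0, 1.0, 0.0, 1.0)
```

The update order matters: y and u are advanced with the old gamma, and gamma is then refreshed from the same residual integral, exactly as in (7b).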

THEOREM Let f : [a, b] \times R^2 \to R : (t, y, u) \mapsto f(t, y, u) be Lipschitz in (y, u) with Lipschitz constant L, with respect to absolute value on R and the sum norm on R^2. If 0 < b - a < (1 + \tfrac{3}{2} L)^{-1}, then for any \alpha, \beta \in R the boundary value problem

y'' = f(t, y, y'), \qquad y(a) = \alpha, \; y(b) = \beta \quad (8)

has a unique solution.

EXAMPLES In this example we treat a problem in which the function f on the right-hand side fails to be globally Lipschitz, yet the algorithm nevertheless performs well. Consider the boundary value problem

y'' = e^{2y}, \qquad y(0) = 0, \; y(1.2) = -\ln\cos 1.2 \approx 1.015123283, \quad (9)

for which the unique solution is y(t) = -\ln\cos t, yielding \gamma = 0. Introducing the dependent variable u = y' to obtain the equivalent first-order system y' = u, u' = e^{2y}, and the variable v = e^{2y} to replace the transcendental function by a polynomial, we obtain the expanded system

y' = u
u' = v
v' = 2uv

and

y^{[0]}(t) \equiv 0, \qquad u^{[0]}(t) \equiv \frac{-\ln\cos 1.2}{1.2}, \qquad v^{[0]}(t) \equiv 1, \qquad \gamma^{[0]} = \frac{-\ln\cos 1.2}{1.2}, \quad (10)

with iterates

y^{[k+1]}(t) = 0 + \int_0^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \frac{1}{1.2} \Big( \beta - \alpha - \int_0^{1.2} (1.2 - s) v^{[k]}(s) \, ds \Big) + \int_0^t v^{[k]}(s) \, ds \quad (11)
v^{[k+1]}(t) = 1 + \int_0^t 2 u^{[k]}(s) v^{[k]}(s) \, ds,

where we have shifted the update of \gamma and incorporated it into the update of u^{[k]}(t). The first eight iterates of \gamma are:

\gamma^{[1]} = 0.24594, \gamma^{[2]} = 0.16011, \gamma^{[3]} = 0.19297, \gamma^{[4]} = 0.04165, \gamma^{[5]} = 0.04272, \gamma^{[6]} = 0.04012, \gamma^{[7]} = 0.00923, \gamma^{[8]} = 0.01030.

The maximum error in the approximation is \| y_{exact} - y^{[8]} \|_{sup} \approx 0.0065.

EXAMPLE In this example the right-hand side is not autonomous and does not satisfy a global Lipschitz condition. Consider the boundary value problem

y'' = \tfrac{1}{8} (32 + 2t^3 - y y'), \qquad y(1) = 17, \; y(3) = \tfrac{43}{3}. \quad (12)

The unique solution is y(t) = t^2 + 16/t. We have

y^{[0]}(t) \equiv 17, \qquad u^{[0]}(t) \equiv -\tfrac{4}{3}, \qquad \gamma^{[0]} = -\tfrac{4}{3}, \quad (13a)

and

y^{[k+1]}(t) = 17 + \int_1^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \gamma^{[k]} + \int_1^t \big( 4 + \tfrac{1}{4} s^3 - \tfrac{1}{8} y^{[k]}(s) u^{[k]}(s) \big) \, ds \quad (13b)
\gamma^{[k+1]} = -\tfrac{4}{3} - \tfrac{1}{2} \int_1^3 (3 - s) \big( 4 + \tfrac{1}{4} s^3 - \tfrac{1}{8} y^{[k]}(s) u^{[k]}(s) \big) \, ds.

After n = 12 iterations, \gamma^{[12]} = 8.443, whereas the exact value is \gamma = 76/9 \approx 8.444. As illustrated by the error plot in the figure on the next slide, the maximum error in the twelfth approximating function is \| y_{exact} - y^{[12]} \|_{sup} \approx 0.0012.

MIXED TYPE BOUNDARY CONDITIONS The ideas developed at the beginning can also be applied to two-point boundary value problems of the form

y'' = f(t, y, y'), \qquad y'(a) = \gamma, \; y(b) = \beta, \quad (14)

so that in equation (3) the constant \gamma is now known and \alpha = y(a) is unknown. Thus in (5) we again evaluate at t = b but now solve for \alpha instead of \gamma, obtaining in place of (6) the expression

\alpha = \beta - \gamma (b - a) - \int_a^b (b - s) f(s, y(s), y'(s)) \, ds.

In the Picard iteration scheme we now successively update an approximation of \alpha, starting with some initial value \alpha_0. The iterates are

y^{[0]}(t) \equiv \alpha_0, \qquad u^{[0]}(t) \equiv \gamma, \qquad \alpha^{[0]} = \alpha_0, \quad (15a)

and

y^{[k+1]}(t) = \alpha^{[k]} + \int_a^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \gamma + \int_a^t f(s, y^{[k]}(s), u^{[k]}(s)) \, ds \quad (15b)
\alpha^{[k+1]} = \beta - \gamma (b - a) - \int_a^b (b - s) f(s, y^{[k]}(s), u^{[k]}(s)) \, ds.
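Scheme (15a)-(15b) admits the same kind of grid-based sketch as before. The code below is our own hedged illustration (names, grid, and the test problem y'' = 2, y'(0) = 0, y(1) = 1, whose exact solution is y = t^2 with alpha = y(0) = 0, are all our choices).

```python
def cumtrapz(vals, h, c0):
    """Running trapezoidal antiderivative of sampled values, starting at c0."""
    out = [c0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

def trapz(vals, h):
    """Trapezoidal rule over the whole sample."""
    return sum(h * (vals[i - 1] + vals[i]) / 2 for i in range(1, len(vals)))

def picard_mixed_bc(f, a, b, gamma, beta, alpha0=0.0, n=200, iters=25):
    """Iterates (15a)-(15b) for y'' = f(t, y, y'), y'(a) = gamma, y(b) = beta."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    alpha = alpha0
    y = [alpha0] * (n + 1)
    u = [gamma] * (n + 1)
    for _ in range(iters):
        fv = [f(t[i], y[i], u[i]) for i in range(n + 1)]
        y_new = cumtrapz(u, h, alpha)     # y^[k+1] anchored at alpha^[k]
        u_new = cumtrapz(fv, h, gamma)    # u^[k+1]; gamma is known here
        alpha = beta - gamma * (b - a) - trapz(
            [(b - t[i]) * fv[i] for i in range(n + 1)], h)   # alpha^[k+1]
        y, u = y_new, u_new
    return t, y, alpha

# y'' = 2, y'(0) = 0, y(1) = 1: exact y = t^2, so alpha = 0 (start from alpha0 = 1).
t, y, alpha = picard_mixed_bc(lambda t, y, u: 2.0, 0.0, 1.0, 0.0, 1.0, alpha0=1.0)
```

Here the unknown is the initial value alpha rather than the initial slope, which is exactly the change from (7b) to (15b).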

Consider the boundary value problem

y'' = -y' e^{-y}, \qquad y'(0) = 1, \; y(1) = \ln 2. \quad (16)

The exact solution is y(t) = \ln(1 + t), for which y(0) = \ln 1 = 0. Introducing the dependent variable u = y' as always to obtain the equivalent first-order system y' = u, u' = -u e^{-y}, and the variable v = e^{-y} to replace the transcendental function by a polynomial, we obtain the expanded system

y' = u
u' = -uv
v' = -uv

with initial conditions y(0) = \alpha, u(0) = \gamma = 1, v(0) = e^{-\alpha}, so that, making the initial choice \alpha_0 = 1,

y^{[0]}(t) \equiv \alpha_0, \qquad u^{[0]}(t) \equiv 1, \qquad v^{[0]}(t) \equiv e^{-\alpha_0}, \quad (17)

and

y^{[k+1]}(t) = \Big( \ln 2 - 1 + \int_0^1 (1 - s) u^{[k]}(s) v^{[k]}(s) \, ds \Big) + \int_0^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = 1 - \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds \quad (18)
v^{[k+1]}(t) = e^{-\alpha_0} - \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds,

where we have shifted the update of \alpha and incorporated it into the update of y^{[k]}(t). We list the first eight values of \alpha^{[k]} to show the rate of convergence to the exact value \alpha = 0:

\alpha^{[1]} = 0.00774, \alpha^{[2]} = 0.11814, \alpha^{[3]} = 0.07415, \alpha^{[4]} = 0.08549, \alpha^{[5]} = 0.08288, \alpha^{[6]} = 0.08339, \alpha^{[7]} = 0.08330, \alpha^{[8]} = 0.08331.

MIXED TYPE BOUNDARY CONDITIONS: A SECOND APPROACH In actual applications the algorithm above converges relatively slowly, because at every step it updates the estimate of the initial value y(a) rather than of the initial slope y'(a). A different approach that works better in practice for a problem of the type

y'' = f(t, y, y'), \qquad y'(a) = \gamma, \; y(b) = \beta, \quad (19)

is to use the relation

y'(b) = y'(a) + \int_a^b f(s, y(s), y'(s)) \, ds \quad (20)

between the derivatives at the endpoints and to work from the right endpoint of the interval [a, b], at which the value of the solution y is known but the derivative y' is unknown. That is, assuming that (19) has a unique solution and letting the value of its derivative at t = b be denoted \delta, it is also the unique solution of the initial value problem

y'' = f(t, y, y'), \qquad y(b) = \beta, \; y'(b) = \delta. \quad (21)

We introduce the new dependent variable u = y' to obtain the equivalent system y' = u, u' = f(t, y, u) with initial conditions y(b) = \beta and y'(b) = \delta, and apply Picard iteration based at t = b, using (20) with y'(a) = \gamma to update the approximation of y'(b) at each step. Choosing a convenient initial estimate \delta_0 of \delta, the successive approximations of the solution of (21), hence of (19), are given by

y^{[0]}(t) \equiv \beta, \qquad u^{[0]}(t) \equiv \delta_0, \qquad \delta^{[0]} = \delta_0, \quad (22a)

and

y^{[k+1]}(t) = \beta + \int_b^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \delta^{[k]} + \int_b^t f(s, y^{[k]}(s), u^{[k]}(s)) \, ds \quad (22b)
\delta^{[k+1]} = \gamma + \int_a^b f(s, y^{[k]}(s), u^{[k]}(s)) \, ds.
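The scheme (22a)-(22b) integrates from the right endpoint, so the running antiderivatives are taken backwards from t = b. The sketch below is our own illustration (all names are ours; the test problem y'' = 2, y'(0) = 0, y(1) = 1 has exact solution y = t^2, so delta = y'(1) = 2).

```python
def cumtrapz_from_right(vals, h, cb):
    """Trapezoidal approximation of cb + int_b^t vals(s) ds, sampled left to right."""
    out = [0.0] * len(vals)
    out[-1] = cb
    for i in range(len(vals) - 2, -1, -1):
        out[i] = out[i + 1] - h * (vals[i] + vals[i + 1]) / 2
    return out

def trapz(vals, h):
    """Trapezoidal rule over the whole sample."""
    return sum(h * (vals[i - 1] + vals[i]) / 2 for i in range(1, len(vals)))

def picard_from_right(f, a, b, gamma, beta, delta0=0.0, n=200, iters=25):
    """Iterates (22a)-(22b): y'' = f(t, y, y'), y'(a) = gamma, y(b) = beta,
    with the unknown delta = y'(b) refreshed via delta^[k+1] = gamma + int_a^b f."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    delta = delta0
    y = [beta] * (n + 1)
    u = [delta0] * (n + 1)
    for _ in range(iters):
        fv = [f(t[i], y[i], u[i]) for i in range(n + 1)]
        y_new = cumtrapz_from_right(u, h, beta)     # y^[k+1], anchored at y(b) = beta
        u_new = cumtrapz_from_right(fv, h, delta)   # u^[k+1], anchored at delta^[k]
        delta = gamma + trapz(fv, h)                # delta^[k+1] from relation (20)
        y, u = y_new, u_new
    return t, y, delta

# y'' = 2, y'(0) = 0, y(1) = 1: exact y = t^2 and delta = y'(1) = 2.
t, y, delta = picard_from_right(lambda t, y, u: 2.0, 0.0, 1.0, 0.0, 1.0)
```

Anchoring the slope update at the known gamma = y'(a) is what distinguishes this from the alpha-updating first approach.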

EXAMPLE Reconsider the boundary value problem

y'' = -y' e^{-y}, \qquad y'(0) = 1, \; y(1) = \ln 2, \quad (23)

with exact solution y(t) = \ln(1 + t). The relation (20) is

y'(1) = 1 - \int_0^1 y'(s) e^{-y(s)} \, ds. \quad (24)

We set u = y' and v = e^{-y} to obtain the expanded system

y' = u
u' = -uv
v' = -uv

Since our standard setup places the initial condition at t = 0, we use the change of variable \tau = 1 - t, under which the example becomes

y'' = y' e^{-y}, \qquad y(0) = \ln 2, \; y'(1) = -1, \quad (25)

with exact solution y(\tau) = \ln(2 - \tau). The relation (20) now reads

y'(0) = -1 - \int_0^1 y'(s) e^{-y(s)} \, ds. \quad (26)

We set u = y' and v = e^{-y} to obtain the expanded system

y' = u
u' = uv
v' = -uv

with iterates initialized by

y^{[0]}(t) \equiv \ln 2, \qquad u^{[0]}(t) \equiv -1, \qquad v^{[0]}(t) \equiv \tfrac{1}{2}, \qquad \delta^{[0]} = \delta_0, \quad (27a)

and

y^{[k+1]}(t) = \ln 2 + \int_0^t u^{[k]}(s) \, ds
u^{[k+1]}(t) = \delta^{[k]} + \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds \quad (27b)
v^{[k+1]}(t) = \tfrac{1}{2} - \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds
\delta^{[k+1]} = -1 - \int_0^1 u^{[k]}(s) v^{[k]}(s) \, ds.

The exact value of \delta = y'(1) is 1/2 in the original variable. The first values of \delta^{[k]} are:

\delta^{[0]} = 0.50000, \delta^{[1]} = 0.41667, \delta^{[2]} = 0.50074, \delta^{[3]} = 0.51592, \delta^{[4]} = 0.49987, \delta^{[5]} = 0.49690, \delta^{[6]} = 0.50003, \delta^{[7]} = 0.50060, \delta^{[8]} = 0.50000.

LONG INTERVALS Consider the boundary value problem

w''(t) = f(t, w(t), w'(t)), \quad a \le t \le b, \qquad w(a) = \alpha, \; w(b) = \beta. \quad (28)

Here we introduce a new algorithm for cases in which the original algorithm fails to converge on a long interval: we split the long interval into subintervals and show that the new algorithm converges to the solution. For simplicity we divide the interval [a, b] into three subintervals [a, t_1], [t_1, t_2] and [t_2, b], where t_0 = a < t_1 < t_2 < t_3 = b and h = t_i - t_{i-1} for 1 \le i \le 3.

We would like to show that the boundary value problem (28) on the long interval has a unique solution if the following initial value problems simultaneously and recursively generate sequences converging to y(t) restricted to each subinterval:

w_1''(t) = f(t, w_1(t), w_1'(t)), \quad a \le t \le t_1, \qquad w_1(a) = \alpha, \; w_1'(a) = \gamma_1, \quad (29)

w_2''(t) = f(t, w_2(t), w_2'(t)), \quad t_1 \le t \le t_2, \qquad w_2(t_1) = \beta_1, \; w_2'(t_1) = \gamma_2, \quad (30)

and

w_3''(t) = f(t, w_3(t), w_3'(t)), \quad t_2 \le t \le b, \qquad w_3(t_2) = \beta_2, \; w_3'(t_2) = \gamma_3. \quad (31)

We map each interval [t_{i-1}, t_i] to [0, h] by \tau = t - t_{i-1}; this transformation simplifies the proof. Hence (29) becomes

y_1''(\tau) = f_1(\tau, y_1(\tau), y_1'(\tau)), \quad 0 \le \tau \le h, \qquad y_1(0) = \alpha, \; y_1'(0) = \gamma_1, \quad (32)

(30) becomes

y_2''(\tau) = f_2(\tau, y_2(\tau), y_2'(\tau)), \quad 0 \le \tau \le h, \qquad y_2(0) = \beta_1, \; y_2'(0) = \gamma_2, \quad (33)

and (31) becomes

y_3''(\tau) = f_3(\tau, y_3(\tau), y_3'(\tau)), \quad 0 \le \tau \le h, \qquad y_3(0) = \beta_2, \; y_3'(0) = \gamma_3. \quad (34)

The initial conditions for equations (32), (33) and (34) are to be obtained so that the following conditions are satisfied:

y_1(h; \beta_1, \gamma_1) = y_2(0; \beta_1, \gamma_2)    (continuity)
y_2(h; \beta_2, \gamma_2) = y_3(0; \beta_2, \gamma_3)    (continuity)
y_1'(h; \beta_1, \gamma_1) = y_2'(0; \beta_1, \gamma_2)   (smoothness)
y_2'(h; \beta_2, \gamma_2) = y_3'(0; \beta_2, \gamma_3)   (smoothness)
y_3(h; \beta_2, \gamma_3) = \beta    (35)

These conditions are satisfied automatically inside the algorithm. The recursion is initialized by

y_1^{[0]}(\tau) \equiv \alpha, \qquad u_1^{[0]}(\tau) \equiv \frac{\beta - \alpha}{b - a}, \qquad \gamma_1^{[0]} = u_1^{[0]}, \qquad \beta_1^{[0]} = u_1^{[0]} h + \alpha, \qquad u_2^{[0]}(\tau) \equiv \gamma_1^{[0]}, \qquad u_3^{[0]}(\tau) \equiv \gamma_1^{[0]}. \quad (36)

y_1^{[k+1]}(\tau) = \alpha + \int_0^\tau u_1^{[k]}(s) \, ds, \qquad u_1^{[k+1]}(\tau) = \gamma_1^{[k]} + \int_0^\tau f_1(s, y_1^{[k+1]}(s), u_1^{[k]}(s)) \, ds, \quad (37a)

\varphi_1^{[k]} = y_1^{[k+1]}(h), \qquad \gamma_2^{[k]} = u_1^{[k+1]}(h), \quad (37b)

y_2^{[k+1]}(\tau) = \varphi_1^{[k]} + \int_0^\tau u_2^{[k]}(s) \, ds, \qquad u_2^{[k+1]}(\tau) = \gamma_2^{[k]} + \int_0^\tau f_2(s, y_2^{[k+1]}(s), u_2^{[k]}(s)) \, ds, \quad (37c)

\varphi_2^{[k]} = y_2^{[k+1]}(h), \qquad \gamma_3^{[k]} = u_2^{[k+1]}(h), \quad (37d)

y_3^{[k+1]}(\tau) = \varphi_2^{[k]} + \int_0^\tau u_3^{[k]}(s) \, ds, \qquad u_3^{[k+1]}(\tau) = \gamma_3^{[k]} + \int_0^\tau f_3(s, y_3^{[k+1]}(s), u_3^{[k]}(s)) \, ds, \quad (37e)

\beta_2^{[k]} = \beta - \gamma_3^{[k]} h - \int_0^h (h - s) f_3(s, y_3^{[k+1]}(s), u_3^{[k+1]}(s)) \, ds,
\beta_1^{[k+1]} = \beta_2^{[k]} - \gamma_2^{[k]} h - \int_0^h (h - s) f_2(s, y_2^{[k+1]}(s), u_2^{[k+1]}(s)) \, ds, \quad (37f)
\gamma_1^{[k+1]} = \frac{1}{h} \Big[ \beta_1^{[k+1]} - \alpha - \int_0^h (h - s) f_1(s, y_1^{[k+1]}(s), u_1^{[k+1]}(s)) \, ds \Big].
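A loose numerical sketch of the sweeps (37a)-(37f) with equal subinterval lengths follows. On the simple linear test problem we chose, the raw update of gamma_1 oscillates, so we add an averaging (under-relaxation) step that the talk does not mention; that step, the test problem y'' = 6t on [0, 3] with y(0) = 0, y(3) = 27 (exact solution y = t^3, gamma_1 = 0), and all names are our own assumptions.

```python
def cumtrapz(vals, h, c0):
    """Running trapezoidal antiderivative of sampled values, starting at c0."""
    out = [c0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

def f(t, y, u):                # test problem: y'' = 6 t, exact solution y = t^3
    return 6.0 * t

a, b, alpha, beta = 0.0, 3.0, 0.0, 27.0
m = 3                           # number of subintervals, each of length h
h = (b - a) / m
n = 100                         # grid points per subinterval
hs = h / n
tau = [j * hs for j in range(n + 1)]

def wmoment(vals):              # int_0^h (h - s) vals(s) ds, trapezoid rule
    w = [(h - tau[j]) * vals[j] for j in range(n + 1)]
    return sum(hs * (w[j - 1] + w[j]) / 2 for j in range(1, n + 1))

gamma1 = (beta - alpha) / (b - a)
u = [[gamma1] * (n + 1) for _ in range(m)]
ys = []

for _ in range(60):
    # forward sweep, cf. (37a)-(37e): carry y and y' across the subintervals
    y0, u0 = alpha, gamma1
    ys, fs = [], []
    for i in range(m):
        yi = cumtrapz(u[i], hs, y0)                 # y_i^{[k+1]}
        fv = [f(a + i * h + tau[j], yi[j], u[i][j]) for j in range(n + 1)]
        u[i] = cumtrapz(fv, hs, u0)                 # u_i^{[k+1]}
        ys.append(yi)
        fs.append(fv)
        y0, u0 = yi[-1], u[i][-1]                   # continuity and smoothness
    # backward sweep, cf. (37f): pull beta back to an updated initial slope
    beta_i = beta
    for i in range(m - 1, 0, -1):
        beta_i = beta_i - u[i][0] * h - wmoment(fs[i])
    gamma1_new = (beta_i - alpha - wmoment(fs[0])) / h
    gamma1 = (gamma1 + gamma1_new) / 2              # averaging step (ours)
```

At a fixed point the backward sweep forces y_3(h) = beta, so the assembled piecewise approximation satisfies both boundary conditions.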

EXAMPLE Consider the boundary value problem

y'' = 2y^3, \qquad y(0) = \tfrac{1}{4}, \; y(2) = \beta. \quad (38)

Our goal is to divide a long interval into subintervals of relatively small length that satisfy our original theorem. We divide the interval into the three subintervals [0, 5/4], [5/4, 7/4] and [7/4, 2], and take n = 18. The exact value of \gamma is 0.0625. The first 10 values of \gamma^{[k]} are:

\gamma^{[1]} = 0.12500, \gamma^{[2]} = 0.10156, \gamma^{[3]} = 0.06154, \gamma^{[4]} = 0.04716, \gamma^{[5]} = 0.06917, \gamma^{[6]} = 0.06903, \gamma^{[7]} = 0.05852, \gamma^{[8]} = 0.05948, \gamma^{[9]} = 0.06515, \gamma^{[10]} = 0.06366.

VOLTERRA INTEGRAL A Volterra equation of the second kind is an equation of the form

y(t) = \varphi(t) + \int_0^t K(t, s) f(s, y(s)) \, ds, \quad (39)

where \varphi, K, and f are known functions of suitable regularity and y is an unknown function. Such equations lend themselves to solution by successive approximation using Picard iteration, although when the known functions are not polynomials the process can break down once quadratures arise that cannot be performed in closed form. If the kernel K is independent of the first variable t, then the integral equation is equivalent to an initial value problem, obtained by precisely reversing the process described earlier.

Indeed,

y(t) = \varphi(t) + \int_0^t k(s) f(s, y(s)) \, ds

is equivalent to

y'(t) = \varphi'(t) + k(t) f(t, y(t)), \qquad y(0) = \varphi(0).

Here we provide a method for introducing auxiliary variables in (39), in the case that K factors as K(t, s) = j(t) k(s), in such a way that (39) embeds in a vector-valued polynomial Volterra equation, thus extending the Parker-Sochacki method to this setting and thereby obtaining a computationally efficient method for closely approximating solutions of (39).
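The equivalence can be sanity-checked numerically. We pick the toy data phi(t) = t, k(s) = 1, f(s, y) = y (our own choice, not from the talk), for which the equivalent initial value problem is y' = 1 + y, y(0) = 0, with solution e^t - 1, and run Picard iteration directly on the integral equation.

```python
from math import exp

n, T = 400, 1.0
h = T / n
t = [i * h for i in range(n + 1)]

def cumtrapz(vals):
    """Running trapezoidal antiderivative on the fixed grid, starting at 0."""
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

# Volterra equation y(t) = t + int_0^t y(s) ds  (phi(t) = t, k = 1, f = y);
# equivalent IVP: y' = 1 + y, y(0) = 0, exact solution y = e^t - 1.
y = [0.0] * (n + 1)
for _ in range(30):
    integral = cumtrapz(y)
    y = [t[i] + integral[i] for i in range(n + 1)]

err = max(abs(y[i] - (exp(t[i]) - 1.0)) for i in range(n + 1))
```

The iteration converges factorially in t, which is the usual Volterra behavior exploited throughout this section.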

EXAMPLE (VIE), LINEAR Consider the following second-kind linear Volterra integral equation:

y(t) = e^t \sin t + \int_0^t \frac{2 + \cos t}{2 + \cos s} \, y(s) \, ds, \quad (40)

so that

\varphi(t) = e^t \sin t, \qquad K(t, s) = \frac{2 + \cos t}{2 + \cos s}, \qquad f(t, y(t)) = y(t). \quad (41)

The exact solution is

y(t) = e^t \sin t + e^t (2 + \cos t) \big( \ln 3 - \ln(2 + \cos t) \big). \quad (42)

EXAMPLE We write the equation as

y(t) = e^t \sin t + (2 + \cos t) \int_0^t \frac{y(s)}{2 + \cos s} \, ds, \qquad y(0) = e^0 \sin 0 = 0, \quad (43)

and define the following variables:

v_1 = e^t,       v_1(0) = 1
v_2 = \cos t,    v_2(0) = 1
v_3 = \sin t,    v_3(0) = 0
v_4 = 2 + v_2,   v_4(0) = 3    (44)
v_5 = 1/v_4,     v_5(0) = 1/3

with derivatives

v_1' = v_1, \qquad v_2' = -v_3, \qquad v_3' = v_2, \qquad v_4' = v_2' = -v_3, \qquad v_5' = -v_4'/v_4^2 = v_3 v_5^2.

EXAMPLE In integral form,

v_1(t) = v_1(0) + \int_0^t v_1(s) \, ds
v_2(t) = v_2(0) - \int_0^t v_3(s) \, ds
v_3(t) = v_3(0) + \int_0^t v_2(s) \, ds
v_4(t) = v_4(0) - \int_0^t v_3(s) \, ds
v_5(t) = v_5(0) + \int_0^t v_3(s) v_5^2(s) \, ds

with initial iterates

y^{[1]} = 0 = \alpha, \qquad v_1^{[1]} = e^0 = 1, \qquad v_2^{[1]} = \cos 0 = 1, \qquad v_3^{[1]} = \sin 0 = 0, \qquad v_4^{[1]} = 2 + v_2^{[1]} = 3, \qquad v_5^{[1]} = \tfrac{1}{3}.

EXAMPLE VIE Thus the iterates are

y^{[k+1]} = v_1^{[k]} v_3^{[k]} + v_4^{[k]} \int_0^t v_5^{[k]}(s) \, y^{[k]}(s) \, ds
v_1^{[k+1]} = 1 + \int_0^t v_1^{[k]}(s) \, ds
v_2^{[k+1]} = 1 - \int_0^t v_3^{[k]}(s) \, ds
v_3^{[k+1]} = \int_0^t v_2^{[k]}(s) \, ds
v_4^{[k+1]} = 3 - \int_0^t v_3^{[k]}(s) \, ds
v_5^{[k+1]} = \tfrac{1}{3} + \int_0^t v_3^{[k]}(s) \big( v_5^{[k]}(s) \big)^2 \, ds

The approximations of the solution for the first few iterations are

y^{[2]} = 0
y^{[3]} = t^2 + t
y^{[4]} = -\tfrac{1}{180} t^7 - \tfrac{1}{144} t^6 - \tfrac{1}{45} t^5 - \tfrac{1}{24} t^4 + \tfrac{5}{6} t^3 + \tfrac{3}{2} t^2 + t
y^{[5]} = \tfrac{1}{9797760} t^{16} + \tfrac{1}{7278336} t^{15} + \tfrac{59}{29393280} t^{14} + \tfrac{41}{13343616} t^{13} - \tfrac{1}{87480} t^{12} - \tfrac{1}{42768} t^{11} - \tfrac{1}{2880} t^{10} - \tfrac{1}{1512} t^9 - \tfrac{1}{480} t^8 - \tfrac{1}{336} t^7 - \tfrac{49}{1080} t^6 - \tfrac{1}{8} t^5 + \tfrac{1}{6} t^4 + \tfrac{5}{6} t^3 + \tfrac{3}{2} t^2 + t \quad (45)
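The iterates above can also be rendered numerically on a grid (our own sketch; the trapezoidal quadrature, grid size, and iteration count are arbitrary choices of ours), iterating the auxiliary variables by Picard as well and comparing against the exact solution (42) at t = 1.

```python
from math import exp, sin, cos, log

n, T = 200, 1.0
h = T / n
t = [i * h for i in range(n + 1)]

def cumtrapz(vals, c0):
    """Running trapezoidal antiderivative on the fixed grid, starting at c0."""
    out = [c0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

y  = [0.0] * (n + 1)
v1 = [1.0] * (n + 1)          # e^t
v2 = [1.0] * (n + 1)          # cos t
v3 = [0.0] * (n + 1)          # sin t
v4 = [3.0] * (n + 1)          # 2 + cos t
v5 = [1.0 / 3.0] * (n + 1)    # 1/(2 + cos t)

for _ in range(40):
    inner = cumtrapz([v5[i] * y[i] for i in range(n + 1)], 0.0)
    y_new  = [v1[i] * v3[i] + v4[i] * inner[i] for i in range(n + 1)]
    v1_new = cumtrapz(v1, 1.0)
    v2_new = cumtrapz([-v3[i] for i in range(n + 1)], 1.0)
    v3_new = cumtrapz(v2, 0.0)
    v4_new = cumtrapz([-v3[i] for i in range(n + 1)], 3.0)
    v5_new = cumtrapz([v3[i] * v5[i] ** 2 for i in range(n + 1)], 1.0 / 3.0)
    y, v1, v2, v3, v4, v5 = y_new, v1_new, v2_new, v3_new, v4_new, v5_new

def exact(tt):                # exact solution (42)
    return exp(tt) * sin(tt) + exp(tt) * (2 + cos(tt)) * (log(3) - log(2 + cos(tt)))

err = abs(y[-1] - exact(1.0))
```

All right-hand sides being polynomial in the iterated quantities, each sweep is a cheap cumulative quadrature, which is the computational payoff of the auxiliary variables.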

EXAMPLE VIE For n = 5: plot of the approximation y_approx = y^{[n+1]} against the exact solution y(t). [figure]

EXAMPLE VIE By increasing n to n = 7: plot of y_approx = y^{[n+1]} against the exact solution y(t). [figure]

EXAMPLE VIE, NON-LINEAR Biazar, in [11], introduced the following second-kind non-linear Volterra integral equation, with

\varphi(t) = \tfrac{1}{2} \sin(2t), \qquad K(t, s) = \cos(s - t), \qquad f(t, y) = y^2. \quad (46)

The exact solution is

y(t) = \sin t. \quad (47)

To solve this non-linear Volterra integral equation we set up the integral equation

y(t) = \tfrac{1}{2} \sin(2t) + \tfrac{3}{2} \int_0^t \cos(s - t) \, y^2(s) \, ds, \qquad y(0) = \tfrac{1}{2} \sin 0 = 0. \quad (48)
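Picard iteration for (48) can be sketched as follows; writing cos(s - t) = cos t cos s + sin t sin s moves the t-dependence outside the integrals, so each sweep needs only two cumulative quadratures. For brevity this sketch (ours, not the talk's code) evaluates the trigonometric functions directly rather than through the auxiliary polynomial variables; all names and parameters are our assumptions.

```python
from math import sin, cos

n, T = 200, 1.0
h = T / n
t = [i * h for i in range(n + 1)]

def cumtrapz(vals):
    """Running trapezoidal antiderivative on the fixed grid, starting at 0."""
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

# y(t) = (1/2) sin 2t + (3/2) int_0^t cos(s - t) y(s)^2 ds, exact y = sin t.
# cos(s - t) = cos t cos s + sin t sin s makes both integrals t-independent.
y = [0.5 * sin(2 * ti) for ti in t]
for _ in range(40):
    ic = cumtrapz([cos(t[i]) * y[i] ** 2 for i in range(n + 1)])
    isn = cumtrapz([sin(t[i]) * y[i] ** 2 for i in range(n + 1)])
    y = [0.5 * sin(2 * t[i]) + 1.5 * (cos(t[i]) * ic[i] + sin(t[i]) * isn[i])
         for i in range(n + 1)]

err = max(abs(y[i] - sin(t[i])) for i in range(n + 1))
```

Splitting a separable kernel into a finite sum j_1(t)k_1(s) + j_2(t)k_2(s) is the same factorization idea used for (39).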

EXAMPLE VIE For n = 5: plot of the error y^{[n+1]} - y(t). [figure]

EXAMPLE VIE By increasing n to n = 7: plot of y_approx = y^{[n+1]} against the exact solution y(t). [figure]

THANK YOU

Herbert B. Keller, Numerical Methods for Two-Point Boundary Value Problems.
David C. Carothers, G. Edgar Parker, James S. Sochacki, Paul G. Warne, Some Properties of Solutions to Polynomial Systems of Differential Equations.
Abdul-Majid Wazwaz, A First Course in Integral Equations, World Scientific Publishing Company, New Jersey and Singapore, December 1997.
G. Edgar Parker and James S. Sochacki, A Picard-Maclaurin Theorem for Initial Value PDEs.
David C. Carothers, G. Edgar Parker, James S. Sochacki, D. A. Warne, P. G. Warne, Explicit A-Priori Error Bounds and Adaptive Error Control for Approximation of Nonlinear Initial Value Differential Systems.
David Carothers, William Ingham, James Liu, Carter Lyons, John Marafino, G. Edgar Parker, David Pruett, Joseph Rudmin, James S. Sochacki, Debra Warne, Paul Warne, An Overview of the Modified Picard Method.
G. Edgar Parker and James S. Sochacki, Implementing Picard Methods.
Jafar Biazar and Mostafa Eslami, Homotopy Perturbation and Taylor Series for Volterra Integral Equations of the Second Kind.
Sohrab Effati and Mohammad Hadi Noori Skandari, Optimal Control Approach for Solving Linear Volterra Integral Equations.
Dennis Aebersold, Integral Equations in Quantum Chemistry.
Peter Linz, Numerical Methods for Volterra Integral Equations of the First Kind.
Alexander John Martyn Little, Analytical and Numerical Solution Methods for Integral Equations Using the Maple Symbolic Algebra Package.
Sivaji Ganesh Sista, Ordinary Differential Equations, Lecture Notes, Chapter 3.
Denise Gutermuth, Picard's Existence and Uniqueness Theorem, Lecture Notes.
http://en.wikipedia.org/wiki/Parker-Sochacki_method
http://en.wikipedia.org/wiki/Paul_du_Bois-Reymond
http://calculus7.org/2012/06/18/ivp-vs-bvp/