The solution of linear differential equations in Chebyshev series


By R. E. Scraton*

Any linear differential equation can be transformed into an infinite set of simultaneous equations in the Chebyshev coefficients of its solution. In suitable cases these equations can be solved by iterative procedures, which are particularly suitable for automatic computation. Similar iterative methods can also be used for certain eigenvalue problems.

The numerical solution of the linear differential equation

y^(M) + Σ_{i=0}^{M−1} P_i(x) y^(i) = F(x)   (1)

in the form of a Chebyshev series

y = ½a_0 T_0(x) + a_1 T_1(x) + ⋯ = Σ′_{r=0}^∞ a_r T_r(x)

(the prime on the summation indicating that the first term is halved) has been the subject of a number of papers in recent years. The so-called "direct" method of determining the coefficients a_r was first proposed by Clenshaw (1957), and has been subsequently discussed by Fox (1962). This method is only practicable when the functions P_i(x) are polynomials of small degree. A later paper by Clenshaw and Norton (1963) gave an iterative procedure based on Picard's method and the principle of collocation; this is applicable to a wider class of differential equations, but is usually considerably more lengthy than the direct method. The purpose of the present paper is to consider the extension of Clenshaw's original method to the case where the P_i(x) are general functions with known Chebyshev expansions. It will be assumed throughout that the independent variable has been suitably transformed so that the solution is required in the range −1 ≤ x ≤ 1. It should be noted that in writing the differential equation in the form (1) and assuming that the P_i(x) have Chebyshev expansions, it has been implicitly assumed that the differential equation has no singularities in the range −1 ≤ x ≤ 1.

First-order equations
The first-order linear equation may be put in the form

y′ + P(x)y = F(x),

where

P(x) = Σ′_{r=0}^∞ p_r T_r(x),  F(x) = Σ′_{r=0}^∞ f_r T_r(x).

It will be assumed for the sake of definiteness that the initial condition is specified at x = 1, i.e. y(1) = η.
The modifications to the equations below when the initial condition is specified at some other point should be reasonably obvious. If, in the usual way, the Chebyshev coefficients of y are denoted by a_r and those of y′ by a′_r, it is easily seen that

a′_r + ½ Σ′_{s=0}^∞ (p_{r+s} + p_{|r−s|}) a_s = f_r,  r ≥ 0.

The a′_r terms can be eliminated in the customary manner by making use of the relation a′_{r−1} − a′_{r+1} = 2r a_r, to yield

2r a_r + Σ′_{s=0}^∞ (π_{r+s} + π_{r−s}) a_s = φ_r,  r ≥ 1,   (2)

where

π_r = ½(p_{r−1} − p_{r+1}),  π_0 = 0,  π_{−r} = −π_r,
φ_r = f_{r−1} − f_{r+1}.

The initial condition gives Σ′_{s=0}^∞ a_s = η, and if this is used to eliminate a_0, equation (2) becomes

2r a_r + Σ_{s=1}^∞ (π_{r+s} + π_{r−s} − 2π_r) a_s = φ_r − 2η π_r,  r ≥ 1.   (3)

Now (3) is an infinite set of equations in the infinite set of unknowns a_1, a_2, a_3, …. It is always possible to get approximate values for these unknowns by assuming that a_s is negligible for s > n, and solving the first n of the set of equations (3) for a_1, a_2, … a_n. In fact Fox (1962) derived sets of equations equivalent to (3) for a few simple differential equations, and obtained solutions by this method. If, however, P(x) is not too large (if, say, P(x) is of order unity), the terms π_{r+s}, π_{r−s} and π_r in (3) will be small compared with the term 2r. In other words, the set of equations will have a "strong diagonal." This suggests that an iterative procedure of the Gauss–Seidel type might be appropriate. Such a process would be defined as follows:

(2r + π_{2r} − 2π_r) a_r^(k) = φ_r − 2η π_r − Σ_{s=1}^{r−1} (π_{r+s} + π_{r−s} − 2π_r) a_s^(k) − Σ_{s=r+1}^∞ (π_{r+s} + π_{r−s} − 2π_r) a_s^(k−1),  r ≥ 1.   (4)

* Mathematics Department, Northampton College of Advanced Technology, St. John St., London, E.C.1.

Downloaded from https://academic.oup.com/comjnl/article-abstract/8//7/89878 by guest on 8 November 8
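The Gauss–Seidel sweep just described is straightforward to program. The following is an illustrative sketch (a modern reimplementation under the reconstruction above, not the author's original program; all function names are mine), applied to y′ + ½y = 0, y(1) = 1, whose exact solution is y = e^((1−x)/2):

```python
import math

def solve_first_order(p, f, eta, n=20, sweeps=60):
    """Gauss-Seidel sweeps for the Chebyshev coefficients a_1..a_n of the
    solution of y' + P(x)y = F(x), y(1) = eta, as reconstructed above.
    p, f are the Chebyshev coefficients of P and F (Sigma' convention:
    the first coefficient is halved when the series is summed)."""
    def coeff(c, k):
        k = abs(k)
        return c[k] if k < len(c) else 0.0
    def pi_(r):                      # pi_r = (p_{r-1} - p_{r+1})/2, odd in r
        if r == 0:
            return 0.0
        sign = 1.0 if r > 0 else -1.0
        r = abs(r)
        return sign * 0.5 * (coeff(p, r - 1) - coeff(p, r + 1))
    def phi(r):
        return coeff(f, r - 1) - coeff(f, r + 1)
    a = [0.0] * (n + 1)              # a[1..n]; a[0] is recovered at the end
    for _ in range(sweeps):
        for r in range(1, n + 1):    # one sweep, using the latest a_s values
            rhs = phi(r) - 2.0 * eta * pi_(r)
            for s in range(1, n + 1):
                if s != r:
                    rhs -= (pi_(r + s) + pi_(r - s) - 2.0 * pi_(r)) * a[s]
            a[r] = rhs / (2.0 * r + pi_(2 * r) - 2.0 * pi_(r))
    a[0] = 2.0 * (eta - sum(a[1:]))  # initial condition: Sigma' a_s = eta
    return a

def cheb_eval(a, x):
    """Evaluate Sigma' a_r T_r(x) using T_r(cos t) = cos(rt)."""
    t = math.acos(x)
    return 0.5 * a[0] + sum(a[r] * math.cos(r * t) for r in range(1, len(a)))
```

For P(x) = ½ one takes p = [1.0] (since ½p_0 = ½); the computed series then reproduces the exact value y(0) = e^(1/2) to well within 10⁻⁶, illustrating the rapid convergence claimed for P(x) of order unity.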

Table 1
Steps in the iterative solution of y′ − ½e^x y = 0, y(1) = 1.

Here a_r^(k) denotes the kth approximation to a_r. The initial approximations a_r^(0) would normally be taken as zero. It is not necessary to decide beforehand how many of the coefficients a_r will be significant; at each stage in the iteration the a_r's are calculated successively, and those which are negligible to the number of decimal places being retained are ignored. The whole process reduces to the repetitive use of equation (4), and as such it is very simple to program for automatic computation. As an example, Table 1 shows the steps in the solution of the differential equation

y′ − ½e^x y = 0,  y(1) = 1.   (5)

The final solution is the series

y = ½a_0 + a_1 T_1(x) + a_2 T_2(x) + ⋯ + a_8 T_8(x) + ⋯,   (6)

with the coefficients as computed in the final stage of Table 1. The correct solution of (5) is y = exp[½(e^x − e)]; the values of y(0) and y(−1) computed from the series (6) agree with this solution to the number of figures retained.

Simultaneous differential equations
For the set of simultaneous differential equations

y′_m + Σ_{n=1}^N P_mn(x) y_n = F_m(x),  m = 1, 2, …, N,

with y_m = Σ′_{r=0}^∞ a_r^(m) T_r(x) and initial conditions y_m(1) = η_m, the equivalent of the set of equations (3) is

2r a_r^(m) + Σ_{n=1}^N Σ_{s=1}^∞ (π_{r+s}^(mn) + π_{r−s}^(mn) − 2π_r^(mn)) a_s^(n) = φ_r^(m) − 2 Σ_{n=1}^N η_n π_r^(mn),  r ≥ 1,   (7)

where π_r^(mn) and φ_r^(m) are related to P_mn(x) and F_m(x) in the same way that π_r and φ_r are related to P(x) and F(x). This equation will, of course, need some changes if any of the initial conditions are specified at points other than x = 1, but the basic approach remains the same. The solution of the set of equations (7) may be obtained by an iterative procedure, as before. Furthermore, since a differential equation of any order can be rewritten as a set of first-order simultaneous differential equations, the general linear equation (1) can be dealt with by a simple extension of the method described above for first-order equations.

The second-order equation y″ + P(x)y = F(x)
Equations of the form y″ + P(x)y = F(x) occur sufficiently often to merit special consideration.
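The computed series above are checked by evaluating them at sample points such as x = 0 and x = −1. The paper does not spell out its evaluation scheme; Clenshaw's recurrence is the standard way to do this, and a minimal sketch is:

```python
def clenshaw(a, x):
    """Evaluate Sigma' a_r T_r(x) (first coefficient halved) by
    Clenshaw's backward recurrence b_r = a_r + 2*x*b_{r+1} - b_{r+2}."""
    b1 = b2 = 0.0
    for r in range(len(a) - 1, 0, -1):
        b1, b2 = 2.0 * x * b1 - b2 + a[r], b1
    return x * b1 - b2 + 0.5 * a[0]
```

For instance, clenshaw([0.0, 0.0, 1.0], 0.3) evaluates T_2(0.3) = 2(0.3)² − 1 = −0.82 without forming any Chebyshev polynomial explicitly.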
With the same notation as before,

a″_r + ½ Σ′_{s=0}^∞ (p_{r+s} + p_{|r−s|}) a_s = f_r,  r ≥ 0,

and hence, on combining the equations for r − 1 and r + 1,

2r a′_r + Σ′_{s=0}^∞ (π_{r+s} + π_{r−s}) a_s = φ_r,  r ≥ 1.

After elimination of the a′_r, this reduces to

8r(r² − 1) a_r + Σ′_{s=0}^∞ [2(r+1)(π_{r+s−1} + π_{r−s−1}) − 2(r−1)(π_{r+s+1} + π_{r−s+1})] a_s = 2(r+1)φ_{r−1} − 2(r−1)φ_{r+1},  r ≥ 2.   (8)

The initial or boundary conditions attached to a particular problem give two more equations in the coefficients a_r; these may be used to eliminate a_0 and a_1 from (8), so as to leave a set of equations for the unknowns a_2, a_3, a_4, …. For example, the initial-value problem

Downloaded from https://academic.oup.com/comjnl/article-abstract/8//7/89878 by guest on 8 November 8

Table 2
Steps in the iterative solution of y″ − (x⁶ + 3x²)y = 0, y(1) = y(−1) = 1.

in which y(1) = η and y′(1) = η′ gives the additional equations

Σ′_{s=0}^∞ a_s = η,  Σ_{s=1}^∞ s² a_s = η′,

and equation (8) therefore reduces, after elimination of a_0 and a_1, to a set of the form

Σ_{s=2}^∞ U_{r,s} a_s = V_r,  r ≥ 2.   (9)

On the other hand, for the boundary-value problem in which y(1) = η_1 and y(−1) = η_−1, the additional equations are

½a_0 + a_2 + a_4 + a_6 + ⋯ = μ,  a_1 + a_3 + a_5 + a_7 + ⋯ = ν,

where μ = ½(η_1 + η_−1) and ν = ½(η_1 − η_−1). In this case equation (8) can be put in the form

Σ_{s=2}^∞ U′_{r,s} a_s = V′_r,  r ≥ 2,   (10)

where the coefficients U′_{r,s} for even s are modified by the elimination of a_0 (through μ) and those for odd s by the elimination of a_1 (through ν). A similar procedure may be adopted in the more general case where the boundary conditions involve linear combinations of y and y′.

The sets of equations (9) and (10) may again be solved by an iterative Gauss–Seidel procedure. It should be noted, however, that the coefficient of a_s in (9) contains terms of order s². This means that the set of equations has large terms away from the diagonal, and consequently the efficiency of the iterative process will be diminished; indeed, in some cases the process may not converge at all. In these circumstances it may be preferable to discard the iterative method and use an elimination procedure to solve the first (n − 1) of equations (9) for the unknowns a_2, a_3, a_4, … a_n. A similar situation arises in boundary-value problems when the boundary conditions involve y′ as well as y. When, however, the boundary conditions involve y only, the s² terms do not occur; it will be seen that the set of equations (10) has a very strong diagonal, and the iterative process is therefore rapidly convergent. (Strictly speaking, of course, the convergence of the Gauss–Seidel process depends on the magnitude of the latent roots of an associated matrix, rather than on the strength of the diagonal; but a strong diagonal is usually a fair indication that the process will converge quickly.)

As an example, the equation

y″ − (x⁶ + 3x²)y = 0,  y(1) = y(−1) = 1,   (11)

will be considered.
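This example can be checked numerically. The sketch below (an illustrative check, assuming, from the partly illegible source, that the example is y″ − (x⁶ + 3x²)y = 0 with exact solution y = exp{¼(x⁴ − 1)}) interpolates the exact solution at Chebyshev points, differentiates its series twice with the relation a′_{r−1} − a′_{r+1} = 2r a_r used in the elimination above, and confirms that the differential equation is satisfied:

```python
import math

def cheb_interp_coeffs(f, n):
    """Chebyshev coefficients (Sigma' convention, first term halved) of the
    degree-n interpolant of f at the extrema x_j = cos(pi*j/n)."""
    xs = [math.cos(math.pi * j / n) for j in range(n + 1)]
    a = []
    for r in range(n + 1):
        s = sum((0.5 if j in (0, n) else 1.0) * f(xs[j]) * math.cos(math.pi * r * j / n)
                for j in range(n + 1))
        a.append(2.0 * s / n)
    a[n] *= 0.5  # fold the half-weight of the last term into the coefficient
    return a

def cheb_eval(a, x):
    """Evaluate Sigma' a_r T_r(x)."""
    t = math.acos(x)
    return 0.5 * a[0] + sum(a[r] * math.cos(r * t) for r in range(1, len(a)))

def cheb_derivative(a):
    """Coefficients of y' from those of y via a'_{r-1} = a'_{r+1} + 2r*a_r."""
    n = len(a) - 1
    d = [0.0] * (n + 1)
    for r in range(n - 1, -1, -1):
        d[r] = (d[r + 2] if r + 2 <= n else 0.0) + 2.0 * (r + 1) * a[r + 1]
    return d
```

With 24 interpolation points the residual y″ − (x⁶ + 3x²)y of the interpolated solution is negligible everywhere in the range, and y(0) agrees with e^(−¼) = 0·7788 to machine accuracy.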
The solution is clearly an even function of x, so that only a_0, a_2, a_4, a_6, … need be found. The steps in the solution are set out in Table 2; the final solution is the series

y = ½a_0 + a_2 T_2(x) + a_4 T_4(x) + ⋯ + a_12 T_12(x),   (12)

with the coefficients as computed in the final stage of Table 2. The correct solution of (11) is y = exp{¼(x⁴ − 1)}, so that y(0) = e^(−¼) = 0·7788; the value of y(0) obtained from the series (12) is 0·7788.

Eigenvalue problems
Since any linear differential equation can be transformed into an infinite set of simultaneous equations in the coefficients a_r, any linear eigenvalue problem can similarly be transformed into a determinantal equation of infinite dimensions. In general this equation can be solved approximately by ignoring all except the first n

Table 3
Steps in the solution of the eigenvalue problem y″ + λ(x + 1)y = 0, y(1) = y(−1) = 0; Λ = 1/λ.

rows and columns of the determinant. When, however, the eigenvalue problem involves a differential equation of the form

y″ + λP(x)y = 0,

a more powerful method of solution is available, based on the iterative procedure already described. This type of problem will therefore be considered in detail.

Suppose first that the boundary conditions are of the form y(−1) = y(1) = 0. With the same notation as before, it follows immediately from equation (10) that the Chebyshev coefficients satisfy a set of equations of the form

Σ_{s=2}^∞ b_rs a_s = Λ a_r,  r ≥ 2,   (13)

where Λ = 1/λ. The eigenvalues λ of the differential equation are therefore given immediately by the latent roots Λ of the infinite matrix [b_rs]; the determination of the fundamental eigenvalue (which is often the only one required) is equivalent to the evaluation of the largest latent root of this matrix. The iterative method suggested below for determining the fundamental eigenvalue is in fact a slight modification of the familiar iterative procedure for calculating a dominant latent root. In general a_2 may be expected to be the largest of the a_r's when a_0 and a_1 have been eliminated; it is convenient therefore to take a_2 as unity throughout. (A few obvious changes will be necessary if a_2 vanishes, e.g. if y is an odd function of x.) The iterative procedure is then defined by the formulae

Λ^(k) = Σ_{s=2}^∞ b_2s a_s^(k−1),
a_r^(k) = [Σ_{s=2}^{r−1} b_rs a_s^(k) + Σ_{s=r}^∞ b_rs a_s^(k−1)] / Λ^(k),  r ≥ 3,   (14)

with a_2^(k) = 1 throughout. It will usually be convenient to start with a_2^(0) = 1 and a_r^(0) = 0 for r ≥ 3. As before, only as many a_r's as are significant need be retained at each stage of the iteration. As an example, consider the equation

y″ + λ(x + 1)y = 0,  y(1) = y(−1) = 0.   (15)

The steps in the solution are set out in Table 3. It will be seen that Λ = 0·4220, so that λ = 2·3696; the correct fundamental eigenvalue of (15) is in fact 2·3696.
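The determinantal formulation can be tested on a case with a closed-form spectrum. The sketch below is an illustrative stand-in, not the paper's program: it uses the constant-coefficient problem y″ + λy = 0, y(1) = y(−1) = 0, whose fundamental eigenvalue is exactly π²/4, builds the matrix [b_rs] from the standard three-term relation between the Chebyshev coefficients of y and y″, eliminates a_0 and a_1 with the boundary conditions, and runs the power iteration with a_2 held at unity:

```python
import math

def fundamental_eigenvalue_dirichlet(N=40, iters=100):
    """Power iteration for the dominant latent root Lambda = 1/lambda of
    the matrix arising from y'' + lambda*y = 0, y(1) = y(-1) = 0, via
    Lambda*a_r = -a_{r-2}/(4r(r-1)) + a_r/(2(r^2-1)) - a_{r+2}/(4r(r+1))."""
    def apply(a):
        out = [0.0] * (N + 1)
        for r in range(2, N + 1):
            v = a[r] / (2.0 * (r * r - 1.0))
            if r + 2 <= N:
                v -= a[r + 2] / (4.0 * r * (r + 1))
            if r >= 4:
                v -= a[r - 2] / (4.0 * r * (r - 1))
            elif r == 2:  # a_0 = -2(a_2 + a_4 + ...) from y(1) + y(-1) = 0
                v += 2.0 * sum(a[s] for s in range(2, N + 1, 2)) / (4.0 * r * (r - 1))
            else:         # r == 3: a_1 = -(a_3 + a_5 + ...) from y(1) - y(-1) = 0
                v += sum(a[s] for s in range(3, N + 1, 2)) / (4.0 * r * (r - 1))
            out[r] = v
        return out
    a = [0.0] * (N + 1)
    a[2] = 1.0                    # a_2 held at unity, as in the text
    lam = 1.0
    for _ in range(iters):
        a = apply(a)
        lam = a[2]                # latest estimate of Lambda
        a = [x / lam for x in a]  # renormalize so that a_2 = 1
    return 1.0 / lam              # the eigenvalue lambda
```

The iteration converges at the ratio of successive eigenvalues (here 1/9 per step, since the even and odd coefficients decouple), and returns λ ≈ π²/4 = 2·4674.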
The corresponding solution of the differential equation is

y = A[½a_0 + a_1 T_1(x) + T_2(x) + a_3 T_3(x) + ⋯],

where A is an arbitrary constant, a_2 has been taken as unity, and the remaining coefficients are those computed in Table 3 (the coefficient of T_6(x), for instance, being −0·0007).

The above method can also be used when the boundary conditions involve y′ as well as y. For differential equations of the form y″ + λP(x)y = 0, equation (8) can be written in the form

Σ_{s=0}^∞ c_rs a_s = Λ a_r,  r ≥ 2,   (16)

where Λ = 1/λ as before. The two boundary conditions may be used to eliminate a_0 and a_1 from (16), so that it reduces to the form

Σ_{s=2}^∞ b_rs a_s = Λ a_r,  r ≥ 2.   (17)

The values of Λ and the a_r can then be determined by the iterative procedure (14). For example, for the equation

y″ + λy = 0,  y(1) = 0,  y′(−1) = 0,   (18)

equation (16) reduces to

−a_{r−2}/(4r(r−1)) + a_r/(2(r² − 1)) − a_{r+2}/(4r(r+1)) = Λ a_r,  r ≥ 2,
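This mixed-boundary-condition eigenproblem can also be sketched in code (again an illustrative reimplementation under the reconstruction given here, using the three-term relation just stated and eliminating a_0 and a_1 with ½a_0 + a_1 + a_2 + ⋯ = 0 and a_1 − 4a_2 + 9a_3 − 16a_4 + ⋯ = 0):

```python
import math

def fundamental_eigenvalue_mixed(N=40, iters=150):
    """Power iteration for Lambda = 1/lambda of y'' + lambda*y = 0 with
    y(1) = 0 and y'(-1) = 0; the fundamental eigenvalue is pi^2/16."""
    def apply(a):
        # recover a_1 and a_0 from the two boundary conditions
        a1 = sum((-1) ** s * s * s * a[s] for s in range(2, N + 1))
        a0 = -2.0 * (a1 + sum(a[s] for s in range(2, N + 1)))
        full = [a0, a1] + a[2:]
        def get(r):
            return full[r] if r <= N else 0.0
        out = [0.0] * (N + 1)
        for r in range(2, N + 1):
            out[r] = (-get(r - 2) / (4.0 * r * (r - 1))
                      + get(r) / (2.0 * (r * r - 1.0))
                      - get(r + 2) / (4.0 * r * (r + 1)))
        return out
    a = [0.0] * (N + 1)
    a[2] = 1.0
    lam = 1.0
    for _ in range(iters):
        a = apply(a)
        lam = a[2]
        a = [x / lam for x in a]
    return 1.0 / lam
```

Note that the elimination of a_1 introduces coefficients of order s² into two rows of the matrix, exactly the feature warned about above; the power iteration nevertheless converges here because the trial coefficients decay rapidly.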

Table 4
Steps in the solution of the eigenvalue problem y″ + λy = 0, y(1) = 0, y′(−1) = 0; Λ = 1/λ.

and the boundary conditions give

½a_0 + a_1 + a_2 + a_3 + a_4 + ⋯ = 0,
a_1 − 4a_2 + 9a_3 − 16a_4 + 25a_5 − ⋯ = 0.

These may be used to eliminate a_0 and a_1, after which the iterative procedure (14) may be applied. The steps in the solution are set out in Table 4. It will be seen that Λ = 1·621, so that λ = 0·6169; the correct value of λ is, of course, π²/16 = 0·61685.

In order to find eigenvalues other than the fundamental, it is necessary to eliminate the dominant latent root from the matrix [b_rs]; the method of doing this has been well described in Modern Computing Methods (1961), and need not be repeated here. When this root has been removed, the next eigenvalue may be found by an iterative procedure similar to that described above.

Conclusions
It has been shown that Chebyshev expansions may be used to find the numerical solutions of a general class of linear differential equations; but the method appears to be particularly advantageous for the following three types of problem:
(a) first-order equations of the form y′ + P(x)y = F(x), where P(x) is of order unity or smaller, and systems of such equations;
(b) second-order equations of the form y″ + P(x)y = F(x), especially when the boundary conditions specify values of y only;
(c) eigenvalue problems involving differential equations of the form y″ + λP(x)y = 0.
For each of these types of problem, iterative procedures are available which are particularly suitable for automatic computation.

References
CLENSHAW, C. W. (1957). "The Numerical Solution of Linear Differential Equations in Chebyshev Series," Proc. Camb. Phil. Soc., Vol. 53, p. 134.
CLENSHAW, C. W., and NORTON, H. J. (1963). "The solution of non-linear ordinary differential equations in Chebyshev series," The Computer Journal, Vol. 6, p. 88.
FOX, L. (1962). "Chebyshev Methods for Ordinary Differential Equations," The Computer Journal, Vol. 4, p. 318.
Modern Computing Methods (1961). N.P.L.
Notes on Applied Science, No. 16 (H.M.S.O.).