A collocation method for solving some integral equations in distributions


Sapto W. Indratno
Department of Mathematics, Kansas State University, Manhattan, KS 66506-2602, USA
sapto@math.ksu.edu

A. G. Ramm
Department of Mathematics, Kansas State University, Manhattan, KS 66506-2602, USA
ramm@math.ksu.edu

Abstract. A collocation method is presented for the numerical solution of a typical integral equation Rh := ∫_D R(x,y) h(y) dy = f(x), x ∈ D, of the class R, whose kernels are positive rational functions of arbitrary selfadjoint elliptic operators defined in the whole space R^r; here D ⊂ R^r is a bounded domain. Several numerical examples are given to demonstrate the efficiency and stability of the proposed method.

MSC: 45A05, 45P05, 46F05, 62M40, 65R20, 74H15

Key words: integral equations in distributions, signal estimation, collocation method.

1 Introduction

In [4] a general theory of integral equations of the class R was developed. The integral equations of the class R are written in the following form:

    Rh := ∫_D R(x,y) h(y) dy = f(x),   x ∈ D̄ := D ∪ Γ,   (1)

where D ⊂ R^r is a (bounded) domain with a (smooth) boundary Γ. Here the kernel R(x,y) has the following form [4, 5, 6, 7]:

    R(x,y) = ∫_Λ P(λ) Q^{-1}(λ) Φ(x,y,λ) dρ(λ),   (2)

where P(λ), Q(λ) > 0 are polynomials, deg P = p, deg Q = q, q > p, and Φ, ρ, Λ are, respectively, the spectral kernel, the spectral measure, and the spectrum of a selfadjoint

elliptic operator L on L^2(R^r) of order s. It was also proved in [4] that R : Ḣ^{-α}(D) → H^{α}(D) is an isomorphism, where H^{α}(D) is the Sobolev space, Ḣ^{-α}(D) is its dual space with respect to the L^2(D) inner product, and α = s(q-p)/2.

In this paper we consider a particular type of integral equation of the class R with D = (-1, 1), r = 1, L = -i d/dx, Λ = (-∞, ∞), dρ(λ) = dλ, Φ(x,y,λ) = e^{iλ(x-y)}/(2π), P(λ) = 1, Q(λ) = (λ^2+1)/2, s = 1, p = 0, q = 2, α = 1, i.e.,

    Rh(x) := ∫_{-1}^{1} e^{-|x-y|} h(y) dy = f(x),   (3)

where h ∈ Ḣ^{-1}([-1,1]) and f ∈ H^1([-1,1]). We denote the inner product and norm in H^1([-1,1]) by

    ⟨u, v⟩_1 := ∫_{-1}^{1} ( u(x) v̄(x) + u'(x) v̄'(x) ) dx,   u, v ∈ H^1([-1,1]),   (4)

    ‖u‖_1^2 := ∫_{-1}^{1} ( |u(x)|^2 + |u'(x)|^2 ) dx,   (5)

respectively, where the primes denote derivatives and the bar stands for complex conjugation. If u and v are real-valued functions in H^1([-1,1]), then the bars in (4) can be dropped. Note that if f is a complex-valued function, then solving equation (3) is equivalent to solving the two equations

    ∫_{-1}^{1} e^{-|x-y|} h_k(y) dy = f_k(x),   k = 1, 2,   (6)

where h_1(x) := Re h(x), h_2(x) := Im h(x), f_1(x) := Re f(x), f_2(x) := Im f(x), h(x) = h_1(x) + i h_2(x), and i = √-1. Therefore, without loss of generality we assume throughout that f(x) is real-valued.

It was proved in [5] that the operator R defined in (3) is an isomorphism between Ḣ^{-1}([-1,1]) and H^1([-1,1]). Therefore, problem (3) is well posed in the sense that small changes of the data f(x) in the H^1([-1,1]) norm result in small changes of the solution h(y) in the Ḣ^{-1}([-1,1]) norm. Moreover, the solution to (3) can be written in the following form:

    h(x) = a_{-1} δ(x+1) + a_0 δ(x-1) + g(x),   (7)

where

    a_{-1} := ( f(-1) - f'(-1) )/2,   a_0 := ( f'(1) + f(1) )/2,   (8)

    g(x) := ( -f''(x) + f(x) )/2,   (9)

and δ(x) is the delta function. Here and throughout this paper we assume that f ∈ C^{α}([-1,1]), α ≥ 2. It follows from (8) that h(x) = g(x) if and only if f(-1) = f'(-1) and f(1) = -f'(1).
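The reconstruction formulas (7)-(9) can be checked numerically: applying the kernel e^{-|x-y|} to the continuous part g and adding the contributions of the two delta functions should reproduce f. A minimal sketch, where the test function f is an arbitrary smooth choice (not one of the paper's numerical examples):

```python
import numpy as np

# Check (7)-(9): for smooth f, h = a_{-1} delta(x+1) + a_0 delta(x-1) + g solves
#   Rh(x) = int_{-1}^{1} exp(-|x-y|) h(y) dy = f(x),
# with g = (f - f'')/2, a_{-1} = (f(-1) - f'(-1))/2, a_0 = (f'(1) + f(1))/2.
f   = lambda x: np.exp(x) + x**2        # arbitrary smooth test function
fp  = lambda x: np.exp(x) + 2*x         # f'
fpp = lambda x: np.exp(x) + 2.0         # f''

g    = lambda x: (f(x) - fpp(x))/2
a_m1 = (f(-1.0) - fp(-1.0))/2           # coefficient of delta(x+1)
a_0  = (fp(1.0) + f(1.0))/2             # coefficient of delta(x-1)

y  = np.linspace(-1.0, 1.0, 20001)      # fine quadrature grid
dy = y[1] - y[0]
trap = lambda v: dy*(v[0]/2 + v[1:-1].sum() + v[-1]/2)   # trapezoid rule

def Rh(x):
    # R applied to the deltas gives exp(-(x+1)) and exp(-(1-x));
    # the continuous part g is integrated numerically
    smooth = trap(np.exp(-np.abs(x - y))*g(y))
    return smooth + a_m1*np.exp(-(x + 1)) + a_0*np.exp(-(1 - x))

xs  = np.linspace(-1.0, 1.0, 11)
err = max(abs(Rh(x) - f(x)) for x in xs)
print(err < 1e-6)                        # True: Rh reproduces f
```

For this f one gets a_{-1} = 3/2 and a_0 = e + 3/2, both nonzero, so the distributional part of h is genuinely present.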

In [6, 7] the problem of solving equation (3) numerically was posed and solved. The least squares method was used in these papers. The goal of this paper is to develop a version of the collocation method which can be applied easily and is numerically efficient. In [8] some basic ideas for using a collocation method are proposed. In this paper some of these ideas are used, and new ideas, related to the choice of the basis functions, are introduced. The emphasis is on the development of a methodology for solving the basic equation (1) of estimation theory by a version of the collocation method. The novelty of this version consists in the minimization of a discrepancy functional (26), see below. This methodology is illustrated by a detailed analysis applied to solving equation (3), but it is applicable to general equations of the class R. One of the goals of this paper is to demonstrate that the collocation method can be successfully applied to the numerical solution of some integral equations whose solutions are distributions, provided that the theoretical analysis gives sufficient information about the singular part of the solutions.

Since f ∈ C^{α}([-1,1]), α ≥ 2, it follows from (9) that g ∈ C([-1,1]). Therefore, there exist basis functions φ_j(x) ∈ C([-1,1]), j = 1, 2, ..., m, such that

    max_{x∈[-1,1]} |g(x) - g_m(x)| → 0 as m → ∞,   (10)

where

    g_m(x) := Σ_{j=1}^{m} c_j^{(m)} φ_j(x),   (11)

and c_j^{(m)}, j = 1, 2, ..., m, are constants. Hence the approximate solution of equation (3) can be represented by

    h_m(x) = c_{-1}^{(m)} δ(x+1) + c_0^{(m)} δ(x-1) + g_m(x),   (12)

where c_j^{(m)}, j = -1, 0, are constants and g_m(x) is defined in (11).

The basis functions φ_j play an important role in our method. It is proved in Section 3 that the basis functions φ_j in (12) can be chosen from the linear B-splines. The usage of the linear B-splines reduces the computation time, because computing (12) at a particular point x requires at most two out of the m basis functions φ_j. For a more detailed discussion of the family of B-splines we refer to [10].
In Section 2 we derive a method for obtaining the coefficients c_j^{(m)}, j = -1, 0, 1, ..., m, in (12). This method is based on solving a finite-dimensional least squares problem (see equation (33) below) and differs from the usual collocation method discussed in [2] and [3]. We approximate ‖f - Rh_m‖_1^2 by a quadrature formula. The resulting finite-dimensional linear algebraic system depends on the choice of the basis functions. Using linear B-splines as the basis functions, we prove the existence and uniqueness of the solution to this linear algebraic system for all m = m(n) depending on the number n of collocation points used in the left rectangle quadrature rule. The convergence of our collocation method is also proved in Section 2. An example of the choice of the basis functions which yields the convergence of our version of the collocation method is given in Section 3. In

Section 4 we give numerical results of applying our method to several problems discussed in [7].

2 The collocation method

In this Section we derive a collocation method for solving equation (3). From equation (3) we get

    Rh(x) = a_{-1} e^{-(x+1)} + a_0 e^{-(1-x)} + e^{-x} ∫_{-1}^{x} e^{y} g(y) dy + e^{x} ∫_{x}^{1} e^{-y} g(y) dy = f(x).   (13)

Assuming that f ∈ C^2([-1,1]) and differentiating the above equation, one obtains

    (Rh)'(x) = -a_{-1} e^{-(x+1)} + a_0 e^{-(1-x)} - e^{-x} ∫_{-1}^{x} e^{y} g(y) dy + e^{x} ∫_{x}^{1} e^{-y} g(y) dy = f'(x).   (14)

Thus, f(x) and f'(x) are continuous in the interval [-1,1]. Let us use the approximate solution given in (12). From (13), (14) and (12) we obtain

    Rh_m(x) = c_{-1}^{(m)} e^{-(x+1)} + c_0^{(m)} e^{-(1-x)} + Σ_{j=1}^{m} c_j^{(m)} ( e^{-x} ∫_{-1}^{x} e^{y} φ_j(y) dy + e^{x} ∫_{x}^{1} e^{-y} φ_j(y) dy ) := f_m(x),   (15)

    (Rh_m)'(x) = -c_{-1}^{(m)} e^{-(x+1)} + c_0^{(m)} e^{-(1-x)} + Σ_{j=1}^{m} c_j^{(m)} ( -e^{-x} ∫_{-1}^{x} e^{y} φ_j(y) dy + e^{x} ∫_{x}^{1} e^{-y} φ_j(y) dy ) := f_m'(x).   (16)

Thus, Rh_m(x) and (Rh_m)'(x) are continuous in the interval [-1,1]. Since f(x) - Rh_m(x) and f'(x) - (Rh_m)'(x) are continuous in the interval [-1,1], the functions

    J_{1,m}(x) := [ f(x) - Rh_m(x) ]^2   (17)

and

    J_{2,m}(x) := [ f'(x) - (Rh_m)'(x) ]^2   (18)

are Riemann-integrable over the interval [-1,1].
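The splitting of the kernel used in (13)-(16), namely e^{-|x-y|} = e^{-x} e^{y} for y ≤ x and e^{-|x-y|} = e^{x} e^{-y} for y ≥ x, is what makes all the integrals above computable in closed form. A quick numerical sanity check of this identity (g below is an arbitrary smooth sample):

```python
import numpy as np

# Check: int_{-1}^{1} exp(-|x-y|) g(y) dy
#      = exp(-x) int_{-1}^{x} exp(y) g(y) dy + exp(x) int_{x}^{1} exp(-y) g(y) dy
g = lambda y: 1.0/(1.0 + y**2)          # arbitrary smooth sample

y = np.linspace(-1.0, 1.0, 20001)

def trap(yy, vals):                     # trapezoid rule on the grid yy
    d = yy[1] - yy[0]
    return d*(vals[0]/2 + vals[1:-1].sum() + vals[-1]/2)

i = 13000                               # split exactly at a grid point
x = y[i]
lhs = trap(y, np.exp(-np.abs(x - y))*g(y))
rhs = (np.exp(-x)*trap(y[:i+1], np.exp(y[:i+1])*g(y[:i+1]))
       + np.exp(x)*trap(y[i:],  np.exp(-y[i:])*g(y[i:])))
print(abs(lhs - rhs) < 1e-12)           # True: the split reproduces the kernel
```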

Let us define

    q_{-1}(x) := e^{-(x+1)},   q_0(x) := e^{-(1-x)},   q_j(x) := ∫_{-1}^{1} e^{-|x-y|} φ_j(y) dy,   j = 1, 2, ..., m,   (19)

and define a mapping H_n : C^2([-1,1]) → R^{n×1} by the formula

    H_n φ = ( φ(x_1), φ(x_2), ..., φ(x_n) )^T,   φ ∈ C^2([-1,1]),   (20)

where

    R^{n×1} := { z = ( z_1, z_2, ..., z_n )^T ∈ R^n : z_j := z(x_j), z ∈ C^2([-1,1]) }   (21)

and the x_j are some collocation points which will be chosen later. We equip the space R^{n×1} with the following inner product and norm:

    ⟨u, v⟩_{w^{(n)},1} := Σ_{j=1}^{n} w_j^{(n)} ( u_j v_j + u_j' v_j' ),   u, v ∈ R^{n×1},   (22)

    ‖u‖_{w^{(n)},1}^2 := Σ_{j=1}^{n} w_j^{(n)} [ u_j^2 + (u_j')^2 ],   u ∈ R^{n×1},   (23)

respectively, where u_j := u(x_j), u_j' := u'(x_j), v_j := v(x_j), v_j' := v'(x_j), and the w_j^{(n)} > 0 are some quadrature weights corresponding to the collocation points x_j, j = 1, 2, ..., n. Applying H_n to Rh_m, one gets

    (H_n Rh_m)_i = c_{-1}^{(m)} e^{-(1+x_i)} + c_0^{(m)} e^{-(1-x_i)} + Σ_{j=1}^{m} c_j^{(m)} ( e^{-x_i} ∫_{-1}^{x_i} e^{y} φ_j(y) dy + e^{x_i} ∫_{x_i}^{1} e^{-y} φ_j(y) dy ),   i = 1, 2, ..., n,   (24)

where m = m(n) is an integer depending on n such that

    m(n) + 2 ≤ n,   lim_{n→∞} m(n) = ∞.   (25)

Let

    G_n(c) := ‖H_n(f - Rh_m)‖_{w^{(n)},1}^2,   (26)

where f and Rh_m are defined in (3) and (15), respectively, H_n is defined in (20), ‖·‖_{w^{(n)},1} is defined in (23), and c = ( c_{-1}, c_0, c_1, ..., c_m )^T ∈ R^{m+2}. Let us choose

    w_j^{(n)} = 2/n,   j = 1, 2, ..., n,   (27)

and

    x_j = -1 + (j-1)s,   s := 2/n,   j = 1, 2, ..., n,   (28)

so that ‖H_n(f - Rh_m)‖_{w^{(n)},1}^2 is the left Riemann sum of ‖f - Rh_m‖_1^2, i.e.,

    | ‖f - Rh_m‖_1^2 - ‖H_n(f - Rh_m)‖_{w^{(n)},1}^2 | := δ_n → 0 as n → ∞.   (29)

Remark 2.1. If J_{1,m} and J_{2,m} are in C^2([-1,1]), where J_{1,m} and J_{2,m} are defined in (17) and (18), respectively, then one may replace the weights w_j^{(n)} with the weights of the compound trapezoidal rule and get the estimate

    δ_n = | ∫_{-1}^{1} ( J_{1,m}(x) + J_{2,m}(x) ) dx - Σ_{j=1}^{n} w_j^{(n)} ( J_{1,m}(x_j) + J_{2,m}(x_j) ) | ≤ (1/(3n^2)) D_J,   (30)

where δ_n is defined in (29) and

    D_J := | J'_{1,m}(1) + J'_{2,m}(1) - ( J'_{1,m}(-1) + J'_{2,m}(-1) ) |.   (31)

Here we have used the following error estimate of the compound trapezoidal rule [1, 9]:

    | ∫_a^b η(x) dx - Σ_{j=1}^{n} w_j^{(n)} η(x_j) | ≤ ((b-a)^2/(12n^2)) | ∫_a^b η''(x) dx | = ((b-a)^2/(12n^2)) | η'(b) - η'(a) |,   (32)

where η ∈ C^2([a,b]). Therefore, if D_J ≤ C for all m, where C > 0 is a constant, then δ_n = O(1/n^2).

The constants c_j^{(m)} in the approximate solution h_m, see (12), are obtained by solving the following least squares problem:

    min_c G_n(c),   (33)

where G_n is defined in (26).
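The choice (27)-(28) is an ordinary left rectangle rule on [-1, 1]. A small sketch showing how the weighted sum (23) on this grid approaches the H^1 norm (5) as n grows (u below is an arbitrary smooth sample):

```python
import numpy as np

# Left-rectangle collocation grid of (27)-(28): x_j = -1 + (j-1)*s, w_j = s = 2/n.
def grid(n):
    s = 2.0/n
    return -1.0 + s*np.arange(n), np.full(n, s)

u  = lambda x: np.exp(x)                 # arbitrary smooth sample
up = lambda x: np.exp(x)                 # u'

def disc_norm2(n):                       # discrete norm (23) on the grid
    x, w = grid(n)
    return float(np.sum(w*(u(x)**2 + up(x)**2)))

exact = np.exp(2.0) - np.exp(-2.0)       # ||u||_1^2 = int (u^2 + u'^2) dx
errs = [abs(disc_norm2(n) - exact) for n in (40, 80, 160)]
print(errs)                              # O(1/n) decrease, as in (29)
```

Doubling n roughly halves the error, consistent with the first-order accuracy of the left rectangle rule for generic smooth integrands.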

A necessary condition for the minimum in (33) is

    0 = Σ_{l=1}^{n} w_l^{(n)} ( E_{m,l} ∂E_{m,l}/∂c_k + E'_{m,l} ∂E'_{m,l}/∂c_k ),   k = -1, 0, 1, ..., m,   (34)

where

    E_{m,l} := (f - Rh_m)(x_l),   E'_{m,l} := (f - Rh_m)'(x_l),   l = 1, 2, ..., n.   (35)

Necessary condition (34) yields the following linear algebraic system (LAS):

    A_{m+2} c = F_{m+2},   (36)

where c ∈ R^{m+2} and A_{m+2} is a square, symmetric matrix with the following entries:

    (A_{m+2})_{1,1} := 2 Σ_{l=1}^{n} w_l^{(n)} e^{-2(x_l+1)},   (A_{m+2})_{1,2} = 0,
    (A_{m+2})_{1,j} := 2 Σ_{l=1}^{n} w_l^{(n)} C_{l,j-2} e^{-(x_l+1)},   j = 3, ..., m+2,   (37)

    (A_{m+2})_{2,2} := 2 Σ_{l=1}^{n} w_l^{(n)} e^{-2(1-x_l)},
    (A_{m+2})_{2,j} := 2 Σ_{l=1}^{n} w_l^{(n)} B_{l,j-2} e^{-(1-x_l)},   j = 3, ..., m+2,   (38)

    (A_{m+2})_{i,j} := 2 Σ_{l=1}^{n} w_l^{(n)} ( B_{l,i-2} B_{l,j-2} + C_{l,i-2} C_{l,j-2} ),   i = 3, ..., m+2,   j = i, ..., m+2,
    (A_{m+2})_{j,i} = (A_{m+2})_{i,j},   i, j = 1, 2, ..., m+2,   (39)

and F_{m+2} is a vector in R^{m+2} with the following elements:

    (F_{m+2})_1 := Σ_{l=1}^{n} w_l^{(n)} ( f(x_l) - f'(x_l) ) e^{-(x_l+1)} = ⟨H_n Rh, H_n q_{-1}⟩_{w^{(n)},1},

    (F_{m+2})_2 := Σ_{l=1}^{n} w_l^{(n)} ( f(x_l) + f'(x_l) ) e^{-(1-x_l)} = ⟨H_n Rh, H_n q_0⟩_{w^{(n)},1},   (40)

    (F_{m+2})_i := Σ_{l=1}^{n} w_l^{(n)} [ f(x_l) ( C_{l,i-2} + B_{l,i-2} ) + f'(x_l) ( B_{l,i-2} - C_{l,i-2} ) ] = ⟨H_n Rh, H_n q_{i-2}⟩_{w^{(n)},1},   i = 3, ..., m+2,

where

    B_{l,j} := ∫_{x_l}^{1} e^{-(y-x_l)} φ_j(y) dy,   C_{l,j} := ∫_{-1}^{x_l} e^{-(x_l-y)} φ_j(y) dy.   (41)

Theorem 2.2. Assume that the vectors H_n q_j, j = -1, 0, 1, ..., m, are linearly independent. Then linear algebraic system (36) is uniquely solvable for all m, where m is an integer depending on n such that (25) holds.

Proof. Consider q_j ∈ H^1([-1,1]) defined in (19). Using the inner product in R^{n×1}, one gets

    (A_{m+2})_{i,j} = ⟨H_n q_{i-2}, H_n q_{j-2}⟩_{w^{(n)},1},   i, j = 1, 2, ..., m+2,   (42)

i.e., A_{m+2} is a Gram matrix. We have assumed that the vectors H_n q_j ∈ R^{n×1}, j = -1, 0, 1, ..., m, are linearly independent. Therefore, the determinant of the matrix A_{m+2} is nonzero. This implies that linear algebraic system (36) has a unique solution. Theorem 2.2 is proved.

It is possible to choose basis functions φ_j such that the vectors H_n q_j, j = -1, 0, 1, ..., m, are linearly independent. An example of such a choice of the basis functions is given in Section 3.

Lemma 2.3. Let y_m be the unique minimizer of problem (33). Then

    G_n(y_m) → 0 as n → ∞,   (43)

where G_n is defined in (26) and m is an integer depending on n such that (25) holds.

Proof. Let h(x) = a_{-1} δ(x+1) + a_0 δ(x-1) + g(x) be the exact solution to (3), Rh = f, where g(x) ∈ C([-1,1]), and define

    h̃_m(x) = a_{-1} δ(x+1) + a_0 δ(x-1) + g̃_m(x),   (44)

where

    g̃_m(x) := Σ_{j=1}^{m} a_j φ_j(x).   (45)

Choose g̃_m(x) so that

    max_{x∈[-1,1]} |g(x) - g̃_m(x)| → 0 as m → ∞.   (46)

Then

    G_n(y_m) ≤ ‖H_n(f - Rh̃_m)‖_{w^{(n)},1}^2,   (47)

because y_m is the unique minimizer of G_n. Let us prove that ‖H_n(f - Rh̃_m)‖_{w^{(n)},1}^2 → 0 as n → ∞. Let

    W_{1,m}(x) := f(x) - Rh̃_m(x),   W_{2,m}(x) := f'(x) - (Rh̃_m)'(x).

Then

    W_{1,m}(x) = e^{-x} ∫_{-1}^{x} e^{y} ( g(y) - g̃_m(y) ) dy + e^{x} ∫_{x}^{1} e^{-y} ( g(y) - g̃_m(y) ) dy   (48)

and

    W_{2,m}(x) = -e^{-x} ∫_{-1}^{x} e^{y} ( g(y) - g̃_m(y) ) dy + e^{x} ∫_{x}^{1} e^{-y} ( g(y) - g̃_m(y) ) dy.   (49)

Thus, the functions [W_{1,m}(x)]^2 and [W_{2,m}(x)]^2 are Riemann-integrable. Therefore,

    δ̃_n := | ‖f - Rh̃_m‖_1^2 - ‖H_n(f - Rh̃_m)‖_{w^{(n)},1}^2 | → 0 as n → ∞.   (50)

Formula (50) and the triangle inequality yield

    ‖H_n(f - Rh̃_m)‖_{w^{(n)},1}^2 ≤ δ̃_n + ‖f - Rh̃_m‖_1^2.   (51)

Let us derive an estimate for ‖f - Rh̃_m‖_1^2. From (48) and (49) we obtain the estimates

    |W_{1,m}(x)| ≤ max_{y∈[-1,1]} |g(y) - g̃_m(y)| ( e^{-x} ∫_{-1}^{x} e^{y} dy + e^{x} ∫_{x}^{1} e^{-y} dy )
        = max_{y∈[-1,1]} |g(y) - g̃_m(y)| [ e^{-x} ( e^{x} - e^{-1} ) + e^{x} ( e^{-x} - e^{-1} ) ]
        = max_{y∈[-1,1]} |g(y) - g̃_m(y)| [ 2 - e^{-(1+x)} - e^{-(1-x)} ] ≤ δ_{m,1}   (52)

and

    |W_{2,m}(x)| ≤ max_{y∈[-1,1]} |g(y) - g̃_m(y)| ( e^{-x} ∫_{-1}^{x} e^{y} dy + e^{x} ∫_{x}^{1} e^{-y} dy ) ≤ δ_{m,1},   (53)

where

    δ_{m,1} := 2 max_{y∈[-1,1]} |g(y) - g̃_m(y)|.   (54)

Therefore, it follows from (52) and (53) that

    ‖f - Rh̃_m‖_1^2 = ∫_{-1}^{1} |W_{1,m}(x)|^2 dx + ∫_{-1}^{1} |W_{2,m}(x)|^2 dx ≤ 4 δ_{m,1}^2,   (55)

where δ_{m,1} is defined in (54). Using relation (46), we obtain lim_{m→∞} δ_{m,1} = 0. Since m = m(n) and lim_{n→∞} m(n) = ∞, it follows from (51) and (55) that ‖H_n(f - Rh̃_m)‖_{w^{(n)},1}^2 → 0 as n → ∞. This together with (47) implies G_n(y_m) → 0 as n → ∞. Lemma 2.3 is proved.
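Theorem 2.2 identifies A_{m+2} with the Gram matrix (42) of the vectors H_n q_j under the weighted inner product (22), so unique solvability reduces to linear independence of those vectors. A toy illustration of this structure, with monomials standing in for the paper's q_j (an assumption made purely for the sketch; any independent family behaves the same way):

```python
import numpy as np

# Gram matrix of sampled functions under the weighted H^1-type product (22):
#   <u, v> = sum_l w_l (u_l v_l + u'_l v'_l).
n, m = 20, 5
s = 2.0/n
x = -1.0 + s*np.arange(n)                        # grid (28)
w = np.full(n, s)                                # weights (27)

# toy basis: monomials 1, x, ..., x^{m-1} (stand-ins for the q_j)
V  = np.column_stack([x**k for k in range(m)])   # values at the nodes
Vp = np.column_stack([np.zeros(n) if k == 0 else k*x**(k-1) for k in range(m)])

A = V.T @ (w[:, None]*V) + Vp.T @ (w[:, None]*Vp)   # Gram matrix, cf. (42)

eig = np.linalg.eigvalsh(A)
print(np.allclose(A, A.T), eig.min() > 0)        # symmetric positive definite
```

Because the sampled columns are linearly independent, the Gram matrix is symmetric positive definite, so the normal equations (36) have a unique solution.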

Theorem 2.4. Let the vector c_min := ( c_{-1}^{(m)}, c_0^{(m)}, c_1^{(m)}, ..., c_m^{(m)} )^T ∈ R^{m+2} solve linear algebraic system (36), and let

    h_m(x) = c_{-1}^{(m)} δ(x+1) + c_0^{(m)} δ(x-1) + Σ_{j=1}^{m} c_j^{(m)} φ_j(x).

Then

    ‖h - h_m‖_{Ḣ^{-1}} → 0 as n → ∞.   (56)

Proof. We have

    ‖h - h_m‖_{Ḣ^{-1}([-1,1])}^2 = ‖R^{-1}(f - Rh_m)‖_{Ḣ^{-1}([-1,1])}^2
        ≤ ‖R^{-1}‖_{H^1([-1,1])→Ḣ^{-1}([-1,1])}^2 ‖f - Rh_m‖_1^2
        ≤ C ( G_n(c_min) + | ‖f - Rh_m‖_1^2 - G_n(c_min) | )
        ≤ C [ G_n(c_min) + δ_n ] → 0 as n → ∞,   (57)

where C > 0 is a constant and Lemma 2.3 was used. Theorem 2.4 is proved.

3 The choice of collocation points and basis functions

In this section we give an example of the collocation points x_i, i = 1, 2, ..., n, and basis functions φ_j, j = 1, 2, ..., m, such that the vectors H_n q_j, j = -1, 0, 1, ..., m, are linearly independent, where m + 2 ≤ n and

    (H_n q_{-1})_i = e^{-(1+x_i)},   (H_n q_0)_i = e^{-(1-x_i)},   i = 1, 2, ..., n,   (58)

    (H_n q_j)_i = e^{-x_i} ∫_{-1}^{x_i} e^{y} φ_j(y) dy + e^{x_i} ∫_{x_i}^{1} e^{-y} φ_j(y) dy,   i = 1, 2, ..., n,   j = 1, 2, ..., m.   (59)

Figure 1: The structure of the basis functions φ_j

Let us choose w_j^{(n)} and x_j as in (27) and (28), respectively, with an even number n ≥ 6. As the basis functions in C([-1,1]) we choose the following linear B-splines:

    φ_1(x) = ψ_2(x) for x_1 ≤ x ≤ x_3, and φ_1(x) = 0 otherwise;

    φ_j(x) = ψ_1( x - 2(j-1)s ) for x_{2j-3} ≤ x ≤ x_{2j-1},
    φ_j(x) = ψ_2( x - 2(j-1)s ) for x_{2j-1} ≤ x ≤ x_{2j+1},
    φ_j(x) = 0 otherwise,   j = 2, ..., m-1;

    φ_m(x) = ψ_1( x - 2(m-1)s ) for x_{n-1} ≤ x ≤ 1, and φ_m(x) = 0 otherwise,   (60)

where m = n/2 + 1, s := 2/n, and

    ψ_1(x) := ( x - x_1 )/(2s) + 1,   ψ_2(x) := 1 - ( x - x_1 )/(2s).   (61)

Here we have chosen x_{2j-1}, j = 1, 2, ..., n/2 + 1, as the knots of the linear B-splines. From Figure 1 we can see that for each j = 2, ..., m-1, φ_j(x) is a hat function. The advantage of using these basis functions is the following: at most two basis functions are needed for computing the solution h_m(x) at any point x ∈ (-1, 1), because

    h_m(x) = c_l^{(m)},   x = x_{2l-1},   l = 2, ..., n/2,
    h_m(x) = c_l^{(m)} φ_l(x) + c_{l+1}^{(m)} φ_{l+1}(x),   x_{2l-1} < x < x_{2l+1}.   (62)
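The practical payoff of (60)-(62) is that evaluating g_m is just piecewise-linear interpolation of the coefficient vector: only the two hats straddling x are nonzero. A sketch with a generic hat basis on uniform knots (the boundary half-hats here differ slightly from the paper's indexing):

```python
import numpy as np

# Hat functions: phi_j equals 1 at knot j and 0 at all other knots.
m = 9
knots = np.linspace(-1.0, 1.0, m)

def phi(j, x):                       # j-th hat = interpolation of a unit vector
    e = np.zeros(m); e[j] = 1.0
    return np.interp(x, knots, e)

def g_m(c, x):                       # sum_j c_j phi_j(x), cf. (11)
    return np.interp(x, knots, c)    # at most two hats contribute at any x

x = np.linspace(-1.0, 1.0, 501)
total = sum(phi(j, x) for j in range(m))
print(np.allclose(total, 1.0))       # True: the hats form a partition of unity

c = np.cos(knots)                    # arbitrary coefficients
print(np.allclose(g_m(c, knots), c)) # True: g_m interpolates c at the knots
```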

From the structure of the basis functions φ_j we have

    φ_1(x) = 0 for x_3 ≤ x ≤ 1;   φ_j(x) = 0 for -1 ≤ x ≤ x_{2j-3} and x_{2j+1} ≤ x ≤ 1, j = 2, 3, ..., m-1;   φ_m(x) = 0 for -1 ≤ x ≤ x_{n-1}.   (63)

Let (H_n q_j)_i be the i-th element of the vector H_n q_j, j = -1, 0, 1, ..., m. Then

    (H_n q_{-1})_i = e^{-(1+x_i)},   i = 1, 2, ..., n,   (64)

and

    (H_n q_0)_i = e^{-(1-x_i)},   i = 1, 2, ..., n.   (65)

Using (63) in (59), we obtain

    (H_n q_1)_i = ( e^{-2s} + 2s - 1 )/(2s),   i = 1,
    (H_n q_1)_i = 1 - e^{-s},   i = 2,
    (H_n q_1)_i = e^{1-(i-1)s} C_1,   3 ≤ i ≤ n;

    (H_n q_j)_i = e^{-1+(i-1)s} D_j,   1 ≤ i ≤ 2j-3,   j = 2, 3, ..., m-1,
    (H_n q_j)_{2j-2} = ( e^{-3s} - e^{-s} + 2s )/(2s),   j = 2, 3, ..., m-1,
    (H_n q_j)_{2j-1} = ( e^{-2s} + 2s - 1 )/s,   j = 2, 3, ..., m-1,
    (H_n q_j)_{2j} = (H_n q_j)_{2j-2},   j = 2, 3, ..., m-1,
    (H_n q_j)_i = e^{1-(i-1)s} C_j,   2j+1 ≤ i ≤ n,   j = 2, 3, ..., m-1;

    (H_n q_m)_i = e^{-1+(i-1)s} D_m,   1 ≤ i ≤ n-1,
    (H_n q_m)_i = 1 - e^{-s},   i = n,   (66)

where

    C_1 := ∫_{x_1}^{x_3} e^{y} φ_1(y) dy = ( e^{2s} - 2s - 1 )/(2es),
    C_j := ∫_{x_{2j-3}}^{x_{2j+1}} e^{y} φ_j(y) dy = e^{-1+2(j-2)s} ( e^{2s} - 1 )^2/(2s),   j = 2, 3, ..., m-1,   (67)

and

    D_1 := ∫_{x_1}^{x_3} e^{-y} φ_1(y) dy = e ( e^{-2s} + 2s - 1 )/(2s),
    D_j := ∫_{x_{2j-3}}^{x_{2j+1}} e^{-y} φ_j(y) dy = e^{1-2js} ( e^{2s} - 1 )^2/(2s),   j = 2, 3, ..., m-1,
    D_m := ∫_{x_{n-1}}^{1} e^{-y} φ_m(y) dy = ( e^{2s} - 2s - 1 )/(2es).   (68)
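The closed forms in (67)-(68) for the exponential moments of an interior hat function can be spot-checked by quadrature:

```python
import numpy as np

# For an interior hat phi_j centered at the knot t = x_{2j-1} = -1 + 2(j-1)s
# with half-width 2s, (67)-(68) give
#   C_j = int e^{y}  phi_j dy = exp(-1 + 2(j-2)s) (exp(2s) - 1)^2 / (2s),
#   D_j = int e^{-y} phi_j dy = exp(1 - 2js)      (exp(2s) - 1)^2 / (2s).
n, j = 12, 3
s = 2.0/n
t = -1.0 + 2*(j - 1)*s

y  = np.linspace(t - 2*s, t + 2*s, 40001)
dy = y[1] - y[0]
hat  = 1.0 - np.abs(y - t)/(2*s)
trap = lambda v: dy*(v[0]/2 + v[1:-1].sum() + v[-1]/2)

C_num, D_num = trap(np.exp(y)*hat), trap(np.exp(-y)*hat)
C_cf = np.exp(-1 + 2*(j - 2)*s)*(np.exp(2*s) - 1)**2/(2*s)
D_cf = np.exp(1 - 2*j*s)*(np.exp(2*s) - 1)**2/(2*s)
print(abs(C_num - C_cf) < 1e-8, abs(D_num - D_cf) < 1e-8)   # True True
```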

Theorem 3.1. Consider q_j defined in (19) with φ_j defined in (60). Let the collocation points x_j, j = 1, 2, ..., n, be defined in (28) with an even number n ≥ 6. Then the vectors H_n q_j, j = -1, 0, 1, 2, ..., m, m = n/2 + 1, s = 2/n, are linearly independent, where H_n is defined in (20).

Proof. Let

    V_0 := { H_n q_{-1}, H_n q_0 },   V_j := V_{j-1} ∪ { H_n q_j },   j = 1, 2, ..., m.   (69)

We prove that the elements of the sets V_j, j = 0, 1, ..., m, are linearly independent. The elements of the set V_0 are linearly independent. Indeed, H_n q_j ≠ 0 for all j, and, assuming that there exists a constant α such that

    H_n q_{-1} = α H_n q_0,   (70)

one gets a contradiction: consider the first and the n-th equations of (70), i.e.,

    (H_n q_{-1})_1 = α (H_n q_0)_1   (71)

and

    (H_n q_{-1})_n = α (H_n q_0)_n,   (72)

respectively. It follows from (64), (65) and (71) that

    α = e^2.   (73)

From (72), (64), (65) and (73) it follows that

    e^{-2+s} = e^{2-s}.   (74)

This is a contradiction, since 0 < s < 1, which proves that H_n q_{-1} and H_n q_0 are linearly independent.

Let us prove that the elements of the set V_j are linearly independent, j = 1, 2, 3, ..., m-2. Assume that there exist constants α_k, k = 1, 2, ..., j+1, such that

    H_n q_j = Σ_{k=-1}^{j-1} α_{k+2} H_n q_k.   (75)

Using relations (64)-(66) one can write the (2j-1)-th equation of linear system (75) as follows:

    (H_n q_j)_{2j-1} = Σ_{k=-1}^{j-1} α_{k+2} (H_n q_k)_{2j-1}
        = α_1 e^{-(2j-2)s} + α_2 e^{-2+(2j-2)s} + e^{1-(2j-2)s} Σ_{k=1}^{j-1} α_{k+2} C_k.   (76)

Similarly, by relations (64)-(66) the (n-1)-th and n-th equations of linear system (75) can be written as

    (H_n q_j)_{n-1} = e^{-1+2s} C_j = α_1 e^{-2+2s} + α_2 e^{-2s} + e^{-1+2s} Σ_{k=1}^{j-1} α_{k+2} C_k   (77)

and

    (H_n q_j)_n = e^{-1+s} C_j = α_1 e^{-2+s} + α_2 e^{-s} + e^{-1+s} Σ_{k=1}^{j-1} α_{k+2} C_k,   (78)

respectively. Multiply (78) by e^{s} and compare with (77) to conclude that α_2 = 0. From (78) with α_2 = 0 one obtains

    α_1 = e C_j - e Σ_{k=1}^{j-1} α_{k+2} C_k.   (79)

Substitute α_1 from (79) and α_2 = 0 into (76) and get

    (H_n q_j)_{2j-1} = e^{1-(2j-2)s} C_j.   (80)

From (67) and (66) one obtains, for 0 < s < 1 and j = 1, 2, 3, ..., m-2, the following relation:

    e^{1-(2j-2)s} C_j - (H_n q_j)_{2j-1} = ( e^{2s} + e^{-2s} - 2 )/(2s) - ( e^{-2s} + 2s - 1 )/s = ( sinh(2s) - 2s )/s > 0,   (81)

which contradicts relation (80). This contradiction proves that the elements of the set V_j are linearly independent, j = 1, 2, 3, ..., m-2, for 0 < s < 1.

Let us prove that the elements of the set V_{m-1} are linearly independent. Assume that there exist constants α_k, k = 1, 2, ..., m, such that

    H_n q_{m-1} = Σ_{k=-1}^{m-2} α_{k+2} H_n q_k.   (82)

Using (64)-(66), the (n-3)-th equation of (82) can be written as follows:

    (H_n q_{m-1})_{n-3} = e^{1-4s} D_{m-1} = Σ_{k=-1}^{m-2} α_{k+2} (H_n q_k)_{n-3}
        = α_1 e^{-2+4s} + α_2 e^{-4s} + e^{-1+4s} Σ_{k=1}^{m-3} α_{k+2} C_k + α_m (H_n q_{m-2})_{n-3}.   (83)

Similarly we obtain the (n-2)-th, (n-1)-th and n-th equations corresponding to vector equation (82):

    (H_n q_{m-1})_{n-2} = α_1 e^{-2+3s} + α_2 e^{-3s} + e^{-1+3s} Σ_{k=1}^{m-3} α_{k+2} C_k + α_m (H_n q_{m-2})_{n-2},   (84)

    (H_n q_{m-1})_{n-1} = α_1 e^{-2+2s} + α_2 e^{-2s} + e^{-1+2s} Σ_{k=1}^{m-2} α_{k+2} C_k   (85)

and

    (H_n q_{m-1})_n = α_1 e^{-2+s} + α_2 e^{-s} + e^{-1+s} Σ_{k=1}^{m-2} α_{k+2} C_k,   (86)

respectively. Multiply (86) by e^{s} and compare with (85) to get

    α_2 = ( e^{s} (H_n q_{m-1})_n - (H_n q_{m-1})_{n-1} ) / ( 1 - e^{-2s} ) = ( 1 - e^{-2s} + 2s e^{s} - 4s ) / ( 2s ( 1 - e^{-2s} ) ),   (87)

where formula (66) was used. Multiplying (86) by e^{3s}, comparing with equation (83), and using (87), we obtain

    α_m = ( e^{1-4s} D_{m-1} - e^{3s} (H_n q_{m-1})_n - α_2 ( e^{-4s} - e^{2s} ) ) / ( (H_n q_{m-2})_{n-3} - e^{-1+4s} C_{m-2} ) = r_1 / r_2.   (88)

Another expression for α_m is obtained by multiplying (86) by e^{2s} and comparing with (84):

    α_m = ( (H_n q_{m-1})_{n-2} - e^{2s} (H_n q_{m-1})_n - α_2 ( e^{-3s} - e^{s} ) ) / ( (H_n q_{m-2})_{n-2} - e^{-1+3s} C_{m-2} ) = r_3 / r_4,   (89)

where α_2 is given in (87). In deriving formulas (88) and (89) we have used the relation m = n/2 + 1 and equation (66). Here

    r_1 := 2 + 4s - 2s e^{s} + 4s e^{2s} - 2s e^{3s} + ( 4s - 2 ) e^{4s},
    r_2 := -1 - 4s e^{2s} + e^{4s},
    r_3 := 2 + 4s - 4s e^{s} + ( 4s - 2 ) e^{2s},
    r_4 := -1 - 2s e^{s} + e^{2s}.   (90)

Then from (88) and (89) we get

    r_3 r_2 - r_1 r_4 = 0.   (91)

We have

    r_3 r_2 - r_1 r_4 = 2s e^{s} ( e^{s} - 1 )^2 ( 3 + 4s - 2s e^{s} + ( 4s - 3 ) e^{2s} ) > 0 for s ∈ (0, 1).   (92)

The sign of the right side of equality (92) is the same as the sign of 3 + 4s - 2s e^{s} + (4s - 3) e^{2s} =: β(s). Let us check that β(s) > 0 for s ∈ (0, 1). One has β(0) = 0, β'(0) = 0,

    β'(s) = 4 - 2e^{s} - 2s e^{s} + ( 8s - 2 ) e^{2s},   β''(s) = 4 e^{2s} + 16s e^{2s} - 4 e^{s} - 2s e^{s} > 0 for s ∈ (0, 1).

If β''(s) > 0 for s ∈ (0, 1) and β(0) = 0, β'(0) = 0, then β(s) > 0 for s ∈ (0, 1). Inequality (92) contradicts relation (91), which proves that H_n q_j, j = -1, 0, 1, 2, ..., m-1, are linearly independent.

Similarly, to prove that H_n q_j, j = -1, 0, 1, 2, ..., m, are linearly independent, we assume that there exist constants α_k, k = 1, 2, ..., m+1, such that

    H_n q_m = Σ_{k=-1}^{m-1} α_{k+2} H_n q_k.   (93)

Using formulas (64)-(66), one can write the (n-5)-th equation:

    (H_n q_m)_{n-5} = e^{1-6s} D_m = Σ_{k=-1}^{m-1} α_{k+2} (H_n q_k)_{n-5}
        = α_1 e^{-2+6s} + α_2 e^{-6s} + e^{-1+6s} Σ_{j=1}^{m-4} α_{j+2} C_j + α_{m-1} (H_n q_{m-3})_{n-5} + α_m e^{1-6s} D_{m-2} + α_{m+1} e^{1-6s} D_{m-1}.   (94)

Similarly one obtains the (n-4)-th, (n-3)-th and (n-2)-th equations corresponding to the vector equation (93):

    (H_n q_m)_{n-4} = e^{1-5s} D_m = α_1 e^{-2+5s} + α_2 e^{-5s} + e^{-1+5s} Σ_{j=1}^{m-4} α_{j+2} C_j + α_{m-1} (H_n q_{m-3})_{n-4} + α_m (H_n q_{m-2})_{n-4} + α_{m+1} e^{1-5s} D_{m-1},   (95)

    (H_n q_m)_{n-3} = e^{1-4s} D_m = α_1 e^{-2+4s} + α_2 e^{-4s} + e^{-1+4s} Σ_{j=1}^{m-3} α_{j+2} C_j + α_m (H_n q_{m-2})_{n-3} + α_{m+1} e^{1-4s} D_{m-1},   (96)

    (H_n q_m)_{n-2} = e^{1-3s} D_m = α_1 e^{-2+3s} + α_2 e^{-3s} + e^{-1+3s} Σ_{j=1}^{m-3} α_{j+2} C_j + α_m (H_n q_{m-2})_{n-2} + α_{m+1} (H_n q_{m-1})_{n-2}.   (97)

The (n-1)-th and n-th equations are

    (H_n q_m)_{n-1} = e^{1-2s} D_m = α_1 e^{-2+2s} + α_2 e^{-2s} + e^{-1+2s} Σ_{j=1}^{m-2} α_{j+2} C_j + α_{m+1} (H_n q_{m-1})_{n-1}   (98)

and

    (H_n q_m)_n = α_1 e^{-2+s} + α_2 e^{-s} + e^{-1+s} Σ_{j=1}^{m-2} α_{j+2} C_j + α_{m+1} (H_n q_{m-1})_n,   (99)

respectively. Here we have used the assumption n ≥ 6. From (99) one gets

    α_1 = e^{2-s} (H_n q_m)_n - α_2 e^{2-2s} - e Σ_{k=1}^{m-2} α_{k+2} C_k - α_{m+1} e^{2-s} (H_n q_{m-1})_n.   (100)

If one substitutes (100) into equations (98), (97) and (96), then one obtains the following relations:

    α_2 = p_1 - p_2 α_{m+1},   (101)

    α_2 = p_3 - p_4 α_m - p_5 α_{m+1}   (102)

and

    α_2 = p_6 - p_7 α_m - p_8 α_{m+1},   (103)

respectively, where

    p_1 := ( e^{1-2s} D_m - e^{s} (H_n q_m)_n ) / ( e^{-2s} - 1 ),   p_2 := ( (H_n q_{m-1})_{n-1} - e^{s} (H_n q_{m-1})_n ) / ( e^{-2s} - 1 ),

    p_3 := ( e^{1-3s} D_m - e^{2s} (H_n q_m)_n ) / ( e^{-3s} - e^{s} ),   p_4 := ( (H_n q_{m-2})_{n-2} - e^{-1+3s} C_{m-2} ) / ( e^{-3s} - e^{s} ),

    p_5 := ( (H_n q_{m-1})_{n-2} - e^{2s} (H_n q_{m-1})_n ) / ( e^{-3s} - e^{s} ),

    p_6 := ( e^{1-4s} D_m - e^{3s} (H_n q_m)_n ) / ( e^{-4s} - e^{2s} ),   p_7 := ( (H_n q_{m-2})_{n-3} - e^{-1+4s} C_{m-2} ) / ( e^{-4s} - e^{2s} ),

    p_8 := ( e^{1-4s} D_{m-1} - e^{3s} (H_n q_{m-1})_n ) / ( e^{-4s} - e^{2s} ).   (104)

Another formula for α_1 one gets from equation (96):

    α_1 = e^{3-8s} D_m - α_2 e^{2-8s} - e Σ_{j=1}^{m-3} α_{j+2} C_j - α_m (H_n q_{m-2})_{n-3} e^{2-4s} - α_{m+1} e^{3-8s} D_{m-1}.   (105)

Substituting (105) into equations (95) and (94) yields

    α_2 = p_9 - p_10 α_{m-1} - p_11 α_m - p_12 α_{m+1}   (106)

and

    α_2 = p_9 - p_13 α_{m-1} - p_14 α_m - p_12 α_{m+1},   (107)

respectively, where

    p_9 := e D_m,   p_12 := e D_{m-1},

    p_10 := ( (H_n q_{m-3})_{n-4} - e^{-1+5s} C_{m-3} ) / ( e^{-5s} - e^{-3s} ),   p_11 := ( (H_n q_{m-2})_{n-4} - e^{s} (H_n q_{m-2})_{n-3} ) / ( e^{-5s} - e^{-3s} ),

    p_13 := ( (H_n q_{m-3})_{n-5} - e^{-1+6s} C_{m-3} ) / ( e^{-6s} - e^{-2s} ),   p_14 := ( e^{1-6s} D_{m-2} - e^{2s} (H_n q_{m-2})_{n-3} ) / ( e^{-6s} - e^{-2s} ).   (108)

Let us prove that equations (101)-(107) lead to a contradiction. From equations (101) and (102) we obtain

    α_m = (p_3 - p_1)/p_4 + ( (p_2 - p_5)/p_4 ) α_{m+1}.   (109)

This together with (103) and (101) yields

    α_{m+1} = ( (p_3 - p_1)/p_4 - (p_6 - p_1)/p_7 ) / ( (p_2 - p_8)/p_7 - (p_2 - p_5)/p_4 ).   (110)

Equations (106), (107) and (109) yield

    α_{m-1} = ( (p_14 - p_11)/(p_10 - p_13) ) α_m = ( (p_14 - p_11)/(p_10 - p_13) ) ( (p_3 - p_1)/p_4 + ( (p_2 - p_5)/p_4 ) α_{m+1} ).   (111)

This together with (106) implies

    α_2 = p_9 - ( p_10 (p_14 - p_11)/(p_10 - p_13) + p_11 ) ( (p_3 - p_1)/p_4 + ( (p_2 - p_5)/p_4 ) α_{m+1} ) - p_12 α_{m+1},   (112)

where α_{m+1} is given in (110). Let

    L_1 := p_9 - ( p_10 (p_14 - p_11)/(p_10 - p_13) + p_11 ) ( (p_3 - p_1)/p_4 + ( (p_2 - p_5)/p_4 ) α_{m+1} ) - p_12 α_{m+1}   (113)

and

    L_2 := p_1 - p_2 α_{m+1}.   (114)

Then, from (101) and (112) one gets

    L_1 - L_2 = 0.   (115)

Applying formulas (64)-(66) in (113) and (114) and using the expansion e^{s} = Σ_{j=0}^{∞} s^j/j!, we obtain

    L_1 - L_2 = 2 e^{4s} [ 1 + s + (s - 1) e^{2s} ] ( -1 + e^{2s} - 2s e^{s} ) / ( ( e^{2s} - 1 ) s [ 3 + 4s - 2s e^{s} + ( 4s - 3 ) e^{2s} ] ),

where

    1 + s + (s - 1) e^{2s} = Σ_{j=3}^{∞} ( 2^{j-1} ( 1 - 2/j ) / (j-1)! ) s^j  and  -1 + e^{2s} - 2s e^{s} = Σ_{j=2}^{∞} ( 2 ( 2^j/(j+1) - 1 ) / j! ) s^{j+1},

so that

    L_1 - L_2 > 0,   (116)

because e^{2s} > 1 for all 0 < s < 1, 2^{j-1} ( 1 - 2/j ) > 0 for all j ≥ 3, 2 ( 2^j/(j+1) - 1 ) > 0 for all j ≥ 2, and 3 + 4s - 2s e^{s} + ( 4s - 3 ) e^{2s} > 0, which was proved in (92). Inequality (116) contradicts relation (115), which proves that H_n q_j, j = -1, 0, 1, 2, ..., m, are linearly independent. Theorem 3.1 is proved.

4 Numerical experiments

Note that for all w, u, v ∈ R^n we have

    Σ_{l=1}^{n} w_l u_l v_l = v^t W u,   (117)

where t stands for transpose and

    W := diag( w_1, w_2, ..., w_n ).   (118)

Then

    DP := ‖H_n(f - Rh_m)‖_{w^{(n)},1}^2 = Σ_{l=1}^{n} w_l^{(n)} [ ( f(x_l) - Rh_m(x_l) )^2 + ( f'(x_l) - (Rh_m)'(x_l) )^2 ]
        = [ H_n(f - Rh_m) ]^t W H_n(f - Rh_m) + [ H_n(f' - (Rh_m)') ]^t W H_n(f' - (Rh_m)'),   (119)

where W is defined in (118) with w_j = w_j^{(n)}, j = 1, 2, ..., n, defined in (27). The vectors H_n Rh_m and H_n(Rh_m)' are computed as follows. Using (64)-(66), the vector H_n Rh_m can be represented by

    H_n Rh_m = S_m c,   (120)

where c = ( c_{-1}, c_0, c_1, ..., c_m )^t and S_m is an n × (m+2) matrix with the following entries:

    (S_m)_{i,1} = (H_n q_{-1})_i,   (S_m)_{i,2} = (H_n q_0)_i,   i = 1, 2, ..., n,
    (S_m)_{i,j} = (H_n q_{j-2})_i,   i = 1, 2, ..., n,   j = 3, 4, ..., m+2.   (121)

The vector H_n(Rh_m)' is computed as follows. Let

    J_{i,j} := -e^{-x_i} ∫_{-1}^{x_i} e^{y} φ_j(y) dy + e^{x_i} ∫_{x_i}^{1} e^{-y} φ_j(y) dy,   i = 1, 2, ..., n,   j = 1, 2, ..., m.   (122)

This together with (60) yields

    J_{i,1} = ( e^{-2s} + 2s - 1 )/(2s),   i = 1,
    J_{i,1} = ( (1+s) e^{-s} - 1 )/s,   i = 2,
    J_{i,1} = -e^{-x_i} ( e^{2s} - 2s - 1 )/(2es),   3 ≤ i ≤ n;

    J_{i,j} = e^{x_i} e^{1-2js} ( e^{2s} - 1 )^2/(2s),   i ≤ 2j-3,
    J_{i,j} = ( 2 + e^{-3s} - 3 e^{-s} )/(2s),   i = 2j-2,
    J_{i,j} = 0,   i = 2j-1,
    J_{i,j} = -( 2 + e^{-3s} - 3 e^{-s} )/(2s),   i = 2j,
    J_{i,j} = -e^{-x_i} e^{-1+2(j-2)s} ( e^{2s} - 1 )^2/(2s),   i ≥ 2j+1,   for 1 < j < m;

    J_{i,m} = e^{x_i} ( e^{2s} - 2s - 1 )/(2es),   i ≤ n-1,
    J_{i,m} = ( 1 - (1+s) e^{-s} )/s,   i = n.   (123)

Then, using (123), the vector H_n(Rh_m)' can be rewritten as follows:

    H_n (Rh_m)' = T_m c,   (124)

where c = ( c_{-1}, c_0, c_1, ..., c_m )^t and T_m is an n × (m+2) matrix with the following entries:

    (T_m)_{i,1} = -e^{-(1+x_i)},   (T_m)_{i,2} = e^{-(1-x_i)},   i = 1, 2, ..., n,
    (T_m)_{i,j} = J_{i,j-2},   i = 1, 2, ..., n,   j = 3, 4, ..., m+2.   (125)

We consider the following examples discussed in [7]:

(1) f(x) = 2 + 2 cos(π(x+1)) with the exact solution h(x) = 1 + (1 + π^2) cos(π(x+1)).

(2) f(x) = 2e^{-x} + (2/π) sin(π(x+1)) + 2 cos(π(x+1)) with the exact solution h(x) = (1/π + π) sin(π(x+1)) + (1 + π^2) cos(π(x+1)).

(3) f(x) = cos(π(x+1)/2) + 4 cos(2π(x+1)) - 1.5 cos(7π(x+1)/2) with the exact solution h(x) = (1/2)(1 + π^2/4) cos(π(x+1)/2) + (2 + 8π^2) cos(2π(x+1)) - 0.75(1 + 12.25π^2) cos(7π(x+1)/2) + 1.75 δ(x+1) + 2.25 δ(x-1).

(4) $f(x) = e^{-x} + 2\sin(2\pi(x+1))$ with the exact solution $h(x) = (1+4\pi^2)\sin(2\pi(x+1)) + (e-2\pi)\,\delta(x+1) + 2\pi\,\delta(x-1)$.

In all the above examples we have $f \in C^{2}([-1,1])$. Therefore, one may use the basis functions given in (60). In each example we compute the relative pointwise errors
$$RPE(t_i) := \frac{|g_m(t_i) - g(t_i)|}{\max_{1\le i\le M}|g(t_i)|}, \qquad (126)$$
where $g(x)$ and $g_m(x)$ are defined in (9) and (11), respectively, and
$$t_i := -1 + (i-1)\frac{2}{M-1}, \quad i = 1, 2, \dots, M. \qquad (127)$$

The algorithm can be written as follows.

Step 0. Set $k = 3$, $n = 2k$, $m = n/2 + 1$, $\varepsilon \in (0,1)$, and $DP = 10$, where $DP$ is defined in (119).

Step 1. Construct the weights $w^{(n)}_j$, $j = 1, 2, \dots, n$, defined in (27).

Step 2. Construct the matrix $A_{m+2}$ and the vector $F_{m+2}$, which are defined in (36).

Step 3. Solve for $c := (c_{-1}, c_0, c_1, \dots, c_m)^{t}$ the linear algebraic system $A_{m+2}c = F_{m+2}$.

Step 4. Compute
$$DP = \|H_n(f - Rh_m)\|^{2}_{w^{(n)},1}. \qquad (128)$$

Step 5. If $DP > \varepsilon$, then set $k = k+1$, $n = 2k$, $m = n/2 + 1$, and go to Step 1. Otherwise, stop the iteration and use $h_m(x) = \sum_{j=-1}^{m} c^{(m)}_j \varphi_j(x)$ as the approximate solution, where $\varphi_{-1}(x) := \delta(1+x)$, $\varphi_0(x) := \delta(x-1)$, $\varphi_j(x)$, $j = 1, 2, \dots, m$, are defined in (60), and $c^{(m)}_j$, $j = -1, 0, 1, \dots, m$, are obtained in Step 3.

In all the experiments the following parameters are used: $M = 200$ and $\varepsilon = 10^{-4}, 10^{-6}, 10^{-8}$. We also compute the relative error
$$RE := \max_{1\le i\le M} RPE(t_i), \qquad (129)$$
where $RPE$ is defined in (126).

Let us discuss the results of our experiments.

Example 1. In this example the coefficients $a_{-1}$ and $a_0$, given in (8) and (9), respectively, are zeros. Our experiments show, see Table 1, that the approximate
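Steps 0-5 can be sketched as an adaptive refinement loop. In the sketch below the assembly of $A_{m+2}$, $F_{m+2}$ and the computation of $DP$ are abstracted into caller-supplied functions (the names `assemble` and `discrepancy` are hypothetical), since their entries depend on the weights (27), the system (36), and the basis (60):

```python
import numpy as np

# Adaptive collocation loop mirroring Steps 0-5: refine n = 2k until the
# weighted discrepancy DP of (119)/(128) falls below the tolerance eps.
def collocation_solve(assemble, discrepancy, eps, k=3, k_max=50):
    while k <= k_max:
        n = 2 * k                          # number of collocation points
        m = n // 2 + 1                     # number of interior basis functions
        A, F = assemble(n, m)              # Steps 1-2: build A_{m+2}, F_{m+2}
        c = np.linalg.solve(A, F)          # Step 3: c = (c_{-1}, c_0, ..., c_m)
        DP = discrepancy(c, n, m)          # Step 4
        if DP <= eps:                      # Step 5: stop, or refine k <- k + 1
            return c, n, m, DP
        k += 1
    raise RuntimeError("tolerance not reached")

# Toy run: a trivially solvable placeholder "problem" whose discrepancy
# decays like 1/n^2 (this is NOT the paper's integral equation).
assemble = lambda n, m: (np.eye(m + 2), np.ones(m + 2))
discrepancy = lambda c, n, m: 1.0 / n**2
c, n, m, DP = collocation_solve(assemble, discrepancy, eps=1e-4)
assert DP <= 1e-4
```

The coefficient vector returned by the loop yields the approximate solution $h_m = \sum_{j=-1}^{m} c^{(m)}_j \varphi_j$ of Step 5.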

Table 1: Example 1

  n    m    ε        a_{-1}   c_{-1}^{(m)}    a_0   c_0^{(m)}
  24   13   10^{-4}  0        8.234 x 10^{-5}  0    5.826 x 10^{-3}
  32   17   10^{-6}  0        3.172 x 10^{-5}  0    0.202 x 10^{-3}
  56   29   10^{-8}  0        3.974 x 10^{-6}  0    6.020 x 10^{-5}

  n    m    ε        DP               RE
  24   13   10^{-4}  8.610 x 10^{-6}  3.239 x 10^{-2}
  32   17   10^{-6}  7.137 x 10^{-7}  1.554 x 10^{-2}
  56   29   10^{-8}  6.404 x 10^{-9}  4.337 x 10^{-3}

coefficients $c^{(m)}_{-1}$ and $c^{(m)}_0$ converge to $a_{-1}$ and $a_0$, respectively, as $\varepsilon \to 0$. Here, to get $DP \le 10^{-6}$, we need $n = 32$ collocation points distributed uniformly in the interval $[-1,1]$. Moreover, the matrix $A_{m+2}$ is of the size 19 by 19, which is small. For $\varepsilon = 10^{-6}$ the relative error $RE$ is of order $10^{-2}$. The $RPE$ at the points $t_j$ are distributed in the interval $[0, 0.018)$, as shown in Figure 2. In computing the approximate solution $h_m$ at the points $t_i$, $i = 1, 2, \dots, M$, one needs at most two out of the $m = 17$ basis functions $\varphi_j(x)$. The reconstruction of the continuous part of the exact solution can be seen in Figure 1. One can see from this figure that for $\varepsilon = 10^{-6}$ the continuous part $g(x)$ of the exact solution $h(x)$ can be recovered very well by the approximate function $g_m(x)$ at the points $t_j$, $j = 1, 2, \dots, M$.

Figure 2: A reconstruction of the continuous part g(x) (above) of Example 1 with ε = 10^{-6}, and the corresponding relative pointwise errors (RPE) (below).
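The error measures (126), (127), and (129) used in the tables can be sketched as follows; `g` and `g_m` below are placeholder functions, not the paper's $g$ and $g_m$:

```python
import numpy as np

# Relative pointwise errors (126) on the uniform grid (127), and the
# relative error RE of (129), for placeholder g and g_m.
M = 200
i = np.arange(1, M + 1)
t = -1.0 + (i - 1) * 2.0 / (M - 1)               # t_1 = -1, ..., t_M = 1

g = lambda x: np.cos(np.pi * (x + 1.0))          # placeholder continuous part
g_m = lambda x: g(x) + 1e-3 * np.sin(np.pi * x)  # stand-in approximation

RPE = np.abs(g_m(t) - g(t)) / np.max(np.abs(g(t)))   # eq. (126)
RE = np.max(RPE)                                     # eq. (129)
assert RE <= 1e-3   # error bounded by the size of the injected perturbation
```

Note that the normalization in (126) uses the maximum of $|g|$ over the whole grid, so $RPE$ is a relative error with respect to the size of $g$, not a pointwise ratio.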

Table 2: Example 2

  n    m    ε        a_{-1}   c_{-1}^{(m)}    a_0   c_0^{(m)}
  24   13   10^{-4}  0        1.632 x 10^{-4}  0    0.065 x 10^{-2}
  32   17   10^{-6}  0        5.552 x 10^{-5}  0    2.614 x 10^{-3}
  56   29   10^{-8}  0        6.268 x 10^{-6}  0    0.954 x 10^{-4}

  n    m    ε        DP               RE
  24   13   10^{-4}  8.964 x 10^{-6}  3.871 x 10^{-2}
  32   17   10^{-6}  7.588 x 10^{-7}  1.821 x 10^{-2}
  56   29   10^{-8}  6.947 x 10^{-9}  4.869 x 10^{-3}

Example 2. This example is a modification of Example 1, where the constant $-2$ is replaced with the function $2e^{-x} + \frac{2}{\pi}\sin(\pi(x+1))$. In this case the coefficients $a_{-1}$ and $a_0$ are also zeros. The results can be seen in Table 2. As in Example 1, both approximate coefficients $c^{(m)}_{-1}$ and $c^{(m)}_0$ converge to 0 as $\varepsilon \to 0$. The number of collocation points in each case is equal to the number of collocation points obtained in Example 1. Also, the $RPE$ at each observed point is in the interval $[0, 0.02)$. One can see from Figure 3 that the continuous part $g(x)$ of the exact solution $h(x)$ can be well approximated by the approximate function $g_m(x)$ with $\varepsilon = 10^{-6}$ and $RE = O(10^{-2})$.

Figure 3: A reconstruction of the continuous part g(x) (above) of Example 2 with ε = 10^{-6}, and the corresponding relative pointwise errors (RPE) (below).

Example 3. In this example the coefficients of the distributional parts $a_{-1}$

Table 3: Example 3

  n     m     ε        a_{-1}   c_{-1}^{(m)}   a_0     c_0^{(m)}
  80    41    10^{-4}  1.750    1.750          2.250   2.236
  128   65    10^{-6}  1.750    1.750          2.250   2.249
  232   117   10^{-8}  1.750    1.750          2.250   2.250

  n     m     ε        DP               RE
  80    41    10^{-4}  4.635 x 10^{-5}  2.282 x 10^{-2}
  128   65    10^{-6}  9.739 x 10^{-7}  7.671 x 10^{-3}
  232   117   10^{-8}  7.804 x 10^{-9}  2.163 x 10^{-3}

and $a_0$ are not zeros. The function $f$ oscillates more than the functions $f$ given in Examples 1 and 2, and the number of collocation points is larger than in the previous two examples, as shown in Table 3. In this table one can see that the approximate coefficients $c^{(m)}_{-1}$ and $c^{(m)}_0$ converge to $a_{-1}$ and $a_0$, respectively. The continuous part of the exact solution can be approximated by the approximate function $g_m(x)$ very well with $\varepsilon = 10^{-6}$ and $RE = O(10^{-3})$, as shown in Figure 4. In the same figure one can see that the $RPE$ at each observed point is distributed in the interval $[0, 8 \times 10^{-3})$.

Figure 4: A reconstruction of the continuous part g(x) (above) of Example 3 with ε = 10^{-6}, and the corresponding relative pointwise errors (RPE) (below).

Example 4. Here we give another example of the exact solution $h$ having non-zero coefficients $a_{-1}$ and $a_0$. In this example the function $f$ oscillates less than the $f$ in Example 3, but more than the $f$ in Examples 1 and 2. As

shown in Table 4, the number of collocation points $n$ is smaller than the number of collocation points given in Example 3. In this example the exact coefficients $a_{-1}$ and $a_0$ are obtained at the error level $\varepsilon = 10^{-8}$, as shown in Table 4. Figure 5 shows that at the level $\varepsilon = 10^{-6}$ we have obtained a good approximation of the continuous part $g(x)$ of the exact solution $h(x)$. Here the relative error $RE$ is of order $O(10^{-2})$.

Table 4: Example 4

  n     m    ε        a_{-1}    c_{-1}^{(m)}   a_0     c_0^{(m)}
  40    21   10^{-4}  -3.565    -3.564         6.283   6.234
  72    37   10^{-6}  -3.565    -3.565         6.283   6.279
  128   65   10^{-8}  -3.565    -3.565         6.283   6.283

  n     m    ε        DP               RE
  40    21   10^{-4}  8.775 x 10^{-5}  3.574 x 10^{-2}
  72    37   10^{-6}  6.651 x 10^{-7}  1.029 x 10^{-2}
  128   65   10^{-8}  6.147 x 10^{-9}  3.199 x 10^{-3}

Figure 5: A reconstruction of the continuous part g(x) (above) of Example 4 with ε = 10^{-6}, and the corresponding relative pointwise errors (RPE) (below).

References

[1] P.J. Davis and P. Rabinowitz, Methods of numerical integration, Academic Press, London, 1984.

[2] L. Kantorovich and G. Akilov, Functional Analysis, Pergamon Press, New York, 1980.

[3] S. Mikhlin and S. Prössdorf, Singular integral operators, Springer-Verlag, Berlin, 1986.

[4] A.G. Ramm, Theory and Applications of Some New Classes of Integral Equations, Springer-Verlag, New York, 1980.

[5] A.G. Ramm, Random Fields Estimation, World Sci. Publishers, Singapore, 2005.

[6] A.G. Ramm, Numerical solution of integral equations in a space of distributions, J. Math. Anal. Appl., 110, (1985), 384-390.

[7] A.G. Ramm and Peiqing Li, Numerical solution of some integral equations in distributions, Computers Math. Applic., 22, (1991), 1-11.

[8] A.G. Ramm, Collocation method for solving some integral equations of estimation theory, Internat. Journ. of Pure and Appl. Math., 62, N1, (2010).

[9] E. Rozema, Estimating the error in the trapezoidal rule, The American Mathematical Monthly, 87, 2, (1980), 124-128.

[10] L.L. Schumaker, Spline Functions: Basic Theory, Cambridge University Press, Cambridge, 2007.