Numer. Math. (1998) 80: 39–59
Numerische Mathematik © Springer-Verlag 1998, Electronic Edition

Numerical integration over a disc. A new Gaussian quadrature formula

Borislav Bojanov¹, Guergana Petrova²

¹ Department of Mathematics, University of Sofia, Blvd. James Boucher 5, 1126 Sofia, Bulgaria; e-mail: bor@bgearn.acad.bg
² Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA; e-mail: petrova@math.sc.edu

Received August 8, 1996 / Revised version received July 1997

Summary. We construct a quadrature formula for integration on the unit disc which is based on line integrals over $n$ distinct chords in the disc and integrates exactly all polynomials in two variables of total degree $2n-1$.

Mathematics Subject Classification (1991): 65D32, 65D30, 41A55

1. Introduction

The ordinary type of information data for approximation of functions $f$, or of functionals of them, in the univariate case consists of function values $\{f(x_1), \dots, f(x_m)\}$. The classical Lagrange interpolation formula and the Gauss quadrature formula are famous examples. The simplicity of the approximation rules, their universality, the elegance of the proofs and the beauty of these classical results show that function values are really the most natural pieces of information in the reconstruction of functions and functionals. The direct transformation of the univariate results to the multivariate setting, however, faces various difficulties. For example, the problem of constructing a polynomial $P(x,y)$ of degree $n$,
$$P(x,y) = \sum_{i+j \le n} a_{ij}\, x^i y^j,$$

The research of the first author was partially supported by the Bulgarian Ministry of Science under Contract No. MM 414/94. The research of the second author was supported by the Office of Naval Research, Contract N00014-91-J1343. Correspondence to: B. Bojanov

page 39 of Numer. Math. (1998) 80: 39–59

satisfying given Lagrangean interpolation conditions
$$P(x_j, y_j) = f_j, \quad j = 1, \dots, \binom{n+2}{2},$$
leads to a linear system in the unknowns $\{a_{ij}\}$ which does not always have a unique solution. A more convincing example is the long struggle of many mathematicians to construct cubature formulas of the form
$$\iint_\Omega f(x,y)\,dx\,dy \approx \sum_{j=1}^{N} C_j\, f(x_j, y_j)$$
with a preassigned number $N$ of nodes that integrate exactly all polynomials $P(x,y)$ of degree as high as possible over a given simple domain $\Omega$. Only a few cubatures of this type are known explicitly. This indicates that the setting of the multivariate approximation problems has not yet reached its most natural formulation and, in particular, that the assertion that the sampling of function values is the most natural basis for recovery of functions should be met with a certain doubt. The research practice in mathematics shows that naturally posed problems have nice solutions and far-going extensions. What, then, is the most natural type of information for the reconstruction of functions in $\mathbb{R}^2$? The recent development of tomography, as well as the power of the Radon transform and other results in multivariate interpolation, suggest as a reasonable choice the data of mean values $\{\int_{I_k} f\}$, where $\{I_k\}$ are line segments. A remarkable result in this direction is the Hakopian interpolation formula (see [1]), which can be viewed as a multivariate extension of the Lagrange interpolation formula. It seems that many classical approximation problems in the univariate case dealing with point evaluations should admit natural extensions in the multivariate case (i.e., in $\mathbb{R}^d$) if the approximation is based on integrals over hyperplanes of dimension $d-1$. In this paper we consider the extremal problem of Gauss about quadrature formulas of highest algebraic degree of precision, formulated in an appropriate form for numerical integration over the unit disc $D := \{(x,y) : x^2 + y^2 \le 1\}$ with respect to mean value information data.
Precisely, we construct a quadrature formula of the form
$$\iint_D f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} \lambda_k \int_{I_k} f$$
of highest degree of precision with respect to the class of algebraic polynomials of two variables.

2. Preliminaries

As usual, we use the notation
$$\pi_n(\mathbb{R}^2) := \Big\{ \sum_{i+j \le n} a_{ij}\, x^i y^j \Big\}$$
for the set of all algebraic polynomials of total degree $n$. The dimension of $\pi_n(\mathbb{R}^2)$ is equal to $\binom{n+2}{2}$. Let $\Omega$ be a given bounded domain in the plane $\mathbb{R}^2$. Our main result concerns the case when $\Omega$ is the unit disc $D$. We shall consider integrable functions $f(x,y)$ on $\Omega$. For the sake of simplicity, we suppose that $f(x,y)$ is supported on $\Omega$, that is, $f(x,y)$ vanishes outside $\Omega$. Any pair of parameters $(t, \theta)$ defines a line
$$I(t,\theta) := \{(x,y) : x\cos\theta + y\sin\theta = t\}.$$
We assume that $\theta \in [0, \pi)$ and $I(t,\theta) \cap \Omega \ne \emptyset$. The projection $P_f(t,\theta)$ of $f$ along the line $I(t,\theta)$ is defined by
$$P_f(t,\theta) := \int_{-\infty}^{+\infty} f(t\cos\theta - s\sin\theta,\ t\sin\theta + s\cos\theta)\,ds.$$
Sometimes, instead of $P_f(t,\theta)$, we shall use the notation
$$\int_{I(t,\theta)} f(x,y)\,ds$$
for the projection when we want to stress that the integral is taken over the line segment $I(t,\theta) \cap \Omega$. Given the parameters $(t_k, \theta_k)$, we set
$$I_k := I(t_k, \theta_k), \quad k = 1, \dots, n, \qquad I := \{I_1, \dots, I_n\},$$
$$L_{I_k}(x,y) \equiv L_{t_k,\theta_k}(x,y) := x\cos\theta_k + y\sin\theta_k - t_k.$$
Clearly $L_{I_k} \in \pi_1(\mathbb{R}^2)$. Assume that the $I_k$ are $n$ distinct lines defined by the parameters $\{(t_k,\theta_k)\}_{k=1}^{n}$. We shall study quadrature formulas of the form
$$(1) \qquad \iint_\Omega f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} A_k \int_{I_k} f(x,y)\,ds$$

in the class of integrable functions $L_1(\Omega)$. Here the $A_k$ are real coefficients. The algebraic degree of precision of the quadrature (1) (abbreviated ADP(1)) is the maximal integer $m$ such that the quadrature (1) integrates exactly all polynomials $P(x,y)$ of degree less than or equal to $m$. Our first observation is:
$$(2) \qquad \mathrm{ADP}(1) < 2n$$
for each choice of the coefficients $A_k$ and the parameters $(t_k,\theta_k)$. To show this, we introduce the polynomial
$$\omega(x,y) = \omega(I; x,y) := \prod_{k=1}^{n} L_{I_k}(x,y).$$
Clearly $\omega^2(x,y) \in \pi_{2n}(\mathbb{R}^2)$ and $\omega^2(x,y) \not\equiv 0$. Thus
$$\iint_\Omega \omega^2(x,y)\,dx\,dy > 0, \qquad \text{while} \qquad \sum_{k=1}^{n} A_k \int_{I_k} \omega^2(x,y)\,ds = 0.$$
Hence the quadrature (1) is not exact for $\omega^2(x,y)$, and this proves (2). The maximal ADP that could be achieved by a quadrature (1), using $n$ evaluations of the projection, is $2n-1$. Does there exist a quadrature of ADP equal to $2n-1$? There is no strong evidence for an affirmative answer, since the dimension of the space of polynomials of degree $2n-1$ is $\binom{2n+1}{2}$ and hence much greater than the number $3n$ of the parameters used in the quadrature ($A_k, t_k, \theta_k$, $k = 1, \dots, n$). However, we construct here a quadrature with ADP $= 2n-1$ in the case $\Omega = D$. Next we derive a simple necessary condition for (1) to have ADP $= 2n-1$.

Lemma 1. If the quadrature formula (1) is exact for all polynomials of degree $2n-1$, then the polynomial $\omega(x,y) = L_{I_1}(x,y) \cdots L_{I_n}(x,y)$ is orthogonal to any $Q \in \pi_{n-1}(\mathbb{R}^2)$ in $\Omega$.

Proof. For an arbitrary element $Q \in \pi_{n-1}(\mathbb{R}^2)$ we have $\omega(x,y)\,Q(x,y) \in \pi_{2n-1}(\mathbb{R}^2)$, and therefore (1) will be exact for it. Namely,
$$\iint_\Omega \omega(x,y)\,Q(x,y)\,dx\,dy = \sum_{k=1}^{n} A_k \int_{I_k} \prod_{j=1}^{n} L_{I_j}(x,y)\, Q(x,y)\,ds = \sum_{k=1}^{n} A_k \cdot 0 = 0,$$
which proves the lemma.
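The obstruction (2) is easy to see concretely: whatever the coefficients $A_k$, the right-hand side of (1) annihilates $\omega^2$, while the left-hand side is strictly positive. A minimal numerical sketch (plain Python; the three chord parameters below are arbitrary illustrative values, not from the paper) checks both facts for three chords of the unit disc, integrating $\omega^2$ by a midpoint rule in polar coordinates.

```python
import math

# three arbitrary chords I(t_k, theta_k) of the unit disc (illustrative values)
chords = [(0.3, 0.4), (-0.5, 1.2), (0.1, 2.0)]

def L(k, x, y):
    # linear form L_{I_k}(x, y) = x cos(theta_k) + y sin(theta_k) - t_k
    t, th = chords[k]
    return x*math.cos(th) + y*math.sin(th) - t

def omega_sq(x, y):
    # omega^2 = (L_{I_1} ... L_{I_n})^2
    w = 1.0
    for k in range(len(chords)):
        w *= L(k, x, y)
    return w*w

# omega vanishes identically on every chord I_k, so every chord integral of
# omega^2 is zero, whatever the coefficients A_k
for k, (t, th) in enumerate(chords):
    for s in (-0.5, 0.0, 0.7):
        x = t*math.cos(th) - s*math.sin(th)
        y = t*math.sin(th) + s*math.cos(th)
        assert abs(L(k, x, y)) < 1e-14

# ... while the integral of omega^2 over the disc is strictly positive
# (composite midpoint rule in polar coordinates)
nr, nt = 300, 600
total = 0.0
for i in range(nr):
    r = (i + 0.5)/nr
    for j in range(nt):
        th = (j + 0.5)*2*math.pi/nt
        total += omega_sq(r*math.cos(th), r*math.sin(th))*r
total *= (1.0/nr)*(2*math.pi/nt)
assert total > 0.01
```

The same computation goes through for any number of chords in any position, which is exactly the content of (2).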

A quadrature formula of the form (1) with ADP $= 2n-1$ will be called Gaussian. Lemma 1 asserts that we can construct such a formula only for domains $\Omega$ for which one can find a polynomial of the form
$$\prod_{k=1}^{n} (a_k x + b_k y + c_k), \qquad a_k, b_k, c_k \in \mathbb{R},$$
which is orthogonal on $\Omega$ to all polynomials from $\pi_{n-1}(\mathbb{R}^2)$.

Lemma 2. If (1) is a Gaussian quadrature formula, then $A_k > 0$ for $k = 1, \dots, n$.

Proof. Construct the polynomials
$$\omega_k(x,y) = \omega_k(I; x,y) := \prod_{j=1,\, j\ne k}^{n} L_{I_j}(x,y), \qquad k = 1, \dots, n.$$
Since (1) is Gaussian and $\omega_k^2(x,y) \in \pi_{2n-2}(\mathbb{R}^2)$, we have
$$\iint_\Omega \omega_k^2(x,y)\,dx\,dy = \sum_{j=1}^{n} A_j \int_{I_j} \omega_k^2(x,y)\,ds = A_k \int_{I_k} \omega_k^2(x,y)\,ds,$$
which implies that $A_k > 0$ for $k = 1, \dots, n$.

The following lemma gives a relation between two Gaussian quadrature formulas.

Lemma 3. Assume that (1) and the quadrature formula
$$\iint_\Omega f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} B_k \int_{J_k} f(x,y)\,ds$$
are Gaussian. Then
$$J_k \cap \Big( \big(\bigcup_{l=1,\, l\ne k}^{n} J_l\big) \cup \big(\bigcup_{l=1}^{n} I_l\big) \Big) \cap \Omega \ne \emptyset, \qquad k = 1, \dots, n.$$

Proof. Consider the polynomials
$$\omega_k(J; x,y) := \prod_{l=1,\, l\ne k}^{n} L_{J_l}(x,y), \qquad k = 1, \dots, n.$$
Apply now both quadratures to
$$\omega(I; x,y)\,\omega_k(J; x,y) \in \pi_{2n-1}(\mathbb{R}^2), \qquad k = 1, \dots, n.$$

We have
$$0 = \iint_\Omega \omega(I; x,y)\,\omega_k(J; x,y)\,dx\,dy = \sum_{l=1}^{n} B_l \int_{J_l} \omega(I; x,y)\,\omega_k(J; x,y)\,ds = B_k \int_{J_k} \omega(I; x,y)\,\omega_k(J; x,y)\,ds.$$
But according to the mean value theorem
$$\int_{J_k} \omega(I; x,y)\,\omega_k(J; x,y)\,ds = \omega(I; M_k)\,\omega_k(J; M_k)\; d(J_k \cap \Omega),$$
where $M_k \in J_k$ and $d(J_k \cap \Omega)$ is the length of the line segment $J_k \cap \Omega$. Finally,
$$M_k \in \Big( \big(\bigcup_{l=1,\, l\ne k}^{n} J_l\big) \cup \big(\bigcup_{l=1}^{n} I_l\big) \Big) \cap \Omega$$
and the proof is completed.

3. Gaussian quadrature

Denote by $U_n(t)$ the Tchebycheff polynomial of second kind of degree $n$, i.e.,
$$U_n(x) = \frac{\sin(n+1)\theta}{\sin\theta}, \qquad \text{where } x = \cos\theta.$$
Let $\eta_1, \dots, \eta_n$ be the zeros of $U_n(x)$, that is, $\eta_k = \cos\frac{k\pi}{n+1}$, $k = 1, \dots, n$.

Theorem 1. The quadrature formula
$$(*) \qquad \iint_D f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} A_k \int_{-\sqrt{1-\eta_k^2}}^{\sqrt{1-\eta_k^2}} f(\eta_k, y)\,dy,$$
with
$$A_k = \frac{\pi}{n+1}\,\sin\frac{k\pi}{n+1}, \qquad k = 1, \dots, n,$$
is exact for each polynomial $f \in \pi_{2n-1}(\mathbb{R}^2)$.

Proof. Assume that $f(x,y)$ is any polynomial from $\pi_{2n-1}(\mathbb{R}^2)$. Then it can be presented in the form
$$f(x,y) = \sum_{k=0}^{2n-1} a_k(x)\, y^k, \qquad \text{where } a_k(x) \in \pi_{2n-k-1}(\mathbb{R})$$

and thus
$$\iint_D f(x,y)\,dx\,dy = \int_{-1}^{1} \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \sum_{k=0}^{2n-1} a_k(x)\,y^k\,dy\,dx = \sum_{k=0,\ k=2s}^{2n-1} \frac{2}{2s+1} \int_{-1}^{1} a_k(x)\,(1-x^2)^s \sqrt{1-x^2}\,dx.$$
Observe that the univariate polynomials $a_k(x)(1-x^2)^s$ under the integral sign are of degree less than or equal to $2n-1$. Then we can compute the integral exactly using the classical Gaussian quadrature formula on $[-1,1]$ with weight $\xi(t) = \sqrt{1-t^2}$. Precisely,
$$\int_{-1}^{1} \sqrt{1-x^2}\; a_k(x)\,(1-x^2)^s\,dx = \sum_{l=1}^{n} \frac{\pi}{n+1}\,\sin^2\frac{l\pi}{n+1}\; a_k(\eta_l)\,(1-\eta_l^2)^s.$$
Therefore
$$\iint_D f(x,y)\,dx\,dy = \sum_{l=1}^{n} \frac{\pi}{n+1}\,\sin\frac{l\pi}{n+1} \sum_{k=0,\ k=2s}^{2n-1} \frac{2}{2s+1}\,(1-\eta_l^2)^{s+1/2}\, a_k(\eta_l) = \sum_{l=1}^{n} \frac{\pi}{n+1}\,\sin\frac{l\pi}{n+1} \int_{-\sqrt{1-\eta_l^2}}^{\sqrt{1-\eta_l^2}} \sum_{k=0}^{2n-1} a_k(\eta_l)\,y^k\,dy = \sum_{l=1}^{n} \frac{\pi}{n+1}\,\sin\frac{l\pi}{n+1} \int_{-\sqrt{1-\eta_l^2}}^{\sqrt{1-\eta_l^2}} f(\eta_l, y)\,dy$$
and the proof is completed.

4. Ridge polynomials. Orthogonal polynomials

Given a real function $\rho(t)$ on $\mathbb{R}$ and a parameter $\theta \in [0, \pi)$, we define on $\mathbb{R}^2$ the associated ridge function $\rho(\theta; x,y)$ with direction $\theta$ in the following way:
$$\rho(\theta; x,y) := \rho(x\cos\theta + y\sin\theta).$$
Clearly the ridge function is constant along any line of direction $\theta$. As a consequence of this, one can compute
$$\iint_D \rho(\theta; x,y)\,dx\,dy = \int_{-1}^{1} \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} \rho(t)\,ds\,dt = 2\int_{-1}^{1} \sqrt{1-t^2}\,\rho(t)\,dt$$

and thus, by the Gaussian quadrature,
$$\iint_D \rho_{2n-1}(\theta; x,y)\,dx\,dy = \sum_{k=1}^{n} \frac{2\pi}{n+1}\,\sin^2\frac{k\pi}{n+1}\;\rho_{2n-1}(\eta_k)$$
for every ridge polynomial $\rho_{2n-1}(\theta; x,y)$ of degree $2n-1$. This conclusion is just a particular case of Theorem 1, applied to the ridge polynomial $\rho_{2n-1}$. It shows that the use of a certain ridge polynomial basis in $\pi_N(\mathbb{R}^2)$ could produce a simple integration rule. The following is a well-known fact (see for example [3]).

Lemma 4. Every polynomial of degree $n$ in $x$ and $y$ is a sum of ridge polynomials of degree $n$ (with any preassigned directions $\theta_0, \theta_1, \dots, \theta_n$ which are distinct modulo $\pi$).

The ridge polynomials associated with the Tchebycheff polynomial $U_n(t)$ play an essential role in this study. A simple consequence of Theorem 1 and Lemma 1 is the following

Corollary 1.
$$(3) \qquad \iint_D U_n(\theta; x,y)\,P(x,y)\,dx\,dy = 0$$
for each $P(x,y) \in \pi_{n-1}(\mathbb{R}^2)$ and each $\theta$.

Now we give an explicit formula for the projection of a ridge polynomial.

Lemma 5. Let $Q(t)$ be a polynomial of degree $n$ presented in the form
$$Q(t) = \sum_{j=0}^{n} a_j\, U_j(t).$$
Then the projection of the associated ridge polynomial $Q(\alpha; x,y)$ of direction $\alpha$ along the line $I(t,\theta)$ is given by the formula
$$P_Q(t,\theta) = 2\sqrt{1-t^2}\, \sum_{j=0}^{n} \frac{a_j}{j+1}\; U_j(t)\; \frac{\sin(j+1)(\theta-\alpha)}{\sin(\theta-\alpha)}.$$

Proof. It suffices to prove the lemma for $\alpha = 0$. So, we assume further that $\alpha = 0$. Clearly
$$P_Q(t,\theta) = \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} Q(t\cos\theta - y\sin\theta)\,dy$$

and after the substitution $t = \cos\tau$ we get
$$P_Q(\cos\tau, \theta) = \int_{-\sin\tau}^{\sin\tau} Q(\cos\tau\cos\theta - y\sin\theta)\,dy.$$
Setting $u = \cos\tau\cos\theta - y\sin\theta$, we arrive at the formula
$$P_Q(\cos\tau, \theta) = \frac{1}{\sin\theta} \int_{\cos(\tau+\theta)}^{\cos(\tau-\theta)} Q(u)\,du,$$
which can be found also in [3]. Now we perform the integration, taking into account the given presentation of $Q(t)$ and the relation $T'_{j+1}(t) = (j+1)\,U_j(t)$ between the Tchebycheff polynomials of first and second kind. We have
$$P_Q(\cos\tau, \theta) = \frac{1}{\sin\theta} \sum_{j=0}^{n} \frac{a_j}{j+1}\,\{T_{j+1}(\cos(\tau-\theta)) - T_{j+1}(\cos(\tau+\theta))\} = \sum_{j=0}^{n} \frac{a_j}{j+1}\;\frac{2\sin(j+1)\tau\,\sin(j+1)\theta}{\sin\theta} = 2\sum_{j=0}^{n} \frac{a_j}{j+1}\; U_j(t)\,\sqrt{1-t^2}\;\frac{\sin(j+1)\theta}{\sin\theta},$$
which is the wanted formula. The proof is completed.

An immediate consequence of Lemma 5 is the following integration formula, derived in [2].

Corollary 2. For each $t$ and $\theta$,
$$\int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} U_m(t\cos\theta + s\sin\theta)\,ds = \frac{2\sqrt{1-t^2}}{m+1}\; U_m(t)\;\frac{\sin(m+1)\theta}{\sin\theta}.$$

A similar formula can be given for the projection of any polynomial $f \in \pi_n(\mathbb{R}^2)$. Indeed, according to Lemma 4, $f$ can be written in the form
$$(4) \qquad f(x,y) = \sum_{i=0}^{n} b_i\, Q_i(\theta_i; x,y),$$
and consequently
$$f(x,y) = \sum_{i=0}^{n} \sum_{m=0}^{n} b_i\, a_{im}\, U_m(\theta_i; x,y),$$

where
$$(5) \qquad Q_i(t) = \sum_{m=0}^{n} a_{im}\, U_m(t).$$
Then, by Corollary 2,
$$P_f(t,\theta) = \sum_{m=0}^{n} \sum_{i=0}^{n} \frac{b_i\, a_{im}}{m+1} \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} U_m(t\cos(\theta-\theta_i) + s\sin(\theta-\theta_i))\,ds = \sqrt{1-t^2}\, \sum_{m=0}^{n} \sum_{i=0}^{n} c_{im}\; U_m(t)\;\frac{\sin(m+1)(\theta-\theta_i)}{\sin(\theta-\theta_i)}$$
with $c_{im} = 2\,b_i\, a_{im}/(m+1)$. Thus we proved the following

Corollary 3. Let $f$ be any polynomial from $\pi_n(\mathbb{R}^2)$. Assume that $f$ is given by (4) with $Q_i$ presented in the form (5). Then
$$(6) \qquad P_f(t,\theta) = 2\sqrt{1-t^2}\, \sum_{m=0}^{n} \sum_{i=0}^{n} \frac{b_i\, a_{im}}{m+1}\;\frac{\sin(m+1)(\theta-\theta_i)}{\sin(\theta-\theta_i)}\; U_m(t).$$

One can see from (6) that the projection of any polynomial $f \in \pi_n(\mathbb{R}^2)$ is a function of the form $\sqrt{1-t^2}\,\varphi(t)$, where $\varphi(t)$ is an algebraic polynomial of degree $n$. This fact was already used in the proof of Theorem 1. Moreover, (6) can be used to find the coefficients of this polynomial in terms of the directions $\{\theta_m\}$. Using Corollary 2, one may construct an orthonormal basis of ridge polynomials in $\pi_n(\mathbb{R}^2)$, as shown in the next lemma.

Lemma 6. Set $\theta_{mj} := \frac{j\pi}{m+1}$ for $j = 0, \dots, m$, $m = 0, \dots, n$. The ridge polynomials
$$(7) \qquad \frac{1}{\sqrt{\pi}}\; U_m(\theta_{mj}; x,y), \qquad m = 0, \dots, n, \quad j = 0, \dots, m,$$
form an orthonormal basis in $\pi_n(\mathbb{R}^2)$.

Proof. The relation
$$\iint_D U_m(\theta_{mj}; x,y)\; U_k(\theta_{ki}; x,y)\,dx\,dy = 0 \qquad \text{for } m \ne k$$
follows immediately from Corollary 1. In the case $m = k$, by Corollary 2,
$$\iint_D U_m(\theta_{mj}; x,y)\; U_m(\theta_{mi}; x,y)\,dx\,dy = \int_{-1}^{1} U_m(t) \int_{I(t,\theta_{mj})} U_m(\theta_{mi}; x,y)\,ds\,dt = \frac{2}{m+1}\;\frac{\sin(m+1)(\theta_{mi}-\theta_{mj})}{\sin(\theta_{mi}-\theta_{mj})} \int_{-1}^{1} \sqrt{1-t^2}\;U_m^2(t)\,dt = \pi\,\delta_{ij},$$

since
$$\frac{\sin(m+1)(\theta_{mi}-\theta_{mj})}{\sin(\theta_{mi}-\theta_{mj})} = \frac{\sin(i-j)\pi}{\sin\frac{(i-j)\pi}{m+1}} = (m+1)\,\delta_{ij},$$
where
$$\delta_{ij} = \begin{cases} 0 & \text{if } i \ne j, \\ 1 & \text{if } i = j. \end{cases}$$
The proof is completed.

Lemma 1 shows that the orthogonal polynomials, as in the univariate case, are closely related to the construction of Gaussian quadratures. We next discuss another orthonormal basis for $\pi_n(\mathbb{R}^2)$ (see [5]). These are the polynomials $F_{mk}(x,y)$, $m = 0, \dots, n$, $k = 0, \dots, m$, defined via
$$(8) \qquad F_{mk}(x,y) := Q_{m-k}(x, k)\,(1-x^2)^{k/2}\; P_k\Big(\frac{y}{\sqrt{1-x^2}}\Big) = Q_{m-k}(x, k)\; W_k(x,y),$$
where $P_k(t)$ is the $k$-th Legendre polynomial and $Q_{m-k}(t, k)$ is the polynomial of degree $m-k$, orthogonal in $(-1,1)$ with weight $\xi_k(t) = (1-t^2)^{k+\frac{1}{2}}$. Since $Q_{m-k}(t,k)$ is a Jacobi polynomial with parameters $\alpha = \beta = k + \frac{1}{2}$, and for the Jacobi polynomials $J_n^{(\alpha,\beta)}(t)$ the equality
$$\frac{d}{dt}\,\{J_n^{(\alpha,\beta)}(t)\} = \frac{1}{2}\,(n+\alpha+\beta+1)\; J_{n-1}^{(\alpha+1,\beta+1)}(t)$$
holds, we have
$$Q_{n-k}(t, k) = c_k\,\frac{d^k}{dt^k}\,U_n(t), \qquad k = 0, \dots, n, \quad c_k \in \mathbb{R}.$$
Note that
$$W_k(x,y) = \begin{cases} (y^2 + x^2 p_{2s,1}^2 - p_{2s,1}^2)\cdots(y^2 + x^2 p_{2s,s}^2 - p_{2s,s}^2) & \text{if } k = 2s, \\ y\,(y^2 + x^2 p_{2s+1,1}^2 - p_{2s+1,1}^2)\cdots(y^2 + x^2 p_{2s+1,s}^2 - p_{2s+1,s}^2) & \text{if } k = 2s+1, \end{cases}$$
where $p_{k,l}$, $l = 1, \dots, [\frac{k}{2}]$, are the positive zeros of $P_k(t)$. Therefore the polynomials $F_{mk}(x,y)$ vanish on ellipses. This fact leads to the conclusion that the two orthogonal systems (7) and (8) are different. The next result, due to Marr [4], gives the projection of any orthogonal polynomial along any line $I(t,\theta)$.

Lemma 7. Assume that $\omega(x,y)$ is a polynomial from $\pi_n(\mathbb{R}^2)$ which is orthogonal to each polynomial $Q \in \pi_{n-1}(\mathbb{R}^2)$ on the unit disc. Then
$$\int_{I(t,\theta)} \omega(x,y)\,ds = \frac{2}{n+1}\,\sqrt{1-t^2}\; U_n(t)\;\omega(\cos\theta, \sin\theta).$$

The proof of this nice observation can be found in [4]. In particular, when $\theta = 0$,
$$\int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} \omega(t,y)\,dy = A\,\sqrt{1-t^2}\; U_n(t), \qquad t \in (-1,1),\ A \in \mathbb{R}.$$
We can derive it here easily from our auxiliary results in the previous section. To do this, note that
$$\omega(x,y) = \sum_{j=0}^{n} b_j\, U_n(\theta_{nj}; x,y)$$
with some coefficients $\{b_j\}$. Then, according to (6),
$$\int_{I(t,\theta)} \omega(x,y)\,ds = \frac{2\sqrt{1-t^2}}{n+1}\; U_n(t) \sum_{j=0}^{n} b_j\,\frac{\sin(n+1)(\theta-\theta_{nj})}{\sin(\theta-\theta_{nj})} = \frac{2}{n+1}\,\sqrt{1-t^2}\; U_n(t)\;\omega(\cos\theta, \sin\theta).$$

Theorem 2. If the polynomial $\omega(x,y) := L_{I_1}(x,y) \cdots L_{I_n}(x,y)$ is orthogonal to $\pi_{n-1}(\mathbb{R}^2)$, then
$$t_k \in \{\eta_1, \dots, \eta_n\} \qquad \big(\eta_k = \cos\tfrac{k\pi}{n+1},\ k = 1, \dots, n\big)$$
or $\cos(\theta_k - \theta_i) = t_i$ for some $i$.

Proof. From Marr's lemma and the fact that $\omega(x,y)$ vanishes on $I_k$, it follows that
$$0 = \int_{I_k} \omega(x,y)\,ds = \frac{2}{n+1}\,\sqrt{1-t_k^2}\; U_n(t_k)\;\omega(\cos\theta_k, \sin\theta_k) = \frac{2}{n+1}\,\sqrt{1-t_k^2}\; U_n(t_k) \prod_{i=1}^{n} \big(\cos(\theta_k - \theta_i) - t_i\big).$$
Since $t_k \in (-1,1)$, Theorem 2 is proved.
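Corollary 2, the computational core behind Lemma 7 and Theorem 2, can be spot-checked numerically. The sketch below (plain Python; the function names are ours, not from the paper) evaluates the chord integral of $U_m$ by a brute-force midpoint rule and compares it with the closed form.

```python
import math

def U(m, t):
    # Chebyshev polynomial of the second kind, three-term recurrence
    a, b = 1.0, 2.0*t
    if m == 0:
        return a
    for _ in range(m - 1):
        a, b = b, 2.0*t*b - a
    return b

def projection(m, t, theta, N=20000):
    # composite midpoint rule for the integral of U_m(t cos(theta) + s sin(theta))
    # over s in [-sqrt(1-t^2), sqrt(1-t^2)], i.e. along the chord I(t, theta)
    h = math.sqrt(1.0 - t*t)
    ds = 2.0*h/N
    return ds*sum(U(m, t*math.cos(theta) + (-h + (i + 0.5)*ds)*math.sin(theta))
                  for i in range(N))

# compare with Corollary 2: (2 sqrt(1-t^2)/(m+1)) U_m(t) sin((m+1)theta)/sin(theta)
m, t, theta = 3, 0.4, 0.7
closed = 2.0*math.sqrt(1 - t*t)/(m + 1)*U(m, t)*math.sin((m + 1)*theta)/math.sin(theta)
assert abs(projection(m, t, theta) - closed) < 1e-7
```

For $m = 0$ the closed form reduces to the chord length $2\sqrt{1-t^2}$, which gives a second easy sanity check.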

5. Characterization of the Gaussian quadratures

It seems that the quadrature formula given in Theorem 1, with ADP $= 2n-1$, is unique (up to rotation). In this section we give a certain characterization of the Gaussian quadratures which supports our suggestion. We derive a simple observation that gives a necessary condition for the quadrature (1) for the unit disk to have ADP $= 2n-1$. Let $A_1, \dots, A_s$ be the common points of the unit circle $\{(x,y) : x^2 + y^2 = 1\}$ and the lines $I := \{I_1, \dots, I_n\}$ induced by (1), enumerated with respect to their natural position on the circle. Let $A_{s+1} \equiv A_1$. Then the following criterion is true.

Criterion. If there are two consecutive points $A_j$ and $A_{j+1}$, $j \in \{1, \dots, s\}$, such that the length of the arc determined by them is greater than $\frac{2\pi}{n+1}$, then (1) is not Gaussian.

Proof. Assume the opposite, and consider the set $\Psi$ of line segments $l_{\theta,j}$, $\theta \in [0, 2\pi)$, $1 \le j < \frac{n+1}{2}$, $j$ an integer, with endpoints $(\cos\theta, \sin\theta)$ and $\big(\cos(\theta + \frac{2j\pi}{n+1}), \sin(\theta + \frac{2j\pi}{n+1})\big)$, together with the set $I$. Then there is a line segment $l_{\theta,j}$ with the property
$$l_{\theta,j} \cap I_k = \emptyset \qquad \text{for each } k = 1, \dots, n.$$
Without loss of generality (after a rotation) $l_{\theta,j} \equiv I(\eta_j, 0)$. But by Lemma 3, applied to the Gaussian formulas (1) and (*),
$$I(\eta_j, 0) \cap \Big( \big(\bigcup_{k=1,\, k\ne j}^{n} I(\eta_k, 0)\big) \cup \big(\bigcup_{k=1}^{n} I_k\big) \Big) \cap D \ne \emptyset,$$
which is impossible: the chords $I(\eta_k, 0)$ are parallel to $I(\eta_j, 0)$, and $l_{\theta,j}$ meets no $I_k$. The proof is completed.

The following necessary condition is also true.

Lemma 8. If (1) is a Gaussian quadrature formula for $D$, then the polynomial $\omega(x,y) := L_{I_1}(x,y) \cdots L_{I_n}(x,y)$ has the property
$$(9) \qquad \omega(-x,-y) = (-1)^n\,\omega(x,y),$$
and when $n \ge 2$ there are indices $j, k \in \{1, \dots, n\}$ such that $|t_j| \le \frac{1}{2}$ and $|t_k| \ge \frac{1}{2}$.

Proof. Lemma 1 shows that $\omega(x,y)$ is orthogonal to any $Q \in \pi_{n-1}(\mathbb{R}^2)$ in $D$ with weight $p(x,y) \equiv 1$. The domain $D$ and $p(x,y)$ are centrally symmetric, and therefore (see [6]) $\omega(x,y)$ has the property (9).

When $n \ge 2$, ADP $= 2n-1 \ge 3$, and because (1) is exact for the polynomials $f_1(x,y) \equiv 1$, $f_2(x,y) = x^2$, $f_3(x,y) = y^2$, after a simple computation one can obtain
$$2\sum_{k=1}^{n} A_k \sqrt{1-t_k^2} = \pi,$$
$$\frac{2}{3}\sum_{k=1}^{n} A_k \sqrt{1-t_k^2}\,\big(3t_k^2\cos^2\theta_k + (1-t_k^2)\sin^2\theta_k\big) = \frac{\pi}{4},$$
$$\frac{2}{3}\sum_{k=1}^{n} A_k \sqrt{1-t_k^2}\,\big(3t_k^2\sin^2\theta_k + (1-t_k^2)\cos^2\theta_k\big) = \frac{\pi}{4},$$
and therefore
$$\sum_{k=1}^{n} A_k \sqrt{1-t_k^2}\,(4t_k^2 - 1) = 0.$$
Since $t_k \in (-1,1)$ and, by Lemma 2, $A_k > 0$ for $k = 1, \dots, n$, there exist indices $j, k \in \{1, \dots, n\}$ satisfying the above mentioned condition.

Actually Lemma 8 claims that the set of lines $I := \{I_1, \dots, I_n\}$ induced by a Gaussian quadrature formula (1) is centrally symmetric with respect to the origin. Namely, after a suitable enumeration,
$$t_k = -t_{k+s}, \quad k = 1, \dots, s, \qquad \text{when } n = 2s,$$
$$t_k = -t_{k+s+1}, \quad k = 1, \dots, s, \quad t_{s+1} = 0, \qquad \text{when } n = 2s+1.$$
Taking into account all previous results, a natural question comes up. Does the set
$$\Gamma_1 := \Big\{ f \in \pi_n(\mathbb{R}^2) : f = \prod_{k=1}^{n}(a_k x + b_k y + c_k),\ f \text{ orthogonal to } \pi_{n-1}(\mathbb{R}^2) \Big\}$$
coincide with the set
$$\Gamma_2 := \Big\{ f \in \pi_n(\mathbb{R}^2) : f = \prod_{k=1}^{n} L_{I_k}(x,y),\ I_1, \dots, I_n \text{ are induced by a Gaussian quadrature (1)} \Big\}?$$
Clearly, by Lemma 1, $\Gamma_2 \subset \Gamma_1$. There is an example (see [6]) of a polynomial $R(x,y) \in \Gamma_1$ such that its linear factors are lines passing through the origin and the equidistant points $(\cos\frac{k\pi}{n}, \sin\frac{k\pi}{n})$, $k = 1, \dots, n$, on the unit circle $\{(x,y) : x^2 + y^2 = 1\}$. Since each line generated by

this polynomial passes through the origin, by the second part of Lemma 8, $R(x,y)$ fails to be an element of $\Gamma_2$. This means that in order to characterize the Gaussian quadrature formulas of type (1) we need more than the orthogonality and the linear factor representation used so far.

6. Uniqueness

In this section we shall examine more closely the uniqueness property of (*) among a certain class of quadrature formulas.

Theorem 3. Among all quadrature formulas of the form
$$(10) \qquad \iint_D f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} B_k \int_{-\sqrt{1-t_k^2}}^{\sqrt{1-t_k^2}} f(t_k, y)\,dy, \qquad -1 < t_1 < \dots < t_n < 1,$$
(*) is the only one that is Gaussian.

Proof. Let (10) be a Gaussian quadrature formula. Denote $I_k^0 := I(\eta_k, 0)$ and $J_k^0 := I(t_k, 0)$. Then, by Lemma 3,
$$J_k^0 \cap \Big( \big(\bigcup_{l=1,\, l\ne k}^{n} J_l^0\big) \cup \big(\bigcup_{l=1}^{n} I_l^0\big) \Big) \cap D \ne \emptyset,$$
and since all the lines are parallel to one another, this is possible only when
$$t_k = \eta_k, \qquad k = 1, \dots, n.$$
It is easy to check that if we apply (*) and (10) to the polynomials
$$\psi_k(x,y) = \prod_{j=1,\, j\ne k}^{n} (x - \eta_j)^2 \in \pi_{2n-2}(\mathbb{R}^2), \qquad k = 1, \dots, n,$$
we derive that $A_k = B_k$, $k = 1, \dots, n$, which proves the theorem.

We next discuss quadrature formulas of the form
$$(11) \qquad \iint_D f(x,y)\,dx\,dy \approx B_1 \int_{-\sqrt{1-x_1^2}}^{\sqrt{1-x_1^2}} f(x_1, y)\,dy + \sum_{k=2}^{n} B_k \int_{I_k} f(x,y)\,ds,$$
where $x_1$ is fixed at the zero of $U_n(t)$ with the greatest absolute value. We prove the following theorem about the uniqueness of (*) as a Gaussian quadrature formula among the formulas of the form (11).
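Theorem 3 singles out (*) among rules with parallel chords; its defining property, exactness on $\pi_{2n-1}(\mathbb{R}^2)$, is also easy to confirm directly. In the sketch below (plain Python; the function names are ours) the chord integral of a monomial $x^p y^q$ is elementary and the exact disc integral has a classical Gamma-function expression, so the rule can be checked on every monomial of total degree at most $2n-1$.

```python
import math

def nodes_weights(n):
    # chords x = eta_k = cos(k pi/(n+1)) and weights A_k = pi/(n+1) sin(k pi/(n+1))
    eta = [math.cos(k*math.pi/(n + 1)) for k in range(1, n + 1)]
    wts = [math.pi/(n + 1)*math.sin(k*math.pi/(n + 1)) for k in range(1, n + 1)]
    return eta, wts

def apply_to_monomial(n, p, q):
    # quadrature value for f(x,y) = x^p y^q; the chord integral
    # int_{-h}^{h} y^q dy equals 2 h^(q+1)/(q+1) for even q and 0 otherwise
    eta, wts = nodes_weights(n)
    s = 0.0
    for x, a in zip(eta, wts):
        if q % 2 == 0:
            h = math.sqrt(1.0 - x*x)
            s += a * x**p * 2.0*h**(q + 1)/(q + 1)
    return s

def exact_monomial(p, q):
    # exact integral of x^p y^q over the unit disc: it vanishes unless p and q
    # are both even, in which case it equals G((p+1)/2) G((q+1)/2) / G((p+q)/2 + 2)
    if p % 2 or q % 2:
        return 0.0
    return math.gamma((p + 1)/2)*math.gamma((q + 1)/2)/math.gamma((p + q)/2 + 2)

n = 4
for p in range(2*n):
    for q in range(2*n - p):        # all monomials with p + q <= 2n - 1
        assert abs(apply_to_monomial(n, p, q) - exact_monomial(p, q)) < 1e-12
```

The same loop with the range extended to degree $2n$ fails (e.g. on $U_n(x)^2$), in accordance with the bound (2).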

Theorem 4. (*) is the only Gaussian quadrature formula among the quadrature formulas of type (11).

Proof. We shall make use of the orthonormal basis (8) for $\pi_n(\mathbb{R}^2)$. Let ADP(11) $= 2n-1$. Then, by Lemma 1,
$$\omega(x,y) = (x - x_1)\, L_{I_2}(x,y) \cdots L_{I_n}(x,y)$$
is orthogonal to any $Q \in \pi_{n-1}(\mathbb{R}^2)$, and therefore
$$\omega(x,y) = \sum_{k=0}^{n} a_k\,\frac{d^k}{dx^k}U_n(x)\; W_k(x,y)$$
for some coefficients $\{a_k\}$. On the other hand, $\omega(x_1, y) = 0$ for each $y$. After comparison of the coefficients of the univariate polynomial $\omega(x_1, y)$, we arrive at the equalities
$$a_k\,\frac{d^k}{dx^k}U_n(x_1) = 0, \qquad k = 0, \dots, n.$$
According to the fact that $x_1$ is the zero of $U_n(t)$ with the greatest absolute value, by Rolle's theorem we have that any zero $t$ of $\frac{d^k}{dx^k}U_n(t)$, $k = 1, \dots, n-1$, satisfies the condition
$$|t| < x_1.$$
Therefore
$$\frac{d^k}{dx^k}U_n(x_1) \ne 0, \qquad k = 1, \dots, n,$$
which combined with the previous result gives $a_1 = \dots = a_n = 0$. Since not all $a_k$ are zero, the only possibility left is $a_0 \ne 0$. Thus $\omega(x,y) = a_0\, U_n(x)$, which means that the quadrature formula is of the form (10). Then, by Theorem 3, this formula is exactly (*).

Now we introduce a new class of quadrature formulas $\Theta(n)$ that contains all formulas of the type
$$(12) \qquad \iint_D f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} B_k \int_{J_k} f(x,y)\,ds,$$
where the set of lines $J := \{J_1, \dots, J_n\}$ satisfies the following condition: there is an index $k \in \{1, \dots, n\}$ such that all the segments $J_l$, $l =$

$1, \dots, n$, $l \ne k$, are in one of the halfplanes determined by $J_k$, and there is none in the other halfplane. For this set we have the following uniqueness theorem.

Theorem 5. Formula (*) is the only Gaussian quadrature formula in $\Theta(n)$.

Proof. Let (12) be a Gaussian quadrature formula. Without loss of generality, we can assume that the line determining (12) as an element of $\Theta(n)$ is the line with equation $x = t_1$, $t_1 \ge 0$. Let $\omega(x,y) = L_{J_1}(x,y) \cdots L_{J_n}(x,y)$. Then, by Lemma 1 and Lemma 7,
$$(13) \qquad \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} \omega(t,y)\,dy = A\,\sqrt{1-t^2}\; U_n(t), \qquad t \in (-1,1),\ A \in \mathbb{R}.$$
Assume that $A = 0$. From the mean value theorem it follows that
$$0 = \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} \omega(t,y)\,dy = 2\sqrt{1-t^2}\;\omega(M(t)), \qquad t \in (-1,1),\ M(t) \in I(t,0).$$
In particular,
$$\omega(M(t)) = 0, \qquad t > t_1, \quad M(t) \in I(t,0).$$
This is impossible, since (12), as an element of the set $\Theta(n)$, has no lines in one of the halfplanes determined by $J_1$, so $\omega$ does not vanish there. Therefore $A \ne 0$ and, by (13), when $t = t_1$,
$$U_n(t_1) = 0.$$
Let $x_1$ be the zero of $U_n(t)$ with the greatest absolute value. Assume $x_1 \ne t_1$ and consider (*), which by Theorem 1 is a Gaussian quadrature formula. Hence, using Lemma 3, we have
$$I(x_1, 0) \cap \Big( \big(\bigcup_{l=1}^{n} J_l\big) \cup \big(\bigcup_{l=2}^{n} I(x_l, 0)\big) \Big) \cap D \ne \emptyset,$$
which cannot be true, since (12) belongs to $\Theta(n)$. This implies
$$t_1 = x_1.$$
In other words, the formula (12) is of type (11), and then, by Theorem 4, (12) is actually (*). The proof is completed.

We have to mention that (*) has a higher degree of precision than any quadrature formula derived from the multivariate Hakopian interpolation on the unit disk. Let us explain this more precisely.

It was shown in [1] that if $Y_1, \dots, Y_{2n+1}$ are $2n+1$ points on the unit circle $\{(x,y) : x^2 + y^2 = 1\}$ and $\{Y_i, Y_j\}$, $1 \le i < j \le 2n+1$, are the line segments connecting $Y_i$ and $Y_j$, then for each $g(x,y) \in C(D)$ there is a unique polynomial $H(g; x,y) \in \pi_{2n-1}(\mathbb{R}^2)$ such that
$$\int_{\{Y_i, Y_j\}} H(g; x,y)\,ds = \int_{\{Y_i, Y_j\}} g(x,y)\,ds, \qquad 1 \le i < j \le 2n+1.$$
This polynomial can be written as
$$H(g; x,y) = \sum_{1 \le k < l \le 2n+1} \sigma_{k,l}(x,y) \int_{\{Y_k, Y_l\}} g(x,y)\,ds,$$
where $\sigma_{k,l}(x,y) \in \pi_{2n-1}(\mathbb{R}^2)$. After integrating over $D$, we obtain the quadrature formula
$$(14) \qquad \iint_D g(x,y)\,dx\,dy \approx \iint_D H(g; x,y)\,dx\,dy = \sum_{1 \le k < l \le 2n+1} A_{k,l} \int_{\{Y_k, Y_l\}} g(x,y)\,ds,$$
where
$$A_{k,l} = \iint_D \sigma_{k,l}(x,y)\,dx\,dy.$$
Each quadrature of the form (14) uses integration along $N = n(2n+1)$ lines and, as an element of $\Theta(N)$, by Theorem 5,
$$\mathrm{ADP}(14) < 2N - 1 = 2n(2n+1) - 1.$$
On the other hand, if we are allowed to use $N = n(2n+1)$ line integrals, we would prefer (*), since it is Gaussian and thus has degree of precision $2N - 1 = 2n(2n+1) - 1$. Therefore the quadrature formula (*) is better than any one of the form (14).

7. Some examples

The quadrature formula (*) can be written in the form
$$\iint_D f(x,y)\,dx\,dy \approx \sum_{k=1}^{n} A_k \int_{I_k} f,$$
where $I_k$, $k = 1, \dots, n$, are the line segments in $D$, parallel to the $y$-axis, passing through the points $(\cos\frac{k\pi}{n+1}, 0)$, and
$$A_k = \frac{\pi}{n+1}\,\sin\frac{k\pi}{n+1}, \qquad k = 1, \dots, n.$$
Next we give examples of our formula with calculated weights and nodes.
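These weights and nodes follow immediately from the closed forms above; a short script (plain Python; the function name and output formatting are ours) regenerates the tables for any $n$.

```python
import math

def formula_star(n):
    # weights A_k = pi/(n+1) sin(k pi/(n+1)) and chord abscissae cos(k pi/(n+1))
    return [(math.pi/(n + 1)*math.sin(k*math.pi/(n + 1)),
             math.cos(k*math.pi/(n + 1))) for k in range(1, n + 1)]

for n in (1, 2, 3):
    print(f"n = {n}: exact for all P in pi_{2*n - 1}(R^2)")
    for k, (A, x) in enumerate(formula_star(n), 1):
        print(f"  k = {k}   A = {A:.8f}   line segment x = {x:+.8f}")
```

For instance, the middle chord for $n = 3$ is $x = 0$ with weight $\pi/4$, matching the table below.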

n = 1
The formula is exact for all P(x, y) ∈ π_1(IR^2).

k   Coefficients        Line segments
1   1.57079632679490    x = 0.00000000000000

n = 2
The formula is exact for all P(x, y) ∈ π_3(IR^2).

k   Coefficients        Line segments
1   0.90689968211711    x = 0.50000000000000
2   0.90689968211711    x = -0.50000000000000

n = 3
The formula is exact for all P(x, y) ∈ π_5(IR^2).

k   Coefficients        Line segments
1   0.55536036726980    x = 0.70710678118655
2   0.78539816339745    x = 0.00000000000000
3   0.55536036726980    x = -0.70710678118655

n = 4
The formula is exact for all P(x, y) ∈ π_7(IR^2).

k   Coefficients        Line segments
1   0.36931636609809    x = 0.80901699437495
2   0.59756643294831    x = 0.30901699437495
3   0.59756643294831    x = -0.30901699437495
4   0.36931636609809    x = -0.80901699437495

n = 5
The formula is exact for all P(x, y) ∈ π_9(IR^2).

k   Coefficients        Line segments
1   0.26179938779915    x = 0.86602540378444
2   0.45344984105855    x = 0.50000000000000
3   0.52359877559830    x = 0.00000000000000
4   0.45344984105855    x = -0.50000000000000
5   0.26179938779915    x = -0.86602540378444

n = 6
The formula is exact for all P(x, y) ∈ π_11(IR^2).

k   Coefficients        Line segments
1   0.19472656676054    x = 0.90096886790242
2   0.35088514880954    x = 0.62348980185873
3   0.43754662381286    x = 0.22252093395631
4   0.43754662381286    x = -0.22252093395631
5   0.35088514880954    x = -0.62348980185873
6   0.19472656676054    x = -0.90096886790242

n = 7
The formula is exact for all P(x, y) ∈ π_13(IR^2).

k   Coefficients        Line segments
1   0.15027943247109    x = 0.92387953251129
2   0.27768018363490    x = 0.70710678118655
3   0.36280664401743    x = 0.38268343236509
4   0.39269908169872    x = 0.00000000000000
5   0.36280664401743    x = -0.38268343236509
6   0.27768018363490    x = -0.70710678118655
7   0.15027943247109    x = -0.92387953251129

n = 8
The formula is exact for all P(x, y) ∈ π_15(IR^2).

k   Coefficients        Line segments
1   0.11938755218352    x = 0.93969262078591
2   0.22437520360108    x = 0.76604444311898
3   0.30229989403904    x = 0.50000000000000
4   0.34376275578460    x = 0.17364817766693
5   0.34376275578460    x = -0.17364817766693
6   0.30229989403904    x = -0.50000000000000
7   0.22437520360108    x = -0.76604444311898
8   0.11938755218352    x = -0.93969262078591

n = 9
The formula is exact for all P(x, y) ∈ π_17(IR^2).

k   Coefficients        Line segments
1   0.09708055193627    x = 0.95105651629515
2   0.18465818304904    x = 0.80901699437495
3   0.25416018461576    x = 0.58778525229247
4   0.29878321647416    x = 0.30901699437495
5   0.31415926535898    x = 0.00000000000000
6   0.29878321647416    x = -0.30901699437495
7   0.25416018461576    x = -0.58778525229247
8   0.18465818304904    x = -0.80901699437495
9   0.09708055193627    x = -0.95105651629515

n = 10
The formula is exact for all P(x, y) ∈ π_19(IR^2).

k    Coefficients        Line segments
1    0.08046263007728    x = 0.95949297361450
2    0.15440665639539    x = 0.84125353283118
3    0.21584157370409    x = 0.65486073394529
4    0.25979029037054    x = 0.41541501300189
5    0.28269234274376    x = 0.14231483827329
6    0.28269234274376    x = -0.14231483827329
7    0.25979029037054    x = -0.41541501300189
8    0.21584157370409    x = -0.65486073394529
9    0.15440665639539    x = -0.84125353283118
10   0.08046263007728    x = -0.95949297361450
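The tables above can be regenerated directly from the closed form for the weights and nodes; a short sketch (the helper name is ours, not from the paper):

```python
import math

def quadrature_table(n):
    # weights and nodes of formula (*): A_k = pi/(n+1) * sin(k pi/(n+1)),
    # x_k = cos(k pi/(n+1)), k = 1, ..., n
    rows = []
    for k in range(1, n + 1):
        theta = k * math.pi / (n + 1)
        rows.append((math.pi / (n + 1) * math.sin(theta), math.cos(theta)))
    return rows

for k, (A, x) in enumerate(quadrature_table(3), start=1):
    print(f"{k}  {A:.14f}  x = {x:.14f}")
```

The symmetry A_k = A_{n+1-k}, x_k = -x_{n+1-k} visible in the tables follows from the values of sin and cos at supplementary angles.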

References

1. Bojanov, B., Hakopian, H., Sahakian, A. (1993): Spline Functions and Multivariate Interpolations. Kluwer, Dordrecht
2. Hamaker, C., Solmon, D. (1978): The angles between the null spaces of X rays. J. Math. Anal. Appl. 62, 1-23
3. Logan, B., Shepp, L. (1975): Optimal reconstruction of a function from its projections. Duke Math. J. 42(4), 645-659
4. Marr, R. (1974): On the reconstruction of a function on a circular domain from a sampling of its line integrals. J. Math. Anal. Appl. 45, 357-374
5. Mysovskih, I. (1981): Interpolatory Cubature Formulas. Nauka, Moscow (in Russian)
6. Suetin, P. (1988): Orthogonal polynomials of two variables. Nauka, Moscow (in Russian)