Numerische Mathematik Manuscript-Nr. (will be inserted by hand later)

Polynomial interpolation of minimal degree

Thomas Sauer
Mathematical Institute, University Erlangen-Nuremberg, Bismarckstr. 1 1/2, 90537 Erlangen, Germany, sauer@mi.uni-erlangen.de

December 19, 1995

Dedicated to Prof. H. Berens on the occasion of his 60th birthday

Abstract. Minimal degree interpolation spaces with respect to a finite set of points are subspaces of multivariate polynomials of least possible degree for which Lagrange interpolation with respect to the given points is uniquely solvable and degree reducing. This is a generalization of the concept of least interpolation introduced by de Boor and Ron. This paper investigates the behavior of Lagrange interpolation with respect to these spaces, giving a Newton interpolation method and a remainder formula for the error of interpolation. Moreover, a special minimal degree interpolation space will be introduced which is particularly beneficial from the numerical point of view.

Key words: Lagrange interpolation, minimal degree, Newton interpolation method, remainder formula, interpolation algorithm, numerical performance

Mathematics Subject Classification (1991): 41A05, 41A10, 65D10

1. Introduction

Given a finite dimensional linear space $\mathcal{P}$ of dimension $N + 1$, $N \in \mathbb{N}$, and a finite set of $N + 1$ pairwise distinct points, say $\Xi_N = \{x_0, \dots, x_N\} \subset \mathbb{R}^d$, the Lagrange interpolation problem addresses the question of finding, for a given $f : \mathbb{R}^d \to \mathbb{R}$, an element $P \in \mathcal{P}$ which matches $f$ at $\Xi_N$; i.e.,
$$P(x_i) = f(x_i), \qquad i = 0, \dots, N.$$
The "simplest" example for such a $\mathcal{P}$, which can also be treated numerically in a suitable way, is when $\mathcal{P} \subset \Pi^d$, where $\Pi^d$ is the space of all polynomials in $d$ variables. It is well known that in the univariate case the Lagrange interpolation problem with respect to $N + 1$ distinct points is always poised, i.e., uniquely

solvable, if one takes $\mathcal{P}$ to be the space of all polynomials of degree less than or equal to $N$. In several variables, however, the situation is much more difficult. In order to successfully interpolate with $\Pi^d_n$, the space of all polynomials of total degree less than or equal to $n$, the number of the points has to match the dimension of the space, which is $\binom{n+d}{d}$, so that only point sets of a certain cardinality are admissible. And even if this is the case, there can be the problem that the points lie on some algebraic surface of degree $n$, i.e., there is some polynomial $q$ of total degree $n$ which vanishes on $\Xi_N$. To overcome this problem, there have been several approaches to find configurations of points which guarantee the poisedness of the respective interpolation problem. Investigations in this spirit emerge from a remarkable paper by Chung and Yao [7] and were extended, e.g., by Gasca and Maeztu [10]; an extensive survey about these methods has been provided by Gasca [9]. Unfortunately, in many cases the points are given a priori and cannot be determined or modified by the interpolation process; for example, if the data to be interpolated stem from some physical or "real world" measurements. In this respect, even the Lagrange interpolation with respect to points with very regular structure (e.g., points on a rectangular grid) need not be poised in $\Pi^d_n$, regardless of whether the cardinalities match or not. Consequently, if there is no access to the points, the only way out is to choose the subspace $\mathcal{P}$ suitably, such that one can uniquely interpolate at the given points. The first approach in this direction is the concept of Kergin interpolation, introduced by Kergin [11].
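The cardinality-versus-poisedness issue described above is easy to reproduce numerically. The following sketch (plain NumPy; all helper names are ours, not from the paper) builds the generalized Vandermonde matrix for $\Pi^2_2$ at six points on the unit circle, where the degree-2 polynomial $q(x) = \xi_1^2 + \xi_2^2 - 1$ vanishes, so the matrix is rank deficient even though the cardinalities match:

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_exponents(d, n):
    # exponent vectors of all monomials in d variables of total degree <= n
    exps = []
    for k in range(n + 1):
        for c in combinations_with_replacement(range(d), k):
            e = [0] * d
            for i in c:
                e[i] += 1
            exps.append(tuple(e))
    return exps

def vandermonde(points, exps):
    # generalized Vandermonde matrix [x_i^alpha]
    return np.array([[np.prod(x ** np.array(e)) for e in exps] for x in points])

exps = monomial_exponents(2, 2)
assert len(exps) == 6            # dim Pi_2^2 = C(2+2, 2) = 6, matching 6 points

# six points on the unit circle: q(x) = xi_1^2 + xi_2^2 - 1 vanishes on them,
# so the interpolation problem is not poised despite matching cardinalities
theta = 2 * np.pi * np.arange(6) / 6
circle = np.column_stack([np.cos(theta), np.sin(theta)])
assert np.linalg.matrix_rank(vandermonde(circle, exps)) == 5

# pulling one point off the curve restores poisedness
generic = circle.copy()
generic[0] *= 0.5
assert np.linalg.matrix_rank(vandermonde(generic, exps)) == 6
```

The rank drops to exactly 5 because, by Bezout's theorem, the circle is the only conic through six distinct points of an irreducible conic.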
Although his method of constructing an interpolating polynomial provides a very nice closed form of the interpolating polynomial based on a generalized Newton interpolation formula, as pointed out by Micchelli [15], it has the drawback that the interpolating polynomial with respect to $N + 1$ points has degree $N$ and cannot be handled very well numerically. From that point of view it is important to find appropriate spaces $\mathcal{P}$ (i.e., polynomial subspaces $\mathcal{P} \subset \Pi^d$ such that interpolation with respect to the given point set $\Xi_N$ is poised in $\mathcal{P}$) which consist of polynomials of least possible degree. This question has been considered by de Boor and Ron [3], who investigated it to great extent in a series of papers ([5, 1, 2] to name a few). They introduced a particular polynomial subspace which they called the least choice. Among other properties to be listed later, this space, which depends on the nodes of interpolation, provides three intrinsic features:
1. The Lagrange interpolation problem with respect to the points $\Xi_N$ is uniquely solvable in this subspace.
2. This subspace is of minimal degree; i.e., if the subspace contains polynomials of total degree less than or equal to $n$ and at least one polynomial of total degree $n$, then the Lagrange interpolation problem with respect to $\Xi_N$ is unsolvable in any subspace of $\Pi^d_{n-1}$.
3. Interpolation with respect to this space is degree reducing; i.e., if a polynomial $p$ has total degree $k \le n$, then its interpolant has degree at most $k$.
This paper will approach the question of finding proper polynomial subspaces by considering all polynomial subspaces which satisfy the above three requirements and study Lagrange interpolation with respect to them. For these spaces, which we will call minimal degree interpolation spaces, we will derive an analogy

to the univariate Newton interpolation method as well as a remainder formula. In addition, we will provide a particular minimal degree interpolation space which captures quite a few of the striking properties of the least interpolant of de Boor and Ron but combines them with "optimal" numerical behavior in the sense that it minimizes storage and arithmetic operations, which may make the space useful for practical applications.

The paper is organized as follows: first we will formally introduce minimal degree interpolation spaces in Section 2 and discuss some of their basic properties. In Section 3 we will consider some special examples of minimal interpolation spaces, among them the least interpolation of de Boor and Ron. The Newton method and the remainder formula are the subject of Section 4, while in the final Section 5 we construct and investigate the particular minimal degree interpolation space mentioned above, stating and verifying its properties.

We will use standard multiindex notation throughout the paper. For a multiindex $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d$, we denote by $|\alpha| = \alpha_1 + \cdots + \alpha_d$ the length of $\alpha$. Moreover, we write
$$x^\alpha = \xi_1^{\alpha_1} \cdots \xi_d^{\alpha_d}, \qquad x = (\xi_1, \dots, \xi_d) \in \mathbb{R}^d, \quad \alpha \in \mathbb{N}_0^d,$$
for the monomials. The totality of the $x^\alpha$, $|\alpha| \le n$, spans $\Pi^d_n \subset \Pi^d$, the space of all polynomials of total degree less than or equal to $n$. We will order multiindices in the most convenient way, writing $\alpha \prec \beta$ if either $|\alpha| < |\beta|$, or $|\alpha| = |\beta|$ and $\alpha$ appears earlier than $\beta$ in the standard lexicographical ordering. The latter means that there is some $1 \le k \le d$ such that $\alpha_i = \beta_i$, $i = 1, \dots, k - 1$, and $\alpha_k < \beta_k$.

2. Minimal degree interpolation spaces

Let $\Xi_N = \{x_0, \dots, x_N\}$ be a set of $N + 1$ distinct points in $\mathbb{R}^d$. We say that the Lagrange interpolation problem with respect to $\Xi_N$ is poised in a subspace $\mathcal{P} \subset \Pi^d$ if for any $f : \mathbb{R}^d \to \mathbb{R}$ there exists a unique $P \in \mathcal{P}$ such that
$$P(x_i) = f(x_i), \qquad i = 0, \dots, N.$$
Clearly, this requires that $\dim \mathcal{P} = N + 1$.
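The graded ordering $\prec$ just described (sort by length, break ties by the first differing entry) is exactly what Python's tuple comparison provides; a minimal sketch, with names of our own choosing:

```python
from itertools import product

def key(alpha):
    # alpha < beta iff |alpha| < |beta|, or equal lengths and alpha comes
    # first lexicographically (first differing entry decides)
    return (sum(alpha), alpha)

def multiindices(d, n):
    # all alpha in N_0^d with |alpha| <= n, listed in the ordering above
    return sorted((a for a in product(range(n + 1), repeat=d) if sum(a) <= n),
                  key=key)

print(multiindices(2, 2))
# -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```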
Suppose that the Lagrange interpolation problem with respect to $\Xi_N$ is poised in $\mathcal{P}(\Xi_N) \subset \Pi^d$, where the notation $\mathcal{P}(\Xi_N)$ is used to emphasize the fact that the space $\mathcal{P}(\Xi_N)$, which admits unique Lagrange interpolation at $\Xi_N$, depends on $\Xi_N$. By $L_{\mathcal{P}(\Xi_N)}(f; \cdot)$ we denote the projection of $f$ on $\mathcal{P}(\Xi_N)$, i.e., the interpolating polynomial $L_{\mathcal{P}(\Xi_N)}(f; \cdot) \in \mathcal{P}(\Xi_N)$ such that
$$L_{\mathcal{P}(\Xi_N)}(f; x_i) = f(x_i), \qquad i = 0, \dots, N.$$
Since polynomials of high degree are expensive in storage and unstable in evaluation and therefore inconvenient for numerical purposes, it is reasonable to request that the space $\mathcal{P}(\Xi_N)$ be chosen of minimal degree, i.e., we take $\mathcal{P}(\Xi_N) \subset \Pi^d_n$, where $n$ is the minimum of all admissible values. In other words, $n$ is chosen in such a way that the Lagrange interpolation problem with respect to $\Xi_N$ is not poised in any subspace of $\Pi^d_{n-1}$. Of course, $n$ depends on $\Xi_N$ and satisfies

$$n + 1 \le N + 1 \le \binom{n + d}{d},$$
where $n + 1 = N + 1$ if and only if all the points are on a straight line, and $N + 1 = \binom{n+d}{d}$ if and only if the Lagrange interpolation problem with respect to $\Xi_N$ is poised in $\Pi^d_n$. The latter case has been treated in [20]. The second requirement is that Lagrange interpolation with respect to the space $\mathcal{P}(\Xi_N)$ be degree reducing, a property observed already by de Boor and Ron [3]. This means that for $k \le n$
$$p \in \Pi^d_k \implies L_{\mathcal{P}(\Xi_N)}\, p \in \Pi^d_k,$$
which is a desirable behavior of the projection operator $L_{\mathcal{P}(\Xi_N)}$ on $\Pi^d_n$. Summarizing these requirements, we state

Definition 1. Let a finite set $\Xi_N$ of $N + 1$ distinct points be given. A subspace $\mathcal{P}(\Xi_N) \subset \Pi^d$ is called a minimal degree interpolation space of order $n$ with respect to $\Xi_N$ provided that
1. the Lagrange interpolation problem with respect to $\Xi_N$ is poised in $\mathcal{P}(\Xi_N) \subset \Pi^d_n$,
2. $\mathcal{P}(\Xi_N)$ is of minimal degree with this property, i.e., there is no subspace of $\Pi^d_{n-1}$ which admits unique interpolation at $\Xi_N$,
3. Lagrange interpolation with respect to $\mathcal{P}(\Xi_N)$ is degree reducing.

Next we introduce a Newton interpolation method for $\mathcal{P}(\Xi_N)$, extending the one in [20]. For that purpose, let us briefly recall that this approach is based on rearranging the points of $\Xi_N$, $N + 1 = \dim \Pi^d_n$, into $\{x_\alpha : |\alpha| \le n\}$, such that there are polynomials $p_\alpha \in \Pi^d_{|\alpha|}$, $|\alpha| \le n$, which satisfy
$$p_\alpha(x_\beta) = \delta_{\alpha,\beta}, \qquad |\beta| \le |\alpha| \le n.$$
The sets $\mathbf{x}_k = \{x_\alpha : |\alpha| = k\}$ are called blocks. To extend this notion to minimal degree interpolation spaces, we consider index sets $I_k \subset \{\alpha : |\alpha| \le k\} \subset \mathbb{N}_0^d$, $k = 0, \dots, n$, and $I_{-1} = \emptyset$ for convenience, which satisfy
$$I_0 \subset I_1 \subset \cdots \subset I_n \quad \text{and} \quad I_k \setminus I_{k-1} \subset \{\alpha : |\alpha| = k\}, \qquad k = 0, \dots, n. \tag{1}$$
The complements of these sets, $I_k' := \{\alpha : |\alpha| \le k\} \setminus I_k$, $k = 0, \dots, n$, then are nested as well. We say that $\mathcal{P} \subset \Pi^d_n$ admits a Newton basis of order $n$ with respect to $\Xi_N$ if there exists a system of index sets $I = (I_0, \dots, I_n)$, satisfying (1), such that
1. $I_n \setminus I_{n-1} \ne \emptyset$,
2. the points in $\Xi_N$ can be re-indexed as $\Xi_N = \{x_\alpha : \alpha \in I_n\}$ (in particular, $\# I_n = \dim \mathcal{P}(\Xi_N) = N + 1$),
3. there exists a basis $p_\alpha \in \Pi^d_{|\alpha|}$, $\alpha \in I_n$, of $\mathcal{P}(\Xi_N)$ such that
$$p_\alpha(x_\beta) = \delta_{\alpha,\beta}, \qquad \beta \in I_{|\alpha|}, \quad \alpha \in I_n, \tag{2}$$
4. there exist polynomials $p_\alpha^\perp \in \Pi^d_{|\alpha|}$, $\alpha \in I_n'$, such that
$$p_\alpha^\perp(\Xi_N) = 0 \quad \text{and} \quad \Pi^d_n = \operatorname{span}\{p_\alpha : \alpha \in I_n\} \oplus \operatorname{span}\{p_\alpha^\perp : \alpha \in I_n'\}. \tag{3}$$
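The bookkeeping in condition (1) is straightforward to mechanize. The following sketch (helper names are ours; the concrete index system is a hypothetical example for $d = 2$, $n = 2$) checks the nesting conditions and computes the complements $I_k'$:

```python
from itertools import product

def check_index_system(I):
    r"""Verify condition (1): nesting, and I_k \ I_{k-1} inside level k."""
    prev = set()
    for k, Ik in enumerate(I):
        assert prev <= Ik, "index sets must be nested"
        assert all(sum(a) == k for a in Ik - prev), \
            "indices added at step k must have length k"
        prev = Ik
    return True

def complements(I, d):
    r"""The complements I_k' = {alpha : |alpha| <= k} \ I_k."""
    full = lambda k: {a for a in product(range(k + 1), repeat=d) if sum(a) <= k}
    return [full(k) - Ik for k, Ik in enumerate(I)]

# a hypothetical index system for d = 2, n = 2 (hence four points x_alpha):
I = ({(0, 0)},
     {(0, 0), (0, 1), (1, 0)},
     {(0, 0), (0, 1), (1, 0), (0, 2)})
assert check_index_system(I)

Ip = complements(I, d=2)
assert Ip[2] == {(1, 1), (2, 0)}      # unused level-2 indices form I_2'
assert Ip[0] <= Ip[1] <= Ip[2]        # the complements are nested as well
```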

The polynomials $p_\alpha$, $\alpha \in I_n$, are called the Newton fundamental polynomials with respect to $\Xi_N$ in $\mathcal{P}(\Xi_N)$. It is easily seen that the Lagrange interpolation problem with respect to $\Xi_N$ is poised in $\mathcal{P}(\Xi_N)$ if there exist Newton fundamental polynomials for $\mathcal{P}(\Xi_N)$ which satisfy (2) after re-indexing $\Xi_N$ properly. The respective blocks of points for such a Newton basis are the sets
$$\mathbf{x}_k := \{x_\alpha : \alpha \in I_k \setminus I_{k-1}\}, \qquad k = 0, \dots, n.$$
The relation between minimal degree interpolation spaces and spaces which admit a Newton basis is now as close as can be, in view of

Theorem 1. A subspace $\mathcal{P} \subset \Pi^d$ is a minimal degree interpolation space of order $n$ with respect to $\Xi_N$ if and only if there exists a Newton basis of order $n$ with respect to $\Xi_N$ for $\mathcal{P}$.

Proof. Assume that $\mathcal{P} \subset \Pi^d_n$ is a minimal degree interpolation space of order $n$ with respect to $\Xi_N$ and let $p_0, \dots, p_N$ be a basis of $\mathcal{P}$. Let $Q = \{q \in \Pi^d : q(\Xi_N) = 0\}$ denote the ideal of all polynomials which annihilate $\Xi_N$ and define $Q_k = Q \cap \Pi^d_k$, $k \in \mathbb{N}_0$. Since $\mathcal{P}$ is an interpolation space we have
$$\Pi^d_n = \mathcal{P} \oplus Q_n.$$
Since $\mathcal{P}$ is degree reducing, it follows that $\Pi^d_k = \mathcal{P}_k \oplus Q_k$ as well, $k = 0, \dots, n$, where $\mathcal{P}_k = \mathcal{P} \cap \Pi^d_k$. This implies that there exists a graded basis for $\mathcal{P}$. Thus we define the system of nested index sets $I = (I_0, \dots, I_n)$ in such a way that $\# I_k = \dim \mathcal{P}_k$, and rewrite the graded basis as $\{g_\alpha : \alpha \in I_k, \ k = 0, \dots, n\}$. Note that this is indeed trivial, since the only requirements for the index system are being nested and having proper cardinality at each level. In order to obtain the Newton fundamental polynomials we then apply the orthogonalization process from [20] (see also [18]) to the polynomials $g_\alpha$, $\alpha \in I_n$, which also re-indexes the points in $\Xi_N$ in an appropriate way. The polynomials $p_\alpha^\perp$, $\alpha \in I_n'$, are obtained similarly by applying the same process (without the final orthogonalization step) to $I' = (I_0', \dots, I_n')$ and a graded basis of the vector space $Q_n$.
Conversely, it is obvious that the existence of a Newton basis for $\mathcal{P}$ implies poisedness. The minimal degree property follows from the assumption that $I_n \setminus I_{n-1} \ne \emptyset$ and the fact that all polynomials which do not belong to $\mathcal{P}$ have to belong to $Q_n$ and thus vanish on $\Xi_N$. To prove the degree reducing property, choose any $p \in \Pi^d_k$, $k \le n$. Then $p$ can be written as
$$p(x) = \sum_{\alpha \in I_k} c_\alpha\, p_\alpha(x) + \sum_{\alpha \in I_k'} c_\alpha\, p_\alpha^\perp(x),$$
and thus
$$L_{\mathcal{P}}(p; x) = \sum_{\alpha \in I_k} c_\alpha\, L_{\mathcal{P}}(p_\alpha; x) + \sum_{\alpha \in I_k'} c_\alpha\, L_{\mathcal{P}}(p_\alpha^\perp; x) = \sum_{\alpha \in I_k} c_\alpha\, p_\alpha(x),$$
since $L_{\mathcal{P}}(p_\alpha; x) = p_\alpha(x)$ and $L_{\mathcal{P}}(p_\alpha^\perp; x) = 0$.
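The constructive direction of the proof can be imitated in a few lines of NumPy. The sketch below is not the orthogonalization process of [20]; it performs a naive sequential elimination, starting from the monomials in graded order, that produces triangular Newton-type conditions $p_i(x_j) = \delta_{ij}$, $j \le i$ (the full block conditions (2) would need one more elimination sweep, omitted here), and leaves behind polynomials vanishing at every point, in the role of the $p_\alpha^\perp$. All function names are ours, and the points are processed in the given order rather than re-indexed:

```python
import numpy as np
from itertools import product

def graded_monomials(d, n):
    # multiindices |alpha| <= n, ordered by degree, ties lexicographically
    return sorted((a for a in product(range(n + 1), repeat=d) if sum(a) <= n),
                  key=lambda a: (sum(a), a))

def mono_eval(exps, coeffs, x):
    # evaluate sum_alpha c_alpha x^alpha at the point x
    x = np.asarray(x, float)
    return sum(c * np.prod(x ** np.array(e)) for e, c in zip(exps, coeffs))

def newton_sketch(points, n, tol=1e-10):
    d = len(points[0])
    exps = graded_monomials(d, n)
    remaining = list(np.eye(len(exps)))      # rows = monomial coefficient vectors
    chosen = []
    for x in points:
        vals = [mono_eval(exps, q, x) for q in remaining]
        # lowest-degree remaining polynomial not vanishing at x
        k = next(i for i, v in enumerate(vals) if abs(v) > tol)
        p = remaining.pop(k) / vals[k]       # normalize so that p(x) = 1
        remaining = [q - mono_eval(exps, q, x) * p for q in remaining]
        chosen.append(p)
    return exps, chosen, remaining

# three collinear points plus one off the line in R^2
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
exps, chosen, leftover = newton_sketch(pts, n=2)

# triangular duality p_i(x_j) = delta_ij for j <= i
for i, p in enumerate(chosen):
    for j in range(i + 1):
        assert np.isclose(mono_eval(exps, p, pts[j]), float(i == j))

# the leftover polynomials vanish at every interpolation point
assert all(abs(mono_eval(exps, q, x)) < 1e-9 for q in leftover for x in pts)
```

Because the pivot is always the lowest-degree usable candidate, the leading monomials of the chosen polynomials also indicate a monomial subspace of minimal degree, in the spirit of Example 3 below.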

The idea of the Newton interpolation is straightforward: defining $\mathcal{P}_k(\Xi_N)$ as $\mathcal{P}(\Xi_N) \cap \Pi^d_k$ and setting $L_k = L_{\mathcal{P}_k(\Xi_N)}$, we generate the interpolating polynomial $L_n(f; x)$ for some $f : \Xi_N \to \mathbb{R}$ successively via
$$L_{k+1}(f; x) = L_k(f; x) + L_{k+1}(f - L_k f; x), \qquad k = 0, \dots, n - 1.$$
We will describe this method in detail in Section 4, where we also derive an error formula for minimal degree interpolation. To assure that the above procedure always works we define
$$J_k := I_k \setminus I_{k-1}, \qquad J_k' := I_k' \setminus I_{k-1}', \qquad k = 0, \dots, n, \tag{4}$$
and state the following observation:

Proposition 1. If $\mathcal{P}(\Xi_N)$ is a minimal degree interpolation space with respect to $\Xi_N$, then the sets $J_k$, $k = 0, \dots, n$, of the respective Newton basis satisfy
$$J_k \ne \emptyset, \qquad k = 0, \dots, n.$$

Proof. For an arbitrary basis of $\Pi^d_n$, say $\phi_\alpha \in \Pi^d_{|\alpha|}$, $|\alpha| \le n$, let us consider the generalized $(N + 1) \times \binom{k+d}{d}$ Vandermonde matrix $V_k$ of order $k$, $k = 0, \dots, n$, defined by
$$V_k(\Xi_N) = \left[ \phi_\alpha(x_i) \right]_{i = 0, \dots, N; \ |\alpha| \le k}, \qquad k = 0, \dots, n.$$
Since $\mathcal{P}(\Xi_N)$ is a minimal degree interpolation space, these matrices satisfy
$$\operatorname{rank} V_0 \le \operatorname{rank} V_1 \le \cdots \le \operatorname{rank} V_{n-1} < \operatorname{rank} V_n = N + 1, \tag{5}$$
where the strict inequality at the end stems from minimality. Now assume that for some $k < n$ we have $J_k = \emptyset$; from this it is easy to conclude that there exist $\binom{k+d-1}{d-1}$ linearly independent polynomials $q_\alpha$, $|\alpha| = k$, such that $q_\alpha(\Xi_N) = 0$. For example, choose
$$q_\alpha(x) = x^\alpha - L_{\mathcal{P}_k(\Xi_N)}\left( (\cdot)^\alpha; x \right), \qquad |\alpha| = k.$$
Hence, all the polynomials $\psi_{\alpha,\beta}(x) = x^\beta q_\alpha(x)$, $|\alpha| = k$, $|\beta| \le n - k$, also satisfy $\psi_{\alpha,\beta}(\Xi_N) = 0$. Clearly,
$$\Pi^d_n = \operatorname{span}\{\phi_\alpha : |\alpha| < k\} + \operatorname{span}\{\psi_{\alpha,\beta} : |\alpha| = k, \ |\beta| \le n - k\}.$$
Thus, there is a basis $\phi_\alpha$ of $\Pi^d_n$ such that $\phi_\alpha(\Xi_N) = 0$ if $|\alpha| \ge k$. But this implies that
$$\operatorname{rank} V_{k-1} = \operatorname{rank} V_k = \cdots = \operatorname{rank} V_n,$$
contradicting (5).
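The rank chain (5) can be observed directly. For six points on the unit circle (which lie on an algebraic curve of degree 2), the ranks of the Vandermonde matrices in the monomial basis grow as $1, 3, 5, 6$, so the minimal order is $n = 3$ even though six points would match $\dim \Pi^2_2$. A short NumPy sketch (helper names are ours):

```python
import numpy as np
from itertools import product

def monomials(d, k):
    return sorted((a for a in product(range(k + 1), repeat=d) if sum(a) <= k),
                  key=lambda a: (sum(a), a))

def V(points, d, k):
    # generalized Vandermonde matrix [x_i^alpha], |alpha| <= k
    return np.array([[np.prod(x ** np.array(e)) for e in monomials(d, k)]
                     for x in points])

theta = 2 * np.pi * np.arange(6) / 6
pts = np.column_stack([np.cos(theta), np.sin(theta)])   # N + 1 = 6 points

ranks = [int(np.linalg.matrix_rank(V(pts, 2, k))) for k in range(4)]
print(ranks)    # -> [1, 3, 5, 6]: rank V_2 < rank V_3 = N + 1, hence n = 3
```

The chain is nondecreasing and becomes stationary exactly when no new point-separating polynomials appear, which is what the proof above exploits.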

3. Special examples

Example 1. The first and most prominent example of a minimal degree interpolation space is the least interpolation introduced and extensively investigated by de Boor and Ron [3]. They started with explicitly constructing the least interpolation space $\mathcal{P}_l(\Xi_N)$, according to some arbitrary given set of points $\Xi_N$, such that interpolation with respect to $\Xi_N$ is poised in $\mathcal{P}_l(\Xi_N)$, and then proved that it possesses the minimal degree property. See also [6] for a comparison between the least interpolation space and general minimal degree interpolation spaces. They characterized the least interpolation space, which is uniquely determined by the set $\Xi_N$, by means of the kernels of certain homogeneous differential operators. Precisely, $\mathcal{P}_l(\Xi_N)$ is the unique minimal degree interpolation space which satisfies
$$\mathcal{P}_l(\Xi_N) = \bigcap_{q(\Xi_N) = 0} \ker q^\uparrow(D), \tag{6}$$
where $q^\uparrow$, in the notation of de Boor and Ron, denotes the leading term of $q$, defined in the following way: assume that $q$ has degree $n$; then $q^\uparrow$ is the unique homogeneous polynomial of degree $n$ such that $q(x) - q^\uparrow(x) \in \Pi^d_{n-1}$. For information on how to construct $\mathcal{P}_l(\Xi_N)$ and the interpolating polynomial in numerical practice, see [1, 5]. Among several other remarkable properties, this particular space turns out to be scale invariant and shift invariant, i.e.,
$$\mathcal{P}_l(\Xi_N) = \mathcal{P}_l(c\, \Xi_N - y), \qquad c \in \mathbb{R}, \ c \ne 0, \ y \in \mathbb{R}^d,$$
where
$$c\, \Xi_N - y = \{c x_0 - y, \dots, c x_N - y\}.$$
Since minimal degree interpolation spaces cannot be rotationally invariant in general, this is the strongest coordinate system independence to be expected. Also note that shift and scale invariance imply that $\mathcal{P}_l(\Xi_N)$ is closed under differentiation.

Example 2. To represent polynomials in practice, one usually stores and manipulates their coefficients with respect to some basis, and the most convenient (but not numerically most stable) basis to be used are the monomials $x^\alpha$, $\alpha \in \mathbb{N}_0^d$. Since the requirements on storage and the number of operations necessary to manipulate polynomials depend on the number of basis functions necessary to represent the subspace $\mathcal{P}(\Xi_N)$, it can be beneficial to find a minimal degree interpolation space which uses as few basis functions as possible. To illustrate this idea, let us first consider the following extremal situation: suppose the points $x_0, \dots, x_N$ are all on a straight line, i.e., $x_j = x_0 + t_j a$, $j = 1, \dots, N$, for some $0 \ne a \in \mathbb{R}^d$ and pairwise distinct $t_j \in \mathbb{R}$, $j = 1, \dots, N$. Assume in addition that $a_j \ne 0$, $j = 1, \dots, d$, and let $\ell \in \Pi^d_1$ be the linear polynomial
$$\ell(x) = \langle a, x \rangle = \sum_{j=1}^d a_j \xi_j, \qquad x \in \mathbb{R}^d.$$

Then one minimal degree interpolation space with respect to $\Xi_N$ (which is even the least interpolation space for that configuration of points) is spanned by the powers $\ell^k$, $k = 0, \dots, N$, of $\ell$, and the Newton fundamental polynomials are
$$p_k(x) = \frac{\ell(x - x_0) \cdots \ell(x - x_{k-1})}{\ell(x_k - x_0) \cdots \ell(x_k - x_{k-1})}, \qquad k = 0, \dots, N.$$
Note that each of these polynomials has $\binom{k+d}{d}$ coefficients with respect to the monomials, due to the assumption that all coefficients $a_j$ are nonzero; hence, a "typical" interpolation polynomial for this situation will have to store and manipulate $\binom{N+d}{d} = O(N^d)$ coefficients with respect to the monomial basis. Also the evaluation of these polynomials requires access to $O(N^d)$ coefficients, which makes it ineffective compared to the dimension of the space, which is only $N + 1$. Note that this problem is not a consequence of choosing the monomials as a basis: "usually" (i.e., up to a set of measure zero) the coefficient vector of a multivariate interpolating polynomial is densely filled with entries, regardless of the underlying basis.

Example 3. From the preceding example we see that it may be beneficial for numerical purposes to find minimal degree interpolation spaces which can be described by as few generic basis functions as possible; though not being the best possible choice from the point of view of numerical stability, the monomials are still the simplest and most common choice for a generic basis of polynomials. As already mentioned, there is always a subspace of $\Pi^d_N$ such that in this subspace the Lagrange interpolation problem with respect to given $N + 1$ points $x_0, \dots, x_N$ is poised; one may take, for example, the Kergin interpolant. This implies that the generalized $(N+1) \times \binom{N+d}{d}$ Vandermonde matrix (which is no longer a square matrix for $d > 1$)
$$V_N = \left[ x_i^\alpha : i = 0, \dots, N; \ |\alpha| \le N \right]$$
has rank $N + 1$, i.e., there must be an $(N+1) \times (N+1)$ submatrix of $V_N$ with nonvanishing determinant; in other words, there always exists a subspace spanned by $N + 1$ monomials in which the Lagrange interpolation problem with respect to $\Xi_N$ is poised. Among these spaces spanned by certain monomials, there will also be one (and in general several) subspace(s) of minimal degree. We will refer to minimal degree interpolation spaces which are spanned by $N + 1$ monomials as minimal degree interpolation with minimal monomials. Note that we will need further requirements to single out a unique minimal degree interpolation space with minimal monomials. We will make this more precise in Section 5, where we use this remaining degree of freedom to construct a minimal degree interpolation space which combines the minimal monomials property with the invariance properties of least interpolation.

Example 4. The final idea to specialize a minimal degree interpolation space is to introduce additional points in such a way that the original points and the added ones give rise to a Lagrange interpolation problem which is poised in $\Pi^d_n$, where $n$ is the specific minimal degree. For that purpose let us recall that $\mathcal{P}(\Xi_N) \subset \Pi^d_n$ already admits unique Lagrange interpolation; if we now consider a complement $Q_n$ of $\mathcal{P}(\Xi_N)$ in $\Pi^d_n$, on the other hand, then it is well known (see for instance [19]) that the Lagrange interpolation problem with respect to a suitable number

of points is poised except for a set of measure zero. Let $x_\alpha$, $\alpha \in I_n'$, denote such a set of points (their number is correct since the dimension of $Q_n$ equals the cardinality of $I_n'$). Then it is easily seen that the Lagrange interpolation problem with respect to the set of points
$$\{x_\alpha : \alpha \in I_n\} \cup \{x_\alpha : \alpha \in I_n'\}$$
is poised in $\Pi^d_n$; this is an immediate consequence of the fact that $\Pi^d_n = \mathcal{P}(\Xi_N) \oplus Q_n$. Choosing a reasonable Newton basis in both $\mathcal{P}(\Xi_N)$ and $Q_n$ we can reformulate this process as follows: we add points $x_\alpha$, $\alpha \in I_n'$, such that there are polynomials $p_\alpha$, $\alpha \in I_n$, and $p_\alpha^\perp$, $\alpha \in I_n'$, such that $p_\alpha^\perp(\Xi_N) = 0$ and, in addition,
$$p_\alpha(x_\beta) = \delta_{\alpha,\beta}, \qquad |\beta| \le |\alpha|, \quad \alpha \in I_k, \tag{7}$$
and, respectively,
$$p_\alpha^\perp(x_\beta) = \delta_{\alpha,\beta}, \qquad |\beta| \le |\alpha|, \quad \alpha \in I_k'. \tag{8}$$
Of course, there is again freedom in choosing the additional points, since the above extension is possible for almost all choices of $x_\alpha$, $\alpha \in I_n'$, as already mentioned above. This type of minimal degree interpolation, referred to as minimal degree interpolation with additional points, provides a particularly nice and simple remainder formula, which will be given in (22).

4. Finite difference and remainder formula

In case $\mathcal{P} = \Pi^d_n$ for some $n \in \mathbb{N}$, there are two remainder formulae which are valid for all sets of points $\Xi_N$ that do not lie on an algebraic surface of degree $n$: the first one is due to Ciarlet [8] and is based on a multipoint Taylor expansion, while the more recent one, developed in [20], is obtained from a Newton interpolation scheme. In this section we will extend the latter result to minimal degree interpolation spaces. It has to be remarked here that Newton formulae for special configurations of points have been given and investigated before, cf., for example, [12, 21, 17]. However, the Newton interpolation to be described here is based on a finite difference approach for minimal degree interpolation which is very much similar to the method introduced in [20]. In addition to its theoretical use for deriving the remainder formula, the Newton method also offers a good tool in practical computations: it not only provides a fast method to compute the interpolating polynomial with less memory consumption, but also shows superior numerical robustness if the function is sufficiently smooth (see [18]). The key tool for the derivation of the Newton method is the finite difference $\lambda_k[\mathbf{x}_0, \dots, \mathbf{x}_{k-1}; x]$, $k = 0, \dots, n$, which is recursively defined as
$$\lambda_0[x] f := f(x), \tag{9}$$
$$\lambda_{k+1}[\mathbf{x}_0, \dots, \mathbf{x}_k; x] f := \lambda_k[\mathbf{x}_0, \dots, \mathbf{x}_{k-1}; x] f - \sum_{\alpha \in J_k} \lambda_k[\mathbf{x}_0, \dots, \mathbf{x}_{k-1}; x_\alpha] f \, p_\alpha(x). \tag{10}$$
It has been pointed out in [20] that in case $d = 1$ this difference coincides with a re-normalized version of the classical divided difference $f[\dots]$; precisely:

$$\lambda_{n+1}[\mathbf{x}_0, \dots, \mathbf{x}_n; x] f = f[x_0, \dots, x_n, x]\, (x - x_0) \cdots (x - x_n).$$
Indeed, this difference plays a crucial role in describing the interpolating polynomial and the error of interpolation.

Theorem 2. For a minimal degree interpolation space with Newton fundamental polynomials $p_\alpha$, $\alpha \in I_n$, the interpolating polynomial for a function $f$ is given by
$$L_n(f; x) = \sum_{\alpha \in I_n} \lambda_{|\alpha|}[\mathbf{x}_0, \dots, \mathbf{x}_{|\alpha|-1}; x_\alpha] f \, p_\alpha(x). \tag{11}$$
Moreover,
$$f(x) - L_n(f; x) = \lambda_{n+1}[\mathbf{x}_0, \dots, \mathbf{x}_n; x] f. \tag{12}$$

Proof. We will prove that equations (11) and (12) both hold with $n$ replaced by $k$, $k = 0, \dots, n$, where $L_k$ corresponds to interpolating at the points $x_\alpha$, $\alpha \in I_k$, with the span of the polynomials $p_\alpha$, $\alpha \in I_k$. This will be done by induction on $k$. Indeed, if $k = 0$, then (11) and (12) read
$$L_0(f; x) = f(x_0), \qquad f(x) - L_0(f; x) = \lambda_1[\mathbf{x}_0; x] f = f(x) - f(x_0).$$
So, suppose that for some $k < n$ the equations are already verified. Since the polynomials $p_\alpha$, $\alpha \in J_{k+1}$ (recall: $J_{k+1} = I_{k+1} \setminus I_k$), vanish at $x_\beta$, $\beta \in I_k$, we obtain that for $\beta \in I_k$
$$\sum_{\alpha \in I_{k+1}} \lambda_{|\alpha|}[\mathbf{x}_0, \dots, \mathbf{x}_{|\alpha|-1}; x_\alpha] f \, p_\alpha(x_\beta) = \sum_{\alpha \in I_k} \lambda_{|\alpha|}[\mathbf{x}_0, \dots, \mathbf{x}_{|\alpha|-1}; x_\alpha] f \, p_\alpha(x_\beta) = L_k(f; x_\beta) = f(x_\beta).$$
For $\beta \in J_{k+1}$ we use (12), (11) and the fact that $p_\alpha(x_\beta) = \delta_{\alpha,\beta}$, $\alpha \in J_{k+1}$, to compute
$$f(x_\beta) = L_k(f; x_\beta) + f(x_\beta) - L_k(f; x_\beta) = L_k(f; x_\beta) + \lambda_{k+1}[\mathbf{x}_0, \dots, \mathbf{x}_k; x_\beta] f$$
$$= \sum_{\alpha \in I_k} \lambda_{|\alpha|}[\mathbf{x}_0, \dots, \mathbf{x}_{|\alpha|-1}; x_\alpha] f \, p_\alpha(x_\beta) + \sum_{\alpha \in J_{k+1}} \lambda_{k+1}[\mathbf{x}_0, \dots, \mathbf{x}_k; x_\alpha] f \, p_\alpha(x_\beta)$$
$$= \sum_{\alpha \in I_{k+1}} \lambda_{|\alpha|}[\mathbf{x}_0, \dots, \mathbf{x}_{|\alpha|-1}; x_\alpha] f \, p_\alpha(x_\beta).$$
Hence, (11) holds for $k + 1$. Using the recursive definition of the finite difference finally yields
$$\lambda_{k+2}[\mathbf{x}_0, \dots, \mathbf{x}_{k+1}; x] f = \lambda_{k+1}[\mathbf{x}_0, \dots, \mathbf{x}_k; x] f - \sum_{\alpha \in J_{k+1}} \lambda_{k+1}[\mathbf{x}_0, \dots, \mathbf{x}_k; x_\alpha] f \, p_\alpha(x)$$
$$= f(x) - L_k(f; x) - \left( L_{k+1}(f; x) - L_k(f; x) \right) = f(x) - L_{k+1}(f; x),$$

Minimal degree interpolation 11

which is (12) for $k+1$.

Of course, one could also introduce the finite difference via (12). Then (11) is obvious, but one has to prove the recurrence relation (10) instead. Since the proof of the remainder formula uses induction based on this recurrence, the above way of introducing the finite difference is more convenient here.

To formulate and establish the remainder formula for minimal degree interpolation, we have to introduce some additional notation. First, let us generalize the notion of a path, as defined in [20], to paths in $I_n$. A path in $I_n$ is a vector of multiindices of increasing length of the form

$$\pi = (\mu_0,\dots,\mu_n), \qquad \mu_k\in J_k, \quad k=0,\dots,n.$$

Let $\Lambda_n$ denote the totality of all those paths. The name "path" stems from the image of walking through the set of multiindices in $I_n$, ascending to a higher level in each step and passing exactly one multiindex on each level. Note that this notion is still reasonable because of Proposition 1, which certifies that at each level there is at least one multiindex the path can pass through. In other words: there are no "broken" paths or "jumps". To each path $\pi\in\Lambda_n$ we associate the well-defined number

$$\pi(x_\pi) = \prod_{i=0}^{n-1} p_{\mu_i}(x_{\mu_{i+1}}),$$

a homogeneous $n$-th order differential operator, as well as the set of points on the path,

$$D^n_{x_\pi} = D_{x_{\mu_n}-x_{\mu_{n-1}}}\cdots D_{x_{\mu_1}-x_{\mu_0}}, \qquad x_\pi = \{x_{\mu_0},\dots,x_{\mu_n}\}.$$

Finally, let us recall the notion of a simplex spline: given any $n+1 \ge d+1$ knots $v_0,\dots,v_n\in\mathbb{R}^d$, the simplex spline $M(x\,|\,v_0,\dots,v_n)$ is the distribution defined by

(13) $$\int_{\mathbb{R}^d} f(x)\,M(x\,|\,v_0,\dots,v_n)\,dx = (n-d)! \int_{S^n} f(\lambda_0 v_0+\cdots+\lambda_n v_n)\,d\lambda, \qquad f\in C(\mathbb{R}^d),$$

where

$$S^n := \{\lambda = (\lambda_0,\dots,\lambda_n) \;:\; \lambda_i\ge 0,\ \lambda_0+\cdots+\lambda_n = 1\}.$$

The most important property for our present purposes is the formula for directional derivatives, derived, among other important facts about simplex splines, by Micchelli [16]:

(14) $$D_y M(x\,|\,v_0,\dots,v_n) = \sum_{j=0}^n \mu_j\,M(x\,|\,v_0,\dots,v_{j-1},v_{j+1},\dots,v_n), \qquad y = \sum_{j=0}^n \mu_j v_j, \quad \sum_{j=0}^n \mu_j = 0.$$

We will need this in the more special version

(15) $$D_{v_i - v_j} M(x\,|\,v_0,\dots,v_n) = M(x\,|\,v_0,\dots,v_{i-1},v_{i+1},\dots,v_n) - M(x\,|\,v_0,\dots,v_{j-1},v_{j+1},\dots,v_n), \qquad 0\le i,j\le n.$$

Before giving the integral representation of the finite difference for an arbitrary minimal degree interpolation space, let us have a brief look at the representation formula from [20] for the special case of a Lagrange interpolation problem which is poised in $\Pi^d_n$. The formula reads as

$$\lambda^{n+1}[x_0,\dots,x_n;x]f = \sum_{\pi\in\Lambda_n} p_{\mu_n}(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x-x_{\mu_n}} D^n_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

The differential operator under the integral is of order $n+1$; hence the above expression vanishes on all polynomials of degree at most $n$. Since, due to (12),

$$\lambda^{n+1}[x_0,\dots,x_n;x]\,p^\perp_\mu = p^\perp_\mu(x) - L_n(p^\perp_\mu;x) = p^\perp_\mu(x), \qquad \mu\in I^0_n,$$

there have to be additional terms in the remainder formula which are responsible for the reproduction of the polynomials $p^\perp_\mu$. For their description we define the directions $d_{\kappa,\mu}\in\mathbb{R}^d$, $\kappa\in J_k$, $\mu\in I^0_{k+1}$, as the unique solutions of the vector interpolation problem

(16) $$\sum_{\mu\in I^0_{k+1}} d_{\kappa,\mu}\,p^\perp_\mu(x) = p_\kappa(x)\,(x - x_\kappa) - \sum_{\nu\in J_{k+1}} p_\nu(x)\,p_\kappa(x_\nu)\,(x_\nu - x_\kappa).$$

Lemma 1. The interpolation problem (16) is uniquely solvable for each $\kappa\in J_k$, $k = 0,\dots,n-1$.

Proof. Since the right-hand side is a (vector-valued) polynomial of degree $k+1$, it suffices to show that

$$\varphi(x) = p_\kappa(x)\,(x-x_\kappa) - \sum_{\nu\in J_{k+1}} p_\nu(x)\,p_\kappa(x_\nu)\,(x_\nu - x_\kappa)$$

satisfies $\varphi(x_\mu) = 0$, $\mu\in I_{k+1}$. This is obvious for $\mu\in I_k$ and $\mu=\kappa$, since then both terms vanish. In the remaining cases $\mu\in J_{k+1}$ we have

$$\varphi(x_\mu) = p_\kappa(x_\mu)\,(x_\mu - x_\kappa) - 1\cdot p_\kappa(x_\mu)\,(x_\mu - x_\kappa) = 0$$

because of $p_\mu(x_\mu) = 1$. Hence $\varphi$ belongs to the span of $p^\perp_\mu$, $\mu\in I^0_{k+1}$, and can therefore be uniquely represented as required in (16).

Now we are in a position to formulate the remainder formula for a blockwise minimal degree interpolation space as

Theorem 3. Let $P(\Xi_N)$ be a minimal degree interpolation space of degree $n$. Then

(17) $$f(x) - L_n(f;x) = \lambda^{n+1}[x_0,\dots,x_n;x]f = \sum_{\pi\in\Lambda_n} p_{\mu_n}(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x-x_{\mu_n}} D^n_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt$$
$$\qquad + \sum_{\mu\in I^0_n} p^\perp_\mu(x) \sum_{j=|\mu|}^{n} \sum_{\pi\in\Lambda_{j-1}} \pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_{j-1},\mu}} D^{j-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

Proof. The proof uses induction on $k$ to show that for any $0\le k\le n$

(18) $$\lambda^{k+1}[x_0,\dots,x_k;x]f = \sum_{\pi\in\Lambda_k} p_{\mu_k}(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x-x_{\mu_k}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt$$
$$\qquad + \sum_{j=1}^{k} \sum_{\pi\in\Lambda_{j-1}} \sum_{\mu\in I^0_j} p^\perp_\mu(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_{j-1},\mu}} D^{j-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt,$$

from which (17) follows by setting $k=n$ and rearranging summation and integration in the second term. Since $(0,\dots,0)\in I_0$, equation (18) with $k=0$ reads as

$$\lambda^1[x_0;x]f = f(x) - f(x_0) = \int_{\mathbb{R}^d} D_{x-x_0} f(t)\,M(t\,|\,x_0,x)\,dt,$$

which is clearly true. Hence, suppose that for some $0\le k<n$ equation (18) has already been proved. In particular, we know that for $\nu\in J_{k+1}$

(19) $$\lambda^{k+1}[x_0,\dots,x_k;x_\nu]f = \sum_{\pi\in\Lambda_k} p_{\mu_k}(x_\nu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x_\nu-x_{\mu_k}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x_\nu)\,dt,$$

since the second term of (18) vanishes at all points of $\Xi_N$ by the definition of the polynomials $p^\perp_\mu$. Applying (16) and recalling that the directional derivative $D_y$ is linear in the direction $y$ then yields, for $\kappa\in J_k$,

(20) $$p_\kappa(x)\,D_{x-x_\kappa} = \sum_{\nu\in J_{k+1}} p_\nu(x)\,p_\kappa(x_\nu)\,D_{x_\nu-x_\kappa} + \sum_{\mu\in I^0_{k+1}} p^\perp_\mu(x)\,D_{d_{\kappa,\mu}}.$$

Inserting this into the first term of (18) gives

$$\sum_{\pi\in\Lambda_k} \sum_{\nu\in J_{k+1}} p_\nu(x)\,p_{\mu_k}(x_\nu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x_\nu-x_{\mu_k}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt + \sum_{\pi\in\Lambda_k} \sum_{\mu\in I^0_{k+1}} p^\perp_\mu(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_k,\mu}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt,$$

so that altogether

$$\lambda^{k+1}[x_0,\dots,x_k;x]f = \sum_{\pi\in\Lambda_k}\sum_{\nu\in J_{k+1}} p_\nu(x)\,p_{\mu_k}(x_\nu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x_\nu-x_{\mu_k}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt$$
$$\qquad + \sum_{j=1}^{k+1} \sum_{\pi\in\Lambda_{j-1}} \sum_{\mu\in I^0_j} p^\perp_\mu(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_{j-1},\mu}} D^{j-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

We substitute this and (19) into the recurrence relation (10) to obtain

$$\lambda^{k+2}[x_0,\dots,x_{k+1};x]f = \lambda^{k+1}[x_0,\dots,x_k;x]f - \sum_{\nu\in J_{k+1}} \lambda^{k+1}[x_0,\dots,x_k;x_\nu]f\,p_\nu(x)$$
$$= \sum_{\pi\in\Lambda_k}\sum_{\nu\in J_{k+1}} p_\nu(x)\,p_{\mu_k}(x_\nu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x_\nu-x_{\mu_k}} D^k_{x_\pi} f(t)\,\bigl(M(t\,|\,x_\pi,x) - M(t\,|\,x_\pi,x_\nu)\bigr)\,dt$$
$$\qquad + \sum_{j=1}^{k+1} \sum_{\pi\in\Lambda_{j-1}} \sum_{\mu\in I^0_j} p^\perp_\mu(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_{j-1},\mu}} D^{j-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

Since, by (15),

$$M(t\,|\,x_\pi,x) - M(t\,|\,x_\pi,x_\nu) = D_{x_\nu - x}\,M(t\,|\,x_\pi,x_\nu,x),$$

we can apply partial integration to obtain

$$\lambda^{k+2}[x_0,\dots,x_{k+1};x]f = \sum_{\pi\in\Lambda_k}\sum_{\nu\in J_{k+1}} p_\nu(x)\,p_{\mu_k}(x_\nu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x-x_\nu} D_{x_\nu-x_{\mu_k}} D^k_{x_\pi} f(t)\,M(t\,|\,x_\pi,x_\nu,x)\,dt$$
$$\qquad + \sum_{j=1}^{k+1} \sum_{\pi\in\Lambda_{j-1}} \sum_{\mu\in I^0_j} p^\perp_\mu(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{d_{\mu_{j-1},\mu}} D^{j-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

Writing $\mu_{k+1}$ instead of $\nu$ finally completes the induction.
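Before moving on, the univariate specialization is easy to test numerically. The following sketch is our own illustration, not the paper's code: for $d=1$ each level set $J_k$ contains a single index, the Newton fundamental polynomials are $p_k(x)=\prod_{i<k}(x-x_i)/(x_k-x_i)$, and the recursion (9)-(10) can be run directly against the classical divided difference to confirm the re-normalization relation $\lambda^{n+1}[x_0,\dots,x_n;x]f = f[x_0,\dots,x_n,x]\,(x-x_0)\cdots(x-x_n)$.

```python
# Sketch (not from the paper): check the d = 1 relation between the finite
# difference (9)-(10) and the classical divided difference.

def newton_fundamental(pts, k):
    """Univariate Newton fundamental polynomial p_k: p_k(x_i) = 0 for i < k, p_k(x_k) = 1."""
    def p(x):
        val = 1.0
        for i in range(k):
            val *= (x - pts[i]) / (pts[k] - pts[i])
        return val
    return p

def finite_difference(f, pts, k, x):
    """lambda^k[x_0,...,x_{k-1}; x]f via the recursion (10)."""
    if k == 0:
        return f(x)                        # (9)
    p = newton_fundamental(pts, k - 1)     # the single element of J_{k-1}
    return (finite_difference(f, pts, k - 1, x)
            - finite_difference(f, pts, k - 1, pts[k - 1]) * p(x))

def divided_difference(f, nodes):
    """Classical divided difference f[nodes] over pairwise distinct nodes."""
    if len(nodes) == 1:
        return f(nodes[0])
    return ((divided_difference(f, nodes[1:]) - divided_difference(f, nodes[:-1]))
            / (nodes[-1] - nodes[0]))

f = lambda t: t**4 - 2.0 * t
pts, x = [0.0, 1.0, 2.0], 3.5

lhs = finite_difference(f, pts, len(pts), x)
rhs = divided_difference(f, pts + [x])
for xi in pts:
    rhs *= (x - xi)
assert abs(lhs - rhs) < 1e-9
```

In accordance with (12), the quantity `lhs` also equals $f(x) - L_2(f;x)$ for interpolation at the three nodes.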

Looking at (17) we observe that the $n$-th order differential operators

$$D^n_\mu = \sum_{j=|\mu|}^{n} \sum_{\pi\in\Lambda_{j-1}} \pi(x_\pi)\,D_{d_{\mu_{j-1},\mu}}\,D^{j-1}_{x_\pi}, \qquad \mu\in I^0_n,$$

are in general inhomogeneous if $\mu\in I^0_{n-1}$; thus it is interesting to ask for minimal degree interpolation spaces for which $D^n_\mu$ is a homogeneous differential operator for all $\mu\in I^0_n$. For that purpose, note that $D^n_\mu$ has to have a component of order $|\mu|$ for the reproduction of $p^\perp_\mu$. So the question is equivalent to asking whether there exist minimal degree interpolation spaces such that $d_{\kappa,\mu} = 0$ whenever $|\mu| \le |\kappa|$. The answer is positive, and a class of examples is given by minimal degree interpolation with additional points. The derivation of a particularly simple formula for this case is based on the fact that we can give the coefficients $d_{\kappa,\mu}$ explicitly.

Lemma 2. Let the polynomials $p_\nu$, $\nu\in I_n$, and $p^\perp_\mu$, $\mu\in I^0_n$, satisfy (7) and (8). Then for any $\kappa\in J_k$, $k\le n-1$,

(21) $$p_\kappa(x)\,(x-x_\kappa) = \sum_{\nu\in J_{k+1}} p_\nu(x)\,p_\kappa(x_\nu)\,(x_\nu - x_\kappa) + \sum_{\nu\in J^0_{k+1}} p^\perp_\nu(x)\,p_\kappa(x_\nu)\,(x_\nu - x_\kappa).$$

Proof. The proof works in exactly the same way as that of Lemma 1: it is again easily verified that both sides of (21) vanish at $x_\mu$ for $|\mu|\le k$ and take the same value at each $x_\mu$ with $|\mu| = k+1$. Since interpolation at the points $x_\mu$, $|\mu|\le k+1$, is unique in $\Pi^d_{k+1}$, the polynomials have to be identical.

From (21) it now follows that

$$d_{\kappa,\mu} = \begin{cases} 0, & |\mu| \le |\kappa|,\\ p_\kappa(x_\mu)\,(x_\mu - x_\kappa), & |\mu| = |\kappa|+1,\end{cases} \qquad \kappa\in J_k,\ \mu\in I^0_{k+1},$$

which enables us to give the remainder formula for a minimal degree interpolation space with additional points.

Corollary 1. Let $P(\Xi_N)$ be a minimal degree interpolation space of degree $n$ with blockwise structure and additional points. Then

(22) $$f(x) - L_n(f;x) = \lambda^{n+1}[x_0,\dots,x_n;x]f = \sum_{\pi\in\Lambda_n} p_{\mu_n}(x)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x-x_{\mu_n}} D^n_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt$$
$$\qquad + \sum_{\mu\in I^0_n} p^\perp_\mu(x) \sum_{\pi\in\Lambda_{|\mu|-1}} p_{\mu_{|\mu|-1}}(x_\mu)\,\pi(x_\pi) \int_{\mathbb{R}^d} D_{x_\mu - x_{\mu_{|\mu|-1}}} D^{|\mu|-1}_{x_\pi} f(t)\,M(t\,|\,x_\pi,x)\,dt.$$

5. Minimal Monomials

In this section we introduce and investigate a particular minimal degree interpolation space $P^*(\Xi_N)$, spanned by a minimal number of monomials which are, moreover, nested in an appropriate way. We will see that this space combines quite a few of the properties of least interpolation with the practical advantage of minimal memory consumption, which makes it particularly useful in practice. Some numerical details are discussed at the end of this section.

5.1. Construction

Since the construction of the space is quite intricate and overshadowed by the notation, let us first sketch an outline. The main idea is to choose the monomials which span the interpolation space in such a way that the respective set of multiindices, $I_n$, is a lower set; lower sets play an important role in the study of the multivariate Birkhoff interpolation problem (see, e.g., [17, 13], or [14] for a more extensive survey). That $I_n$ is a lower set means that whenever $\alpha\in I_n$ and $\beta\in\mathbb{N}^d_0$ satisfies $0\le\beta_i\le\alpha_i$, $i=1,\dots,d$, then $\beta\in I_n$ as well. This property will be ensured by choosing the complement spaces $Q_k$, $k=0,\dots,n$, in such a way that they are spanned by polynomials which have only a single monomial with an index from $I^0_n$ in their leading term, i.e., polynomials $x^\mu + q_\mu(x)$, $\mu\in I^0_n$, where $q_\mu$ is a polynomial of degree at most $|\mu|$ such that all exponents in the leading term of $q_\mu$ which are different from $\mu$ belong to $I_{|\mu|}$. The crucial point of the construction is to do this in such a way that $\mathbb{N}^d_0\setminus I_n \supseteq I^0_n$ is an upper set (i.e., $\mu\in I^0_n$ and $\nu_i\ge\mu_i$, $i=1,\dots,d$, imply $\nu\in\mathbb{N}^d_0\setminus I_n$), which is possible due to the well-known fact that $Q$ is the polynomial ideal associated to the finite variety $\Xi_N$ (cf. [4]). Indeed, if $\mu\in I^0_n$, then $p^\perp_\mu(x) = x^\mu + q_\mu(x)$ belongs to $Q$, and so does $\xi_i\,p^\perp_\mu(x) = x^{\mu+e_i} + \xi_i\,q_\mu(x)$. In other words, $\mu+e_i$ belongs to the complementary index set as well.
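The lower-set condition is simple enough to state directly in code. The following small illustration (our own, not part of the paper) checks it by brute force for a finite exponent set:

```python
# Sketch (ours, not the paper's): test the lower-set property of an exponent set.

from itertools import product

def is_lower_set(indices):
    """True iff alpha in I and beta <= alpha (componentwise) imply beta in I."""
    index_set = set(indices)
    for alpha in index_set:
        for beta in product(*(range(a + 1) for a in alpha)):
            if beta not in index_set:
                return False
    return True

# a lower set in two variables, and a set violating the property:
I_low = {(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 1), (3, 0), (4, 0)}
assert is_lower_set(I_low)
assert not is_lower_set({(0, 0), (2, 0)})   # (1, 0) is missing
```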
What remains is to find the minimal indices in $I^0_n$, i.e., those elements $\mu\in I^0_n$ which cannot be written as $\mu = \nu + \beta$ with $\nu\in I^0_n$, $\nu\ne\mu$, and $\beta\in\mathbb{N}^d_0$; these indices will, however, be determined automatically in the process of computing the Newton basis.

Let us now turn to the details of the construction. As before, we assume $\Xi_N = \{x_0,\dots,x_N\}$ to be a finite set of pairwise distinct points. We claim that there exists a minimal degree interpolation space of degree, say, $n$ which is spanned by $x^\alpha$, $\alpha\in I_n$, and has the additional properties that the complementary indices satisfy the following two conditions:

1. $\mu\in I^0_k$, $k<n$, implies $\mu+e_i\in I^0_{k+1}$, $i=1,\dots,d$;

2. the polynomials $p^\perp_\mu$, $\mu\in I^0_n$, satisfy

(23) $$\frac{\partial^{|\mu|}}{\partial x^\nu}\,p^\perp_\mu = \mu!\,\delta_{\mu,\nu}, \qquad \mu,\nu\in I^0_n,\ |\nu|=|\mu|.$$

Note that (23) is only a reformulation of the fact that the part of the leading term of $p^\perp_\mu$ whose exponents lie in $I^0_{|\mu|}$ consists of $x^\mu$ only. We construct the space $P^*(\Xi_N)$ by an inductive process, subsequently generating $P^*(\Xi_N)\cap\Pi^d_k$, $k=0,\dots,n$.

For $k=0$ we define $p_0 = 1$, $I_0 = \{0\}$. So let us suppose that for some $k$, $0\le k<n$, we have already constructed a set $I_k$ of multiindices, Newton fundamental polynomials $p_\alpha$, $\alpha\in I_k$, and the complementary basis $p^\perp_\mu$, $\mu\in I^0_k$, with the above properties. Since the polynomials $\xi_i\,p^\perp_\mu(x)$, $\mu\in I^0_k$, also annihilate $\Xi_N$, we set

$$I^0_{k+1} = I^0_k \cup \{\mu + e_i \;:\; i=1,\dots,d,\ \mu\in I^0_k\}, \qquad I_{k+1} = \{\alpha \;:\; |\alpha|\le k+1\}\setminus I^0_{k+1},$$

and $p^\perp_{\mu+e_i}(x) = \xi_i\,p^\perp_\mu(x)$, $i=1,\dots,d$, $\mu\in I^0_k$. If for some $\mu$, $|\mu|=k+1$, there are several ways to write $\mu$ in the form $\mu = \nu + e_i$, $1\le i\le d$, $\nu\in I^0_k$, then we choose the lexicographically largest $\nu$ which satisfies $\mu = \nu + e_i$ for some $1\le i\le d$. The polynomials $p^\perp_\mu$, $\mu\in J^0_{k+1}$, constructed this way are well-defined and vanish on $\Xi_N$. Replacing $n$ by $k$ in (23) shows that the span of $p^\perp_\mu$, $\mu\in J^0_{k+1}$, has the same dimension as the span of $x^\mu$, $\mu\in J^0_{k+1}$, and still satisfies (23) with $k+1$ instead of $n$. Hence the polynomials $p^\perp_\mu$, $\mu\in I^0_{k+1}$, and $x^\alpha$, $\alpha\in I_{k+1}$, define a basis of $\Pi^d_{k+1}$.

Let $L_k$ denote the interpolation operator with respect to the points $x_\alpha$, $\alpha\in I_k$, constructed so far, and set, for $\alpha\in J_{k+1}$,

(24) $$q_\alpha(x) = x^\alpha - L_k\bigl((\cdot)^\alpha;x\bigr).$$

Clearly, these polynomials annihilate $\Xi_k = \{x_\alpha \;:\; \alpha\in I_k\}$. For each of these polynomials, $\alpha\in J_{k+1}$, we now have to decide whether it belongs to $P_{k+1}(\Xi_N)$ or to $Q_{k+1}$. For that purpose we arrange the multiindices $\alpha\in J_{k+1}$ in lexicographical order and again proceed inductively. Let us assume that for some $\alpha\in J_{k+1}$ we have already obtained points $x_\beta$ and polynomials $p_\beta$, $\beta\in J_{k+1}$, $\beta\prec\alpha$, which satisfy

(25) $$p_\beta(x_\gamma) = \delta_{\beta,\gamma}, \qquad \gamma\in I_k\cup\{\beta'\in J_{k+1} \;:\; \beta'\prec\alpha\}.$$

Of course, this is trivial for the first $\alpha\in J_{k+1}$, as no conditions are imposed then. Next we consider

$$\tilde q_\alpha(x) = q_\alpha(x) - \sum_{\beta\in J_{k+1},\,\beta\prec\alpha} q_\alpha(x_\beta)\,p_\beta(x).$$

By construction, $\tilde q_\alpha$ vanishes on $\Xi_k\cup\{x_\beta \;:\; \beta\prec\alpha\}$; thus, if $\tilde q_\alpha$ also vanishes on $Y := \Xi_N\setminus\{x_\gamma \;:\; \gamma\in I_k\ \text{or}\ \gamma\in J_{k+1},\,\gamma\prec\alpha\}$, then it vanishes on all of $\Xi_N$ and consequently belongs to $Q$. In that case we set $I^0_{k+1} = I^0_{k+1}\cup\{\alpha\}$, $I_{k+1} = I_{k+1}\setminus\{\alpha\}$ and $p^\perp_\alpha(x) = \tilde q_\alpha(x)$. Note that, despite adding $\alpha$ to $I^0_{k+1}$, the property (23) remains valid for $I^0_{k+1}$.
Otherwise, if $\tilde q_\alpha$ does not vanish on $Y$, then there is some $x_\alpha\in Y$ such that $\tilde q_\alpha(x_\alpha)\ne 0$. We then set

$$p_\alpha(x) = \tilde q_\alpha(x)/\tilde q_\alpha(x_\alpha) \qquad\text{and}\qquad p_\beta(x) = p_\beta(x) - p_\beta(x_\alpha)\,p_\alpha(x), \quad \beta\in J_{k+1},\ \beta\prec\alpha,$$

and (25) is now satisfied by the polynomials $p_\beta$, $\beta\in J_{k+1}$, $\beta\preceq\alpha$. Formally, the decision can be written as

(26) $$\tilde q_\alpha(Y) \begin{cases} \equiv 0 &\Longrightarrow\quad \alpha\to I^0_{k+1},\quad p^\perp_\alpha = \tilde q_\alpha,\\[2pt] \not\equiv 0 &\Longrightarrow\quad \alpha\to I_{k+1},\quad p_\alpha = \tilde q_\alpha/\tilde q_\alpha(x_\alpha).\end{cases}$$

This orthogonalization process has to be repeated as long as $\Xi_N\setminus\{x_\alpha \;:\; \alpha\in I_n\}$ is nonempty, and it finally yields the Newton fundamental polynomials of level $k+1$. Therefore we can conclude that each of these polynomials lies in the span of $x^\alpha$, $\alpha\in I_{k+1}$, and that all the polynomials $p^\perp_\mu$ satisfy (23). Hence, for $k+1 = n$, the construction is complete. Let us finally remark that the actual value of $n$ does not enter the construction, so the construction can be used to determine it algorithmically.

Remark 1. The crucial point in the above construction was to make sure that $\mu\in I^0_k$ implies $\mu+e_i\in I^0_{k+1}$, $i=1,\dots,d$, $k<n$. This is in turn equivalent to $\alpha\in I_k$ implying $\alpha-e_i\in I_{k-1}$, $i=1,\dots,d$, whenever $\alpha-e_i$ is defined. By iteration we obtain that

(27) $$\alpha\in I_{|\alpha|} \;\Longrightarrow\; \{\beta \;:\; \beta\le\alpha,\ |\beta|=k\}\subseteq I_k, \qquad k\le|\alpha|.$$

Here $\beta\le\alpha$ has to be understood in the sense that $\alpha-\beta\in\mathbb{N}^d_0$.

Remark 2. Let us briefly point out that the construction presented above has an interesting by-product: if we collect all the polynomials $p^\perp_\alpha$ which were decided in (26) to belong to $Q$, then this set of polynomials is a minimal Groebner basis, with respect to the graded lexicographical ordering, for the ideal $Q$ associated to the finite variety $\Xi_N$. This statement is only correct up to basis elements of degree $n+1$, but those can easily be obtained by an additional step of the same process.

Remark 3. We did not fix in (26) which particular element of $Y$ to choose as $x_\alpha$. This leaves room for several pivoting strategies. The straightforward numerical choice is the element at which the absolute value of $\tilde q_\alpha$ becomes maximal.

Before we turn to properties of $P^*(\Xi_N)$, let us first rephrase the above construction as a "cooking recipe" for the generation of $P^*(\Xi_N)$. Recall that the algorithm does not know $n$ a priori but computes this number "on the fly".

Algorithm 1. Construction of $P^*(\Xi_N)$.

Input: $N\in\mathbb{N}$ and $x_1,\dots,x_N\in\mathbb{R}^d$.
Initialization: $n := 0$; $I^0_{-1} := \emptyset$; $\Xi := \{x_1,\dots,x_N\}$;
Computation:
while $\Xi\ne\emptyset$ do
  for $\mu\in I^0_{n-1}$, $|\mu| = n-1$ do
    for $i = 1,\dots,d$ do
      $p^\perp_{\mu+e_i}(x) := \xi_i\,p^\perp_\mu(x)$; $I^0_n := I^0_n\cup\{\mu+e_i\}$;
    done;
  done;
  $I_n := \{\alpha \;:\; |\alpha|\le n\}\setminus I^0_n$;
  for $\alpha\in I_n$, $|\alpha| = n$ do
    $p_\alpha(x) := x^\alpha - L_{n-1}\bigl((\cdot)^\alpha;x\bigr)$;

  done;
  for $\alpha\in I_n$, $|\alpha| = n$ do
    if $p_\alpha(\Xi) \equiv 0$ then
      $p^\perp_\alpha(x) := p_\alpha(x)$; $I_n := I_n\setminus\{\alpha\}$; $I^0_n := I^0_n\cup\{\alpha\}$;
    else
      choose $x_\alpha\in\{x\in\Xi \;:\; p_\alpha(x)\ne 0\}$;
      $p_\alpha(x) := p_\alpha(x)/p_\alpha(x_\alpha)$;
      for $\beta\in J_n$, $\beta\ne\alpha$ do
        $p_\beta(x) := p_\beta(x) - p_\beta(x_\alpha)\,p_\alpha(x)$;
      done;
      $\Xi := \Xi\setminus\{x_\alpha\}$;
  done;
  $n := n+1$;
done;
Output: Degree $n$, index sets $I_n$, Newton fundamental polynomials $p_\alpha$, $\alpha\in I_n$.

It should be remarked here that $P^*(\Xi_N)$ is uniquely determined by the order in which the multiindices are processed in the for loops. Before we turn to the description of some properties of the minimal degree interpolation space $P^*(\Xi_N)$, let us first illustrate the above construction by an example.

Example 5. We consider the case that $x_0,\dots,x_7$ are the 8th roots of unity on the unit circle in $\mathbb{R}^2$, i.e., $x_j = (\xi_j,\eta_j) := (\cos(j\pi/4),\sin(j\pi/4))$. Note that the cardinality of this set does not match the dimension of any of the spaces $\Pi^2_n$, and that there exists a quadratic polynomial, namely $\xi^2+\eta^2-1$, which vanishes at all the interpolation points. For convenience we denote points in $\mathbb{R}^2$ by $(\xi,\eta)$. We proceed by the degree, say $k$. For $k=0$ the situation is simple, as we use the polynomial $p_{(0,0)}(\xi,\eta)\equiv 1$; clearly $p_{(0,0)}(x_0) = 1$, so we choose $x_{(0,0)} = x_0$. Turning to the case $k=1$, we first consider the polynomial $\eta - \eta_0\,p_{(0,0)}(x)$, which clearly vanishes at $x_0$ but has a nonzero value at $x_1$, which we normalize to be $1$; we (preliminarily) call the resulting polynomial $p_{(0,1)}$ and fix $x_{(0,1)} = x_1$. Next we take the polynomial $\xi$, reduced in the same way so that it vanishes at $x_0$ and $x_1$; it has a nonzero value at $x_2$. Again we re-normalize, set $x_{(1,0)} = x_2$, call the resulting polynomial $p_{(1,0)}$, and then replace $p_{(0,1)}$ by $p_{(0,1)} - p_{(0,1)}(\xi_2,\eta_2)\,p_{(1,0)}$. This yields the linear Newton fundamental polynomials

$$p_{(0,1)}(\xi,\eta) = (1-\sqrt2)^{-1}\,(-\xi-\eta+1), \qquad p_{(1,0)}(\xi,\eta) = (\sqrt2-2)^{-1}\,\bigl(\xi - (1-\sqrt2)\,\eta - 1\bigr).$$

Proceeding with $k=2$, the same procedure applies.

The quadratic step produces the Newton fundamental polynomials $p_{(2,0)}$ and $p_{(1,1)}$; their coefficients involve $\sqrt2$ (factors such as $3-2\sqrt2$, $7-5\sqrt2$ and $4-3\sqrt2$ appear), and we do not reproduce the lengthy explicit expressions here. Now, the final polynomial to be checked at this degree can be computed to be $\xi^2+\eta^2-1$. Of course, this polynomial vanishes at all the remaining points and is therefore the first choice for an element of $Q$. That is, we set $p^\perp_{(0,2)}(\xi,\eta) = \xi^2+\eta^2-1$ and put the index $(0,2)$ into $I^0_2$. For $k=3$ the only exponents to be checked are $(3,0)$ and $(2,1)$, since the other two indices, $(1,2)$ and $(0,3)$, automatically belong to $I^0_3$. The algorithm then yields the cubic fundamental polynomials $p_{(3,0)}$ and $p_{(2,1)}$, and for $k=4$ we finally obtain $p_{(4,0)}$; again the coefficients involve $\sqrt2$ with rapidly growing integer parts (for instance, the normalization factor $941664 - 665857\sqrt2$ occurs for $p_{(3,0)}$ and $2046573816377474 - 1447146223759344\sqrt2$ for $p_{(4,0)}$). Summarizing, we can note that the minimal degree interpolation space is

$$P^*(\Xi_N) = \mathrm{span}\,\{1,\ \xi,\ \eta,\ \xi\eta,\ \xi^2,\ \xi^2\eta,\ \xi^3,\ \xi^4\}.$$

The above Newton fundamental polynomials have been computed using MAPLE V.
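To make the construction concrete, here is a compact prototype (our own sketch, not the paper's implementation) of the procedure above in exact rational arithmetic. Polynomials are stored as dictionaries mapping exponent tuples to `Fraction` coefficients, candidates are processed in lexicographical order, and the pivot is chosen by largest absolute value as suggested in Remark 3:

```python
# Sketch (ours, not the paper's code) of the Newton-basis construction of P*(Xi_N).
from fractions import Fraction

def poly_eval(p, pt):
    """Evaluate a polynomial stored as {exponent tuple: coefficient} at pt."""
    total = Fraction(0)
    for expo, c in p.items():
        term = c
        for xi, e in zip(pt, expo):
            term *= xi ** e
        total += term
    return total

def poly_axpy(p, c, q):
    """In-place p := p + c*q."""
    for expo, coef in q.items():
        p[expo] = p.get(expo, Fraction(0)) + c * coef

def indices_of_degree(d, n):
    """All exponent tuples in d variables of total degree n, lexicographically."""
    if d == 1:
        return [(n,)]
    return sorted((a,) + rest for a in range(n + 1)
                  for rest in indices_of_degree(d - 1, n - a))

def minimal_degree_space(points):
    """Run the construction; returns fundamental polynomials and ideal elements."""
    pts = [tuple(Fraction(c) for c in p) for p in points]
    d = len(pts[0])
    remaining = list(pts)
    chosen = []   # (alpha, p_alpha, x_alpha) in processing order
    ideal = {}    # mu -> p_perp_mu, vanishing on all of Xi_N
    n = 0
    while remaining:
        # propagate the upper set: mu in I0 implies mu + e_i in I0
        for mu in [m for m in ideal if sum(m) == n - 1]:
            for i in range(d):
                nu = mu[:i] + (mu[i] + 1,) + mu[i + 1:]
                ideal.setdefault(nu, {tuple(e + (1 if j == i else 0)
                                            for j, e in enumerate(ex)): c
                                      for ex, c in ideal[mu].items()})
        for alpha in indices_of_degree(d, n):
            if alpha in ideal:
                continue
            # x^alpha minus its interpolant at the points chosen so far
            q = {alpha: Fraction(1)}
            for _, p_beta, x_beta in chosen:
                poly_axpy(q, -poly_eval(q, x_beta), p_beta)
            vals = [poly_eval(q, x) for x in remaining]
            if all(v == 0 for v in vals):
                ideal[alpha] = q          # q vanishes on all of Xi_N
            else:
                j = max(range(len(vals)), key=lambda i: abs(vals[i]))
                x_alpha = remaining.pop(j)
                c = poly_eval(q, x_alpha)
                q = {e: coef / c for e, coef in q.items()}
                for _, p_beta, _ in chosen:   # keep p_beta(x_gamma) = delta
                    poly_axpy(p_beta, -poly_eval(p_beta, x_alpha), q)
                chosen.append((alpha, q, x_alpha))
        n += 1
    return chosen, ideal

chosen, ideal = minimal_degree_space([(0, 0), (1, 0), (0, 1), (2, 0)])
```

For the four rational points used here, the sketch selects the exponents $(0,0),(0,1),(1,0),(2,0)$, i.e. the space $\mathrm{span}\{1,\eta,\xi,\xi^2\}$, and places $\eta^2-\eta$ and $\xi\eta$ into the ideal.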

5.2. Properties

First we notice that $P^*(\Xi_N)$ uses a minimal number of monomials, whose exponents $\alpha\in I_n$ have an additional minimality property among all properly nested index sets: indeed, this particular space uses monomials with lexicographically minimal exponents. Although this property is of no practical use, it serves the purpose of making $P^*(\cdot)$ unique. In order to investigate the invariance properties of $P^*$, we consider $P^*$ as a map that takes $\Xi_N$ to an $(N+1)$-dimensional subspace of $\Pi^d$; this is possible since, by the construction from the last subsection, $P^*$ assigns a unique polynomial subspace to each subset of $\mathbb{R}^d$ consisting of $N+1$ elements. Let $\varphi:\mathbb{R}^d\to\mathbb{R}^d$ be some map; then $P^*$ is called $\varphi$-invariant if for any finite subset $\Xi_N$

$$P^*(\varphi(\Xi_N)) = P^*(\Xi_N),$$

where $\varphi(\Xi_N) = \{\varphi(x_0),\dots,\varphi(x_N)\}$. We will show that the construction rule for $P^*(\Xi_N)$ implies that the space generated this way is

1. scale invariant,
2. translation invariant,

and thus closed under taking derivatives as well. Clearly, a minimal degree interpolation space is scale invariant if it is spanned by homogeneous polynomials: the (still homogeneous) polynomials $p_\alpha(\cdot/c) = c^{-|\alpha|}\,p_\alpha$ are obviously a basis for the minimal degree interpolation space with respect to $c\,\Xi_N$ with the same minimality properties as the original one. Since $P^*(\Xi_N)$ is indeed spanned by homogeneous polynomials, namely monomials, it is scale invariant, i.e.,

$$P^*(\Xi_N) = P^*(c\,\Xi_N), \qquad c\in\mathbb{R},\ c\ne 0.$$

A little more effort is needed to prove the translation invariance of $P^*(\Xi_N)$, i.e.,

$$P^*(\Xi_N) = P^*(\Xi_N - y), \qquad y\in\mathbb{R}^d.$$

A set of Newton fundamental polynomials with respect to $\Xi_N - y$ is given by the polynomials $p_\alpha(\cdot+y)$, $\alpha\in I_n$, and their span, say $\tilde P$, is again a minimal degree interpolation space. On the other hand, we know that

$$\tilde P = \mathrm{span}\,\{(\cdot+y)^\alpha \;:\; \alpha\in I_n\}.$$

Since

$$(x+y)^\alpha = \sum_{\beta\le\alpha} \binom{\alpha}{\beta}\,y^{\alpha-\beta}\,x^\beta =: \sum_{\beta\le\alpha} c_\beta\,x^\beta,$$

we can apply Remark 1 to obtain that

$$P^*(\Xi_N - y) = \mathrm{span}\,\{(x+y)^\alpha \;:\; \alpha\in I_n\} \subseteq \mathrm{span}\,\{x^\beta \;:\; \beta\in I_n\} = P^*(\Xi_N).$$

Replacing $\Xi_N$ by $\Xi_N + y$ also yields the converse inclusion. Hence $P^*(\Xi_N)$ is translation invariant as well. Together, these two invariance properties imply that

$P^*(\Xi_N)$ is $D$-invariant, too, i.e., the space is closed under taking derivatives. It has been pointed out in [5] that least interpolation is not invariant under arbitrary rotations of the coordinate system. The same holds true for $P^*(\Xi_N)$. Instead of dwelling on this in general, let us consider one example which nevertheless illuminates the phenomenon.

Example 6. Let the points $x_0,\dots,x_N$ lie on a line passing through the origin, and let us rotate them such that the origin remains fixed. Clearly, as long as the line containing the points is not perpendicular to the $\xi_d$-axis, $P^*(\Xi_N)$ is always a minimal degree interpolation space with respect to the rotated points. This is due to the lexicographical ordering, which in this case always gives us the polynomials $1,\xi_d,\dots,\xi_d^N$ as a basis for $P^*(\Xi_N)$. However, if the line coincides with one coordinate axis, say the $\xi_k$-axis, then the projection onto this coordinate axis, spanned by $1,\xi_k,\dots,\xi_k^N$, is the one and only minimal degree interpolation space which uses a minimal number of monomials. Hence, in this case $P^*(\Xi_N) = \mathrm{span}\,\{1,\xi_k,\dots,\xi_k^N\}$. So $P^*(\Xi_N)$ is not rotation invariant.

Least interpolation was described in (6) by a differential operator involving the leading terms of all polynomials which vanish at the interpolation points. It is interesting that something similar holds for $P^*(\Xi_N)$, too, involving only certain partial derivatives. Indeed, if we define

$$I^0 = \mathbb{N}^d_0\setminus I_n = I^0_n \cup \{\alpha\in\mathbb{N}^d_0 \;:\; |\alpha|>n\},$$

then we can trivially reformulate the fact that $P^*(\Xi_N)$ is spanned by the monomials $x^\alpha$, $\alpha\in I_n$, as

$$P^*(\Xi_N) = \bigcap_{\alpha\in I^0} \Bigl\{p\in\Pi^d \;:\; \frac{\partial^{|\alpha|}}{\partial x^\alpha}\,p \equiv 0\Bigr\} = \bigcap_{\alpha\in I^0} \ker D^\alpha.$$

5.3. Numerical performance

In this section we finally show that Lagrange interpolation from the space $P^*(\Xi_N)$ can be handled numerically in a very efficient way.
In particular, for polynomials in this space one can not only carry out the vector space operations, addition/subtraction as well as multiplication by a real number, with $N$ operations (which is trivial); it is even possible to state a Horner scheme which evaluates polynomials in $P^*(\Xi_N)$ with the optimal number of $N$ nested additions and multiplications. This in turn means that $P^*(\Xi_N)$ is very suitable with respect to speed and numerical robustness, since a minimal number of operations also reduces roundoff errors to the greatest possible extent. In order to describe this in greater detail, let us first recall the multivariate Horner scheme (see [18]), which evaluates a polynomial

$$p(x) = \sum_{|\alpha|\le n} c_\alpha\,x^\alpha, \qquad n\in\mathbb{N},$$

recursively by the nested multiplications

$$p(x) = \xi_1\Bigl(\cdots\xi_1\bigl(\xi_1\,p_n(\hat x_1) + p_{n-1}(\hat x_1)\bigr)+\cdots\Bigr) + p_0(\hat x_1),$$

where $\hat x_1 = (\xi_2,\dots,\xi_d)$ and

$$p_j(\hat x_1) = \sum_{|\alpha|\le n,\ \alpha_1=j} c_\alpha\,x^{\alpha-je_1} \in \Pi^{d-1}_{n-j}, \qquad j=0,\dots,n.$$

The same process, now with respect to $\xi_2$, is then applied to each of the polynomials $p_j(\hat x_1)$, which is expanded as

$$p_j(\hat x_1) = \xi_2\Bigl(\cdots\xi_2\bigl(\xi_2\,p_{j,n-j}(\hat x_{1,2}) + p_{j,n-j-1}(\hat x_{1,2})\bigr)+\cdots\Bigr) + p_{j,0}(\hat x_{1,2}), \qquad j=0,\dots,n,$$

where $\hat x_{1,2} = (\xi_3,\dots,\xi_d)$, and so on. Following [18], we observe that in this evaluation algorithm the coefficients $c_\alpha$, $|\alpha|\le n$, are processed in descending lexicographical order, i.e., $c_\alpha$ is accessed earlier than $c_\beta$ if $\alpha$ appears later than $\beta$ in the lexicographical ordering. In particular, every coefficient is accessed only once in the evaluation process. Since in practical applications the coefficients of a polynomial are kept in an array of the form c[0...N], there is no need to permanently apply a function which transforms multiindices into the linear index; conveniently, the coefficients are arranged in graded lexicographical order. When using the above Horner scheme it then suffices to provide a table which maps the lexicographical arrangement of the multiindices into the linear ordering.

To derive the Horner scheme for polynomials in $P^*(\Xi_N)$, we define $i_1 = \max\{\alpha_1 \;:\; \alpha\in I_n\}$ and note that, by (27), the indices in the set

(28) $$\hat I_n(i_1) = \{\hat\alpha\in\mathbb{N}^{d-1}_0 \;:\; (i_1,\hat\alpha)\in I_n\}$$

again satisfy

$$\hat\alpha\in\hat I_n(i_1) \;\Longrightarrow\; \{\hat\beta\in\mathbb{N}^{d-1}_0 \;:\; \hat\beta\le\hat\alpha\}\subseteq\hat I_n(i_1).$$

Applying (27) once more, we observe that together with $(i_1,\hat\alpha)$ we also have $(j,\hat\alpha)\in I_n$, $j=0,\dots,i_1$, and that the respective sets $\hat I_n(j)$, defined according to (28), satisfy

(29) $$\hat\alpha\in\hat I_n(j) \;\Longrightarrow\; \{\hat\beta\in\mathbb{N}^{d-1}_0 \;:\; \hat\beta\le\hat\alpha\}\subseteq\hat I_n(j), \qquad j=0,\dots,i_1.$$

Hence, we can expand $p\in P^*(\Xi_N)$, say

(30) $$p(x) = \sum_{\alpha\in I_n} c_\alpha\,x^\alpha,$$

as

$$p(x) = \xi_1\Bigl(\cdots\xi_1\bigl(\xi_1\,p_{i_1}(\hat x_1) + p_{i_1-1}(\hat x_1)\bigr)+\cdots\Bigr) + p_0(\hat x_1).$$

Noticing that the exponents of the $(d-1)$-variate polynomials $p_j$ again satisfy the respective version of (27), we can proceed recursively to evaluate the polynomial.
Note that this algorithm needs exactly as many additions and multiplications as the polynomial has coefficients; thus, the evaluation of a polynomial in $P^*(\Xi_N)$ can be done with about $2N$ arithmetic operations.
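The recursion (30) is easy to realize on a dictionary representation of the coefficients. The following sketch (our own illustration, not the paper's code) slices off the first variable exactly as described and checks the result against naive evaluation; the coefficient set used is a small lower set in two variables:

```python
# Sketch (ours, not the paper's): nested multiplication for a polynomial whose
# exponents form a lower set, stored as {exponent tuple: coefficient}.

def horner_eval(coeffs, x):
    """Evaluate sum_{alpha} c_alpha x^alpha by recursing on the first variable."""
    if not coeffs:
        return 0.0
    if len(x) == 0:
        # all variables consumed; at most the constant term remains
        return coeffs.get((), 0.0)
    i1 = max(alpha[0] for alpha in coeffs)
    # p(x) = xi_1*( ... xi_1*(xi_1*p_{i1} + p_{i1-1}) ... ) + p_0, as in (30)
    result = 0.0
    for j in range(i1, -1, -1):
        p_j = {alpha[1:]: c for alpha, c in coeffs.items() if alpha[0] == j}
        result = result * x[0] + horner_eval(p_j, x[1:])
    return result

def direct_eval(coeffs, x):
    """Naive term-by-term evaluation, for comparison."""
    total = 0.0
    for alpha, c in coeffs.items():
        term = c
        for xi, e in zip(x, alpha):
            term *= xi ** e
        total += term
    return total

# coefficients on the lower set {(0,0), (1,0), (0,1), (1,1), (2,0)}:
p = {(0, 0): 1.0, (1, 0): -2.0, (0, 1): 3.0, (1, 1): 0.5, (2, 0): 4.0}
x = (1.5, -2.0)
assert abs(horner_eval(p, x) - direct_eval(p, x)) < 1e-12
```

Each coefficient enters exactly one multiply-add, matching the operation count stated above.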