Numerische Mathematik, Manuscript-Nr. (will be inserted by hand later)

Polynomial interpolation of minimal degree

Thomas Sauer
Mathematical Institute, University Erlangen-Nuremberg, Bismarckstr. 1 1/2, Erlangen, Germany, sauer@mi.uni-erlangen.de

December 19, 1995

Dedicated to Prof. H. Berens on the occasion of his 60th birthday

Abstract. Minimal degree interpolation spaces with respect to a finite set of points are subspaces of multivariate polynomials of least possible degree for which Lagrange interpolation with respect to the given points is uniquely solvable and degree reducing. This is a generalization of the concept of least interpolation introduced by de Boor and Ron. This paper investigates the behavior of Lagrange interpolation with respect to these spaces, giving a Newton interpolation method and a remainder formula for the error of interpolation. Moreover, a special minimal degree interpolation space will be introduced which is particularly beneficial from the numerical point of view.

Key words: Lagrange interpolation, minimal degree, Newton interpolation method, remainder formula, interpolation algorithm, numerical performance

Mathematics Subject Classification (1991): 41A05, 41A10, 65D10

1. Introduction

Given a finite dimensional linear space 𝒫 of dimension N + 1, N ∈ ℕ, and a finite set of N + 1 pairwise distinct points, say Ξ_N = {x_0, …, x_N} ⊂ ℝ^d, the Lagrange interpolation problem addresses the question of finding, for a given f : ℝ^d → ℝ, an element P ∈ 𝒫 which matches f at Ξ_N, i.e.,

  P(x_i) = f(x_i),  i = 0, …, N.

The "simplest" example for such a 𝒫, which can also be treated numerically in a suitable way, is 𝒫 ⊆ Π^d, where Π^d is the space of all polynomials in d variables. It is well-known that in the univariate case the Lagrange interpolation problem with respect to N + 1 distinct points is always poised, i.e., uniquely

solvable, if one takes 𝒫 to be the space of all polynomials of degree less than or equal to N. In several variables, however, the situation is much more difficult. In order to successfully interpolate with Π_n^d, the space of all polynomials of total degree less than or equal to n, the number of points has to match the dimension of the space, which is (n+d choose d); hence only point sets of a certain cardinality are admissible. And even if this is the case, there can be the problem that the points lie on some algebraic surface of degree n, i.e., there is some polynomial q of total degree n which vanishes on Ξ_N. To overcome this problem, there have been several approaches to find configurations of points which guarantee the poisedness of the respective interpolation problem. Investigations in this spirit emerge from a remarkable paper by Chung and Yao [7] and were extended, e.g., by Gasca and Maeztu [10]; an extensive survey of these methods has been provided by Gasca [9]. Unfortunately, in many cases the points are given a priori and cannot be determined or modified by the interpolation process; for example, if the data to be interpolated stem from some physical or "real world" measurements. In this respect, even Lagrange interpolation with respect to points with very regular structure (e.g., points on a rectangular grid) need not be poised in Π_n^d, regardless of whether the cardinalities match or not. Consequently, if there is no access to the points, the only way out is to choose the subspace 𝒫 suitably, such that one can uniquely interpolate at the given points. The first approach in this direction is the concept of Kergin interpolation, introduced by Kergin [11].
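Both phenomena are easy to observe numerically. The following sketch (illustrative, using NumPy) first solves a univariate problem through its Vandermonde matrix, which is nonsingular for distinct nodes, and then shows that six points on the unit circle, although matching dim Π_2^2 = 6, give a rank-deficient bivariate Vandermonde matrix, because x² + y² − 1 vanishes on them:

```python
import numpy as np

# Univariate case: the Vandermonde matrix is nonsingular for distinct nodes,
# so the Lagrange problem is always poised in polynomials of degree <= N.
xs = np.array([0.0, 1.0, 2.0])
V1 = np.vander(xs, increasing=True)          # columns 1, x, x^2
coeff = np.linalg.solve(V1, xs ** 2)         # interpolate f(x) = x^2
assert np.allclose(coeff, [0.0, 0.0, 1.0])   # the interpolant is x^2 itself

# Bivariate case: six distinct points on the unit circle match
# dim Pi_2^2 = 6, yet q(x, y) = x^2 + y^2 - 1 vanishes on all of them,
# so the degree-2 Vandermonde matrix is singular: the problem is not poised.
ts = np.linspace(0.0, np.pi, 6, endpoint=False)
pts = np.column_stack([np.cos(ts), np.sin(ts)])
V2 = np.array([[1, x, y, x * x, x * y, y * y] for x, y in pts])
assert np.linalg.matrix_rank(V2) == 5        # one dimension lost to the circle
```

Since distinct conics meet in at most four points, the circle is the only conic through these six nodes, so the rank drops by exactly one.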
Although his method provides a very nice closed form of the interpolating polynomial based on a generalized Newton interpolation formula, as pointed out by Micchelli [15], it has the drawback that the interpolating polynomial with respect to N + 1 points has degree N and cannot be handled very well numerically. From that point of view it is important to find appropriate spaces 𝒫 (i.e., polynomial subspaces 𝒫 ⊆ Π^d such that interpolation with respect to the given point set Ξ_N is poised in 𝒫) which consist of polynomials of least possible degree. This question has been considered by de Boor and Ron [3], who investigated it to great extent in a series of papers ([5, 1, 2] to name a few). They introduced a particular polynomial subspace which they called the least choice. Among other properties to be listed later, this space, which depends on the nodes of interpolation, provides three intrinsic features:

1. The Lagrange interpolation problem with respect to the points Ξ_N is uniquely solvable in this subspace.
2. The subspace is of minimal degree; i.e., if the subspace contains polynomials of total degree less than or equal to n and at least one polynomial of total degree n, then the Lagrange interpolation problem with respect to Ξ_N is unsolvable in any subspace of Π_{n−1}^d.
3. Interpolation with respect to this space is degree reducing; i.e., if a polynomial p has total degree k ≤ n, then its interpolant has degree at most k.

This paper will approach the question of finding proper polynomial subspaces by considering all polynomial subspaces which satisfy the above three requirements and by studying Lagrange interpolation with respect to them. For these spaces, which we will call minimal degree interpolation spaces, we will derive an analogy

to the univariate Newton interpolation method as well as a remainder formula. In addition, we will provide a particular minimal degree interpolation space which captures quite a few of the striking properties of the least interpolant of de Boor and Ron but combines them with "optimal" numerical behavior in the sense that it minimizes storage and arithmetic operations, which may make the space useful for practical applications.

The paper is organized as follows: first we formally introduce minimal degree interpolation spaces in Section 2 and discuss some of their basic properties. In Section 3 we consider some special examples of minimal degree interpolation spaces, among them the least interpolation of de Boor and Ron. The Newton method and the remainder formula are the subject of Section 4, while in the final Section 5 we construct and investigate the particular minimal degree interpolation space mentioned above, stating and verifying its properties.

We will use standard multiindex notation throughout the paper. For a multiindex α = (α_1, …, α_d) ∈ ℕ_0^d, we denote by |α| = α_1 + ⋯ + α_d the length of α. Moreover, we write

  x^α = ξ_1^{α_1} ⋯ ξ_d^{α_d},  x = (ξ_1, …, ξ_d) ∈ ℝ^d,  α ∈ ℕ_0^d,

for the monomials. The totality of the x^α, |α| ≤ n, spans Π_n^d ⊂ Π^d, the space of all polynomials of total degree less than or equal to n. We will order multiindices in the most convenient way, writing α ≺ β if either |α| < |β|, or |α| = |β| and α appears earlier than β in the standard lexicographical ordering. The latter means that there is some 1 ≤ k ≤ d such that α_i = β_i, i = 1, …, k − 1, and α_k < β_k.

2. Minimal degree interpolation spaces

Let Ξ_N = {x_0, …, x_N} be a set of N + 1 distinct points in ℝ^d. We say that the Lagrange interpolation problem with respect to Ξ_N is poised in a subspace 𝒫 ⊆ Π^d if for any f : ℝ^d → ℝ there exists a unique P ∈ 𝒫 such that

  P(x_i) = f(x_i),  i = 0, …, N.

Clearly, this requires that dim 𝒫 = N + 1.
Suppose that the Lagrange interpolation problem with respect to Ξ_N is poised in 𝒫(Ξ_N) ⊆ Π^d, where the notation 𝒫(Ξ_N) is used to emphasize the fact that the space, which admits unique Lagrange interpolation at Ξ_N, depends on Ξ_N. By L_{𝒫(Ξ_N)}(f, ·) we denote the projection of f onto 𝒫(Ξ_N), i.e., the interpolating polynomial L_{𝒫(Ξ_N)}(f, ·) ∈ 𝒫(Ξ_N) such that

  L_{𝒫(Ξ_N)}(f, x_i) = f(x_i),  i = 0, …, N.

Since polynomials of high degree are expensive in storage and unstable in evaluation, and therefore inconvenient for numerical purposes, it is reasonable to request that the space 𝒫(Ξ_N) be chosen of minimal degree, i.e., we take 𝒫(Ξ_N) ⊆ Π_n^d where n is the minimum of all admissible values. In other words, n is chosen in such a way that the Lagrange interpolation problem with respect to Ξ_N is not poised in any subspace of Π_{n−1}^d. Of course, n depends on Ξ_N and satisfies

  n + 1 ≤ N + 1 ≤ (n+d choose d),

where n + 1 = N + 1 if and only if all the points lie on a straight line, and N = (n+d choose d) − 1 if and only if the Lagrange interpolation problem with respect to Ξ_N is poised in Π_n^d. The latter case has been treated in [20]. The second requirement is that Lagrange interpolation with respect to the space 𝒫(Ξ_N) be degree reducing, a property observed already by de Boor and Ron [3]. This means that for k ≤ n

  p ∈ Π_k^d  ⇒  L_{𝒫(Ξ_N)} p ∈ Π_k^d,

which is a desirable behavior of the projection operator L_{𝒫(Ξ_N)} on Π_n^d. Summarizing these requirements, we state

Definition 1. Let a finite set Ξ_N of N + 1 distinct points be given. A subspace 𝒫(Ξ_N) ⊆ Π^d is called a minimal degree interpolation space of order n with respect to Ξ_N provided that

1. the Lagrange interpolation problem with respect to Ξ_N is poised in 𝒫(Ξ_N) ⊆ Π_n^d,
2. 𝒫(Ξ_N) is of minimal degree with this property, i.e., there is no subspace of Π_{n−1}^d which admits unique interpolation at Ξ_N,
3. Lagrange interpolation with respect to 𝒫(Ξ_N) is degree reducing.

Next we introduce a Newton interpolation method for 𝒫(Ξ_N), extending the one in [20]. For that purpose, let us briefly recall that this approach is based on rearranging the points of Ξ_N, N + 1 = dim Π_n^d, into {x_α : |α| ≤ n}, such that there are polynomials p_α ∈ Π_{|α|}^d, |α| ≤ n, which satisfy

  p_α(x_β) = δ_{α,β},  |β| ≤ |α| ≤ n.

The sets X_k = {x_α : |α| = k} are called blocks. To extend this notion to minimal degree interpolation spaces, we consider index sets I_k ⊆ {α : |α| ≤ k} ⊂ ℕ_0^d, k = 0, …, n, and I_{−1} = ∅ for convenience, which satisfy

(1)  I_0 ⊆ I_1 ⊆ ⋯ ⊆ I_n  and  I_k \ I_{k−1} ⊆ {α : |α| = k},  k = 0, …, n.

The complements of these sets, I_k′ := {α : |α| ≤ k} \ I_k, k = 0, …, n, are then nested as well. We say that 𝒫 ⊆ Π_n^d admits a Newton basis of order n with respect to Ξ_N if there exists a system of index sets I = (I_0, …, I_n), satisfying (1), such that

1. I_n \ I_{n−1} ≠ ∅,
2. the points in Ξ_N can be re-indexed as Ξ_N = {x_α : α ∈ I_n} (in particular, #I_n = dim 𝒫(Ξ_N) = N + 1),
3. there exists a basis p_α ∈ Π_{|α|}^d, α ∈ I_n, of 𝒫(Ξ_N) such that

(2)  p_α(x_β) = δ_{α,β},  β ∈ I_{|α|},  α ∈ I_n,

4. there exist polynomials p_α^⊥ ∈ Π_{|α|}^d, α ∈ I_n′, such that

(3)  p_α^⊥(Ξ_N) = 0  and  Π_n^d = span{p_α : α ∈ I_n} ⊕ span{p_α^⊥ : α ∈ I_n′}.

The polynomials p_α, α ∈ I_n, are called the Newton fundamental polynomials with respect to Ξ_N in 𝒫(Ξ_N). It is easily seen that the Lagrange interpolation problem with respect to Ξ_N is poised in 𝒫(Ξ_N) if there exist Newton fundamental polynomials for 𝒫(Ξ_N) which satisfy (2) after re-indexing Ξ_N properly. The respective blocks of points for such a Newton basis are the sets

  X_k := {x_α : α ∈ I_k \ I_{k−1}},  k = 0, …, n.

The relation between minimal degree interpolation spaces and spaces which admit a Newton basis is now as close as can be, in view of

Theorem 1. A subspace 𝒫 ⊆ Π^d is a minimal degree interpolation space of order n with respect to Ξ_N if and only if there exists a Newton basis of order n with respect to Ξ_N for 𝒫.

Proof. Assume that 𝒫 ⊆ Π_n^d is a minimal degree interpolation space of order n with respect to Ξ_N and let p_0, …, p_N be a basis of 𝒫. Let Q = {q ∈ Π^d : q(Ξ_N) = 0} denote the ideal of all polynomials which annihilate Ξ_N and define Q_k = Q ∩ Π_k^d, k ∈ ℕ_0. Since 𝒫 is an interpolation space we have

  Π_n^d = 𝒫 ⊕ Q_n.

Since 𝒫 is degree reducing, it follows that Π_k^d = 𝒫_k ⊕ Q_k as well, k = 0, …, n, where 𝒫_k = 𝒫 ∩ Π_k^d. This implies that there exists a graded basis for 𝒫. Thus we define the system of nested index sets I = (I_0, …, I_n) in such a way that #I_k = dim 𝒫_k, and rewrite the graded basis as {g_α : α ∈ I_k, k = 0, …, n}. Note that this is indeed trivial, since the only requirements on the index system are being nested and having the proper cardinality at each level. In order to obtain the Newton fundamental polynomials we then apply the orthogonalization process from [20] (see also [18]) to the polynomials g_α, α ∈ I_n, which also re-indexes the points in Ξ_N in an appropriate way. The polynomials p_α^⊥, α ∈ I_n′, are obtained similarly by applying the same process (without the final orthogonalization step) to I′ = (I_0′, …, I_n′) and a graded basis of the vector space Q_n.
Conversely, it is obvious that the existence of a Newton basis for 𝒫 implies poisedness. The minimal degree property follows from the assumption that I_n \ I_{n−1} ≠ ∅ and the fact that all polynomials which do not belong to 𝒫 have to belong to Q_n and thus vanish on Ξ_N. To prove the degree reducing property, choose any p ∈ Π_k^d, k ≤ n. Then p can be written as

  p(x) = Σ_{α ∈ I_k} c_α p_α(x) + Σ_{α ∈ I_k′} c_α p_α^⊥(x),

and thus

  L_𝒫(p, x) = Σ_{α ∈ I_k} c_α L_𝒫(p_α, x) + Σ_{α ∈ I_k′} c_α L_𝒫(p_α^⊥, x) = Σ_{α ∈ I_k} c_α p_α(x),

since L_𝒫(p_α, x) = p_α(x) and L_𝒫(p_α^⊥, x) = 0.

The idea of the Newton interpolation is straightforward: defining 𝒫_k(Ξ_N) as 𝒫(Ξ_N) ∩ Π_k^d and setting L_k = L_{𝒫_k(Ξ_N)}, we generate the interpolating polynomial L_n(f, x) for some f : Ξ_N → ℝ successively via

  L_{k+1}(f, x) = L_k(f, x) + L_{k+1}(f − L_k f, x),  k = 0, …, n − 1.

We will describe this method in detail in Section 4, where we also derive an error formula for minimal degree interpolation. To ensure that the above procedure always works we define

(4)  J_k := I_k \ I_{k−1},  J_k′ := I_k′ \ I_{k−1}′,  k = 0, …, n,

and state the following observation:

Proposition 1. If 𝒫(Ξ_N) is a minimal degree interpolation space with respect to Ξ_N, then the sets J_k, k = 0, …, n, of the respective Newton basis satisfy

  J_k ≠ ∅,  k = 0, …, n.

Proof. For an arbitrary basis of Π_n^d, say φ_α ∈ Π_{|α|}^d, |α| ≤ n, let us consider the generalized (N + 1) × (k+d choose d) Vandermonde matrix V_k of order k, k = 0, …, n, defined by

  V_k(Ξ_N) = [φ_α(x_i)]_{i=0,…,N; |α| ≤ k},  k = 0, …, n.

Since 𝒫(Ξ_N) is a minimal degree interpolation space, these matrices satisfy

(5)  rank V_0 ≤ rank V_1 ≤ ⋯ ≤ rank V_{n−1} < rank V_n = N + 1,

where the strict inequality at the end stems from minimality. Now assume that for some k < n we have J_k = ∅; from this it is easy to conclude that there exist linearly independent polynomials q_α, |α| = k, such that q_α(Ξ_N) = 0. For example, choose

  q_α(x) = x^α − L_{𝒫_k(Ξ_N)}((·)^α, x),  |α| = k.

Hence, all the polynomials ψ_{α,β}(x) = x^β q_α(x), |α| = k, |β| ≤ n − k, also satisfy ψ_{α,β}(Ξ_N) = 0. Clearly,

  Π_n^d = span{φ_α : |α| < k} + span{ψ_{α,β} : |α| = k, |β| ≤ n − k}.

Thus, there is a basis of Π_n^d whose elements of degree at least k vanish on Ξ_N. But this implies that

  rank V_{k−1} = rank V_k = ⋯ = rank V_n,

contradicting (5).
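The rank chain (5) also suggests a direct way to determine the minimal order n for a given node set: enlarge the generalized Vandermonde matrix degree by degree until its rank reaches N + 1. A small numerical sketch (NumPy; function names are illustrative):

```python
import numpy as np
from itertools import product

def monomials_upto(n, d):
    """Multiindices alpha in N_0^d with |alpha| <= n, ordered by degree."""
    return sorted((a for a in product(range(n + 1), repeat=d) if sum(a) <= n),
                  key=sum)

def minimal_degree(points):
    """Smallest n with rank V_n = N + 1, cf. the rank chain (5)."""
    pts = np.asarray(points, dtype=float)
    N1, d = pts.shape
    for n in range(N1):  # n <= N always suffices
        V = np.array([[np.prod(p ** np.array(a)) for a in monomials_upto(n, d)]
                      for p in pts])
        if np.linalg.matrix_rank(V) == N1:
            return n
    return N1 - 1

# Three collinear points in R^2 force degree N = 2 ...
assert minimal_degree([(0, 0), (1, 1), (2, 2)]) == 2
# ... while three points in general position are handled by degree 1.
assert minimal_degree([(0, 0), (1, 0), (0, 1)]) == 1
```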

3. Special examples

Example 1. The first and most prominent example of a minimal degree interpolation space is the least interpolation introduced and extensively investigated by de Boor and Ron [3]. They started by explicitly constructing the least interpolation space 𝒫_l(Ξ_N), for an arbitrary given set of points Ξ_N, such that interpolation with respect to Ξ_N is poised in 𝒫_l(Ξ_N), and then proved that it possesses the minimal degree property. See also [6] for a comparison between the least interpolation space and general minimal degree interpolation spaces. They characterized the least interpolation space, which is uniquely determined by the set Ξ_N, by means of the kernels of certain homogeneous differential operators. Precisely, 𝒫_l(Ξ_N) is the unique minimal degree interpolation space which satisfies

(6)  𝒫_l(Ξ_N) = ⋂_{q(Ξ_N)=0} ker q↑(D),

where q↑, in the notation of de Boor and Ron, denotes the leading term of q, defined in the following way: assume that q has degree n; then q↑ is the unique homogeneous polynomial of degree n such that q(x) − q↑(x) ∈ Π_{n−1}^d. For information on how to construct 𝒫_l(Ξ_N) and the interpolating polynomial in numerical practice, see [1, 5]. Among several other properties, this particular space turns out to be scale invariant and shift invariant, i.e.,

  𝒫_l(Ξ_N) = 𝒫_l(c Ξ_N − y),  c ∈ ℝ, c ≠ 0, y ∈ ℝ^d,

where

  c Ξ_N − y = {c x_0 − y, …, c x_N − y}.

Since minimal degree interpolation spaces cannot be rotationally invariant in general, this is the strongest coordinate system independence to be expected. Also note that shift and scale invariance imply that 𝒫_l(Ξ_N) is closed under differentiation.

Example 2. To represent polynomials in practice, one usually stores and manipulates their coefficients with respect to some basis, and the most convenient (but not numerically most stable) basis is that of the monomials x^α, α ∈ ℕ_0^d.
Since the requirements on storage and the number of operations necessary to manipulate polynomials depend on the number of basis functions needed to represent the subspace 𝒫(Ξ_N), it can be beneficial to find a minimal degree interpolation space which uses as few basis functions as possible. To illustrate this idea, let us first consider the following extremal situation: suppose the points x_0, …, x_N all lie on a straight line, i.e., x_j = x_0 + t_j a, j = 1, …, N, for some 0 ≠ a ∈ ℝ^d and pairwise distinct t_j ∈ ℝ, j = 1, …, N. Assume in addition that a_j ≠ 0, j = 1, …, d, and let ℓ ∈ Π_1^d be the linear polynomial

  ℓ(x) = ⟨a, x⟩ = a_1 ξ_1 + ⋯ + a_d ξ_d,  x ∈ ℝ^d.

Then one minimal degree interpolation space with respect to Ξ_N (which is even the least interpolation space for that configuration of points) is spanned by the powers ℓ^k, k = 0, …, N, of ℓ, and the Newton fundamental polynomials are

  p_k(x) = [ℓ(x − x_0) ⋯ ℓ(x − x_{k−1})] / [ℓ(x_k − x_0) ⋯ ℓ(x_k − x_{k−1})],  k = 0, …, N.

Note that each of these polynomials has (k+d choose d) coefficients with respect to the monomials, due to the assumption that all coefficients a_j are nonzero; hence, a "typical" interpolating polynomial for this situation will have to store and manipulate (N+d choose d) = O(N^d) coefficients with respect to the monomial basis. Also the evaluation of these polynomials requires access to O(N^d) coefficients, which makes it ineffective compared to the dimension of the space, which is only N + 1. Note that this problem is not a consequence of choosing the monomials as a basis: "usually" (i.e., up to a set of measure zero) the coefficient vector of a multivariate interpolating polynomial is densely filled with entries, regardless of the underlying basis.

Example 3. From the preceding example we see that it may be beneficial for numerical purposes to find minimal degree interpolation spaces which can be described by as few generic basis functions as possible; though not the best possible choice from the point of view of numerical stability, the monomials are still the simplest and most common choice for a generic basis of polynomials. As already mentioned, there is always a subspace of Π_N^d such that in this subspace the Lagrange interpolation problem with respect to given N + 1 points x_0, …, x_N is poised; one may take, for example, the Kergin interpolant.
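For the collinear configuration of Example 2, the Newton fundamental polynomials can be written down and checked directly; the following sketch (with an arbitrarily chosen direction a and parameters t_j) verifies the conditions p_k(x_j) = δ_{jk} for j ≤ k:

```python
import numpy as np

# Collinear nodes x_j = x_0 + t_j * a in R^2 (illustrative data)
a = np.array([1.0, 2.0])                  # all components nonzero, as assumed
t = [0.0, 1.0, 3.0, 4.0]                  # pairwise distinct parameters t_j
nodes = [np.array([0.5, 0.5]) + tj * a for tj in t]

ell = lambda x: a @ x                     # the linear polynomial ell(x) = <a, x>

def p(k, x):
    """p_k(x) = ell(x - x_0)...ell(x - x_{k-1}) / ell(x_k - x_0)...ell(x_k - x_{k-1})."""
    num = np.prod([ell(x - nodes[i]) for i in range(k)])
    den = np.prod([ell(nodes[k] - nodes[i]) for i in range(k)])
    return num / den

# Newton conditions (2): p_k(x_j) = delta_{jk} for j <= k
for k in range(len(nodes)):
    for j in range(k + 1):
        assert abs(p(k, nodes[j]) - (1.0 if j == k else 0.0)) < 1e-12
```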
This implies that the generalized (N + 1) × (N+d choose d) Vandermonde matrix (which is no longer a square matrix for d > 1)

  V_N = [x_i^α : i = 0, …, N, |α| ≤ N]

has rank N + 1, i.e., there must be an (N + 1) × (N + 1) submatrix of V_N with nonvanishing determinant; in other words, there always exists a subspace spanned by N + 1 monomials in which the Lagrange interpolation problem with respect to Ξ_N is poised. Among these spaces spanned by certain monomials, there will also be one (and in general several) subspace(s) of minimal degree. We will refer to minimal degree interpolation spaces which are spanned by N + 1 monomials as minimal degree interpolation with minimal monomials. Note that we will need further requirements to single out a unique minimal degree interpolation space with minimal monomials. We will make this more precise in Section 5, where we use this remaining degree of freedom to construct a minimal degree interpolation space which combines the minimal monomials property with the invariance properties of least interpolation.

Example 4. The final idea to specialize a minimal degree interpolation space is to introduce additional points in such a way that the original points and the added ones give rise to a Lagrange interpolation problem which is poised in Π_n^d, where n is the specific minimal degree. For that purpose let us recall that 𝒫(Ξ_N) ⊆ Π_n^d already admits unique Lagrange interpolation; if, on the other hand, we consider a complement Q_n of 𝒫(Ξ_N) in Π_n^d, then it is well-known (see for instance [19]) that the Lagrange interpolation problem with respect to a suitable number

of points is poised except for a set of measure zero. Let x_α, α ∈ I_n′, denote such a set of points (their number is correct since the dimension of Q_n equals the cardinality of I_n′). Then it is easily seen that the Lagrange interpolation problem with respect to the set of points

  {x_α : α ∈ I_n} ∪ {x_α : α ∈ I_n′}

is poised in Π_n^d; this is an immediate consequence of the fact that Π_n^d = 𝒫(Ξ_N) ⊕ Q_n. Choosing a reasonable Newton basis in both 𝒫(Ξ_N) and Q_n, we can reformulate this process as follows: we add points x_α, α ∈ I_n′, such that there are polynomials p_α, α ∈ I_n, and p_α^⊥, α ∈ I_n′, such that p_α^⊥(Ξ_N) = 0 and, in addition,

(7)  p_α(x_β) = δ_{α,β},  |β| ≤ |α|,  α ∈ I_k,

and, respectively,

(8)  p_α^⊥(x_β) = δ_{α,β},  |β| ≤ |α|,  α ∈ I_k′.

Of course, there is again freedom in choosing the additional points, since the above extension is possible for almost all choices of x_α, α ∈ I_n′, as already mentioned. This type of minimal degree interpolation, referred to as minimal degree interpolation with additional points, provides a particularly nice and simple remainder formula, which will be given in (22).

4. Finite difference and remainder formula

In case 𝒫 = Π_n^d for some n ∈ ℕ, there are two remainder formulae which are valid for all sets of points Ξ_N that do not lie on an algebraic surface of degree n: the first one is due to Ciarlet [8] and is based on a multipoint Taylor expansion, while the more recent one, developed in [20], is obtained from a Newton interpolation scheme. In this section we will extend the latter result to minimal degree interpolation spaces. It has to be remarked here that Newton formulae for special configurations of points have been given and investigated before, cf., for example, [12, 21, 17]. However, the Newton interpolation to be described here is based on a finite difference approach for minimal degree interpolation which is very similar to the method introduced in [20].
In addition to its theoretical use for deriving the remainder formula, the Newton method also offers a good tool for practical computations: it not only provides a fast method to compute the interpolating polynomial with low memory consumption, but also shows superior numerical robustness if the function is sufficiently smooth (see [18]). The key tool for the derivation of the Newton method is the finite difference λ_k[x_0, …, x_{k−1}; x], k = 0, …, n, which is recursively defined as

(9)   λ_0[x] f := f(x),
(10)  λ_{k+1}[x_0, …, x_k; x] f := λ_k[x_0, …, x_{k−1}; x] f − Σ_{α ∈ J_k} λ_k[x_0, …, x_{k−1}; x_α] f · p_α(x).

It has been pointed out in [20] that in case d = 1 this difference coincides with a re-normalized version of the classical divided difference f[…]; precisely:

  λ_{n+1}[x_0, …, x_n; x] f = f[x_0, …, x_n, x] (x − x_0) ⋯ (x − x_n).

Indeed, this difference plays a crucial role in describing the interpolating polynomial and the error of interpolation.

Theorem 2. For a minimal degree interpolation space with Newton fundamental polynomials p_α, α ∈ I_n, the interpolating polynomial for a function f is given by

(11)  L_n(f, x) = Σ_{α ∈ I_n} λ_{|α|}[x_0, …, x_{|α|−1}; x_α] f · p_α(x).

Moreover,

(12)  f(x) − L_n(f, x) = λ_{n+1}[x_0, …, x_n; x] f.

Proof. We will prove that equations (11) and (12) both hold with n replaced by k, k = 0, …, n, where L_k corresponds to interpolating at the points x_α, α ∈ I_k, with the span of the polynomials p_α, α ∈ I_k. This will be done by induction on k. Indeed, if k = 0, then (11) and (12) read

  L_0(f, x) = f(x_0),  f(x) − L_0(f, x) = λ_1[x_0; x] f = f(x) − f(x_0).

So, suppose that for some k < n the equations are already verified. Since the polynomials p_α, α ∈ J_{k+1} (recall: J_{k+1} = I_{k+1} \ I_k), vanish at x_β, β ∈ I_k, we obtain for β ∈ I_k

  Σ_{α ∈ I_{k+1}} λ_{|α|}[x_0, …, x_{|α|−1}; x_α] f · p_α(x_β) = Σ_{α ∈ I_k} λ_{|α|}[x_0, …, x_{|α|−1}; x_α] f · p_α(x_β) = L_k(f, x_β) = f(x_β).

For β ∈ J_{k+1} we use (12), (11) and the fact that p_α(x_β) = δ_{α,β}, α ∈ J_{k+1}, to compute

  f(x_β) = L_k(f, x_β) + f(x_β) − L_k(f, x_β)
         = L_k(f, x_β) + λ_{k+1}[x_0, …, x_k; x_β] f
         = Σ_{α ∈ I_k} λ_{|α|}[x_0, …, x_{|α|−1}; x_α] f · p_α(x_β) + Σ_{α ∈ J_{k+1}} λ_{k+1}[x_0, …, x_k; x_α] f · p_α(x_β)
         = Σ_{α ∈ I_{k+1}} λ_{|α|}[x_0, …, x_{|α|−1}; x_α] f · p_α(x_β).

Hence, (11) holds for k + 1. Using the recursive definition of the finite difference finally yields

  λ_{k+2}[x_0, …, x_{k+1}; x] f = λ_{k+1}[x_0, …, x_k; x] f − Σ_{α ∈ J_{k+1}} λ_{k+1}[x_0, …, x_k; x_α] f · p_α(x)
    = f(x) − L_k(f, x) − (L_{k+1}(f, x) − L_k(f, x)) = f(x) − L_{k+1}(f, x),

which is (12) for k + 1.

Of course, one could also introduce the finite difference via (12). Then (11) is obvious, but one has to prove the recurrence relation (10) instead. Since the proof of the remainder formula uses induction based on this recurrence, the above way of introducing the finite difference is more convenient here.

To formulate and establish the remainder formula for minimal degree interpolation, we have to introduce some additional notation. First, let us generalize the notion of a path as defined in [20] to paths in I_n. A path in I_n is a vector of multiindices of increasing length, which has the form

  μ = (μ_0, …, μ_n),  μ_k ∈ J_k,  k = 0, …, n.

Let Λ_n denote the totality of all those paths. The name "path" stems from the image of walking through the set of multiindices in I_n, ascending to a higher level in each step and passing exactly one multiindex on each level. Note that this notion is still reasonable because of Proposition 1, which certifies that at each level there is at least one multiindex which the path can pass through. In other words: there are no "broken" paths or "jumps". To each path μ ∈ Λ_n we associate the well-defined numbers

  π_μ = ∏_{i=0}^{n−1} p_{μ_i}(x_{μ_{i+1}}),

a homogeneous n-th order differential operator

  D_{x_μ}^n = D_{x_{μ_n} − x_{μ_{n−1}}} ⋯ D_{x_{μ_1} − x_{μ_0}},

as well as the set of points on the path,

  x_μ = {x_{μ_0}, …, x_{μ_n}}.

Finally, let us recall the notion of a simplex spline: given n + 1 ≥ d + 1 knots v_0, …, v_n ∈ ℝ^d, the simplex spline M(x | v_0, …, v_n) is the distribution defined by

(13)  ∫_{ℝ^d} f(x) M(x | v_0, …, v_n) dx = (n − d)! ∫_{S^n} f(u_0 v_0 + ⋯ + u_n v_n) du,  f ∈ C(ℝ^d),

where

  S^n := {u = (u_0, …, u_n) : u_i ≥ 0, u_0 + ⋯ + u_n = 1}.

The most important property for our present purposes is the formula for directional derivatives, derived, among other important facts about simplex splines, by Micchelli [16]:

(14)  D_y M(x | v_0, …, v_n) = Σ_{j=0}^n c_j M(x | v_0, …, v_{j−1}, v_{j+1}, …, v_n),  y = Σ_{j=0}^n c_j v_j,  Σ_{j=0}^n c_j = 0.

We will need this in the more special version

(15)  D_{v_i − v_j} M(x | v_0, …, v_n) = M(x | v_0, …, v_{i−1}, v_{i+1}, …, v_n) − M(x | v_0, …, v_{j−1}, v_{j+1}, …, v_n),  0 ≤ i, j ≤ n.

Before giving the integral representation of the finite difference for an arbitrary minimal degree interpolation space, let us have a brief look at the representation formula from [20] for the special case of a Lagrange interpolation problem which is poised in Π_n^d. The formula reads as

  λ_{n+1}[x_0, …, x_n; x] f = Σ_{μ ∈ Λ_n} p_{μ_n}(x) π_μ ∫_{ℝ^d} D_{x − x_{μ_n}} D_{x_μ}^n f(t) M(t | x_μ, x) dt.

The differential operator under the integral is of order n + 1, and hence the above expression vanishes on all polynomials of degree less than or equal to n. Since we have, due to (12), that

  λ_{n+1}[x_0, …, x_n; x] p_α^⊥ = p_α^⊥(x) − L_n(p_α^⊥, x) = p_α^⊥(x),  α ∈ I_n′,

there have to be additional terms in the remainder formula which are responsible for the reproduction of the polynomials p_α^⊥. For their description, we have to define the directions d_{α,β} ∈ ℝ^d, α ∈ J_k, β ∈ I_{k+1}′, as the unique solutions of the vector interpolation problem

(16)  Σ_{β ∈ I_{k+1}′} d_{α,β} p_β^⊥(x) = p_α(x) (x − x_α) − Σ_{β ∈ J_{k+1}} p_β(x) p_α(x_β) (x_β − x_α).

Lemma 1. The interpolation problem (16) is uniquely solvable for each α ∈ J_k, k = 0, …, n − 1.

Proof. Since the right-hand side of (16) is a vector-valued polynomial of degree k + 1, it suffices to show that

  φ(x) = p_α(x) (x − x_α) − Σ_{β ∈ J_{k+1}} p_β(x) p_α(x_β) (x_β − x_α)

satisfies φ(x_β) = 0, β ∈ I_{k+1}. This is obvious for β ∈ I_k, since then both terms vanish. In the remaining cases β ∈ J_{k+1} we have, because of p_β(x_β) = 1,

  φ(x_β) = p_α(x_β) (x_β − x_α) − 1 · p_α(x_β) (x_β − x_α) = 0.

Hence, φ belongs to the span of the p_β^⊥, β ∈ I_{k+1}′, and can therefore be uniquely represented as required in (16).

Now we are in position to formulate the remainder formula for a blockwise minimal degree interpolation space as

Theorem 3. Let 𝒫(Ξ_N) be a minimal degree interpolation space of degree n. Then

(17)  f(x) − L_n(f, x) = λ_{n+1}[x_0, …, x_n; x] f
      = Σ_{μ ∈ Λ_n} p_{μ_n}(x) π_μ ∫_{ℝ^d} D_{x − x_{μ_n}} D_{x_μ}^n f(t) M(t | x_μ, x) dt
        + Σ_{α ∈ I_n′} p_α^⊥(x) Σ_{j=|α|}^n Σ_{μ ∈ Λ_{j−1}} π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt.

Proof. The proof uses induction on k to show that for any 0 ≤ k ≤ n

(18)  λ_{k+1}[x_0, …, x_k; x] f = Σ_{μ ∈ Λ_k} p_{μ_k}(x) π_μ ∫_{ℝ^d} D_{x − x_{μ_k}} D_{x_μ}^k f(t) M(t | x_μ, x) dt
        + Σ_{j=1}^k Σ_{μ ∈ Λ_{j−1}} Σ_{α ∈ I_j′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt,

from which (17) follows by setting k = n and rearranging summation and integration in the second term. Since (0, …, 0) ∈ I_0, equation (18) with k = 0 reads as

  λ_1[x_0; x] f = f(x) − f(x_0) = ∫_{ℝ^d} D_{x − x_0} f(t) M(t | x_0, x) dt,

which is clearly true. Hence, suppose that for some 0 ≤ k < n equation (18) has already been proved. In particular, we know that for β ∈ J_{k+1}

(19)  λ_{k+1}[x_0, …, x_k; x_β] f = Σ_{μ ∈ Λ_k} p_{μ_k}(x_β) π_μ ∫_{ℝ^d} D_{x_β − x_{μ_k}} D_{x_μ}^k f(t) M(t | x_μ, x_β) dt,

since the second term of (18) vanishes at all points of Ξ_N by the definition of the polynomials p_α^⊥. Applying (16) and recalling the linearity of directional derivatives then yields, for μ_k ∈ J_k,

(20)  p_{μ_k}(x) D_{x − x_{μ_k}} = Σ_{β ∈ J_{k+1}} p_β(x) p_{μ_k}(x_β) D_{x_β − x_{μ_k}} + Σ_{α ∈ I_{k+1}′} p_α^⊥(x) D_{d_{μ_k,α}}.

Inserting this into (18) gives

  λ_{k+1}[x_0, …, x_k; x] f = Σ_{μ ∈ Λ_k} Σ_{β ∈ J_{k+1}} p_β(x) p_{μ_k}(x_β) π_μ ∫_{ℝ^d} D_{x_β − x_{μ_k}} D_{x_μ}^k f(t) M(t | x_μ, x) dt
        + Σ_{μ ∈ Λ_k} Σ_{α ∈ I_{k+1}′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_k,α}} D_{x_μ}^k f(t) M(t | x_μ, x) dt

        + Σ_{j=1}^k Σ_{μ ∈ Λ_{j−1}} Σ_{α ∈ I_j′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt
      = Σ_{μ ∈ Λ_k} Σ_{β ∈ J_{k+1}} p_β(x) p_{μ_k}(x_β) π_μ ∫_{ℝ^d} D_{x_β − x_{μ_k}} D_{x_μ}^k f(t) M(t | x_μ, x) dt
        + Σ_{j=1}^{k+1} Σ_{μ ∈ Λ_{j−1}} Σ_{α ∈ I_j′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt,

and we substitute this and (19) into the recurrence relation (10) to obtain

  λ_{k+2}[x_0, …, x_{k+1}; x] f = λ_{k+1}[x_0, …, x_k; x] f − Σ_{β ∈ J_{k+1}} λ_{k+1}[x_0, …, x_k; x_β] f · p_β(x)
      = Σ_{μ ∈ Λ_k} Σ_{β ∈ J_{k+1}} p_β(x) p_{μ_k}(x_β) π_μ ∫_{ℝ^d} D_{x_β − x_{μ_k}} D_{x_μ}^k f(t) (M(t | x_μ, x) − M(t | x_μ, x_β)) dt
        + Σ_{j=1}^{k+1} Σ_{μ ∈ Λ_{j−1}} Σ_{α ∈ I_j′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt.

Since by (15)

  M(t | x_μ, x) − M(t | x_μ, x_β) = D_{x_β − x} M(t | x_μ, x_β, x),

we can apply partial integration to obtain

  λ_{k+2}[x_0, …, x_{k+1}; x] f = Σ_{μ ∈ Λ_k} Σ_{β ∈ J_{k+1}} p_β(x) p_{μ_k}(x_β) π_μ ∫_{ℝ^d} D_{x − x_β} D_{x_β − x_{μ_k}} D_{x_μ}^k f(t) M(t | x_μ, x_β, x) dt
        + Σ_{j=1}^{k+1} Σ_{μ ∈ Λ_{j−1}} Σ_{α ∈ I_j′} p_α^⊥(x) π_μ ∫_{ℝ^d} D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1} f(t) M(t | x_μ, x) dt.

Writing μ_{k+1} for β finally completes the induction.

Looking at (17) we observe that the n-th order differential operators

  D_α^n = Σ_{j=|α|}^n Σ_{μ ∈ Λ_{j−1}} π_μ D_{d_{μ_{j−1},α}} D_{x_μ}^{j−1},  α ∈ I_n′,

are in general inhomogeneous if α ∈ I_{n−1}′; thus, it is interesting to ask for minimal degree interpolation spaces where D_α^n is a homogeneous differential operator for all α ∈ I_n′. For that purpose note that D_α^n has to have a component of order |α| for the reproduction of p_α^⊥. So the question is equivalent to asking whether there exist minimal degree interpolation spaces such that d_{α,β} = 0 whenever |β| ≤ |α|. The answer is positive, and a class of examples is minimal degree interpolation with additional points. The derivation of a particularly simple formula for this case is based on the fact that we can give the coefficients d_{α,β} explicitly in

Lemma 2. Let the polynomials p_α, α ∈ I_n, and p_α^⊥, α ∈ I_n′, satisfy (7) and (8). Then for any α ∈ J_k, k ≤ n − 1,

(21)  p_α(x) (x − x_α) = Σ_{β ∈ J_{k+1}} p_β(x) p_α(x_β) (x_β − x_α) + Σ_{β ∈ J_{k+1}′} p_β^⊥(x) p_α(x_β) (x_β − x_α).

Proof. The proof works in exactly the same way as that of Lemma 1: it is again easily verified that both sides of (21) vanish at the x_β with |β| ≤ k and coincide at the x_β with |β| = k + 1. Since interpolation at the points x_β, |β| ≤ k + 1, is unique in Π_{k+1}^d, the polynomials have to be identical.

From (21) it now follows that

  d_{α,β} = 0 if |β| ≤ |α|,  and  d_{α,β} = p_α(x_β) (x_β − x_α) if |β| = |α| + 1,  α ∈ J_k, β ∈ I_{k+1}′,

which enables us to give the remainder formula for a minimal degree interpolation space with additional points.

Corollary 1. Let 𝒫(Ξ_N) be a minimal degree interpolation space of degree n with blockwise structure and additional points. Then

(22)  f(x) − L_n(f, x) = λ_{n+1}[x_0, …, x_n; x] f
      = Σ_{μ ∈ Λ_n} p_{μ_n}(x) π_μ ∫_{ℝ^d} D_{x − x_{μ_n}} D_{x_μ}^n f(t) M(t | x_μ, x) dt
        + Σ_{α ∈ I_n′} p_α^⊥(x) Σ_{μ ∈ Λ_{|α|−1}} p_{μ_{|α|−1}}(x_α) π_μ ∫_{ℝ^d} D_{x_α − x_{μ_{|α|−1}}} D_{x_μ}^{|α|−1} f(t) M(t | x_μ, x) dt.
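To close the section with a computation: in the univariate case every block J_k is a singleton and p_k is the classical Newton polynomial, so the finite difference (9)-(10) can be checked directly against the renormalized divided difference; a small sketch with illustrative nodes and f(x) = x³:

```python
import numpy as np

xs = [0.0, 1.0, 3.0]              # illustrative univariate nodes; each block J_k = {k}
f = lambda x: x ** 3

def p(k, x):
    """Univariate Newton fundamental polynomial p_k."""
    num = np.prod([x - xs[i] for i in range(k)])
    den = np.prod([xs[k] - xs[i] for i in range(k)])
    return num / den

def lam(k, x):
    """The finite difference (9)-(10), specialized to d = 1."""
    if k == 0:
        return f(x)
    return lam(k - 1, x) - lam(k - 1, xs[k - 1]) * p(k - 1, x)

def divdiff(nodes):
    """Classical divided difference f[nodes], by the usual recursion."""
    if len(nodes) == 1:
        return f(nodes[0])
    return (divdiff(nodes[1:]) - divdiff(nodes[:-1])) / (nodes[-1] - nodes[0])

# Renormalization: lam_{n+1}[x_0,...,x_n; x] f = f[x_0,...,x_n,x](x-x_0)...(x-x_n)
x = 2.5
lhs = lam(len(xs), x)
rhs = divdiff(xs + [x]) * np.prod([x - xi for xi in xs])
assert abs(lhs - rhs) < 1e-9
```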

5. Minimal Monomials

In this section we will introduce and investigate a particular minimal degree interpolation space P*(X_N), spanned by a minimal number of monomials which are, moreover, nested in an appropriate way. We will see that this space combines quite a few of the properties of least interpolation with the practical advantage of minimal memory consumption, which makes it particularly useful for practical purposes. Some numerical details are also discussed at the end of this section.

5.1. Construction

Since the construction of the space is quite intricate and overshadowed by the notation, let us first sketch an outline of the construction. The main idea is to choose the monomials which span the interpolation space in such a way that the respective set of multiindices, I_n, is a lower set; lower sets play an important role in the study of the multivariate Birkhoff interpolation problem (see e.g. [17,13], or [14] for a more extensive survey). That I_n is a lower set means that whenever α ∈ I_n and β ∈ ℕ^d_0 satisfies 0 ≤ β_i ≤ α_i, i = 1,…,d, then β ∈ I_n as well. This property will be ensured by choosing the complement spaces Q_k, k = 0,…,n, in such a way that they are spanned by polynomials which have only a single monomial with an index from I⁰_n in their leading term, i.e., polynomials x^α + q_α(x), α ∈ I⁰_n, where q_α is a polynomial of degree at most |α| such that all exponents of the leading term of q_α which are different from α belong to I_{|α|}. The crucial point of the construction is to do this in such a way that the complement ℕ^d_0 \ I_n ⊇ I⁰_n is an upper set (i.e., α ∈ I⁰_n and β_i ≥ α_i, i = 1,…,d, imply that β ∈ ℕ^d_0 \ I_n), which is possible due to the well-known fact that Q is the polynomial ideal associated to the finite variety X_N (cf. [4]). Indeed, if α ∈ I⁰_n, then p⊥_α(x) = x^α + q_α(x) belongs to Q, and so does ξ_i p⊥_α(x) = x^{α+e_i} + ξ_i q_α(x) ∈ Q. In other words, α + e_i belongs to I⁰ as well.
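Operationally, the lower-set property is closure of the index set under decreasing any single exponent; a small hypothetical checker (not from the paper) makes this concrete:

```python
def is_lower_set(indices):
    """A finite set I of d-tuples is a lower set iff for every alpha in I,
    decreasing any positive coordinate by one stays inside I."""
    I = set(indices)
    for a in I:
        for i, ai in enumerate(a):
            if ai > 0 and a[:i] + (ai - 1,) + a[i + 1:] not in I:
                return False
    return True
```

For instance, the exponent set of Example 5 below passes this check, while a set containing (2,0) but not (1,0) fails it.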
What remains is to find the minimal indices α ∈ I⁰_n, i.e., those elements α ∈ I⁰_n which cannot be written as α = β + γ with β ∈ I⁰_n and 0 ≠ γ ∈ ℕ^d_0; these indices will, however, be determined automatically in the process of computing the Newton basis. Let us now turn to the details of the construction. Still we assume X_N = {x_0,…,x_N} to be a finite set of pairwise distinct points. We claim that there exists a minimal degree interpolation space of degree, say, n which is spanned by x^α, α ∈ I_n, and which has the additional property that the complementary indices satisfy the following two conditions:
1. α ∈ I⁰_k, k < n, implies α + e_i ∈ I⁰_{k+1}, i = 1,…,d;
2. the polynomials p⊥_α, α ∈ I⁰_n, satisfy

    (23) the only exponent from I⁰_n appearing in the leading term of p⊥_α is α itself.

Note that (23) is only a reformulation of the fact that the part of the leading term of p⊥_α which has exponents from I⁰ consists of x^α only. We construct the space P*(X_N) by an inductive process, subsequently generating P*(X_N) ∩ Π^d_k, k = 0,…,n.
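The minimal indices of I⁰_n are then just the minimal elements of the complement, viewed as an upper set of multiindices; a hypothetical helper (names and representation chosen here, not the paper's) extracts them from a finite portion of that set:

```python
def minimal_indices(upper):
    """Return the elements alpha of `upper` that cannot be written as
    alpha = beta + gamma with beta in `upper` and gamma a nonzero
    multiindex, i.e. the minimal generators of the upper set."""
    U = set(upper)

    def dominated(a):
        # a is non-minimal iff some other element lies componentwise below it
        return any(b != a and all(bi <= ai for bi, ai in zip(b, a)) for b in U)

    return sorted(a for a in U if not dominated(a))
```

For the point set of Example 5 below, the complement truncated to degree 4 has minimal elements (2,0) and (1,3); the first belongs to the circle polynomial, the second presumably to the additional degree-4 ideal element mentioned in Remark 2.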

For k = 0 we define p_0 = 1 and I_0 = {0}; so let us suppose that for some k, 0 ≤ k < n, we have already constructed a set I_k of multiindices, Newton fundamental polynomials p_α, α ∈ I_k, and the complementary basis p⊥_α, α ∈ I⁰_k, with the above properties. Since the polynomials ξ_i p⊥_α(x), α ∈ I⁰_k, also annihilate X_N, we set

    I⁰_{k+1} = I⁰_k ∪ {α + e_i : i = 1,…,d, α ∈ I⁰_k},   I_{k+1} = {α : |α| ≤ k+1} \ I⁰_{k+1},

and p⊥_{α+e_i}(x) = ξ_i p⊥_α(x), i = 1,…,d, α ∈ I⁰_k. If for some β, |β| = k+1, there are several ways to write β in the form β = α + e_i, 1 ≤ i ≤ d, α ∈ I⁰_k, then we choose the lexicographically largest α which satisfies β = α + e_i for some 1 ≤ i ≤ d. The polynomials p⊥_α, α ∈ J⁰_{k+1}, constructed this way are well-defined and vanish on X_N. Replacing n by k in (23) shows that the span of p⊥_α, α ∈ J⁰_{k+1}, has the same dimension as the span of x^α, α ∈ J⁰_{k+1}, and still satisfies (23) with k+1 instead of n. Hence the polynomials p⊥_α, α ∈ I⁰_{k+1}, and x^α, α ∈ I_{k+1}, define a basis of Π^d_{k+1}. Let L_k denote the interpolation operator with respect to the points x_α, α ∈ I_k, constructed so far, and set, for α ∈ J_{k+1},

    (24) ρ_α(x) = x^α − L_k((·)^α; x).

Clearly, these polynomials annihilate Ξ_k = {x_α : α ∈ I_k}. For these polynomials ρ_α, α ∈ J_{k+1}, we now have to decide whether they belong to P*_{k+1}(X_N) or to Q_{k+1}. For that purpose we arrange the multiindices α ∈ J_{k+1} in lexicographical order and again proceed inductively. Let us assume that for some α ∈ J_{k+1} we have already obtained points x_β and polynomials p_β, β ∈ J_{k+1}, β < α, which satisfy

    (25) p_β(x_γ) = δ_{β,γ},  γ ∈ I_k ∪ {γ ∈ J_{k+1} : γ < α},  β ∈ J_{k+1}, β < α.

Of course, this is trivial for the first α ∈ J_{k+1}, as no conditions are imposed then. Next we consider

    ρ̃_α(x) = ρ_α(x) − Σ_{β∈J_{k+1}, β<α} ρ_α(x_β) p_β(x).

By construction, ρ̃_α vanishes on Ξ_k ∪ {x_β : β < α}; thus, if ρ̃_α also vanishes on Y := X_N \ (Ξ_k ∪ {x_β : β < α}), then it vanishes on all of X_N and consequently belongs to Q. In that case we set I⁰_{k+1} = I⁰_{k+1} ∪ {α}, I_{k+1} = I_{k+1} \ {α} and p⊥_α(x) = ρ̃_α(x). Note that despite adding α to I⁰_{k+1}, the property (23) still remains valid for I⁰_{k+1}.
Otherwise, if ρ̃_α does not vanish on Y, then there is some x_α ∈ Y such that ρ̃_α(x_α) ≠ 0. We then set p_α(x) = ρ̃_α(x)/ρ̃_α(x_α) and

    p_β(x) = p_β(x) − p_β(x_α) p_α(x),  β ∈ J_{k+1}, β < α,

and (25) is now satisfied by the polynomials p_β, β ∈ J_{k+1}, β ≤ α. Formally, the decision can be written as

    (26) ρ̃_α(Y) = 0  ⇒  α ∈ I⁰_{k+1}, p⊥_α = ρ̃_α;   ρ̃_α(Y) ≠ 0  ⇒  α ∈ I_{k+1}, p_α = ρ̃_α/ρ̃_α(x_α).

This orthogonalization process has to be repeated as long as X_N \ {x_α : α ∈ I_n} is nonempty, and it finally yields the Newton fundamental polynomials of level k+1. Therefore we can conclude that each of these polynomials lies in the span of x^α, α ∈ I_{k+1}, and that all the polynomials p⊥_α satisfy (23). Hence, for k+1 = n, the construction is complete. Let us finally remark that the actual value of n does not come up in the construction and that the construction can be used to determine it algorithmically.

Remark 1. The crucial point in the above construction was to make sure that α ∈ I⁰_k implies α + e_i ∈ I⁰_{k+1}, i = 1,…,d, k < n. This is in turn equivalent to α ∈ I_k implying α − e_i ∈ I_{k−1}, i = 1,…,d, whenever α − e_i is defined. By iteration we obtain that

    (27) α ∈ I_{|α|}  ⇒  {β : β ≤ α, |β| = k} ⊆ I_k,  k ≤ |α|.

Here β ≤ α has to be understood in the sense that α − β ∈ ℕ^d_0.

Remark 2. Let us briefly point to the fact that the construction presented above has an interesting by-product: if we collect all the polynomials p⊥_α which were decided in (26) to belong to Q, then this set of polynomials is a minimal Groebner basis for the ideal Q associated to the finite variety X_N with respect to the graded lexicographical ordering. This statement is only correct up to basis elements of degree n+1, but those can easily be obtained by an additional step of the same process.

Remark 3. We did not fix in (26) which particular element from Y to choose as x_α. This leaves room for several pivoting strategies. The straightforward numerical choice would be to take that element at which the absolute value of ρ̃_α becomes maximal.

Before we turn to properties of P*(X_N), let us first rephrase the above construction as a "cooking recipe" for the generation of P*(X_N). Recall that the algorithm does not know n a priori but computes this number "on the fly".

Algorithm 1. Construction of P*(X_N).
Input: N ∈ ℕ and x_1,…,x_N ∈ ℝ^d.
Initialization: n := 0; I⁰_{−1} := ∅; Ξ := {x_1,…,x_N};
Computation:
while Ξ ≠ ∅ do
  for α ∈ I⁰_{n−1}, |α| = n−1 do
    for i = 1,…,d do
      p⊥_{α+e_i}(x) := ξ_i p⊥_α(x); I⁰_n := I⁰_n ∪ {α + e_i};
    done;
  done;
  I_n := {α : |α| ≤ n} \ I⁰_n;
  for α ∈ I_n, |α| = n do
    p_α(x) := x^α − L_{n−1}((·)^α; x);

  done;
  for α ∈ I_n, |α| = n do
    if p_α(Ξ) = 0 then
      p⊥_α(x) := p_α(x); I_n := I_n \ {α}; I⁰_n := I⁰_n ∪ {α};
    else
      choose x_α ∈ {x ∈ Ξ : p_α(x) ≠ 0};
      p_α(x) := p_α(x)/p_α(x_α);
      for β ∈ J_n, β ≠ α do
        p_β(x) := p_β(x) − p_β(x_α) p_α(x);
      done;
      Ξ := Ξ \ {x_α};
    fi;
  done;
  n := n + 1;
done;
Output: Degree n, index sets I_n, Newton fundamental polynomials p_α, α ∈ I_n.

It should be remarked here that P*(X_N) is uniquely determined by the order in which we process the multiindices in the for loops. Before we turn to the description of some properties of the minimal degree interpolation space P*(X_N), let us first illustrate the above construction by looking at an example.

Example 5. We consider the case that x_0,…,x_7 are the 8th roots of unity on the unit circle in ℝ², i.e., x_j = (ξ_j, η_j) := (cos(jπ/4), sin(jπ/4)). Remark that their cardinality does not match the dimension of any of the spaces Π²_n, and that there exists a quadratic polynomial, namely ξ² + η² − 1, which vanishes at all the interpolation points. For convenience we denote points in ℝ² by (ξ, η). We proceed by the degree, say k, where for k = 0 the situation is simple, as we use the polynomial p_(0,0)(ξ,η) ≡ 1; clearly p_(0,0)(x_0) = 1, so we choose x_(0,0) = x_0. Turning to the case k = 1, we first consider the polynomial η − η_0 p_(0,0)(x), which clearly vanishes at x_0 but has nonzero value at x_1, which we normalize to be 1, and (preliminarily) call the resulting polynomial p_(0,1). Also we fix x_(0,1) = x_1. Next we take ξ − ξ_0 p_(0,0)(x) − (ξ_1 − ξ_0) p_(0,1)(x), which vanishes at x_0, x_1 and has nonzero value at x_2. Again we renormalize, set x_(1,0) = x_2, call the resulting polynomial p_(1,0), and then replace p_(0,1) by p_(0,1) − p_(0,1)(ξ_2, η_2) p_(1,0). This yields the linear Newton fundamental polynomials

    p_(0,1)(ξ,η) = p_1(ξ,η) = (1 − √2)^{−1} (−ξ − η + 1),
    p_(1,0)(ξ,η) = p_2(ξ,η) = (√2 − 2)^{−1} (ξ + (√2 − 1)η − 1).

Proceeding with k = 2, the same procedure gives the polynomials

    p_(0,2)(ξ,η) = p_3(ξ,η) = (7 − 5√2)^{−1} ((3 − 2√2)η² − (7 − 5√2)ξη − (3 − 2√2)η),
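The linear fundamental polynomials just obtained can be sanity-checked numerically. The simplified forms below, p_(0,1)(ξ,η) = (1+√2)(ξ+η−1) and p_(1,0)(ξ,η) = (√2−2)^{−1}(ξ+(√2−1)η−1), were recomputed here from the construction; the check verifies the Newton fundamental property p_β(x_γ) = δ_{β,γ} on x_0, x_1 = x_(0,1), x_2 = x_(1,0):

```python
import math

s = math.sqrt(2.0)
# x_j = (cos(j*pi/4), sin(j*pi/4)) for j = 0, 1, 2
x0, x1, x2 = (1.0, 0.0), (s / 2, s / 2), (0.0, 1.0)

# linear Newton fundamental polynomials of Example 5 (simplified forms)
p01 = lambda xi, eta: (1 + s) * (xi + eta - 1)            # point x_(0,1) = x1
p10 = lambda xi, eta: (xi + (s - 1) * eta - 1) / (s - 2)  # point x_(1,0) = x2

for p, own in ((p01, x1), (p10, x2)):
    assert abs(p(*own) - 1) < 1e-12       # value 1 at its own point
    for q in (x0, x1, x2):
        if q is not own:
            assert abs(p(*q)) < 1e-12     # vanishes at the other points
```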

    p_(1,1)(ξ,η) = p_4(ξ,η) = (4√2 − 6)^{−1} ((4 − 3√2)ξη + (4 − 3√2)η² + (3 − 2√2)ξ − (1 − √2)η − (3 − 2√2)).

Now the final polynomial to be checked can be computed to be ξ² + η² − 1. Of course, this polynomial vanishes at all the remaining points and therefore is the first choice for an element of Q. That is, we set p⊥_(2,0)(ξ,η) = ξ² + η² − 1 and put the index (2,0) into I⁰₂. For k = 3 the only exponents to be checked are (0,3) and (1,2), since the other two indices, (3,0) and (2,1), automatically belong to I⁰₃. The algorithm then yields the cubic Newton fundamental polynomials p_(0,3) = p_5 and p_(1,2) = p_6, and for k = 4 we finally obtain p_(0,4) = p_7; their explicit coefficients are rather unwieldy. Summarizing, we can note that the minimal degree interpolation space is

    P*(X_N) = span {1, ξ, η, ξη, η², ξη², η³, η⁴}.

The above Newton fundamental polynomials have been computed using MAPLE V.
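The whole of Algorithm 1 can be prototyped numerically. The sketch below is an illustrative reimplementation, not the paper's code: it works on point values only (no symbolic polynomials), uses the max-absolute-value pivoting of Remark 3, and the function name `minimal_monomials` is invented here. A candidate exponent is accepted when its residual, after elimination against the exponents already accepted, is nonzero at some unused point; otherwise it is recorded as the leading exponent of an ideal element:

```python
import math
from itertools import product

def mono(x, a):
    """Evaluate the monomial x^a at the point x."""
    v = 1.0
    for xi, ai in zip(x, a):
        v *= xi ** ai
    return v

def minimal_monomials(points, tol=1e-9):
    """Greedy, block-by-degree selection of the exponent set I_n of a
    minimal degree interpolation space spanned by monomials
    (numerical sketch of Algorithm 1, pivoting as in Remark 3)."""
    d, n_pts = len(points[0]), len(points)
    chosen = []       # (exponent, pivot index, residual vector on all points)
    ideal_leads = []  # exponents rejected into the upper set I'
    free = set(range(n_pts))
    deg = 0
    while len(chosen) < n_pts:
        for a in sorted(a for a in product(range(deg + 1), repeat=d)
                        if sum(a) == deg):
            # skip exponents in the upper set of already rejected ones
            if any(all(ai >= bi for ai, bi in zip(a, b)) for b in ideal_leads):
                continue
            r = [mono(x, a) for x in points]
            # eliminate against previously accepted (exponent, pivot) pairs
            for _, piv, rb in chosen:
                c = r[piv] / rb[piv]
                r = [ri - c * rbi for ri, rbi in zip(r, rb)]
            piv = max(free, key=lambda i: abs(r[i]), default=None)
            if piv is not None and abs(r[piv]) > tol:
                chosen.append((a, piv, r))
                free.discard(piv)
            else:
                ideal_leads.append(a)
        deg += 1
    return [a for a, _, _ in chosen], ideal_leads

# the 8th roots of unity from Example 5
pts = [(math.cos(j * math.pi / 4), math.sin(j * math.pi / 4))
       for j in range(8)]
I, leads = minimal_monomials(pts)
```

On the eight points this recovers the exponent set {(0,0),(0,1),(1,0),(0,2),(1,1),(0,3),(1,2),(0,4)}, i.e. the span {1, ξ, η, η², ξη, η³, ξη², η⁴}, with (2,0), the leading exponent of ξ²+η²−1, as the first rejected index.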

5.2. Properties

First we notice that P*(X_N) uses a minimal number of monomials, whose exponents α ∈ I_n have an additional minimality property among all properly nested index sets: indeed, this particular space uses the monomials with lexicographically minimal exponents. Although this property is of no practical use, it serves the purpose of making P*(·) unique. In order to investigate the invariance properties of P*, we consider P* as a map that takes X_N to an (N+1)-dimensional subspace of Π^d; this is possible since, by the construction from the last subsection, P* assigns a unique polynomial subspace to each subset of ℝ^d consisting of N+1 elements. Let φ : ℝ^d → ℝ^d be some map; then P* is called φ-invariant if for any finite subset X_N

    P*(φ(X_N)) = P*(X_N), where φ(X_N) = {φ(x_0),…,φ(x_N)}.

We will show that the construction rule for P*(X_N) implies that the space generated this way is
1. scale invariant,
2. translation invariant,
and thus closed under taking derivatives as well. Clearly, a minimal degree interpolation space is scale-invariant whenever it is spanned by homogeneous polynomials p_α: the (still homogeneous) polynomials p_α(·/c) = c^{−|α|} p_α are obviously a basis for the minimal degree interpolation space with respect to cX_N with the same minimality properties as the original one. Since it is indeed spanned by homogeneous polynomials, namely monomials, P*(X_N) is scale-invariant, i.e.,

    P*(X_N) = P*(cX_N), c ∈ ℝ, c ≠ 0.

A little more effort has to be taken to prove the translation invariance of P*(X_N), i.e.,

    P*(X_N) = P*(X_N − y), y ∈ ℝ^d.

A set of Newton fundamental polynomials with respect to X_N − y is given by the polynomials p_α(· + y), α ∈ I_n, and their span, say P, is again a minimal degree interpolation space. On the other hand, we know that P = span {(· + y)^α : α ∈ I_n}. Since

    (x + y)^α = Σ_{β≤α} (α choose β) y^{α−β} x^β =: Σ_{β≤α} c_β x^β,

we can apply Remark 1 to obtain that

    P*(X_N − y) = span {(x + y)^α : α ∈ I_n} ⊆ span {x^β : β ∈ I_n} = P*(X_N).

Replacing X_N by X_N + y also yields the converse inclusion. Hence P*(X_N) is also translation-invariant. Together these two invariance properties imply that

P*(X_N) is D-invariant, too, i.e., the space is closed under the operation of taking derivatives. It has been pointed out in [5] that least interpolation is not invariant under arbitrary rotations of the coordinate system. The same holds true for P*(X_N). Instead of dwelling on this in general, let us consider one example here which nevertheless illuminates the phenomenon.

Example 6. Let the points x_0,…,x_N lie on a line passing through the origin, and let us rotate them such that the origin remains fixed. Clearly, as long as the line which contains the points is not perpendicular to the ξ_d-axis, P*(X_N) is always a minimal degree interpolation space with respect to the rotated points. This is due to the lexicographical ordering, which in this case always gives us the polynomials 1, ξ_d, …, ξ_d^N as a basis for P*(X_N). However, if the line coincides with one coordinate axis, say the ξ_k-axis, then the projection onto this coordinate axis, spanned by 1, ξ_k, …, ξ_k^N, is the one and only minimal degree interpolation space which uses a minimal number of monomials. Hence, in this case, P*(X_N) = span {1, ξ_k, …, ξ_k^N}. So P*(X_N) is not rotation-invariant.

Least interpolation was described in (6) by a differential operator involving the leading terms of all polynomials which vanish at the interpolation points. It is interesting that something similar holds for P*(X_N), too, involving only certain partial derivatives. Indeed, if we define

    I⁰ = ℕ^d_0 \ I_n = I⁰_n ∪ {α ∈ ℕ^d_0 : |α| > n},

then we can trivially reformulate the fact that P*(X_N) is spanned by the monomials x^α, α ∈ I_n, as

    P*(X_N) = ∩_{α∈I⁰} {p ∈ Π^d : D^α p ≡ 0} = ∩_{α∈I⁰} ker D^α.

5.3. Numerical performance

In this section we finally lay out that Lagrange interpolation from the space P*(X_N) can be handled numerically in a very efficient way.
In particular, for polynomials in this space one can not only carry out the vector space operations (addition/subtraction as well as multiplication by a real number) with N operations, which is trivial; it is even possible to state a Horner scheme which evaluates polynomials in P*(X_N) with the optimal number of N nested additions and multiplications. This in turn means that P*(X_N) is very suitable with respect to speed and numerical robustness, since a minimum of operations also reduces roundoff errors to the greatest possible extent. In order to explain this in greater detail, let us first recall the multivariate Horner scheme (see [18]), which evaluates a polynomial

    p(x) = Σ_{|α|≤n} c_α x^α, n ∈ ℕ,

recursively by the nested multiplications

    p(x) = ξ_1 ( ⋯ ξ_1 ( ξ_1 p_n(x̂_1) + p_{n−1}(x̂_1) ) + ⋯ ) + p_0(x̂_1),

where x̂_1 = (ξ_2,…,ξ_d) and

    p_j(x̂_1) = Σ_{|α|≤n, α_1=j} c_α x^{α − j e_1},  j = 0,…,n.

The same process, with respect to ξ_2, is then applied to each of the polynomials p_j(x̂_1), which are expanded as

    p_j(x̂_1) = ξ_2 ( ⋯ ξ_2 ( ξ_2 p_{j,n−j}(x̂_{1,2}) + p_{j,n−j−1}(x̂_{1,2}) ) + ⋯ ) + p_{j,0}(x̂_{1,2}),  j = 0,…,n,

where x̂_{1,2} = (ξ_3,…,ξ_d), and so on. Following [18], we observe that in this evaluation algorithm the coefficients c_α, |α| ≤ n, are processed in descending lexicographical order, i.e., c_α is accessed earlier than c_β if α appears later than β in the lexicographical ordering. In particular, every coefficient is accessed only once in the evaluation process. Since in practical applications the coefficients of a polynomial are kept in an array of the form c[0...N], there is no need to permanently apply a function which transforms multiindices into linear indices; conveniently, the coefficients are arranged in graded lexicographical order. Nevertheless, when using the above Horner scheme it suffices to provide only a table which maps the lexicographical arrangement of the multiindices into the linear ordering. To derive the Horner scheme for polynomials in P*(X_N), we define i_1 = max{α_1 : α ∈ I_n} and note that, by (27), the indices in the set

    (28) Î_n(i_1) = {α̂ ∈ ℕ^{d−1}_0 : (i_1, α̂) ∈ I_n}

again satisfy

    α̂ ∈ Î_n(i_1)  ⇒  {β̂ ∈ ℕ^{d−1}_0 : β̂ ≤ α̂} ⊆ Î_n(i_1).

Applying (27) once more, we observe that together with (i_1, α̂) we also have (j, α̂) ∈ I_n, j = 0,…,i_1, and that the respective sets Î_n(j), defined according to (28), satisfy

    (29) α̂ ∈ Î_n(j)  ⇒  {β̂ ∈ ℕ^{d−1}_0 : β̂ ≤ α̂} ⊆ Î_n(j),  j = 0,…,i_1.

Hence, we can expand p ∈ P*(X_N), say

    (30) p(x) = Σ_{α∈I_n} c_α x^α,

as

    p(x) = ξ_1 ( ⋯ ξ_1 ( ξ_1 p_{i_1}(x̂_1) + p_{i_1−1}(x̂_1) ) + ⋯ ) + p_0(x̂_1).

Noticing that the exponents of the (d−1)-variate polynomials p_j satisfy the respective version of (27) again, we can proceed recursively to finally evaluate the polynomial.
Note that this algorithm needs exactly as many additions and multiplications as the polynomial has coecients; thus, evaluation of a polynomial can be done with 2N arithmetic operations.
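The recursive structure of this scheme can be sketched compactly; the following is an illustrative sketch (the dict-of-exponents storage and the name `horner` are choices made here, not the paper's), which splits off powers of the first variable and recurses on the remaining ones:

```python
def horner(coeffs, x):
    """Evaluate p(x) = sum_a coeffs[a] * x**a by the nested multivariate
    Horner scheme: split off the power of the first variable and recurse
    on the remaining ones.  `coeffs` maps d-tuples of exponents to numbers;
    when the exponent set is a lower set, every slice p_j below is
    nonempty and each coefficient is touched exactly once."""
    if not coeffs:
        return 0.0
    if len(next(iter(coeffs))) == 0:      # no variables left
        return coeffs[()]
    i1 = max(a[0] for a in coeffs)        # i_1 = max first exponent
    val = 0.0
    for j in range(i1, -1, -1):           # (... (p_{i1} x_1 + p_{i1-1}) x_1 + ...)
        p_j = {a[1:]: c for a, c in coeffs.items() if a[0] == j}
        val = val * x[0] + horner(p_j, x[1:])
    return val
```

Evaluating a polynomial supported on the lower set of Example 5 this way agrees with naive term-by-term summation.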


More information

Mathematical Olympiad Training Polynomials

Mathematical Olympiad Training Polynomials Mathematical Olympiad Training Polynomials Definition A polynomial over a ring R(Z, Q, R, C) in x is an expression of the form p(x) = a n x n + a n 1 x n 1 + + a 1 x + a 0, a i R, for 0 i n. If a n 0,

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Abstract. We show that a proper coloring of the diagram of an interval order I may require 1 +

Abstract. We show that a proper coloring of the diagram of an interval order I may require 1 + Colorings of Diagrams of Interval Orders and -Sequences of Sets STEFAN FELSNER 1 and WILLIAM T. TROTTER 1 Fachbereich Mathemati, TU-Berlin, Strae des 17. Juni 135, 1000 Berlin 1, Germany, partially supported

More information

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3. Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

3.1 Interpolation and the Lagrange Polynomial

3.1 Interpolation and the Lagrange Polynomial MATH 4073 Chapter 3 Interpolation and Polynomial Approximation Fall 2003 1 Consider a sample x x 0 x 1 x n y y 0 y 1 y n. Can we get a function out of discrete data above that gives a reasonable estimate

More information

4.6 Bases and Dimension

4.6 Bases and Dimension 46 Bases and Dimension 281 40 (a) Show that {1,x,x 2,x 3 } is linearly independent on every interval (b) If f k (x) = x k for k = 0, 1,,n, show that {f 0,f 1,,f n } is linearly independent on every interval

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Rewriting Polynomials

Rewriting Polynomials Rewriting Polynomials 1 Roots and Eigenvalues the companion matrix of a polynomial the ideal membership problem 2 Automatic Geometric Theorem Proving the circle theorem of Appolonius 3 The Division Algorithm

More information

Method of Frobenius. General Considerations. L. Nielsen, Ph.D. Dierential Equations, Fall Department of Mathematics, Creighton University

Method of Frobenius. General Considerations. L. Nielsen, Ph.D. Dierential Equations, Fall Department of Mathematics, Creighton University Method of Frobenius General Considerations L. Nielsen, Ph.D. Department of Mathematics, Creighton University Dierential Equations, Fall 2008 Outline 1 The Dierential Equation and Assumptions 2 3 Main Theorem

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Polynomial Interpolation Part II

Polynomial Interpolation Part II Polynomial Interpolation Part II Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Exercise Session

More information

4.3 - Linear Combinations and Independence of Vectors

4.3 - Linear Combinations and Independence of Vectors - Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

More information

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of

More information

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru

More information

response surface work. These alternative polynomials are contrasted with those of Schee, and ideas of

response surface work. These alternative polynomials are contrasted with those of Schee, and ideas of Reports 367{Augsburg, 320{Washington, 977{Wisconsin 10 pages 28 May 1997, 8:22h Mixture models based on homogeneous polynomials Norman R. Draper a,friedrich Pukelsheim b,* a Department of Statistics, University

More information

Congurations of periodic orbits for equations with delayed positive feedback

Congurations of periodic orbits for equations with delayed positive feedback Congurations of periodic orbits for equations with delayed positive feedback Dedicated to Professor Tibor Krisztin on the occasion of his 60th birthday Gabriella Vas 1 MTA-SZTE Analysis and Stochastics

More information

Plan of Class 4. Radial Basis Functions with moving centers. Projection Pursuit Regression and ridge. Principal Component Analysis: basic ideas

Plan of Class 4. Radial Basis Functions with moving centers. Projection Pursuit Regression and ridge. Principal Component Analysis: basic ideas Plan of Class 4 Radial Basis Functions with moving centers Multilayer Perceptrons Projection Pursuit Regression and ridge functions approximation Principal Component Analysis: basic ideas Radial Basis

More information

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f 1 Bernstein's analytic continuation of complex powers c1995, Paul Garrett, garrettmath.umn.edu version January 27, 1998 Analytic continuation of distributions Statement of the theorems on analytic continuation

More information

2 Real equation solving the input equation of the real hypersurface under consideration, we are able to nd for each connected component of the hypersu

2 Real equation solving the input equation of the real hypersurface under consideration, we are able to nd for each connected component of the hypersu Polar Varieties, Real Equation Solving and Data-Structures: The Hypersurface Case B. Bank 1, M. Giusti 2, J. Heintz 3, G. M. Mbakop 1 February 28, 1997 Dedicated to Shmuel Winograd Abstract In this paper

More information

Homework 2 Solutions

Homework 2 Solutions Math 312, Spring 2014 Jerry L. Kazdan Homework 2 s 1. [Bretscher, Sec. 1.2 #44] The sketch represents a maze of one-way streets in a city. The trac volume through certain blocks during an hour has been

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

Systems of Linear Equations

Systems of Linear Equations LECTURE 6 Systems of Linear Equations You may recall that in Math 303, matrices were first introduced as a means of encapsulating the essential data underlying a system of linear equations; that is to

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

PARAMETER IDENTIFICATION IN THE FREQUENCY DOMAIN. H.T. Banks and Yun Wang. Center for Research in Scientic Computation

PARAMETER IDENTIFICATION IN THE FREQUENCY DOMAIN. H.T. Banks and Yun Wang. Center for Research in Scientic Computation PARAMETER IDENTIFICATION IN THE FREQUENCY DOMAIN H.T. Banks and Yun Wang Center for Research in Scientic Computation North Carolina State University Raleigh, NC 7695-805 Revised: March 1993 Abstract In

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Properties of Arithmetical Functions

Properties of Arithmetical Functions Properties of Arithmetical Functions Zack Clark Math 336, Spring 206 Introduction Arithmetical functions, as dened by Delany [2], are the functions f(n) that take positive integers n to complex numbers.

More information

Contents. 2 Partial Derivatives. 2.1 Limits and Continuity. Calculus III (part 2): Partial Derivatives (by Evan Dummit, 2017, v. 2.

Contents. 2 Partial Derivatives. 2.1 Limits and Continuity. Calculus III (part 2): Partial Derivatives (by Evan Dummit, 2017, v. 2. Calculus III (part 2): Partial Derivatives (by Evan Dummit, 2017, v 260) Contents 2 Partial Derivatives 1 21 Limits and Continuity 1 22 Partial Derivatives 5 23 Directional Derivatives and the Gradient

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 6: Scattered Data Interpolation with Polynomial Precision Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu

More information

Lifting to non-integral idempotents

Lifting to non-integral idempotents Journal of Pure and Applied Algebra 162 (2001) 359 366 www.elsevier.com/locate/jpaa Lifting to non-integral idempotents Georey R. Robinson School of Mathematics and Statistics, University of Birmingham,

More information

B y Werner M. Seiler. School of Physics and Materialsy.

B y Werner M. Seiler. School of Physics and Materialsy. Generalized Tableaux and Formally Well-Posed Initial Value Problems B y Werner M. Seiler School of Physics and Materialsy Lancaster University Bailrigg, LA 4YB, UK Email: W.SeilerLancaster.ac.uk We generalize

More information

(1.) For any subset P S we denote by L(P ) the abelian group of integral relations between elements of P, i.e. L(P ) := ker Z P! span Z P S S : For ea

(1.) For any subset P S we denote by L(P ) the abelian group of integral relations between elements of P, i.e. L(P ) := ker Z P! span Z P S S : For ea Torsion of dierentials on toric varieties Klaus Altmann Institut fur reine Mathematik, Humboldt-Universitat zu Berlin Ziegelstr. 13a, D-10099 Berlin, Germany. E-mail: altmann@mathematik.hu-berlin.de Abstract

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Group Theory. 1. Show that Φ maps a conjugacy class of G into a conjugacy class of G.

Group Theory. 1. Show that Φ maps a conjugacy class of G into a conjugacy class of G. Group Theory Jan 2012 #6 Prove that if G is a nonabelian group, then G/Z(G) is not cyclic. Aug 2011 #9 (Jan 2010 #5) Prove that any group of order p 2 is an abelian group. Jan 2012 #7 G is nonabelian nite

More information

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) =

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) = Math 395. Quadratic spaces over R 1. Algebraic preliminaries Let V be a vector space over a field F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v) for all v V and c F, and

More information

Solving Systems of Equations Row Reduction

Solving Systems of Equations Row Reduction Solving Systems of Equations Row Reduction November 19, 2008 Though it has not been a primary topic of interest for us, the task of solving a system of linear equations has come up several times. For example,

More information

COS 424: Interacting with Data

COS 424: Interacting with Data COS 424: Interacting with Data Lecturer: Rob Schapire Lecture #14 Scribe: Zia Khan April 3, 2007 Recall from previous lecture that in regression we are trying to predict a real value given our data. Specically,

More information

The 70th William Lowell Putnam Mathematical Competition Saturday, December 5, 2009

The 70th William Lowell Putnam Mathematical Competition Saturday, December 5, 2009 The 7th William Lowell Putnam Mathematical Competition Saturday, December 5, 9 A1 Let f be a real-valued function on the plane such that for every square ABCD in the plane, f(a) + f(b) + f(c) + f(d) =.

More information

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones Chapter 2 THE COMPLEMENTARY PIVOT ALGORITHM AND ITS EXTENSION TO FIXED POINT COMPUTING LCPs of order 2 can be solved by drawing all the complementary cones in the q q 2 - plane as discussed in Chapter.

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS. Eduardo D. Sontag. SYCON - Rutgers Center for Systems and Control

REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS. Eduardo D. Sontag. SYCON - Rutgers Center for Systems and Control REMARKS ON THE TIME-OPTIMAL CONTROL OF A CLASS OF HAMILTONIAN SYSTEMS Eduardo D. Sontag SYCON - Rutgers Center for Systems and Control Department of Mathematics, Rutgers University, New Brunswick, NJ 08903

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

Formal Groups. Niki Myrto Mavraki

Formal Groups. Niki Myrto Mavraki Formal Groups Niki Myrto Mavraki Contents 1. Introduction 1 2. Some preliminaries 2 3. Formal Groups (1 dimensional) 2 4. Groups associated to formal groups 9 5. The Invariant Differential 11 6. The Formal

More information

Introduction to Arithmetic Geometry Fall 2013 Lecture #17 11/05/2013

Introduction to Arithmetic Geometry Fall 2013 Lecture #17 11/05/2013 18.782 Introduction to Arithmetic Geometry Fall 2013 Lecture #17 11/05/2013 Throughout this lecture k denotes an algebraically closed field. 17.1 Tangent spaces and hypersurfaces For any polynomial f k[x

More information

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog,

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog, A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact Data Bob Anderssen and Frank de Hoog, CSIRO Division of Mathematics and Statistics, GPO Box 1965, Canberra, ACT 2601, Australia

More information

NOTES (1) FOR MATH 375, FALL 2012

NOTES (1) FOR MATH 375, FALL 2012 NOTES 1) FOR MATH 375, FALL 2012 1 Vector Spaces 11 Axioms Linear algebra grows out of the problem of solving simultaneous systems of linear equations such as 3x + 2y = 5, 111) x 3y = 9, or 2x + 3y z =

More information

1 Ordinary points and singular points

1 Ordinary points and singular points Math 70 honors, Fall, 008 Notes 8 More on series solutions, and an introduction to \orthogonal polynomials" Homework at end Revised, /4. Some changes and additions starting on page 7. Ordinary points and

More information

Pade approximants and noise: rational functions

Pade approximants and noise: rational functions Journal of Computational and Applied Mathematics 105 (1999) 285 297 Pade approximants and noise: rational functions Jacek Gilewicz a; a; b;1, Maciej Pindor a Centre de Physique Theorique, Unite Propre

More information

University of Missouri. In Partial Fulllment LINDSEY M. WOODLAND MAY 2015

University of Missouri. In Partial Fulllment LINDSEY M. WOODLAND MAY 2015 Frames and applications: Distribution of frame coecients, integer frames and phase retrieval A Dissertation presented to the Faculty of the Graduate School University of Missouri In Partial Fulllment of

More information

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring,

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring, The Finite Dimensional Normed Linear Space Theorem Richard DiSalvo Dr. Elmer Mathematical Foundations of Economics Fall/Spring, 20-202 The claim that follows, which I have called the nite-dimensional normed

More information

Asymptotic expansion of multivariate conservative linear operators

Asymptotic expansion of multivariate conservative linear operators Journal of Computational and Applied Mathematics 150 (2003) 219 251 www.elsevier.com/locate/cam Asymptotic expansion of multivariate conservative linear operators A.-J. Lopez-Moreno, F.-J. Muñoz-Delgado

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Chapter 2. Vector Spaces

Chapter 2. Vector Spaces Chapter 2 Vector Spaces Vector spaces and their ancillary structures provide the common language of linear algebra, and, as such, are an essential prerequisite for understanding contemporary applied mathematics.

More information

Numerical Mathematics & Computing, 7 Ed. 4.1 Interpolation

Numerical Mathematics & Computing, 7 Ed. 4.1 Interpolation Numerical Mathematics & Computing, 7 Ed. 4.1 Interpolation Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole www.engage.com www.ma.utexas.edu/cna/nmc6 November 7, 2011 2011 1 /

More information

Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness

Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness Exhaustive Classication of Finite Classical Probability Spaces with Regard to the Notion of Causal Up-to-n-closedness Michaª Marczyk, Leszek Wro«ski Jagiellonian University, Kraków 16 June 2009 Abstract

More information