Chapter 7. Approximations of Functions

7.1. Introduction

Before we can study the solutions of differential equations such as

$$\frac{dy}{dx} = f(y) \quad\text{for } x \in [a,b], \tag{7.1}$$

we first need to understand how we approximate functions on the computer. One common method of approximation, which we have already discussed, is to approximate $y(x)$ by a discrete set of nodes (or data) $(x_i, y_i)$. Thus, if we solve an initial-value ode with initial condition $y(a) = y_0$, our solution will be such a set of output data. We often then want to reconstruct a smooth function from this set.

Notation: In this chapter we generally let $x$, rather than $t$, be the independent variable. The main reason is that we are not considering dynamical systems in this chapter; also, $x$ is more common than $t$ in the literature. We will use $t$ whenever we are considering a time-evolution problem.

In this chapter we are generally working with the output data from some experiment. Usually this data is in the form $(x_i, y_i)$ and we will simply call this the set of data points, or just points, that we want to interpolate or approximate. The $x$ values are called the abscissas and the $y$ values the ordinates. At times, however, the data will be different. For example, we might be given $(x_i, y_i, y_i')$, which also includes the slopes at the points. (If our data is actually $(t_i, y_i, y_i')$ where $t$ is the time, then $y_i'$ is the velocity at a particular time.) Generally, however, our data is the set of $n{+}1$ points $(x_i, y_i)$, $i \in \mathbb{N}[0,n]$, where $y_i \in \mathbb{R}^m$ for all $i$. In this chapter we normally work with $y_i \in \mathbb{R}$. However, we can easily handle $y_i \in \mathbb{R}^m$ by working with each component of $y$ separately. That is, if we let $y = \bigl(y^{(1)}, y^{(2)}, \ldots, y^{(m)}\bigr)^T$, then we consider $\bigl(x_i, y_i^{(j)}\bigr)$, $i \in \mathbb{N}[0,n]$, for each $j \in \mathbb{N}[1,m]$ separately.

There are a number of ways to interpolate or approximate a set of points, which is the topic of the present chapter. Let us first discuss interpolation. We can use a polynomial to interpolate the data, and we are guaranteed to succeed as long as the degree of the polynomial is at least $n$. Often we do not attempt to interpolate all the points with one polynomial. Instead we choose a small subset of the points and interpolate it with a polynomial, so that the polynomial has low degree.

We can combine all these subsets together and consider the set of low-degree polynomials to be the interpolation of the data. This is an example of piecewise polynomial interpolation. Another method of interpolation is to use a set of "simple" functions $\phi_j(x)$ and let our interpolating function be $\phi(x) = \sum_j c_j \phi_j(x)$. The coefficients are chosen so that $\phi(x_j) = y_j$. If $\phi_j(x) = x^j$ then we return to polynomial interpolation. Another common choice of basis functions is trigonometric functions.

On the other hand, we might not want to interpolate the points but only to approximate them. For example, the points might be contaminated by "noise" (such as high-frequency oscillations). We will often want our approximating function to be "reasonably smooth", so we do not want the approximation to pass through the points but only to be "close" to them. Thus, we want

$$\max_i \bigl|\phi(x_i) - y_i\bigr|$$

to be "small", but not usually zero, and

$$\int_a^b |\phi'(x)|\, dx$$

to be "small". By forcing the integral to be "small" we keep $\phi'$ "small" in the $L_1$-norm, so that $\phi$ does not have high-frequency oscillations. Again, $\phi(x)$ might be a polynomial, a set of low-degree polynomials, or a linear combination of "simple" functions.

One reason for interpolating the points arises when we need to use them further. Suppose that we want to calculate $L\{y\}$ where $L$ is some linear operator and $y$ is the solution of the ode (7.1). Suppose, however, that we only have the set of points $(x_i, y_i)$ which approximate the smooth function $y$. Then we can discretize $L$ as we have often done in this book, so that $L\{y\} \to L_h\{\mathbf{y}\}$ where $\mathbf{y} = (y_0, y_1, \ldots, y_n)^T$. However, we can also use the interpolating or approximating function $\phi(x)$ and calculate $L\{\phi\}$ from $L\{\phi_i\}$.

Conversely, we might want to calculate $L\{y\}$ directly. For example, we might want to calculate $\int_a^b y(x)\,dx$. Generally, we cannot obtain an exact answer and so we must use numerical methods, which usually involve taking a particular linear combination of values of $y(x)$ for specified values of $x$. Thus, we have discretized $y$.

Instead, we might be solving a boundary-value problem. In this case we must solve for a discretization of $y(x)$. Again, one common method is to solve for $(x_i, y_i)$ where $y_i$ is our approximation to $y(x_i)$. We can also set

$$y(x) \approx \sum_{i=1}^{n} c_i \phi_i(x)$$

and discretize the ode using this representation of $y(x)$. We then solve the ode using this discretization. There are still a number of methods possible. We can require that the representation satisfy the ode exactly at specified points in $[a,b]$; that is, we solve the exact continuous ode at specified points. This makes the error 0 at these points but does nothing else in the interval. This is called a collocation method. We can also require that the error be minimized over the entire interval. This is called a finite-element method.

Most of our discussion is focused on one-dimensional interpolation and approximation theory. However, we briefly discuss it in higher dimensions. This is a much more "open" area, and often the method depends on the specific problem.

7.2. Polynomial Interpolation

Suppose we are given the set of points $(x_i, y_i)$, $i \in \mathbb{N}[0,n]$, and we want to interpolate these points by using a polynomial of degree $\le n$. As we will see, this polynomial, which we denote by $p_n(x)$, is unique. Also, as we will see, there are a number of ways to calculate the coefficients of this polynomial, which can have vastly different stability properties and costs. Finally, there are also a number of ways to evaluate this polynomial for a given value of $x$, which can again have vastly different stability properties and costs. These points are the main topics of this section. Throughout this section we assume that these are the points to be interpolated.

Notation: To reiterate the point, $p_n(x)$ denotes a polynomial of degree $\le n$. Only when a polynomial has arbitrary degree will we simply write $p(x)$.

7.2.1. Lagrange Polynomials

For completeness, we begin with the definition of a polynomial and a polynomial space.

Definition 7.1. $p(x)$ is a real polynomial of degree $\le n$ if it can be written as $p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$ where $a_i \in \mathbb{R}$ for $i \in \mathbb{N}[0,n]$. $p(x)$ is a complex polynomial of degree $\le n$ if $a_i \in \mathbb{C}$ for $i \in \mathbb{N}[0,n]$. (We usually work with real polynomials.) $p(x)$ has exactly degree $n$ if $a_n \ne 0$.

Sometimes the word "order" is used to describe a polynomial. The order of a polynomial is the number of possible coefficients, which is one more than the degree.

We denote the set of all polynomials of degree $\le n$ by

$$\Pi_n = \bigl\{\, a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1} + a_n x^n \bigm| a_i \in \mathbb{R}\ (\mathbb{C}) \,\bigr\}. \tag{7.2}$$

We will need to use a subset of $\Pi_n$ repeatedly, and so we give it special notation. We often want to "normalize" polynomials by making their leading coefficient one, and we denote the set of all such polynomials by

$$\widehat{\Pi}_n = \bigl\{\, a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1} + x^n \bigm| a_i \in \mathbb{R}\ (\mathbb{C}) \,\bigr\}. \tag{7.3}$$

Note that $\Pi_n$ is a subspace of the space of all polynomials. Also, note that $\widehat{\Pi}_n$ consists of polynomials of degree exactly $n$, and it is not a linear space because, for example, the sum of two such polynomials does not have leading coefficient one.

To begin our discussion of polynomial interpolation we will prove that there exists a unique polynomial $p_n \in \Pi_n$ that interpolates our data. This proof is quite instructive, particularly because it is constructive. (That is, we will not only prove that such a polynomial exists, we will also show how to calculate it.) First, we need a definition and then a lemma.

Definition 7.2. Let $x_i$, $i \in \mathbb{N}[0,n]$, be a set of distinct abscissas, which need not be given in any particular order. The set of Lagrange polynomials for the abscissas $\{x_j\}$, denoted by $\ell_j(x)$, $j \in \mathbb{N}[0,n]$, is defined by

$$\ell_j(x) = \frac{(x-x_0)(x-x_1)\cdots(x-x_{j-1})\,\overline{(x-x_j)}\,(x-x_{j+1})\cdots(x-x_n)}{(x_j-x_0)(x_j-x_1)\cdots(x_j-x_{j-1})\,\overline{(x_j-x_j)}\,(x_j-x_{j+1})\cdots(x_j-x_n)} = \prod_{\substack{k=0\\ k\ne j}}^{n} \frac{x-x_k}{x_j-x_k}, \tag{7.4}$$

where the notation $b_0\, b_1 \cdots b_{j-1}\, \overline{b_j}\, b_{j+1} \cdots b_{n-1}\, b_n$ means that the term $b_j$ is omitted. Note that each Lagrange polynomial has degree exactly $n$. If we need to emphasize the fact that $\ell_j(x)$ is a polynomial of degree $n$, we will write it as $\ell_j^{(n)}(x)$. The numerators are chosen so that $\ell_j(x_k) = 0$ if $j \ne k$, and the denominators are chosen so that $\ell_j(x_j) = 1$ for all $j$. Thus, $\ell_j(x_k) = \delta_{jk}$ for all $j, k \in \mathbb{N}[0,n]$. For example, the four Lagrange polynomials of degree three are shown in Figure 7.1, where the data points are shown as the dots. The primary reason for using Lagrange polynomials is that it is particularly easy to write any polynomial of degree $\le n$ as a linear combination of this set of Lagrange polynomials, as we prove in Theorem 7.1. First, however, we need a lemma.

Figure 7.1. The Lagrange polynomials of degree three with the set of abscissas being $\{1, 3, 4, 5\}$. The locations of the data points are given by the dots.

Lemma 7.1. Let $x_i$, $i \in \mathbb{N}[0,n]$, be a set of distinct abscissas and let $\ell_j(x)$, $j \in \mathbb{N}[0,n]$, be the corresponding set of Lagrange polynomials. This set of Lagrange polynomials is a basis for $\Pi_n$.

Proof: To show that these polynomials are linearly independent, consider the equation

$$c_0 \ell_0(x) + c_1 \ell_1(x) + \cdots + c_n \ell_n(x) = 0 \quad\text{for all } x \in \mathbb{R}.$$

First, $c_i = 0$ for each $i$ follows immediately by letting $x = x_i$. And, second, $\ell_j \in \Pi_n$ for each $j$. Thus, we have a set of $n{+}1$ linearly independent functions in $\Pi_n$, which is an $(n{+}1)$-dimensional space.

We now use the Lagrange form to prove the desired theorem.

Theorem 7.1. Consider the $n{+}1$ points $(x_i, y_i)$, $i \in \mathbb{N}[0,n]$, where the abscissas are all distinct (but need not be in any order). Then there exists a unique polynomial $p_n \in \Pi_n$ such that $p_n(x_i) = y_i$ for all $i$. (We will usually just say that the polynomial interpolates the given data.) This polynomial is given by

$$p_n(x) = \sum_{k=0}^{n} y_k \ell_k(x). \tag{7.5}$$

Proof: Obviously, $p_n$, as defined in (7.5), is one such polynomial. Suppose there exists a second such polynomial, say $q_n \in \Pi_n$, and let us write it as $q_n(x) = \sum_{k=0}^{n} c_k \ell_k(x)$. This can always be done since, by Lemma 7.1, $\ell_j(x)$, $j \in \mathbb{N}[0,n]$, spans $\Pi_n$. Since $q_n(x_j) = c_j$ we must have $c_j = y_j$ for all $j \in \mathbb{N}[0,n]$, and so $q_n = p_n$.
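Theorem 7.1 translates directly into code. The following is a minimal Fortran sketch (our own illustration, not a routine from the text; the names PLAGR, X, Y, and Z are ours) that evaluates eq. (7.5) at a single point Z. Note the doubly nested loop, which foreshadows the cost discussion below.

      DOUBLE PRECISION FUNCTION PLAGR( N, X, Y, Z )
C     Evaluate the interpolating polynomial at Z directly from the
C     Lagrange form (7.5).  Each basis polynomial costs O(n)
C     operations, so one evaluation costs O(n**2).
      INTEGER N, J, K
      DOUBLE PRECISION X(0:N), Y(0:N), Z, ELL
      PLAGR = 0.D0
      DO 20 J = 0,N
C        Build ell_j(Z) as the product over k .NE. j, eq. (7.4).
         ELL = 1.D0
         DO 10 K = 0,N
            IF ( K .NE. J ) ELL = ELL*(Z - X(K))/(X(J) - X(K))
   10    CONTINUE
         PLAGR = PLAGR + Y(J)*ELL
   20 CONTINUE
      RETURN
      END

For the data of Example 7.1 below, PLAGR(3, X, Y, 2.D0) returns 24, the value of the interpolating cubic at $x = 2$.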

There is another way to write this basis of Lagrange polynomials of degree $n$. If we define $\pi_n \in \widehat{\Pi}_{n+1}$ by

$$\pi_n(x) = \prod_{j=0}^{n} (x - x_j), \tag{7.6}$$

then

$$\ell_j(x) = \frac{\pi_n(x)}{(x - x_j)\, \pi_n'(x_j)}, \tag{7.7}$$

as can easily be seen by substituting the definition (7.6) of $\pi_n(x)$ into eq. (7.7). This expression for $\ell_j$ will be needed frequently in the next chapter; also, $\pi_n$ will be used later in this chapter.

Warning: $\pi_n(x)$ is a polynomial of degree $n{+}1$, which is different from the convention we are using in this chapter. This is the normal convention for $\pi_n$ and so we will follow it here.

Now suppose we actually want to evaluate $p_n(x)$ for many values of $x$. How costly is it? If we write $p_n$ as a linear combination of the Lagrange polynomials, eq. (7.5), then the coefficients of this linear combination are given immediately. However, the evaluation of each Lagrange polynomial is very costly. The calculation of $\ell_j(x)$ requires setting up $n$ terms in the numerator and also in the denominator, multiplying together all these terms in the numerator, multiplying together all these terms in the denominator, and then dividing the numerator by the denominator. Of course, the denominators need be computed only once, since they do not depend on $x$. Thus, the total cost is $n^2 \,\&\, n^2$. This is an unacceptable cost for evaluating a polynomial.

There are other forms for expressing $p_n$ which lead to an $O(n)$ cost for evaluating $p_n(x)$. We say that $p_n$ is written in power form if it is written as

$$p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n. \tag{7.8}$$

$p_n(x)$ can be evaluated with a cost of $O(n)$ if we write it in the nested form

$$p_n(x) = a_0 + x\Bigl(a_1 + x\bigl(a_2 + x(a_3 + \cdots + x(a_{n-1} + x\, a_n)\cdots)\bigr)\Bigr). \tag{7.9}$$

In nested form $p_n(x)$ can be easily evaluated numerically. In Fortran the algorithm is the following.

      P = A(N)
      DO 10 I = N-1,0,-1
         P = X*P + A(I)
   10 CONTINUE

Algorithm 7.1. Horner's method for evaluating $p_n(x)$ in power form. On output, $p_n(x) = $ P.

This algorithm is called Horner's method; its cost is $n \,\&\, n$, which is quite acceptable.
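As a concrete check (our own arithmetic, using the polynomial $p_3(x) = -2x^3 + 10x^2$ that will reappear in Example 7.1 below), the nested form gives

$$p_3(2) = \bigl(((-2)\cdot 2 + 10)\cdot 2 + 0\bigr)\cdot 2 + 0 = (6\cdot 2 + 0)\cdot 2 = 24,$$

using three multiplications and three additions in all, whereas term-by-term evaluation of the power form must also build $x^2$ and $x^3$ explicitly, costing roughly $2n$ multiplications.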

The fundamental difficulty with expressing a polynomial in the power form is in calculating the coefficients of $p_n$. Suppose we first write $p_n$ as a linear combination of the Lagrange polynomials, as in eq. (7.5). If we then calculate the coefficients in the power form by multiplying out all the terms in eq. (7.4), the cost is $O(n\,2^n)$. If, instead, we evaluate the coefficients directly by solving the linear system generated by $p_n(x_j) = y_j$ for all $j$, namely

$$\begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix},$$

then our cost is $O(n^3)$. Recall from Example 4.1 that the matrix is called a Vandermonde matrix and that this calculation is unstable.

In addition, the evaluation of $p_n(x)$ by Horner's method can be unstable. The difficulty is that we might have $x \gg 1$. As a simple example, let $p_3(x) = (x - 100)^3 + 1$, so that

$$p_3(x) = -999999 + x\bigl(30000 + x(-300 + x)\bigr).$$

We will have catastrophic cancellation in evaluating $p_3(x)$ if $x \approx 100$. This instability can be alleviated if we write $p_n$ in a shifted power form, namely

$$p_n(x) = a_0 + a_1(x - \bar{x}) + a_2(x - \bar{x})^2 + \cdots + a_n(x - \bar{x})^n, \tag{7.10}$$

where we should usually choose $\bar{x}$ to be some sort of average of the values of $x$ we expect to use. Normally, the average value of the abscissas is a good choice. As long as the abscissas are not "too widely separated" and as long as $x$ is not "too far" from $\bar{x}$, the evaluation of $p_n(x)$ should be stable. Another approach that sometimes works is to write $p_n$ as a linear combination of orthogonal polynomials. Example 7.2 gives a case where the evaluation is nearly ten times as accurate for a linear combination of Chebyshev polynomials as for Horner's method.

The shifted power form improves the stability of the calculation. However, we still have the problem of how to calculate the coefficients of the polynomial in the first place. In order to be able to calculate the coefficients of the polynomial efficiently, we will use a different form for the polynomial.

Finally, we can investigate the conditioning of polynomial interpolation by using eq. (7.5). If we perturb the ordinate $y_k$ to $y_k + \delta y$, then the polynomial changes by $\delta p_n(x) = \delta y\, \ell_k(x)$. Thus, the change is "not large" as long as $\|\ell_k\|_\infty$ is "not large". Note that a change of $\delta y$ might lead to a "small" change in $p_n$ for one value of $k$ and a "large" one for another. This statement is generally true both analytically and numerically. Note that there is no catastrophic cancellation in the calculation of the Lagrange polynomials unless $x$ is "very close" to an abscissa, in which case the contribution from all of the basis elements, except for one, is small. Of course, we can have catastrophic cancellation in the summing of $y_k \ell_k(x)$ over all $k$; however, this is true of any such calculation. Thus, we expect this method to be well-conditioned, as long as we do not have abscissas "very near" others.

7.2.2. Divided Differences

A generalization of the shifted power form is Newton's form, where we write the polynomial $p_n(x)$ as

$$p_n(x) = a_0 + [x - x_0]\Bigl(a_1 + [x - x_1]\bigl(a_2 + \cdots + [x - x_{n-2}](a_{n-1} + [x - x_{n-1}]\, a_n)\cdots\bigr)\Bigr). \tag{7.11}$$

Note that Newton's form has good stability properties, similar to the shifted power form. In this case the shifting is quite natural, since it involves the abscissas of the points themselves. Also, this polynomial is evaluated quite efficiently by Horner's method.

      P = A(N)
      DO 10 I = N-1,0,-1
         P = (Z - X(I))*P + A(I)
   10 CONTINUE

Algorithm 7.2. Horner's method for evaluating $p_n(x)$ in Newton's form at the point Z. The result is stored in P.

The reason we have introduced Newton's form is that the coefficients in eq. (7.11) are easily calculated by the procedure of divided differences. We calculate the coefficients of $p_n(x)$ by a recursive procedure. To do this it is helpful to write the coefficients in a notation which brings out the recursive nature of the calculation. In addition, the coefficients depend on the order in which we enumerate the set of abscissas $\{x_i\}$. In Newton's form we do not require that the abscissas be ordered in any way. However, we do require that all the abscissas be distinct (but we will remove even this requirement in subsection 7.2.4).

Example 7.1. (cont. on page 478) Suppose the points are $(1,8)$, $(3,36)$, $(4,32)$, $(5,0)$. If we choose the abscissas in increasing order, so $x_0 = 1$, $x_1 = 3$, $x_2 = 4$, and $x_3 = 5$, then Newton's form for a cubic polynomial is

$$p_3(x) = 8 + 14(x-1) - 6(x-1)(x-3) - 2(x-1)(x-3)(x-4). \tag{7.12a}$$

If we choose the abscissas in the order $x_0 = 4$, $x_1 = 1$, $x_2 = 5$, and $x_3 = 3$, so that no data point is physically adjacent to either the preceding or following point, then

$$p_3(x) = 32 + 8(x-4) - 10(x-4)(x-1) - 2(x-4)(x-1)(x-5). \tag{7.12b}$$

Note that the only coefficient which is the same in these two representations is the coefficient of the highest-order term. This is no accident. In power form $p_3(x) = -2x^3 + 10x^2$, so that the coefficient of the $x^3$ term is $-2$. In Newton's form only the last term, i.e., the term $(x-x_0)(x-x_1)(x-x_2)$, contains the highest power of $x$, which is $x^3$ in our example. Thus the last term in any representation of $p_3(x)$ using Newton's form must have the coefficient $-2$.

We can rewrite eq. (7.12a) as

$$p_3(x) = y[1] + y[1,3](x-1) + y[1,3,4](x-1)(x-3) + y[1,3,4,5](x-1)(x-3)(x-4) \tag{7.13a}$$

to indicate which points are needed in the calculation of each coefficient in Newton's form. That is, as we will see shortly, only $(1,8)$ is needed to calculate $y[1]$; only $(1,8)$ and $(3,36)$ are needed to calculate $y[1,3]$; etc. We rewrite eq. (7.12b) as

$$p_3(x) = y[4] + y[4,1](x-4) + y[4,1,5](x-4)(x-1) + y[4,1,5,3](x-4)(x-1)(x-5) \tag{7.13b}$$

to, again, indicate which points are needed in order to calculate a particular coefficient. Note that $y[1,3,4,5] = y[4,1,5,3] = -2$ because these are the coefficients of the highest power of $p_3(x)$, i.e., $x^3$, and this coefficient is independent of the ordering of the points.
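As a quick check (our own arithmetic), substituting $x = 4$ and $x = 5$ into eq. (7.12a) reproduces the data points $(4,32)$ and $(5,0)$:

$$p_3(4) = 8 + 14\cdot 3 - 6\cdot 3\cdot 1 - 2\cdot 3\cdot 1\cdot 0 = 8 + 42 - 18 = 32, \qquad p_3(5) = 8 + 14\cdot 4 - 6\cdot 4\cdot 2 - 2\cdot 4\cdot 2\cdot 1 = 8 + 56 - 48 - 16 = 0.$$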

To indicate how the coefficients are calculated we use a new notation for Newton's form, namely

$$p_n(x) = y[x_0] + y[x_0,x_1](x-x_0) + y[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + y[x_0,x_1,x_2,\ldots,x_n](x-x_0)(x-x_1)\cdots(x-x_{n-1}) \tag{7.14}$$

or, in compact notation,

$$p_n(x) = \sum_{j=0}^{n} y[x_0,x_1,\ldots,x_j] \prod_{k=0}^{j-1} (x - x_k).$$

One important fact about this notation is that $y[x_0,x_1,x_2,\ldots,x_n]$ is the coefficient of the highest power in $p_n(x)$, i.e., $x^n$, and this coefficient is independent of the ordering of the points.

Notation: We use square brackets to denote the coefficients so that we do not confuse $y(x_j)$, which is the value of the function $y = y(x)$ at $x = x_j$, with $y[x_j]$, which is the coefficient of the constant term in $p_n(x) = a_0 + a_1(x-x_j) + a_2(x-x_j)(x-x_k) + \cdots$. Similarly, $y(x_j,x_k)$ would be the value of a function of two variables, whereas $y[x_j,x_k]$ is the coefficient of the term $x - x_j$ in $p_n(x)$ when the first two abscissas are $x_j$ and $x_k$.

Example 7.1 (continued). (cont. from page 477; cont. on page 489) We now show how to calculate these coefficients for eq. (7.13a). Once we calculate them, it will be clear how to obtain the general procedure. First, we let $x = x_0 = 1$, so that

$$y[1] = p_3(1) = 8.$$

Second, we let $x = x_1 = 3$, so that

$$y[1] + y[1,3]\,(2) = p_3(3) = 36$$

and $y[1,3] = 14$. Third, we let $x = x_2 = 4$, so that

$$y[1] + y[1,3]\,(3) + y[1,3,4]\,(3) = p_3(4) = 32$$

and $y[1,3,4] = -6$. Finally, we let $x = x_3 = 5$, so that

$$y[1] + y[1,3]\,(4) + y[1,3,4]\,(8) + y[1,3,4,5]\,(8) = p_3(5) = 0$$

and $y[1,3,4,5] = -2$.

We can now obtain a simple recursive procedure. To do so we work with an arbitrary ordering of the points, and so write

$$p_3(x) = y[x_0] + y[x_0,x_1](x-x_0) + y[x_0,x_1,x_2](x-x_0)(x-x_1) + y[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2).$$

First, we let $x = x_0$, so that

$$y[x_0] = p_3(x_0) = y_0.$$

We now define $y[x_j] = y_j$ for $j \in \mathbb{N}[1,3]$, since a different ordering of the nodes gives this result. (For example, if we order the abscissas as $x_3, x_2, x_1, x_0$, then we obtain $y[x_3] = y_3$.) Second, we let $x = x_1$, so that

$$y[x_0] + y[x_0,x_1](x_1 - x_0) = p_3(x_1) = y_1$$

and

$$y[x_0,x_1] = \frac{y_1 - y[x_0]}{x_1 - x_0} = \frac{y[x_1] - y[x_0]}{x_1 - x_0}.$$

We define

$$y[x_j,x_k] = \frac{y[x_k] - y[x_j]}{x_k - x_j} \quad\text{for all } j \ne k,$$

because a different ordering of the abscissas gives this result. (Again, if we order the abscissas as $x_3, x_2, x_1, x_0$, then we obtain $y[x_3,x_2] = \bigl(y[x_3] - y[x_2]\bigr)/(x_3 - x_2)$.) Third, we let $x = x_2$, so that

$$y[x_0] + y[x_0,x_1](x_2 - x_0) + y[x_0,x_1,x_2](x_2 - x_0)(x_2 - x_1) = p_3(x_2) = y_2$$

and, with a little algebra,

$$y[x_0,x_1,x_2] = \frac{y_2 - y_0 - y[x_0,x_1](x_2 - x_0)}{(x_2 - x_0)(x_2 - x_1)} = \frac{\dfrac{y_2 - y_0}{x_2 - x_1} - \dfrac{y_1 - y_0}{x_1 - x_0}\cdot\dfrac{x_2 - x_0}{x_2 - x_1}}{x_2 - x_0} = \frac{\dfrac{y_2 - y_1}{x_2 - x_1} - \dfrac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} = \frac{y[x_1,x_2] - y[x_0,x_1]}{x_2 - x_0}.$$

We define

$$y[x_j,x_k,x_l] = \frac{y[x_k,x_l] - y[x_j,x_k]}{x_l - x_j} \quad\text{for distinct } j, k, l. \tag{7.15}$$

Eq. (7.15) is quite suggestive. We will not derive the next case here (because the algebra is quite messy) but, in fact,

$$y[x_0,x_1,x_2,x_3] = \frac{y[x_1,x_2,x_3] - y[x_0,x_1,x_2]}{x_3 - x_0}$$

(as we will prove shortly in Theorem 7.2). A schematic is shown in Table 7.1, which makes the simple nature of the recursion obvious.

    x_0 : y_0 = y[x_0]
                          y[x_0,x_1]
    x_1 : y_1 = y[x_1]                      y[x_0,x_1,x_2]
                          y[x_1,x_2]                           y[x_0,x_1,x_2,x_3]
    x_2 : y_2 = y[x_2]                      y[x_1,x_2,x_3]
                          y[x_2,x_3]
    x_3 : y_3 = y[x_3]

    4 : 32
                          (8-32)/(1-4) = 8
    1 : 8                                   (-2-8)/(5-4) = -10
                          (0-8)/(5-1) = -2                     (-8-(-10))/(3-4) = -2
    5 : 0                                   (-18-(-2))/(3-1) = -8
                          (36-0)/(3-5) = -18
    3 : 36

Table 7.1. A schematic of a divided difference table using four points (top), and the actual divided differences for the data points, in the ordering which generated eq. (7.13b) (bottom).

We have made plausible the fact that there is a simple recursive formula for the coefficients in Newton's form of $p_n(x)$, eq. (7.14). Now we prove it.

Theorem 7.2. Let the $n{+}1$ points be given by $(x_i, y_i)$, $i \in \mathbb{N}[0,n]$, where all the abscissas are distinct. Let $p_n \in \Pi_n$ be the unique polynomial which interpolates these points. Then the coefficients of $p_n(x)$ using eq. (7.14) are calculated by the recursive formula

$$y[x_j, x_{j+1}, \ldots, x_{j+k}] = \frac{y[x_{j+1}, x_{j+2}, \ldots, x_{j+k}] - y[x_j, x_{j+1}, \ldots, x_{j+k-1}]}{x_{j+k} - x_j} \quad\text{for } j \in \mathbb{N}[0, n{-}k], \tag{7.16a}$$

where $k = 1, 2, \ldots, n$. The recursion begins with

$$y[x_j] = y_j \quad\text{for } j \in \mathbb{N}[0,n]. \tag{7.16b}$$

Proof: Without loss of generality we can assume that $j = 0$. (If it is not, we simply reorder the points by $(x_{j+i}, y_{j+i})$, $i \in \mathbb{N}[0, n{-}j]$.) We prove this theorem by induction. It is obviously true for $n = 1$ by the same argument as in Example 7.1. So we assume it is true for $n \in \mathbb{N}[1, N{-}1]$ and we prove it for $n = N$. Let $p_{N-1} \in \Pi_{N-1}$ interpolate the points $(x_i, y_i)$, $i \in \mathbb{N}[0, N{-}1]$, let $q_{N-1} \in \Pi_{N-1}$ interpolate the points $(x_i, y_i)$, $i \in \mathbb{N}[1, N]$, and define $p_N \in \Pi_N$ by

$$p_N(x) = \frac{x - x_0}{x_N - x_0}\, q_{N-1}(x) + \frac{x_N - x}{x_N - x_0}\, p_{N-1}(x). \tag{7.17}$$

We now show that $p_N$ interpolates all $N{+}1$ points.

First, $p_N(x_0) = p_{N-1}(x_0) = y_0$. Next, for $i \in \mathbb{N}[1, N{-}1]$,

$$p_N(x_i) = \frac{x_i - x_0}{x_N - x_0}\, q_{N-1}(x_i) + \frac{x_N - x_i}{x_N - x_0}\, p_{N-1}(x_i) = \frac{x_i - x_0}{x_N - x_0}\, y_i + \frac{x_N - x_i}{x_N - x_0}\, y_i = y_i.$$

Finally, $p_N(x_N) = q_{N-1}(x_N) = y_N$. All that remains is to equate the coefficients of the term $x^N$ on both sides of eq. (7.17).

One fact which we want to again emphasize is that in $p_n(x)$ the coefficient of the term $x^n$ is $y[x_0, x_1, \ldots, x_n]$, and that this coefficient is independent of the order in which we assign the points. We have already demonstrated why this is true in Example 7.1: $y[x_0, x_1, \ldots, x_n]$ is the coefficient of the highest-order term in $p_n(x)$, i.e., $x^n$, and the interpolating polynomial itself does not depend on the ordering. Note also that this means the divided difference is symmetric in all its elements; that is, a reordering of the abscissas does not change its value.

The numerical algorithm for the calculation of the coefficients of $p_n(x)$ in Newton's form is quite simple. In Fortran it is

      DO 10 I = 0,N
         A(I) = Y(I)
   10 CONTINUE
      DO 30 I = 1,N
         DO 20 K = N,I,-1
            A(K) = ( A(K) - A(K-1) ) / ( X(K) - X(K-I) )
   20    CONTINUE
   30 CONTINUE

Algorithm 7.3. The coefficients in Newton's form of $p_n(x)$,

where X(I) $= x_i$, Y(I) $= y_i$, and, on output, A(I) $= a_i$. The cost of this algorithm is $n^2 \,\&\, \tfrac{1}{2}n^2$, which is quite reasonable. We can then use Horner's method, Algorithm 7.2, to evaluate $p_n(x)$ for any given $x$. In subsection 7.2.5 we will show how to calculate derivatives of $p_n(x)$.

Let $x_{\min} = \min_k x_k$ and $x_{\max} = \max_k x_k$. If $x \in [x_{\min}, x_{\max}]$ then we say that we are interpolating the data. If $x \notin [x_{\min}, x_{\max}]$ then we say that we are extrapolating the data. We will show in the next subsection that polynomial interpolation can be stable or unstable, depending on the particular data and on $n$. (However, as long as $n$ is "small" it is never unstable.) We will also show that polynomial extrapolation is always dangerous. There are occasions when we will use extrapolation; however, we will make sure that $n$ is "small", and we will usually apply a second procedure to improve the accuracy.
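Putting Algorithms 7.3 and 7.2 together, the following short driver (our own sketch; the array names follow the algorithms above) reproduces Example 7.1: it converts the ordinates into the coefficients $8, 14, -6, -2$ of eq. (7.12a) and then evaluates $p_3(4)$, printing 32.

      PROGRAM NEWTN
C     Build the divided-difference coefficients for the data of
C     Example 7.1 (Algorithm 7.3), then evaluate p_3 at Z = 4 with
C     Horner's method in Newton's form (Algorithm 7.2).
      INTEGER N, I, K
      PARAMETER ( N = 3 )
      DOUBLE PRECISION X(0:N), A(0:N), Z, P
      DATA X / 1.D0, 3.D0, 4.D0, 5.D0 /
      DATA A / 8.D0, 36.D0, 32.D0, 0.D0 /
C     Algorithm 7.3: overwrite the ordinates with the coefficients.
      DO 30 I = 1,N
         DO 20 K = N,I,-1
            A(K) = ( A(K) - A(K-1) ) / ( X(K) - X(K-I) )
   20    CONTINUE
   30 CONTINUE
C     A now holds 8, 14, -6, -2, cf. eq. (7.12a).
      Z = 4.D0
      P = A(N)
      DO 40 I = N-1,0,-1
         P = (Z - X(I))*P + A(I)
   40 CONTINUE
      PRINT *, 'p_3(4) =', P
      END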

So far we have shown how to calculate the interpolating polynomial and how to evaluate it. We next discuss how accurately the polynomial approximates the underlying function.

7.2.3. The Errors in the Interpolating Polynomial

There is no way to determine the error in the interpolating polynomial $p_n$ if we are simply given the data, as has been the case so far in this chapter. The reason for this is exactly the same as occurs in achievement and intelligence tests when they ask a multiple-choice question such as the following:

What number should come next in the sequence $1, 3, 5, 7, \ldots$?

The correct answer, although it is never one of the choices, is any number. The proof is quite simple. The quartic polynomial

$$p_4(x) = 1 + 2(x-1) + 0(x-1)(x-2) + 0(x-1)(x-2)(x-3) + \frac{\lambda - 9}{24}(x-1)(x-2)(x-3)(x-4)$$

takes on the values $p_4(1) = 1$, $p_4(2) = 3$, $p_4(3) = 5$, $p_4(4) = 7$, and $p_4(5) = \lambda$ for any $\lambda \in \mathbb{R}$. Thus, we have found a deterministic sequence $p_4(k)$, $k = 1, 2, 3, \ldots$, based on a "simple" function, which has the values $1, 3, 5, 7, \lambda$ for any real number $\lambda$. Of course, the right answer (according to the test makers) is 9, which is obtained by using the polynomial $p_1(x) = 1 + 2(x-1)$. $p_1$ is certainly a "simpler" function than $p_4$. However, this extra assumption is never included in the test question, and so any answer should be allowed.†

† The authors dislike multiple "guess" tests intensely.

Thus, if we are simply given a set of data points, then there is no error in calculating the interpolating polynomial, since it passes through all the points (except for round-off errors). In addition, there are an infinite number of continuous functions $y = y(x)$ that we can devise to also go through these data points. Thus, the error $\|p_n - y\|$ is completely indeterminate, where we are free to use any norm. We will generally use the $\infty$-norm.

However, suppose that we start with a "simple" and "smooth" function $y = y(x)$ and we use it to generate our data, so that $y_k = y(x_k)$ for $k \in \mathbb{N}[0,n]$. Then we want to know how accurately $p_n(x)$ approximates $y(x)$. Of course, we do not expect $p_n(x)$ to be a very good approximation everywhere. Practically, we usually restrict our attention to points $z \in [x_{\min}, x_{\max}]$, where $x_{\min} = \min_k x_k$ and $x_{\max} = \max_k x_k$. However, theoretically this is not necessary.

Theorem 7.3. Let $y \in C^{n+1}(I)$ for some open interval $I \subset \mathbb{R}$, let $x_i \in I$, $i \in \mathbb{N}[0,n]$, and let $y_i = y(x_i)$ for all $i$. For every $z \in I$ there exists a $\xi = \xi(z) \in \bigl(\min\{z, x_{\min}\}, \max\{z, x_{\max}\}\bigr)$ such that

$$y(z) - p_n(z) = \pi_n(z)\, \frac{y^{(n+1)}(\xi)}{(n+1)!}. \tag{7.18}$$

Proof: Define $e_n \equiv y - p_n$ and note that $e_n(x_j) = 0$ for all $j \in \mathbb{N}[0,n]$. Thus we can define a function $r \in C(I)$ by

$$r(x) = \frac{y(x) - p_n(x)}{\pi_n(x)},$$

where $\pi_n(x) = \prod_{k=0}^{n}(x - x_k)$. Note that $r(x)$ is indeterminate at $x = x_k$, since we have $0/0$. These are removable singularities and so we define $r(x_k)$ by $\lim_{x\to x_k} r(x)$. The error in the interpolating polynomial is given by $e_n(x) = y(x) - p_n(x) = \pi_n(x)\, r(x)$, and we define a function closely related to $e_n(x)$ by

$$\phi(x) \equiv y(x) - p_n(x) - r(z)\, \pi_n(x). \tag{7.19}$$

Note that $\phi$ has at least $n{+}2$ zeroes in $\bigl[\min\{z, x_{\min}\}, \max\{z, x_{\max}\}\bigr]$, since $\phi(x_j) = 0$ for $j \in \mathbb{N}[0,n]$ and, in addition, $\phi(z) = 0$. By Rolle's theorem (see Appendix I), $\phi'(x)$ has at least

$n{+}1$ zeroes in $\bigl(\min\{z, x_{\min}\}, \max\{z, x_{\max}\}\bigr)$. Continuing this argument, $\phi''(x)$ has at least $n$ zeroes in $\bigl(\min\{z, x_{\min}\}, \max\{z, x_{\max}\}\bigr)$, etc. Finally, we can conclude that $\phi^{(n+1)}(x)$ has at least one zero, say at $\xi = \xi(z) \in \bigl(\min\{z, x_{\min}\}, \max\{z, x_{\max}\}\bigr)$. Thus,

$$y^{(n+1)}(\xi) - p_n^{(n+1)}(\xi) - r(z)\, \pi_n^{(n+1)}(\xi) = 0.$$

Since $p_n$ is a polynomial of degree $\le n$ we have $p_n^{(n+1)}(x) \equiv 0$; also, $\pi_n^{(n+1)}(x) = (n+1)!$. Thus,

$$y^{(n+1)}(\xi) - r(z)\,(n+1)! = 0,$$

and substituting for $r(z)$ in eq. (7.19) we obtain the desired result.

Several words of caution about this result are necessary. The function $\pi_n(z)$ grows rapidly in absolute value if $z \notin [x_{\min}, x_{\max}]$. For example, we plot $\pi_3$ for the abscissas of Example 7.1 in Figure 7.2. Note that $\pi_3(0) > 50$ even though $\pi_3(1) = 0$. In Figure 7.1 we have shown the corresponding Lagrange polynomials, which are not nearly as large. The rapid growth outside the interval $[x_{\min}, x_{\max}]$ is certainly evident in this figure; it is much more evident for larger $n$. Consequently, it is undesirable to use interpolating polynomials to extrapolate the data points, since the error can be very large. Occasionally, we do use extrapolation, but we are very careful.

Figure 7.2. The behavior of $\pi_3(x)$ using the same abscissas as in Example 7.1 and Figure 7.1. Note that the vertical axes of Figures 7.1 and 7.2 are very different.

Even when $z \in [x_{\min}, x_{\max}]$, the error $\|p_n - y\|$ can be large. This can occur because $\pi_n(z)\, y^{(n+1)}(\xi)$ can grow faster than $n!$ for the "right" choice of the underlying function $y$ and of the abscissas. We will discuss this growth in detail shortly.
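To see what eq. (7.18) gives in the simplest case (a standard consequence of Theorem 7.3, worked here as our own illustration): for linear interpolation ($n = 1$) between $x_0$ and $x_1 = x_0 + h$, we have $\pi_1(z) = (z - x_0)(z - x_1)$, whose absolute value on $[x_0, x_1]$ is largest at the midpoint, where it equals $h^2/4$. Hence

$$\bigl|y(z) - p_1(z)\bigr| \le \frac{h^2}{8} \max_{x \in [x_0,x_1]} \bigl|y''(x)\bigr| \quad\text{for } z \in [x_0, x_1],$$

so halving the spacing of tabulated data quarters the worst-case error of linear interpolation.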

Now let us return to one of the fundamental principles of scientific computing, namely that a discretized solution should approach the true continuous solution as the discretization goes to zero. Otherwise, scientific computing has no firm ground to stand on. Suppose that the solution to the continuum problem is $y \in C^\infty(a,b)$ and that we obtain $\{y_k\}$ as the solution to the discretized problem. We then construct $p_n(x)$ from $(x_k, y_k)$, $k \in \mathbb{N}[0,n]$. We expect $p_n(x) \to y(x)$ as $n \to \infty$ for all $x \in [x_{\min}, x_{\max}]$. However, as the following example shows, this need not occur.

Example 7.2. Runge's example. The "canonical" example of the growth of $\|p_n - y\|$ as $n \to \infty$ is Runge's example,

$$f(x) = \frac{1}{1 + 25x^2} \quad\text{for } x \in [-1, 1].$$

Suppose that the $n{+}1$ abscissas are equally spaced in this interval. We can show that $f^{(n+1)} = O(5^n\, n!)$ and also that $\|\pi_n\|$ decays only slowly (see Problem 9). Thus, the error grows unboundedly as $n \to \infty$.

That is, for the "wrong" function and/or the "wrong" distribution of abscissas, this is an ill-posed method of approximating the continuous solution. Note that $f(x) = 1/(1 + 25x^2)$ might be the "wrong" function, but it is certainly a reasonable function to want to interpolate. Thus, we will need a different approach to obtain convergence.
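The growth is easy to observe numerically. The following program (our own sketch, combining Algorithms 7.3 and 7.2; all names are ours) tabulates the maximum error over a fine grid for increasing $n$; running it shows the error soon growing rapidly with $n$ rather than converging.

      PROGRAM RUNGE
C     Maximum interpolation error for Runge's example
C     f(x) = 1/(1+25x**2) with n+1 equally spaced abscissas on [-1,1].
      INTEGER NMAX, N, I, K, J
      PARAMETER ( NMAX = 20 )
      DOUBLE PRECISION X(0:NMAX), A(0:NMAX), Z, P, ERR, F
C     Statement function for the underlying function.
      F(Z) = 1.D0/(1.D0 + 25.D0*Z*Z)
      DO 60 N = 2,NMAX,2
         DO 10 I = 0,N
            X(I) = -1.D0 + 2.D0*I/N
            A(I) = F(X(I))
   10    CONTINUE
C        Algorithm 7.3: divided-difference coefficients.
         DO 30 I = 1,N
            DO 20 K = N,I,-1
               A(K) = ( A(K) - A(K-1) ) / ( X(K) - X(K-I) )
   20       CONTINUE
   30    CONTINUE
C        Maximum error of p_n on a fine grid (Algorithm 7.2 inside).
         ERR = 0.D0
         DO 50 J = 0,1000
            Z = -1.D0 + 2.D0*J/1000.D0
            P = A(N)
            DO 40 I = N-1,0,-1
               P = (Z - X(I))*P + A(I)
   40       CONTINUE
            ERR = MAX( ERR, ABS(P - F(Z)) )
   50    CONTINUE
         PRINT *, N, ERR
   60 CONTINUE
      END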

7.2.4. Osculatory Polynomial Interpolation

Up until now we have assumed that the abscissas are all distinct. However, suppose we know more information about the data than just the values at particular abscissas. Suppose we also know the values of some derivatives at particular abscissas. For example, suppose we not only know the values at the abscissas but we also know the values of the first derivatives there. How can we use this additional information to calculate our interpolating polynomial?

Example 7.3. Specifying first derivatives. Suppose that we want to calculate the unique cubic polynomial $p_3$ which takes on specified values and first derivatives at two abscissas $x_A$ and $x_B$ where, for simplicity, we require that $x_A < x_B$. That is, $p_3(x_A) = y_A$, $p_3'(x_A) = y_A'$, $p_3(x_B) = y_B$, and $p_3'(x_B) = y_B'$. The notation might be slightly confusing. By $p_3'(x)$ we, of course, mean $dp_3(x)/dx$. However, by $y_A'$ we do not mean the derivative of $y$, because we have no underlying function; instead, this symbol merely denotes the desired slope at $x_A$.

We know how to calculate $p_3$ if all the abscissas are distinct, by using divided differences. We want to use the same recursive formulas and numerical algorithms as before (of course, with slight modifications), and not have to use a completely different form to calculate $p_3$ or to evaluate $p_3(z)$. Thus, we will calculate $p_3$ by first using divided differences with four distinct abscissas, and we will use a limiting procedure to obtain the desired polynomial.

Suppose that our data comes from a function $y \in C^1(I)$ for some open interval $I \subset \mathbb{R}$ where $[x_A, x_B] \subset I$. Then our data is

$$\bigl(x_A, y(x_A)\bigr),\ \bigl(x_A + h, y(x_A + h)\bigr),\ \bigl(x_B, y(x_B)\bigr),\ \bigl(x_B + h, y(x_B + h)\bigr) \tag{7.20}$$

and we represent the abscissas by $\{x_0, x_1, x_2, x_3\} \subset [a,b]$, where $x_0 = x_A$, $x_1 = x_A + h$, $x_2 = x_B$, and $x_3 = x_B + h$. First, however, we will need some notation for our values and their derivatives. The set of points (7.20) is suggestive. If we simply let $h = 0$ then $\bigl(x_A, y(x_A)\bigr)$ appears twice (and the same for the other point at $x_B$). We have lost a crucial piece of information at $x_A$. That is, $y(x_A + h) \approx y(x_A) + h\, y'(x_A)$, and $y'(x_A)$ is the data that we lose when $h = 0$. Thus, when $h = 0$ we write our points as

$$(x_A, y_A),\ (x_A, y_A'),\ (x_B, y_B),\ (x_B, y_B')$$

or, using our standard notation,

$$(x_0, y_0),\ (x_0, y_0'),\ (x_2, y_2),\ (x_2, y_2'). \tag{7.21}$$

Thus, we use repeated abscissas, with the corresponding values of the ordinates being $y_0$, $y_0'$, $y_0''$, etc. Note that this means that the data is discontinuous. When $h \ne 0$ we use the set of points (7.20), but when $h = 0$ we use the set of data (7.21). When $h \ne 0$ our polynomial is

$$p_3(x; h) = y[x_0] + y[x_0,x_1](x-x_0) + y[x_0,x_1,x_2](x-x_0)(x-x_1) + y[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2),$$

where $x_1 = x_0 + h$ and $x_3 = x_2 + h$. When $h = 0$ our polynomial is

$$p_3(x) = y[x_0] + y[x_0,x_0](x-x_0) + y[x_0,x_0,x_2](x-x_0)^2 + y[x_0,x_0,x_2,x_2](x-x_0)^2(x-x_2),$$

where we expect $p_3(x) = \lim_{h\to 0} p_3(x; h)$ for all $x$. Although the data which is used to calculate $p_3(x; h)$ is different from the data which is used to calculate $p_3(x)$, the coefficients of $p_3(x; h)$ approach the coefficients of $p_3(x)$ as $h \to 0$ (as we will see). It is annoying to have discontinuous data, but it makes the analysis, and particularly the numerical algorithm which implements this, much simpler.

Using divided differences and our new notation, we begin with $y[x_0] = y_0$ and $y[x_2] = y_2$. Now,

$$y[x_0,x_1] = \frac{y(x_1) - y(x_0)}{x_1 - x_0} \to y'(x_0) \quad\text{as } h \to 0,$$

so we should set

$$y[x_0,x_0] = y_0' \quad\text{and, also,}\quad y[x_2,x_2] = y_2'.$$

Next,

$$y[x_0,x_1,x_2] = \frac{y[x_1,x_2] - y[x_0,x_1]}{x_2 - x_0} \to \frac{y[x_0,x_2] - y[x_0,x_0]}{x_2 - x_0} \quad\text{as } h \to 0.$$

Thus,

$$y[x_0,x_0,x_2] = \frac{y[x_0,x_2] - y[x_0,x_0]}{x_2 - x_0}$$

and, also,

$$y[x_0,x_2,x_2] = \frac{y[x_2,x_2] - y[x_0,x_2]}{x_2 - x_0}.$$

Finally,

$$y[x_0,x_1,x_2,x_3] = \frac{y[x_1,x_2,x_3] - y[x_0,x_1,x_2]}{x_3 - x_0} \to \frac{y[x_0,x_2,x_2] - y[x_0,x_0,x_2]}{x_2 - x_0} \quad\text{as } h \to 0,$$

so we have found the desired coefficients. Table 7.2 presents a schematic of these formulas. We do divided differences normally on $y[x_i, x_{i+1}, \ldots, x_{i+k}]$ unless all the elements have the same value. If they do, we use the appropriate derivative. For example, $y[x_0,x_0,x_2]$ merely uses divided differences on $y[x_0,x_0]$ and $y[x_0,x_2]$. However, to calculate $y[x_0,x_0]$ we simply read off $y'(x_0)$.
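For a concrete instance of these limiting formulas (our own numbers): take $x_A = 0$, $y_A = 0$, $y_A' = 1$ and $x_B = 1$, $y_B = 1$, $y_B' = 0$. Then $y[0] = 0$, $y[0,0] = 1$, $y[0,1] = 1$, $y[1,1] = 0$, so $y[0,0,1] = (1-1)/1 = 0$, $y[0,1,1] = (0-1)/1 = -1$, and $y[0,0,1,1] = (-1-0)/1 = -1$. Hence

$$p_3(x) = 0 + 1\cdot x + 0\cdot x^2 - 1\cdot x^2(x-1) = x + x^2 - x^3,$$

and one checks directly that $p_3(0) = 0$, $p_3'(0) = 1$, $p_3(1) = 1$, and $p_3'(1) = 0$.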

    x_0 : y_0 = y[x_0]
                          y[x_0,x_0] = y_0'
    x_0 : y_0'                              y[x_0,x_0,x_2]
                          y[x_0,x_2]                           y[x_0,x_0,x_2,x_2]
    x_2 : y_2 = y[x_2]                      y[x_0,x_2,x_2]
                          y[x_2,x_2] = y_2'
    x_2 : y_2'

Table 7.2. A schematic for calculating the coefficients of $p_3(x)$ when first derivatives are also specified.

Since we can use this technique for any number of derivatives, we will repeat it here for one other example, namely

$$(x_0, y_0),\ (x_1, y_1),\ (x_1, y_1'),\ (x_1, y_1'').$$

Table 7.3 shows the schematic in this case. We will not prove that it is correct (we leave that to Problem 10); it follows immediately from Theorem 7.4, which we will prove shortly.

    x_0 : y_0 = y[x_0]
                          y[x_0,x_1]
    x_1 : y_1 = y[x_1]                      y[x_0,x_1,x_1]
                          y[x_1,x_1] = y_1'                    y[x_0,x_1,x_1,x_1]
    x_1 : y_1'                              y[x_1,x_1,x_1] = y_1''/2!
                          y[x_1,x_1] = y_1'
    x_1 : y_1''

Table 7.3. A schematic for calculating the coefficients of $p_3(x)$ when a second derivative is specified.

In the above example we have made one fundamental assumption, namely that $y[x_0, \ldots, x_k]$ is a continuous function of its arguments. That is, suppose we have $n{+}1$ distinct points again, but (to be perverse) we denote the first abscissa by $\xi$, i.e., $\xi = x_0$. Then

$$p_n(x) = y[\xi] + y[\xi, x_1](x - \xi) + y[\xi, x_1, x_2](x - \xi)(x - x_1) + \cdots + y[\xi, x_1, x_2, \ldots, x_{n-1}, x_n](x - \xi)(x - x_1)\cdots(x - x_{n-1}).$$

Now suppose we let $\xi \to x_k$ for some $k \in \mathbb{N}[1,n]$. Then we expect the coefficients to vary in a continuous fashion as long as the data points originated from a "smooth enough" function (since the limits must exist). That is, if two abscissas are the same then we require $y \in C^1[x_{\min}, x_{\max}]$; if three abscissas are the same then we require $y \in C^2[x_{\min}, x_{\max}]$; etc. We will not prove this assumption now; it is done in gory detail in Theorem 8.1. However, it is not needed in the theorem below, even though it makes the theorem easier to understand.

We now show that this limiting process really works. First, we reiterate that we are now using a special ordering for our data $(x_k, y_k)$.

Notation: We require that $x_i \le x_{i+1}$ for all $i \in \mathbb{N}[0, n{-}1]$. Also, if $x_i = x_{i+1} = \cdots = x_{i+j}$ are all the abscissas with the same value, then $y_{i+l} = y_i^{(l)}$ for $l \in \mathbb{N}[0,j]$.

Theorem 7.4. Let the $n{+}1$ points be given by $(x_i, y_i)$, $i \in \mathbb{N}[0,n]$, where the data is given in the format just described. Let $p_n \in \Pi_n$ be the unique polynomial which interpolates the data. Then the coefficients of $p_n(x)$ using eq. (7.14) are calculated by the recursive formula

$$y[x_j, x_{j+1}, \ldots, x_{j+k}] = \begin{cases} \dfrac{y[x_{j+1}, x_{j+2}, \ldots, x_{j+k}] - y[x_j, x_{j+1}, \ldots, x_{j+k-1}]}{x_{j+k} - x_j} & \text{if } x_{j+k} \ne x_j, \\[2ex] \dfrac{y_{j+k}}{k!} = \dfrac{y_j^{(k)}}{k!} & \text{if } x_{j+k} = x_j, \end{cases} \tag{7.22a}$$

for $j \in \mathbb{N}[0, n{-}k]$, where $k = 1, 2, \ldots, n$. The recursion begins with

$$y[x_j] = y_j \quad\text{if } j = 0 \text{ or } x_{j-1} \ne x_j. \tag{7.22b}$$

Proof: Without loss of generality we can assume that $j = 0$ since, if it is not, we simply let the data points be $(x_{j+i}, y_{j+i})$, $i \in \mathbb{N}[0, n{-}j]$. We prove this theorem by induction. The theorem is true for $n = 1$, since we have already shown in Example 7.3 that

$$y[x_0, x_1] = \begin{cases} \dfrac{y[x_1] - y[x_0]}{x_1 - x_0} & \text{if } x_1 \ne x_0, \\[2ex] \dfrac{y_0'}{1!} & \text{if } x_1 = x_0. \end{cases}$$

Now assume it is true for $n \in \mathbb{N}[1, N{-}1]$, and we will prove it for $n = N$. Let us get the case $x_0 = x_1 = \cdots = x_N$ out of the way first. In this case

$$p_N(x) = y_0 + \frac{y_0'}{1!}(x - x_0) + \frac{y_0''}{2!}(x - x_0)^2 + \cdots + \frac{y_0^{(N)}}{N!}(x - x_0)^N, \tag{7.23}$$

so that obviously $y[x_0, x_1, \ldots, x_N] = y_0^{(N)}/N! = y_N/N!$. Now we assume that $x_N \ne x_0$. Let $p_{N-1} \in \Pi_{N-1}$ interpolate the data $(x_i, y_i)$, $i \in \mathbb{N}[0, N{-}1]$, let $q_{N-1} \in \Pi_{N-1}$ interpolate the data $(x_i, y_i)$, $i \in \mathbb{N}[1, N]$, and define $p_N \in \Pi_N$ by

$$p_N(x) = \frac{x - x_0}{x_N - x_0}\, q_{N-1}(x) + \frac{x_N - x}{x_N - x_0}\, p_{N-1}(x).$$

Note: To calculate $p_{N-1}$ and $q_{N-1}$ we use two sequences of $N$ data points from our sequence of $N{+}1$ data points, and we have to be careful how we interpret these sequences. For example, suppose $N = 4$ and our sequence is $(x_0, y_0)$, $(x_0, y_0')$, $(x_0, y_0'')$, $(x_3, y_3)$, $(x_3, y_3')$. Then $p_3$ is calculated from the sequence $(x_0, y_0)$, $(x_0, y_0')$, $(x_0, y_0'')$, $(x_3, y_3)$, and $q_3$ is calculated from the sequence $(x_0, y_0)$, $(x_0, y_0')$, $(x_3, y_3)$, $(x_3, y_3')$. Because of this difficulty, we have to consider many more cases than in Theorem 7.2.

We now show that $p_N$ interpolates all the data. Suppose that we have a distinct abscissa $x_k$; that is, either $k = 0$ and $x_1 > x_0$, or $k = N$ and $x_{N-1} < x_N$, or $x_{k-1} < x_k < x_{k+1}$. The proof is immediate in this case, by exactly the same argument as in the proof of Theorem 7.2. We have just handled the case when an abscissa is distinct. Thus, we now need only consider abscissas which are not distinct. Let $x_i = x_{i+1} = \cdots = x_{i+j}$ be all the abscissas with this value, where $j > 0$. We will show that

$$p_N^{(l)}(x_{i+l}) = y_i^{(l)} \quad\text{for } l = 0, \ldots, j. \tag{7.24}$$

We have to consider a number of cases. First, we prove eq. (7.24) when $l = 0$. If $i = 0$ then $p_N(x_0) = p_{N-1}(x_0) = y_0$. If $i > 0$ then $p_{N-1}(x_i) = q_{N-1}(x_i) = y_i$, and so $p_N(x_i) = y_i$. We have just handled the case $l = 0$. Thus, we now need only consider $l > 0$, so we need to calculate $p_N^{(l)}(x)$. It is

$$p_N^{(l)}(x) = \frac{x - x_0}{x_N - x_0}\, q_{N-1}^{(l)}(x) + \frac{x_N - x}{x_N - x_0}\, p_{N-1}^{(l)}(x) + l\, \frac{q_{N-1}^{(l-1)}(x) - p_{N-1}^{(l-1)}(x)}{x_N - x_0}.$$

If $i = 0$ then $p_N^{(l)}(x_l) = p_{N-1}^{(l)}(x_l) = y_l$, since $q_{N-1}^{(l-1)}(x_l) = p_{N-1}^{(l-1)}(x_l)$ and $x_l = x_0$. If $i > 0$ and $i + l < N$, it is again true since $q_{N-1}^{(l)}(x_{i+l}) = p_{N-1}^{(l)}(x_{i+l})$ and $q_{N-1}^{(l-1)}(x_{i+l}) = p_{N-1}^{(l-1)}(x_{i+l})$. Finally, if $i + l = N$ it is again true since $p_N^{(l)}(x_N) = q_{N-1}^{(l)}(x_N)$, $q_{N-1}^{(l-1)}(x_N) = p_{N-1}^{(l-1)}(x_N)$, and $x = x_N$.

The Fortran code which does all this is the following.

      DO 10 I = 0,N
         A(I) = Y(I)
   10 CONTINUE
      DO 30 I = 1,N
         ALAST = A(I-1)
         DO 20 K = I,N
            IF ( X(K-I) .NE. X(K) ) THEN
               TEMP = ( A(K) - ALAST ) / ( X(K) - X(K-I) )
               ALAST = A(K)
               A(K) = TEMP
            ELSE
               A(K) = A(K)/I
            ENDIF
   20    CONTINUE
   30 CONTINUE

Algorithm 7.4. The calculation of the coefficients in Newton's form of $p_n(x)$ when the abscissas can be repeated. This is a generalization of Algorithm 7.3.

Note that this algorithm is a generalization of Algorithm 7.3. However, Algorithm 7.3 was written in a particularly simple form, so the two do not appear to be as similar as they really are. You should always use Algorithm 7.4, since you never know when you might want to calculate an osculatory polynomial. There is a slight overhead in this algorithm because of the extra temporary variables and the IF test; however, this is very small compared to the versatility this algorithm allows.

Warning: We must be very careful when using Algorithm 7.4 with repeated abscissas. For example, suppose that $x_1 = x_2 = x_3$. Then, analytically, the IF test "IF ( X(K-I) .NE. X(K) )" in the algorithm is false when K = 2 and K = 3. However, numerically it might not be false unless all three abscissas are calculated by exactly the same formula: two expressions that are equal analytically can produce floating-point results that differ in the last bit, and the test would then mistakenly take the divided-difference branch and divide by this tiny, meaningless difference. Thus, before using this algorithm it is wise to include the lines of code

      X(2) = X(1)
      X(3) = X(1)

or

      DO 10 I = 2,3
         X(I) = X(1)
   10 CONTINUE

In this way we are guaranteed that the IF tests are false.
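As a sketch of how Algorithm 7.4 is used in practice (our own driver; the array contents follow the repeated-abscissa data convention above), the following program computes the coefficients for the Hermite data $p_3(0) = 0$, $p_3'(0) = 1$, $p_3(1) = 1$, $p_3'(1) = 0$ and should print 0, 1, 0, -1, matching the worked example above.

      PROGRAM OSCUL
C     Drive Algorithm 7.4 on Hermite data: repeated abscissas in X,
C     values and first derivatives interleaved in A.
      INTEGER N, I, K
      PARAMETER ( N = 3 )
      DOUBLE PRECISION X(0:N), A(0:N), ALAST, TEMP
      DATA X / 0.D0, 0.D0, 1.D0, 1.D0 /
      DATA A / 0.D0, 1.D0, 1.D0, 0.D0 /
      DO 30 I = 1,N
         ALAST = A(I-1)
         DO 20 K = I,N
            IF ( X(K-I) .NE. X(K) ) THEN
               TEMP = ( A(K) - ALAST ) / ( X(K) - X(K-I) )
               ALAST = A(K)
               A(K) = TEMP
            ELSE
C              Repeated abscissas: use the stored derivative.
               A(K) = A(K)/I
            ENDIF
   20    CONTINUE
   30 CONTINUE
      PRINT *, (A(I), I = 0,N)
      END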

7.2.5. Calculating Derivatives of Polynomials

We have now completed our discussion of how to calculate $p_n(x)$ in Newton's form. However, one question still remains. Since it is difficult to differentiate $p_n(x)$ in Newton's form, how can we easily calculate derivatives of this polynomial at specified points, i.e., $p_n^{(j)}(z)$? To see the difficulty, simply differentiate

$$p_3(x) = 8 + 14(x-1) - 6(x-1)(x-3) - 2(x-1)(x-3)(x-4)$$

from Example 7.1. Even calculating $p_3'(x)$ is quite tedious, and the result is no longer in Newton's form.

It would be easy to calculate derivatives if $p_n(x)$ were written in the power form (7.8) or the shifted power form (7.10). It would be particularly easy if the $\bar{x}$ in the shifted power form were exactly $z$. That is, suppose $p_n(x) = \sum_{j=0}^{n} a_j (x - z)^j$. Then $p_n^{(j)}(z) = j!\, a_j$ for $j \in \mathbb{N}[0,n]$.

We can go part way in converting from Newton's form to the shifted power form in the following way. Suppose we change the abscissas from $\{x_0, x_1, x_2, \ldots, x_{n-1}, x_n\}$ to $\{z, x_0, x_1, x_2, \ldots, x_{n-2}, x_{n-1}\}$. Then it is trivial to calculate $p_n(z)$, because

$$p_n(x) = y[z] + y[z, x_0](x - z) + y[z, x_0, x_1](x - z)(x - x_0) + y[z, x_0, x_1, x_2](x - z)(x - x_0)(x - x_1) + \cdots + y[z, x_0, x_1, \ldots, x_{n-2}, x_{n-1}](x - z)(x - x_0)(x - x_1)\cdots(x - x_{n-2})$$

and so $p_n(z) = y[z]$. Next, suppose we further change the abscissas to $\{z, z, x_0, x_1, \ldots, x_{n-3}, x_{n-2}\}$. Then it is trivial to calculate $p_n'(z)$, because

$$p_n(x) = y[z] + y[z, z](x - z) + y[z, z, x_0](x - z)^2 + y[z, z, x_0, x_1](x - z)^2(x - x_0) + \cdots + y[z, z, x_0, \ldots, x_{n-3}, x_{n-2}](x - z)^2(x - x_0)\cdots(x - x_{n-3})$$

and so $p_n'(z) = y[z, z]$. If we want to evaluate $p_n''(z)$ we need only operate once more on the abscissas to obtain $\{z, z, z, x_0, \ldots, x_{n-4}, x_{n-3}\}$, and then $p_n''(z) = 2!\, y[z, z, z]$. Obviously, we can continue this procedure ad infinitum. All that remains is to find an algorithm which changes the abscissas in this way. An example will make this algorithm clear.

Example 7.1 (continued). (cont. from page 478) Consider the cubic polynomial

$$p_3(x) = y[x_0] + y[x_0,x_1](x-x_0) + y[x_0,x_1,x_2](x-x_0)(x-x_1) + y[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2),$$

which we will write for simplicity as

$$p_3(x) = a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + a_3(x-x_0)(x-x_1)(x-x_2). \tag{7.25}$$

We will show the steps needed in transforming the abscissas from $\{x_0, x_1, x_2, x_3\}$ to $\{z, x_0, x_1, x_2\}$. The procedure is quite simple. Begin with the last term on the right-hand side of eq. (7.25), and rewrite $x - x_2$ as $(x - z) + (z - x_2)$. Transfer $z - x_2$ to the third term of $p_3(x)$, where it modifies the coefficient $a_2$. Next, in the third term rewrite $x - x_1$ as $(x - z) + (z - x_1)$ and transfer $z - x_1$ to the second term, where it modifies $a_1$. Finally, rewrite $x - x_0$ in the second term as $(x - z) + (z - x_0)$ and transfer $z - x_0$ to the first term, where it modifies $a_0$. All these steps are shown explicitly below.

$$\begin{aligned} p_3(x) &= a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + a_3(x-x_0)(x-x_1)\bigl[(x-z) + (z-x_2)\bigr] \\ &= a_0 + a_1(x-x_0) + \bigl[a_2 + a_3(z-x_2)\bigr](x-x_0)(x-x_1) + b_3(x-z)(x-x_0)(x-x_1) &&\text{where } b_3 = a_3 \\ &= a_0 + a_1(x-x_0) + b_2(x-x_0)\bigl[(x-z) + (z-x_1)\bigr] + b_3(x-z)(x-x_0)(x-x_1) &&\text{where } b_2 = a_2 + b_3(z-x_2) \\ &= a_0 + \bigl[a_1 + b_2(z-x_1)\bigr](x-x_0) + b_2(x-z)(x-x_0) + b_3(x-z)(x-x_0)(x-x_1) \\ &= a_0 + b_1\bigl[(x-z) + (z-x_0)\bigr] + b_2(x-z)(x-x_0) + b_3(x-z)(x-x_0)(x-x_1) &&\text{where } b_1 = a_1 + b_2(z-x_1) \\ &= \bigl[a_0 + b_1(z-x_0)\bigr] + b_1(x-z) + b_2(x-z)(x-x_0) + b_3(x-z)(x-x_0)(x-x_1) \\ &= b_0 + b_1(x-z) + b_2(x-z)(x-x_0) + b_3(x-z)(x-x_0)(x-x_1) &&\text{where } b_0 = a_0 + b_1(z-x_0). \end{aligned}$$

The recursion relation for the coefficients is

$$b_3 = a_3, \qquad b_j = a_j + b_{j+1}(z - x_j) \quad\text{for } j = 2, 1, 0.$$

This procedure is quite similar to Algorithm 7.2. The difference is that in the algorithm the intermediate coefficients are not saved; only $b_0$ is known when the algorithm finishes. Thus, in general we switch the coefficients from

$$p_n(x) = a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + \cdots + a_n(x-x_0)(x-x_1)\cdots(x-x_{n-1})$$

to

$$p_n(x) = b_0 + b_1(x-z) + b_2(x-z)(x-x_0) + \cdots + b_n(x-z)(x-x_0)\cdots(x-x_{n-2})$$

by

$$b_n = a_n, \qquad b_j = a_j + b_{j+1}(z - x_j) \quad\text{for } j = n{-}1, n{-}2, \ldots, 0.$$

We will not prove that this recursion relation is true in general, but it is easily done by induction. The algorithm which is used to calculate the new coefficients is a slight modification of Algorithm 7.2, namely

      B(N) = A(N)
      DO 20 I = N-1,0,-1
         B(I) = (Z - X(I))*B(I+1) + A(I)
   20 CONTINUE

Algorithm 7.5. Modifying the coefficients of $p_n(x)$ from the abscissas $x_0, x_1, \ldots, x_{n-1}, x_n$ (array A) to $z, x_0, \ldots, x_{n-2}, x_{n-1}$ (array B). Each time this algorithm is called, $z$ is added at the left end of the set of abscissas and the rightmost abscissa is removed.

Note that if we only want to calculate $p_n(z)$, then the new array B is not needed, and we can simply use Algorithm 7.2. However, to calculate $p_n^{(j)}(z)$ for $j \in \mathbb{N}[1,n]$ we must use Algorithm 7.5. To calculate $p_n^{(j)}(z)$ we simply apply Algorithm 7.5 $j{+}1$ times and read off the answer as

$$p_n^{(j)}(z) = j!\, b_j.$$

The complete algorithm follows, where J $\ge 0$ is the desired derivative.

      IF ( J .GT. N ) THEN
         PJZ = 0.
      ELSE
         DO 10 I = 0,N
            B(I) = A(I)
   10    CONTINUE
         FACTRL = 1.
         DO 30 JD = 0,J
            FACTRL = FACTRL*MAX( JD, 1 )
            DO 20 I = N-1,0,-1
               IF ( I .GE. JD ) THEN
                  B(I) = (Z - X(I-JD))*B(I+1) + B(I)
               ENDIF
   20       CONTINUE
   30    CONTINUE
         PJZ = FACTRL*B(J)
      ENDIF

Algorithm 7.6. The calculation of $p_n^{(j)}(z)$ for $j \in \mathbb{N}[0,\infty)$. The result is stored in PJZ.

The cost of each application of Algorithm 7.5 is $2n \,\&\, n$. Since it takes $j{+}1$ applications to calculate $p_n^{(j)}(z)$, the total cost of evaluating the $j$-th derivative of the interpolating polynomial is $2n(j{+}1) \,\&\, n(j{+}1)$, which is quite acceptable.
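To close this subsection, here is a small driver (our own sketch; all names are ours) that applies Algorithm 7.6 to the polynomial of Example 7.1. Since $p_3(x) = -2x^3 + 10x^2$, the exact answer is $p_3'(2) = -6\cdot 4 + 20\cdot 2 = 16$, which the program should print.

      PROGRAM DERIV
C     Evaluate p_3'(2) with Algorithm 7.6.  A holds the Newton
C     coefficients 8, 14, -6, -2 for the abscissas 1, 3, 4, 5.
      INTEGER N, J, I, JD
      PARAMETER ( N = 3, J = 1 )
      DOUBLE PRECISION X(0:N), A(0:N), B(0:N), Z, FACTRL, PJZ
      DATA X / 1.D0, 3.D0, 4.D0, 5.D0 /
      DATA A / 8.D0, 14.D0, -6.D0, -2.D0 /
      Z = 2.D0
      DO 10 I = 0,N
         B(I) = A(I)
   10 CONTINUE
      FACTRL = 1.D0
      DO 30 JD = 0,J
         FACTRL = FACTRL*MAX( JD, 1 )
C        One pass of Algorithm 7.5, done in place; the index shift
C        I-JD accounts for the JD copies of Z already inserted.
         DO 20 I = N-1,0,-1
            IF ( I .GE. JD ) B(I) = (Z - X(I-JD))*B(I+1) + B(I)
   20    CONTINUE
   30 CONTINUE
      PJZ = FACTRL*B(J)
      PRINT *, 'dp/dx at 2 =', PJZ
      END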

7.2.6. Uses of Polynomial Interpolation

Neville's algorithm is a way to evaluate the interpolating polynomial of a set of data points at one point, directly from the data. Reversing the roles of $x$ and $y$ in the data gives inverse interpolation, which can be used to solve $f(x) = 0$.

Exercises 7.2

1. (a) Show that $\Pi_n$ is a linear space. (b) Show that the set of all polynomials of degree exactly $n$ is not a linear space unless $n = 0$.

2. Let $p_n(x) = a_0 + a_1 x + \cdots + a_n x^n$ and consider the following algorithm for evaluating $p_n(x)$ for a given $z$.

      PZ = A(0)
      ZZ = 1.
      DO 10 I = 1,N
         ZZ = Z*ZZ
         PZ = PZ + A(I)*ZZ
   10 CONTINUE

Algorithm 7.7. An alternative to Horner's method for evaluating $p_n(z)$. On output, $p_n(z) = $ PZ. It is not commonly used because it is slower.

(a) Determine the cost of evaluating $p_n(z)$ by Algorithm 7.7 and compare it with the cost of using Horner's method, Algorithm 7.1. (b) Determine the stability of this algorithm. For simplicity, assume that $\{a_k\}$ and $z$ can be stored exactly in the computer; thus, you only need to consider the errors in doing the arithmetical operations. (c) Repeat part (b) for Horner's method.

3. Evaluate the polynomial $p(x) = x^5 + 2x^4 + 3x^2 + x - 1$ at $x = 3$ by hand. Do this in two ways. First, calculate $p(3)$ directly (that is, evaluate each term separately, so $2x^4$ is evaluated as $2\cdot 3\cdot 3\cdot 3\cdot 3$, etc.). Second, use Horner's method. Determine how many additions and multiplications are required for each method.

4. Apply the algorithm for derivatives of the Newton polynomial form to compute $p'(-2)$ if

$$p(x) = 1 + (x-1)\bigl[2 + (x-2)\bigl[3 + (x-3)[4]\bigr]\bigr].$$

That is, use the algorithm to rewrite $p(x)$ as $c_0 + (x+2)\bigl[c_1 + (x+2)\bigl[c_2 + (x-1)[c_3]\bigr]\bigr]$.

5. (P) Let $p_{10}(x) = (x-1)^{10}$ and consider the evaluation of this polynomial for $x \in [0,2]$. Choose $z \in [0,2]$ which can be stored exactly in the computer and determine the actual round-off errors (consider both absolute and relative errors) using both Algorithm 7.7 and Horner's method. Are the actual round-off errors "approximately" equal to those you calculated in Problem 2? Hint: You do not need to consider all $z \in [0,2]$. Instead, explain why the interval $[0,2]$ can be divided into a small number of subintervals in which the round-off characteristics of $p_{10}(z)$ can be expected to be "approximately" equal. Then choose a few values from each subinterval and compare "theory and experiment".

6. Prove that

$$f[x_0, x_1, \ldots, x_n] = \sum_{k=0}^{n} \frac{f(x_k)}{\pi_n'(x_k)}.$$

Hint: Find the leading coefficient of the polynomial expressed in Lagrange form.

7. Use Problem 6 to calculate

$$\lim_{x_1 \to x_0} f[x_0, x_1, \ldots, x_n],$$

where $x_2, x_3, \ldots, x_n$ are kept fixed. Then calculate

$$\lim_{x_2 \to x_1 \to x_0} f[x_0, x_1, x_2].$$


More information

Numerical Analysis Exam with Solutions

Numerical Analysis Exam with Solutions Numerical Analysis Exam with Solutions Richard T. Bumby Fall 000 June 13, 001 You are expected to have books, notes and calculators available, but computers of telephones are not to be used during the

More information

Chapter 4: Interpolation and Approximation. October 28, 2005

Chapter 4: Interpolation and Approximation. October 28, 2005 Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Department of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004

Department of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004 Department of Applied Mathematics and Theoretical Physics AMA 204 Numerical analysis Exam Winter 2004 The best six answers will be credited All questions carry equal marks Answer all parts of each question

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ).

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ). 1 Interpolation: The method of constructing new data points within the range of a finite set of known data points That is if (x i, y i ), i = 1, N are known, with y i the dependent variable and x i [x

More information

The WENO Method for Non-Equidistant Meshes

The WENO Method for Non-Equidistant Meshes The WENO Method for Non-Equidistant Meshes Philip Rupp September 11, 01, Berlin Contents 1 Introduction 1.1 Settings and Conditions...................... The WENO Schemes 4.1 The Interpolation Problem.....................

More information

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv Math 1270 Honors ODE I Fall, 2008 Class notes # 1 We have learned how to study nonlinear systems x 0 = F (x; y) y 0 = G (x; y) (1) by linearizing around equilibrium points. If (x 0 ; y 0 ) is an equilibrium

More information

Polynomial Interpolation Part II

Polynomial Interpolation Part II Polynomial Interpolation Part II Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Exercise Session

More information

Function approximation

Function approximation Week 9: Monday, Mar 26 Function approximation A common task in scientific computing is to approximate a function. The approximated function might be available only through tabulated data, or it may be

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 7 Interpolation Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

Linear Algebra, 4th day, Thursday 7/1/04 REU Info:

Linear Algebra, 4th day, Thursday 7/1/04 REU Info: Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45 Two hours MATH20602 To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER NUMERICAL ANALYSIS 1 29 May 2015 9:45 11:45 Answer THREE of the FOUR questions. If more

More information

Abstract Minimal degree interpolation spaces with respect to a nite set of

Abstract Minimal degree interpolation spaces with respect to a nite set of Numerische Mathematik Manuscript-Nr. (will be inserted by hand later) Polynomial interpolation of minimal degree Thomas Sauer Mathematical Institute, University Erlangen{Nuremberg, Bismarckstr. 1 1, 90537

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

Math 61CM - Solutions to homework 6

Math 61CM - Solutions to homework 6 Math 61CM - Solutions to homework 6 Cédric De Groote November 5 th, 2018 Problem 1: (i) Give an example of a metric space X such that not all Cauchy sequences in X are convergent. (ii) Let X be a metric

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 7 Interconnected

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033

More information

Interpolation and Approximation

Interpolation and Approximation Interpolation and Approximation The Basic Problem: Approximate a continuous function f(x), by a polynomial p(x), over [a, b]. f(x) may only be known in tabular form. f(x) may be expensive to compute. Definition:

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Chapter 2 Interpolation

Chapter 2 Interpolation Chapter 2 Interpolation Experiments usually produce a discrete set of data points (x i, f i ) which represent the value of a function f (x) for a finite set of arguments {x 0...x n }. If additional data

More information

Series Solutions. 8.1 Taylor Polynomials

Series Solutions. 8.1 Taylor Polynomials 8 Series Solutions 8.1 Taylor Polynomials Polynomial functions, as we have seen, are well behaved. They are continuous everywhere, and have continuous derivatives of all orders everywhere. It also turns

More information

. Consider the linear system dx= =! = " a b # x y! : (a) For what values of a and b do solutions oscillate (i.e., do both x(t) and y(t) pass through z

. Consider the linear system dx= =! =  a b # x y! : (a) For what values of a and b do solutions oscillate (i.e., do both x(t) and y(t) pass through z Preliminary Exam { 1999 Morning Part Instructions: No calculators or crib sheets are allowed. Do as many problems as you can. Justify your answers as much as you can but very briey. 1. For positive real

More information

Dr. Relja Vulanovic Professor of Mathematics Kent State University at Stark c 2008

Dr. Relja Vulanovic Professor of Mathematics Kent State University at Stark c 2008 MATH-LITERACY MANUAL Dr. Relja Vulanovic Professor of Mathematics Kent State University at Stark c 2008 2 Algebraic Epressions 2.1 Terms and Factors 29 2.2 Types of Algebraic Epressions 32 2.3 Transforming

More information

Group, Rings, and Fields Rahul Pandharipande. I. Sets Let S be a set. The Cartesian product S S is the set of ordered pairs of elements of S,

Group, Rings, and Fields Rahul Pandharipande. I. Sets Let S be a set. The Cartesian product S S is the set of ordered pairs of elements of S, Group, Rings, and Fields Rahul Pandharipande I. Sets Let S be a set. The Cartesian product S S is the set of ordered pairs of elements of S, A binary operation φ is a function, S S = {(x, y) x, y S}. φ

More information

3.1 Interpolation and the Lagrange Polynomial

3.1 Interpolation and the Lagrange Polynomial MATH 4073 Chapter 3 Interpolation and Polynomial Approximation Fall 2003 1 Consider a sample x x 0 x 1 x n y y 0 y 1 y n. Can we get a function out of discrete data above that gives a reasonable estimate

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

INTERPOLATION. and y i = cos x i, i = 0, 1, 2 This gives us the three points. Now find a quadratic polynomial. p(x) = a 0 + a 1 x + a 2 x 2.

INTERPOLATION. and y i = cos x i, i = 0, 1, 2 This gives us the three points. Now find a quadratic polynomial. p(x) = a 0 + a 1 x + a 2 x 2. INTERPOLATION Interpolation is a process of finding a formula (often a polynomial) whose graph will pass through a given set of points (x, y). As an example, consider defining and x 0 = 0, x 1 = π/4, x

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Minimum and maximum values *

Minimum and maximum values * OpenStax-CNX module: m17417 1 Minimum and maximum values * Sunil Kumar Singh This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 2.0 In general context, a

More information

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy August 25, 2017 A group of residents each needs a residency in some hospital. A group of hospitals each need some number (one

More information

Roots of Unity, Cyclotomic Polynomials and Applications

Roots of Unity, Cyclotomic Polynomials and Applications Swiss Mathematical Olympiad smo osm Roots of Unity, Cyclotomic Polynomials and Applications The task to be done here is to give an introduction to the topics in the title. This paper is neither complete

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

Practical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software

Practical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software Practical Algebra A Step-by-step Approach Brought to you by Softmath, producers of Algebrator Software 2 Algebra e-book Table of Contents Chapter 1 Algebraic expressions 5 1 Collecting... like terms 5

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f 1 Bernstein's analytic continuation of complex powers c1995, Paul Garrett, garrettmath.umn.edu version January 27, 1998 Analytic continuation of distributions Statement of the theorems on analytic continuation

More information

1. Introduction. Consider a single cell in a mobile phone system. A \call setup" is a request for achannel by an idle customer presently in the cell t

1. Introduction. Consider a single cell in a mobile phone system. A \call setup is a request for achannel by an idle customer presently in the cell t Heavy Trac Limit for a Mobile Phone System Loss Model Philip J. Fleming and Alexander Stolyar Motorola, Inc. Arlington Heights, IL Burton Simon Department of Mathematics University of Colorado at Denver

More information

Groups. 3.1 Definition of a Group. Introduction. Definition 3.1 Group

Groups. 3.1 Definition of a Group. Introduction. Definition 3.1 Group C H A P T E R t h r e E Groups Introduction Some of the standard topics in elementary group theory are treated in this chapter: subgroups, cyclic groups, isomorphisms, and homomorphisms. In the development

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

LECTURE 15 + C+F. = A 11 x 1x1 +2A 12 x 1x2 + A 22 x 2x2 + B 1 x 1 + B 2 x 2. xi y 2 = ~y 2 (x 1 ;x 2 ) x 2 = ~x 2 (y 1 ;y 2 1

LECTURE 15 + C+F. = A 11 x 1x1 +2A 12 x 1x2 + A 22 x 2x2 + B 1 x 1 + B 2 x 2. xi y 2 = ~y 2 (x 1 ;x 2 ) x 2 = ~x 2 (y 1 ;y 2  1 LECTURE 5 Characteristics and the Classication of Second Order Linear PDEs Let us now consider the case of a general second order linear PDE in two variables; (5.) where (5.) 0 P i;j A ij xix j + P i,

More information

Polynomial Interpolation

Polynomial Interpolation Polynomial Interpolation (Com S 477/577 Notes) Yan-Bin Jia Sep 1, 017 1 Interpolation Problem In practice, often we can measure a physical process or quantity (e.g., temperature) at a number of points

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Lecture 20: Lagrange Interpolation and Neville s Algorithm. for I will pass through thee, saith the LORD. Amos 5:17

Lecture 20: Lagrange Interpolation and Neville s Algorithm. for I will pass through thee, saith the LORD. Amos 5:17 Lecture 20: Lagrange Interpolation and Neville s Algorithm for I will pass through thee, saith the LORD. Amos 5:17 1. Introduction Perhaps the easiest way to describe a shape is to select some points on

More information

IES Parque Lineal - 2º ESO

IES Parque Lineal - 2º ESO UNIT5. ALGEBRA Contenido 1. Algebraic expressions.... 1 Worksheet: algebraic expressions.... 2 2. Monomials.... 3 Worksheet: monomials.... 5 3. Polynomials... 6 Worksheet: polynomials... 9 4. Factorising....

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Damped harmonic motion

Damped harmonic motion Damped harmonic motion March 3, 016 Harmonic motion is studied in the presence of a damping force proportional to the velocity. The complex method is introduced, and the different cases of under-damping,

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information

Contents. 1 Introduction to Dynamics. 1.1 Examples of Dynamical Systems

Contents. 1 Introduction to Dynamics. 1.1 Examples of Dynamical Systems Dynamics, Chaos, and Fractals (part 1): Introduction to Dynamics (by Evan Dummit, 2015, v. 1.07) Contents 1 Introduction to Dynamics 1 1.1 Examples of Dynamical Systems......................................

More information

Monte Carlo Methods for Statistical Inference: Variance Reduction Techniques

Monte Carlo Methods for Statistical Inference: Variance Reduction Techniques Monte Carlo Methods for Statistical Inference: Variance Reduction Techniques Hung Chen hchen@math.ntu.edu.tw Department of Mathematics National Taiwan University 3rd March 2004 Meet at NS 104 On Wednesday

More information

Week 4: Differentiation for Functions of Several Variables

Week 4: Differentiation for Functions of Several Variables Week 4: Differentiation for Functions of Several Variables Introduction A functions of several variables f : U R n R is a rule that assigns a real number to each point in U, a subset of R n, For the next

More information

1 Ordinary points and singular points

1 Ordinary points and singular points Math 70 honors, Fall, 008 Notes 8 More on series solutions, and an introduction to \orthogonal polynomials" Homework at end Revised, /4. Some changes and additions starting on page 7. Ordinary points and

More information

Interpolation and polynomial approximation Interpolation

Interpolation and polynomial approximation Interpolation Outline Interpolation and polynomial approximation Interpolation Lagrange Cubic Approximation Bézier curves B- 1 Some vocabulary (again ;) Control point : Geometric point that serves as support to the

More information

BSM510 Numerical Analysis

BSM510 Numerical Analysis BSM510 Numerical Analysis Polynomial Interpolation Prof. Manar Mohaisen Department of EEC Engineering Review of Precedent Lecture Polynomial Regression Multiple Linear Regression Nonlinear Regression Lecture

More information

Normed and Banach spaces

Normed and Banach spaces Normed and Banach spaces László Erdős Nov 11, 2006 1 Norms We recall that the norm is a function on a vectorspace V, : V R +, satisfying the following properties x + y x + y cx = c x x = 0 x = 0 We always

More information

CHAPTER 5. Higher Order Linear ODE'S

CHAPTER 5. Higher Order Linear ODE'S A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 2 A COLLECTION OF HANDOUTS ON SCALAR LINEAR ORDINARY

More information

Math Precalculus I University of Hawai i at Mānoa Spring

Math Precalculus I University of Hawai i at Mānoa Spring Math 135 - Precalculus I University of Hawai i at Mānoa Spring - 2014 Created for Math 135, Spring 2008 by Lukasz Grabarek and Michael Joyce Send comments and corrections to lukasz@math.hawaii.edu Contents

More information

Convergence for periodic Fourier series

Convergence for periodic Fourier series Chapter 8 Convergence for periodic Fourier series We are now in a position to address the Fourier series hypothesis that functions can realized as the infinite sum of trigonometric functions discussed

More information

Algorithmic Lie Symmetry Analysis and Group Classication for Ordinary Dierential Equations

Algorithmic Lie Symmetry Analysis and Group Classication for Ordinary Dierential Equations dmitry.lyakhov@kaust.edu.sa Symbolic Computations May 4, 2018 1 / 25 Algorithmic Lie Symmetry Analysis and Group Classication for Ordinary Dierential Equations Dmitry A. Lyakhov 1 1 Computational Sciences

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

Equations (3) and (6) form a complete solution as long as the set of functions fy n (x)g exist that satisfy the Properties One and Two given by equati

Equations (3) and (6) form a complete solution as long as the set of functions fy n (x)g exist that satisfy the Properties One and Two given by equati PHYS/GEOL/APS 661 Earth and Planetary Physics I Eigenfunction Expansions, Sturm-Liouville Problems, and Green's Functions 1. Eigenfunction Expansions In seismology in general, as in the simple oscillating

More information

Taylor series. Chapter Introduction From geometric series to Taylor polynomials

Taylor series. Chapter Introduction From geometric series to Taylor polynomials Chapter 2 Taylor series 2. Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Such series can be described informally as infinite

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week: December 4 8 Deadline to hand in the homework: your exercise class on week January 5. Exercises with solutions ) Let H, K be Hilbert spaces, and A : H K be a linear

More information

Constrained Leja points and the numerical solution of the constrained energy problem

Constrained Leja points and the numerical solution of the constrained energy problem Journal of Computational and Applied Mathematics 131 (2001) 427 444 www.elsevier.nl/locate/cam Constrained Leja points and the numerical solution of the constrained energy problem Dan I. Coroian, Peter

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

Finite Elements. Colin Cotter. January 18, Colin Cotter FEM

Finite Elements. Colin Cotter. January 18, Colin Cotter FEM Finite Elements January 18, 2019 The finite element Given a triangulation T of a domain Ω, finite element spaces are defined according to 1. the form the functions take (usually polynomial) when restricted

More information

7.5 Partial Fractions and Integration

7.5 Partial Fractions and Integration 650 CHPTER 7. DVNCED INTEGRTION TECHNIQUES 7.5 Partial Fractions and Integration In this section we are interested in techniques for computing integrals of the form P(x) dx, (7.49) Q(x) where P(x) and

More information

Numerical Methods and Computation Prof. S.R.K. Iyengar Department of Mathematics Indian Institute of Technology Delhi

Numerical Methods and Computation Prof. S.R.K. Iyengar Department of Mathematics Indian Institute of Technology Delhi Numerical Methods and Computation Prof. S.R.K. Iyengar Department of Mathematics Indian Institute of Technology Delhi Lecture No - 27 Interpolation and Approximation (Continued.) In our last lecture we

More information

Max-Planck-Institut fur Mathematik in den Naturwissenschaften Leipzig H 2 -matrix approximation of integral operators by interpolation by Wolfgang Hackbusch and Steen Borm Preprint no.: 04 200 H 2 -Matrix

More information

Chapter 13 - Inverse Functions

Chapter 13 - Inverse Functions Chapter 13 - Inverse Functions In the second part of this book on Calculus, we shall be devoting our study to another type of function, the exponential function and its close relative the Sine function.

More information

Physics 70007, Fall 2009 Answers to Final Exam

Physics 70007, Fall 2009 Answers to Final Exam Physics 70007, Fall 009 Answers to Final Exam December 17, 009 1. Quantum mechanical pictures a Demonstrate that if the commutation relation [A, B] ic is valid in any of the three Schrodinger, Heisenberg,

More information