Scientific Computing
Chapter 2: Interpolation and Approximation
Paisan Nakmahachalasint, Paisan.N@chula.ac.th
Contents
1. Polynomial interpolation
2. Spline functions
3. The best approximation problem
4. Chebyshev polynomials
5. A near-minimax approximation method
6. Legendre polynomials and least squares approximation
1. Polynomial Interpolation

Mathematical functions sometimes cannot be evaluated exactly: in a computer we are limited to the elementary arithmetic operations +, −, ×, and ÷. Combinations of these operations allow the evaluation of only polynomials and rational functions; all other functions are approximated in terms of them.

Interpolation is the process of finding and evaluating a function whose graph passes through a set of given points. The interpolating function is usually chosen from a restricted class of functions, and polynomials are the most commonly used class. Spline functions refer to the class of piecewise polynomial functions.
1. Polynomial Interpolation (cont'd)

Interpolation was originally used to tabulate common mathematical functions, but that use is far less important today. Interpolation remains an important tool in producing computable approximations to commonly used functions. To numerically integrate or differentiate a function, we often replace the function with a simpler approximating expression. Interpolation is also widely used in computer graphics to produce smooth curves and surfaces when the geometric objects of interest are given at only a discrete set of data points.
1.1 Linear Interpolation

Linear interpolation is the construction of a straight line passing through two given data points. It is used here as an introduction to more general polynomial interpolation. Given two points (x_0, y_0) and (x_1, y_1) with x_0 ≠ x_1, the straight line drawn through them is the graph of the linear polynomial

    P_1(x) = ((x_1 − x) y_0 + (x − x_0) y_1) / (x_1 − x_0)

This function interpolates the value y_i at the point x_i, i = 0, 1; that is, P_1(x_i) = y_i for i = 0, 1.
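As a minimal sketch, the formula above translates directly into code (the function name `linear_interp` is my own choice, not from the slides):

```python
def linear_interp(x0, y0, x1, y1, x):
    """Evaluate the linear interpolant P_1 through (x0, y0) and (x1, y1) at x."""
    if x0 == x1:
        raise ValueError("nodes must be distinct")
    # P_1(x) = ((x1 - x)*y0 + (x - x0)*y1) / (x1 - x0)
    return ((x1 - x) * y0 + (x - x0) * y1) / (x1 - x0)
```

At the nodes it reproduces the data exactly, and halfway between them it returns the average of the endpoint values, e.g. `linear_interp(0.0, 1.0, 2.0, 3.0, 1.0)` gives `2.0`.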
1.2 Quadratic Interpolation

When the graph of the data is curved rather than straight, we can do better by using a polynomial of degree greater than 1. Given three data points (x_0, y_0), (x_1, y_1), and (x_2, y_2), with x_0, x_1, x_2 being distinct numbers, the quadratic polynomial passing through these points is constructed as follows:

    P_2(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x)
1.2 (cont'd)

    L_0(x) = ((x − x_1)(x − x_2)) / ((x_0 − x_1)(x_0 − x_2))
    L_1(x) = ((x − x_0)(x − x_2)) / ((x_1 − x_0)(x_1 − x_2))
    L_2(x) = ((x − x_0)(x − x_1)) / ((x_2 − x_0)(x_2 − x_1))

We see that L_i(x_j) = δ_ij for 0 ≤ i, j ≤ 2, and hence P_2(x_i) = y_i, i = 0, 1, 2.
1.3 Higher-Degree Interpolation

Consider the general case where n + 1 data points (x_0, y_0), ..., (x_n, y_n) are given with all the x_i's distinct. The interpolating polynomial of degree ≤ n is given by

    P_n(x) = y_0 L_0(x) + y_1 L_1(x) + ... + y_n L_n(x)

with each L_i(x) a polynomial of degree n given by

    L_i(x) = ((x − x_0) ... (x − x_{i−1})(x − x_{i+1}) ... (x − x_n)) / ((x_i − x_0) ... (x_i − x_{i−1})(x_i − x_{i+1}) ... (x_i − x_n))

for i = 0, 1, ..., n. This formula is called Lagrange's formula.
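A short sketch of Lagrange's formula in code (the names are mine, not the slides'); it evaluates P_n(x) directly from the data:

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i in range(len(xs)):
        # L_i(x) = product over j != i of (x - x_j) / (x_i - x_j)
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total
```

Since interpolating y = x² at the three nodes 0, 1, 2 reproduces the quadratic exactly, evaluating the interpolant at x = 3 gives 9.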
1.3 (cont'd)

It is easy to show that L_i(x_j) = δ_ij for 0 ≤ i, j ≤ n, and in addition P_n(x_j) = y_j, j = 0, 1, ..., n. With linear interpolation, it was obvious that there was only one straight line passing through two given data points. But with n + 1 data points, it is less obvious that there is only one interpolating polynomial of degree ≤ n whose graph passes through the points.

Question 1. Prove that there is only one polynomial P_n(x) among all polynomials of degree ≤ n that satisfies the interpolating conditions P_n(x_i) = y_i, i = 0, 1, ..., n, where the x_i's are distinct.
1.4 Divided Differences

Lagrange's formula is well suited for many theoretical uses of interpolation, but it is less desirable when actually computing the value of an interpolating polynomial. For example, knowing P_2(x) does not lead to a less expensive way to evaluate P_3(x). For this reason, we introduce an alternative and more easily calculable formulation for the interpolation polynomials P_1(x), P_2(x), ..., P_n(x). As a needed preliminary to this new formula, we introduce a discrete version of the derivative of a function f(x).
1.4 (cont'd)

Let x_0 and x_1 be distinct numbers, and define

    f[x_0, x_1] = (f(x_1) − f(x_0)) / (x_1 − x_0)

This is called the first-order divided difference of f(x). If f(x) is differentiable on an interval containing x_0 and x_1, then the mean value theorem implies f[x_0, x_1] = f′(c) for some c between x_0 and x_1. If x_0 and x_1 are close together, then

    f[x_0, x_1] ≈ f′((x_0 + x_1)/2)
1.4 (cont'd)

We define higher-order divided differences recursively in terms of lower-order ones. Let x_0, x_1, and x_2 be distinct real numbers, and define

    f[x_0, x_1, x_2] = (f[x_1, x_2] − f[x_0, x_1]) / (x_2 − x_0)

This is called the second-order divided difference. For x_0, x_1, x_2, and x_3 distinct, define

    f[x_0, x_1, x_2, x_3] = (f[x_1, x_2, x_3] − f[x_0, x_1, x_2]) / (x_3 − x_0)
1.4 (cont'd)

In general, let x_0, x_1, ..., x_n be n + 1 distinct numbers, and define

    f[x_0, x_1, ..., x_n] = (f[x_1, ..., x_n] − f[x_0, ..., x_{n−1}]) / (x_n − x_0)

This is the divided difference of order n, sometimes also called the Newton divided difference.
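The recursion can be sketched as a small routine that turns the data into the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n] (the function name and the in-place table update are my own choices):

```python
def divided_differences(xs, ys):
    """Return the divided-difference coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    coef = list(ys)
    # overwrite the table column by column; after pass k,
    # coef[i] holds f[x_{i-k}, ..., x_i]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef
```

For f(x) = x² at the nodes 0, 1, 3 this yields f[x_0] = 0, f[x_0,x_1] = 1, f[x_0,x_1,x_2] = 1; the second-order divided difference of a quadratic with leading coefficient 1 is always 1, in agreement with Theorem 1 below (f″/2! = 1).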
1.4 (cont'd)

Theorem 1. Let n ≥ 1, and assume f(x) is n times continuously differentiable on some interval α ≤ x ≤ β. Let x_0, x_1, ..., x_n be n + 1 distinct numbers in [α, β]. Then

    f[x_0, x_1, ..., x_n] = (1/n!) f^(n)(c)

for some unknown point c lying between the minimum and maximum of the numbers x_0, x_1, ..., x_n.
1.5 Properties of Divided Differences

Divided differences have a number of special properties that can simplify work with them. First, let (i_0, i_1, ..., i_n) denote a permutation (rearrangement) of the integers (0, 1, ..., n). Then it can be shown that

    f[x_{i_0}, x_{i_1}, ..., x_{i_n}] = f[x_0, x_1, ..., x_n]

The original definition seems to imply that the order of x_0, x_1, ..., x_n makes a difference in the calculation of f[x_0, x_1, ..., x_n], but the above equation asserts that this is not so. The proof in general is nontrivial; we consider only the cases n = 1 and n = 2.
1.5 (cont'd)

For n = 1,

    f[x_1, x_0] = (f(x_0) − f(x_1)) / (x_0 − x_1) = (f(x_1) − f(x_0)) / (x_1 − x_0) = f[x_0, x_1]

For n = 2,

    f[x_0, x_1, x_2] = f(x_0) / ((x_0 − x_1)(x_0 − x_2)) + f(x_1) / ((x_1 − x_0)(x_1 − x_2)) + f(x_2) / ((x_2 − x_0)(x_2 − x_1))

Interchanging x_0, x_1, x_2 interchanges the order of the terms, but the sum remains the same.
1.5 (cont'd)

A second useful property is that the definition of divided differences can be extended to the case where some or all of the node points x_i coincide, provided that f(x) is sufficiently differentiable. For example, define

    f[x_0, x_0] = lim_{x_1 → x_0} (f(x_1) − f(x_0)) / (x_1 − x_0) = f′(x_0)

For arbitrary n ≥ 1, letting all of the nodes approach x_0 leads to the definition

    f[x_0, x_0, ..., x_0] = (1/n!) f^(n)(x_0)
1.5 (cont'd)

For cases where only some of the nodes are coincident, we can rearrange the variables to extend the definition. For example,

    f[x_0, x_1, x_0] = f[x_0, x_0, x_1] = (f[x_0, x_1] − f[x_0, x_0]) / (x_1 − x_0) = (f[x_0, x_1] − f′(x_0)) / (x_1 − x_0)
1.6 Newton's Divided Differences Interpolation Formula

Lagrange's formula is very inconvenient for actual calculation of a sequence of interpolation polynomials of increasing degree. We avoid this problem by using the divided differences of the data being interpolated to calculate P_n(x). Let P_n(x) denote the polynomial interpolating f(x_i) at x_i for i = 0, 1, ..., n. Thus deg(P_n) ≤ n and

    P_n(x_i) = y_i,  i = 0, 1, ..., n
1.6 (cont'd)

The interpolation polynomials can then be written as follows:

    P_1(x) = f(x_0) + (x − x_0) f[x_0, x_1]
    P_2(x) = f(x_0) + (x − x_0) f[x_0, x_1] + (x − x_0)(x − x_1) f[x_0, x_1, x_2]
    ...
    P_n(x) = f(x_0) + (x − x_0) f[x_0, x_1] + ... + (x − x_0)(x − x_1) ... (x − x_{n−1}) f[x_0, ..., x_n]

This is called Newton's divided difference formula for the interpolating polynomial.
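A sketch combining the divided-difference table with a nested (Horner-like) evaluation of the Newton form; the function name and structure are my own:

```python
def newton_interp(xs, ys, x):
    """Evaluate Newton's divided difference form of the interpolant at x."""
    n = len(xs)
    # build the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    # nested evaluation: ((c_n (x - x_{n-1}) + c_{n-1})(x - x_{n-2}) + ...) + c_0
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

Since the interpolant through four points of a cubic is that cubic itself, interpolating y = x³ at the nodes 0, 1, 2, 4 and evaluating at x = 3 gives 27.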
1.6 (cont'd)

Note that for k ≥ 0,

    P_{k+1}(x) = P_k(x) + (x − x_0) ... (x − x_k) f[x_0, ..., x_{k+1}]

Thus we can go from degree k to degree k + 1 with a minimum of calculation, once the divided difference coefficients have been computed. We prove the formula for P_1(x). Consider P_1(x_0) = f(x_0) and

    P_1(x_1) = f(x_0) + (x_1 − x_0) [(f(x_1) − f(x_0)) / (x_1 − x_0)] = f(x_0) + [f(x_1) − f(x_0)] = f(x_1)
1.6 (cont'd)

Thus deg(P_1) ≤ 1, and P_1 satisfies the interpolation conditions. By the uniqueness of polynomial interpolation, Newton's divided difference formula for P_1(x) is the linear interpolation polynomial to f(x) at x_0, x_1.

Question 2. Prove that Newton's divided difference formula for P_2(x) is the quadratic interpolation polynomial to f(x) at x_0, x_1, x_2.
2.1 Spline Interpolation

The simplest method of interpolating data is to connect the node points by straight-line segments. This is called piecewise linear interpolation.

[Figure: data points and the piecewise linear interpolant connecting them]
2.1 (cont'd)

We may use the interpolating polynomial P_6(x) instead.

[Figure: the degree-6 interpolating polynomial through the same data]

Although it is a smooth graph, it is quite different from that of the piecewise linear interpolation.
2.1 (cont'd)

To pose the problem more generally, suppose n data points (x_i, y_i), i = 1, ..., n, are given. For simplicity, assume x_1 < x_2 < ... < x_n, and let a = x_1, b = x_n. We seek a function s(x) defined on [a, b] that interpolates the data:

    s(x_i) = y_i,  i = 1, ..., n

For smoothness of s(x), we require that s′(x) and s″(x) be continuous. In addition, the curve should connect the data points (x_i, y_i) in the same way as the piecewise linear interpolant does.
2.2 Natural Cubic Spline

If we require that s′(x) not change too rapidly between node points, then s″(x) should be as small as possible; more precisely, we require that

    ∫_a^b [s″(x)]² dx

be as small as possible. There is a unique solution s(x) satisfying the following:
- s(x) is a polynomial of degree 3 on each subinterval [x_{i−1}, x_i], i = 2, 3, ..., n;
- s(x), s′(x), and s″(x) are continuous for a ≤ x ≤ b;
- s″(x_1) = s″(x_n) = 0.

The function s(x) is called the natural cubic spline.
2.2 (cont'd)

We now construct s(x). Introduce the variables M_1, ..., M_n with

    M_i ≡ s″(x_i),  i = 1, 2, ..., n

We first express s(x) in terms of the (unknown) values M_i; then we produce a system of linear equations from which the values M_i can be calculated.
2.2 (cont'd)

Since s(x) is cubic on each interval [x_{i−1}, x_i], the function s″(x) is linear on that interval. A linear function is determined by its values at two points, and we use

    s″(x_{i−1}) = M_{i−1},  s″(x_i) = M_i

Then for x_{i−1} ≤ x ≤ x_i,

    s″(x) = ((x_i − x) M_{i−1} + (x − x_{i−1}) M_i) / (x_i − x_{i−1})
2.2 (cont'd)

We now form the second antiderivative of s″(x) on [x_{i−1}, x_i] and apply the interpolating conditions

    s(x_{i−1}) = y_{i−1},  s(x_i) = y_i

After quite a bit of manipulation, this results in

    s(x) = ((x_i − x)³ M_{i−1} + (x − x_{i−1})³ M_i) / (6 (x_i − x_{i−1}))
         + ((x_i − x) y_{i−1} + (x − x_{i−1}) y_i) / (x_i − x_{i−1})
         − ((x_i − x_{i−1}) / 6) [(x_i − x) M_{i−1} + (x − x_{i−1}) M_i]
2.2 (cont'd)

This formula applies on each of the intervals [x_1, x_2], ..., [x_{n−1}, x_n]. The formulas for adjacent intervals [x_{i−1}, x_i] and [x_i, x_{i+1}] agree at their common point x = x_i because of the interpolating condition s(x_i) = y_i, which is common to both definitions. This implies that s(x) is continuous over the entire interval [a, b]. Similarly, s″(x) is continuous on [a, b]. To ensure the continuity of s′(x) over [a, b], the formulas for s′(x) on [x_{i−1}, x_i] and [x_i, x_{i+1}] are required to give the same value at their common point x = x_i, for i = 2, 3, ..., n − 1.
2.2 (cont'd)

After a great deal of simplification, this leads to the following system of linear equations:

    ((x_i − x_{i−1}) / 6) M_{i−1} + ((x_{i+1} − x_{i−1}) / 3) M_i + ((x_{i+1} − x_i) / 6) M_{i+1}
        = (y_{i+1} − y_i) / (x_{i+1} − x_i) − (y_i − y_{i−1}) / (x_i − x_{i−1})

for i = 2, 3, ..., n − 1. These n − 2 equations, together with M_1 = M_n = 0, lead to the values M_1, ..., M_n and then to the interpolating function s(x). The above system is called a tridiagonal system.
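Because the system is tridiagonal, it can be solved in O(n) operations with forward elimination and back substitution (the Thomas algorithm). A sketch, using 0-based indices in code where the slides use 1-based ones (function and variable names are my own):

```python
def natural_spline_moments(xs, ys):
    """Solve the tridiagonal system for the moments M_i = s''(x_i).

    The endpoint moments are fixed at zero (natural spline); the
    interior moments come from the n - 2 equations above.
    """
    n = len(xs)
    M = [0.0] * n
    if n < 3:
        return M
    sub, diag, sup, rhs = [], [], [], []
    for i in range(1, n - 1):
        h0 = xs[i] - xs[i - 1]
        h1 = xs[i + 1] - xs[i]
        sub.append(h0 / 6.0)
        diag.append((h0 + h1) / 3.0)
        sup.append(h1 / 6.0)
        rhs.append((ys[i + 1] - ys[i]) / h1 - (ys[i] - ys[i - 1]) / h0)
    # forward elimination on the tridiagonal system
    m = len(diag)
    for i in range(1, m):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    # back substitution (interior unknown j maps to M[j + 1])
    M[m] = rhs[m - 1] / diag[m - 1]
    for j in range(m - 2, -1, -1):
        M[j + 1] = (rhs[j] - sup[j] * M[j + 2]) / diag[j]
    return M
```

Worked by hand for the data of Question 3, {(1, 1), (2, 1/2), (3, 1/3), (4, 1/4)}, the system reduces to (2/3)M_2 + (1/6)M_3 = 1/3 and (1/6)M_2 + (2/3)M_3 = 1/12, giving M = (0, 1/2, 0, 0); the code reproduces this.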
2.2 (cont'd)

Question 3. Calculate the natural cubic spline interpolating the data {(1, 1), (2, 1/2), (3, 1/3), (4, 1/4)}.

Answer.

    s(x) = (1/12) x³ − (1/4) x² − (1/3) x + 3/2,      1 ≤ x ≤ 2
    s(x) = −(1/12) x³ + (3/4) x² − (7/3) x + 17/6,    2 ≤ x ≤ 3
    s(x) = −(1/12) x + 7/12,                          3 ≤ x ≤ 4
2.3 Other Spline Functions

Up to this point, we have not considered the accuracy of the interpolating spline s(x). That is satisfactory when only the data points are known and we just want a smooth curve that looks correct to the eye. But often we want the spline to interpolate a known function, and then we are also interested in accuracy. Let f(x) be given on [a, b]. We consider the case where the interpolation of f(x) is performed at evenly spaced values of x. Let n > 1,

    h = (b − a) / (n − 1),  x_i = a + (i − 1) h,  i = 1, 2, ..., n
2.3 (cont'd)

Let s_n(x) be the natural cubic spline interpolating f(x) at x_1, ..., x_n. Then it can be shown that

    max_{a ≤ x ≤ b} |f(x) − s_n(x)| ≤ c h²

where c depends on f″(a), f″(b), and max_{a ≤ x ≤ b} |f^(4)(x)|. The primary reason that the approximation s_n(x) does not converge more rapidly is that f″(x) is generally nonzero at x = a and x = b, whereas s_n″(a) = s_n″(b) = 0 by definition. For functions f(x) with f″(a) = f″(b) = 0, the right-hand side of the above estimate can be replaced by c h⁴.
2.3 (cont'd)

To improve on s_n(x), we can look at other cubic spline functions s(x) that interpolate f(x). We say that s(x) is a cubic spline on [a, b] if
- s(x) is cubic on each subinterval [x_{i−1}, x_i];
- s(x), s′(x), and s″(x) are all continuous on [a, b].

If s(x) is chosen to satisfy s(x_i) = y_i = f(x_i) for i = 1, ..., n, then we choose endpoint conditions (or boundary conditions) for s(x) that will result in a better approximation to f(x). If possible, require

    s′(x_1) = f′(x_1), s′(x_n) = f′(x_n)    or    s″(x_1) = f″(x_1), s″(x_n) = f″(x_n)
3.1 The Best Approximation Problem

We now look at the concept of the best possible approximation, illustrated with improvements to the Taylor polynomials for f(x) = e^x. Let f(x) be a given function that is continuous on some interval a ≤ x ≤ b. If p(x) is a polynomial, then we are interested in measuring

    E(p) = max_{a ≤ x ≤ b} |f(x) − p(x)|

the maximum possible error in the approximation of f(x) by p(x) on the interval [a, b].
3.1 (cont'd)

For each degree n > 0, define

    ρ_n(f) = min_{deg(p) ≤ n} E(p) = min_{deg(p) ≤ n} [ max_{a ≤ x ≤ b} |f(x) − p(x)| ]

This is the smallest possible value for E(p) that can be attained with a polynomial of degree ≤ n. It is called the minimax error. It can be shown that there is a unique polynomial of degree ≤ n for which the maximum error on [a, b] is ρ_n(f). This polynomial is called the minimax polynomial of order n, and we denote it here by m_n(x).
Example

Let f(x) = e^x on −1 ≤ x ≤ 1, and consider linear polynomial approximations to f(x). The linear Taylor polynomial is t_1(x) = 1 + x, and

    E(t_1) = max_{−1 ≤ x ≤ 1} |e^x − t_1(x)| = e − 2 ≈ 0.7183

The linear minimax polynomial to e^x on [−1, 1] is m_1(x) = a + b x, where

    b = (e − e⁻¹) / 2 ≈ 1.1752,  a = (e − b ln b) / 2 ≈ 1.2643

and

    E(m_1) = max_{−1 ≤ x ≤ 1} |e^x − m_1(x)| = e − a − b ≈ 0.2788
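The claimed error can be checked numerically. A small sketch that samples the error of m_1(x) on a fine grid (the sampling approach and grid size are my own assumptions, not part of the slides):

```python
import math

# linear minimax coefficients for e^x on [-1, 1]
b = (math.e - 1.0 / math.e) / 2.0
a = (math.e - b * math.log(b)) / 2.0

# sample |e^x - (a + b*x)| on a fine grid over [-1, 1]
m = 20000
err = max(abs(math.exp(-1.0 + 2.0 * k / m) - (a + b * (-1.0 + 2.0 * k / m)))
          for k in range(m + 1))
```

The sampled maximum agrees with the closed-form value e − a − b, and the error attains this same magnitude with alternating sign at x = −1, x = ln b, and x = 1, illustrating the equioscillation of the minimax error.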
Example

Again let f(x) = e^x for −1 ≤ x ≤ 1. The cubic minimax polynomial to e^x on [−1, 1] is m_3(x) = a_0 + a_1 x + a_2 x² + a_3 x³, with numerically computed coefficients, compared to the Taylor approximation

    t_3(x) = 1 + x + x²/2 + x³/6

The maximum error E(m_3) is much smaller than E(t_3).
3.1 (cont'd)

These examples illustrate several general properties of the minimax approximation:
- m_n(x) is usually a significant improvement on the Taylor polynomial t_n(x).
- The larger values of the error f(x) − m_n(x) are dispersed over [a, b], whereas the Taylor error f(x) − t_n(x) is much smaller around the point of expansion.
- The error f(x) − m_n(x) is oscillatory on [a, b]. It can be shown that this error changes sign at least n + 1 times inside the interval [a, b], and the sizes of the oscillations are equal.
3.2 Accuracy of the Minimax Approximation

For f(x) = e^x, it appears that m_n(x) is very accurate for relatively small values of n. This can be made more precise for some commonly occurring functions such as e^x, cos x, and others. Assume f(x) has an infinite number of continuous derivatives on [a, b]. Then the minimax error satisfies

    ρ_n(f) ≤ ([(b − a)/2]^{n+1} / ((n + 1)! 2^n)) max_{a ≤ x ≤ b} |f^(n+1)(x)|

This bound will not always become smaller with increasing n, but it gives a fairly accurate bound for many common functions f(x).
Example

Let f(x) = e^x for −1 ≤ x ≤ 1. Then

    ρ_n(e^x) ≤ e / ((n + 1)! 2^n)

[Table: this bound compared with the actual ρ_n(f) for several values of n]
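A quick sketch that tabulates the bound e / ((n + 1)! 2^n) for small n, computed directly from the formula (the function name is my own):

```python
import math

def minimax_bound(n):
    """Upper bound e / ((n+1)! * 2^n) for rho_n(e^x) on [-1, 1]."""
    return math.e / (math.factorial(n + 1) * 2 ** n)

# the bound decays rapidly with n
bounds = {n: minimax_bound(n) for n in range(1, 5)}
```

For example, minimax_bound(3) = e/192 ≈ 0.0142, already quite small, and the bound shrinks quickly as n grows.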
4.1 Chebyshev Polynomials

We introduce a family of polynomials, the Chebyshev polynomials, that are used in many parts of numerical analysis and, more generally, in mathematics and physics. A few of their properties will be given in this section, and then they will be used in the next section to produce a polynomial approximation close to the minimax approximation.
4.1 (cont'd)

For an integer n ≥ 0, define the function

    T_n(x) = cos(n cos⁻¹ x),  −1 ≤ x ≤ 1

This may not appear to be a polynomial, but we will show that it is a polynomial of degree n. To simplify the manipulation of the definition, we introduce

    θ = cos⁻¹ x  or  x = cos θ,  0 ≤ θ ≤ π

Then T_n(x) = cos nθ.
Example

[Figure: graphs of the first few Chebyshev polynomials on −1 ≤ x ≤ 1]

    T_0(x) = cos 0 = 1
    T_1(x) = cos θ = x
    T_2(x) = cos 2θ = 2 cos² θ − 1 = 2x² − 1
    T_3(x) = cos 3θ = 4 cos³ θ − 3 cos θ = 4x³ − 3x
4.2 The Triple Recursion Relation

Recall the trigonometric addition formulas

    cos(α ± β) = cos α cos β ∓ sin α sin β

For any n ≥ 1, apply these identities to get

    T_{n+1}(x) = cos[(n + 1)θ] = cos nθ cos θ − sin nθ sin θ
    T_{n−1}(x) = cos[(n − 1)θ] = cos nθ cos θ + sin nθ sin θ

Adding these two equations gives

    T_{n+1}(x) + T_{n−1}(x) = 2 cos θ cos nθ = 2x T_n(x)

This is called the triple recursion relation for the Chebyshev polynomials.
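Rearranged as T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x), the relation gives a cheap way to evaluate T_n(x) without any trigonometry. A minimal sketch (the function name is mine):

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the triple recursion T_{n+1} = 2x*T_n - T_{n-1}."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x   # T_0 and T_1
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t
```

The result matches the defining formula cos(n cos⁻¹ x) on [−1, 1]; for instance chebyshev_T(3, 0.5) = 4(0.5)³ − 3(0.5) = −1.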
4.3 The Minimum Size Property

Note that |T_n(x)| ≤ 1 for −1 ≤ x ≤ 1 and for all n ≥ 0. Also note that

    T_n(x) = 2^{n−1} x^n + lower-degree terms,  n ≥ 1

Introduce a modified version of T_n(x):

    T̃_n(x) = (1/2^{n−1}) T_n(x) = x^n + lower-degree terms,  n ≥ 1

Then |T̃_n(x)| ≤ 1/2^{n−1} for −1 ≤ x ≤ 1 and for all n ≥ 1.
4.3 (cont'd)

A polynomial whose highest-degree term has a coefficient of 1 is called a monic polynomial. The monic polynomial T̃_n(x) has size 1/2^{n−1} on [−1, 1], and this becomes smaller as the degree n increases. But max_{−1 ≤ x ≤ 1} |x^n| = 1; thus x^n is a monic polynomial whose size does not change with increasing n.

Theorem. Let n ≥ 1 be an integer, and consider all possible monic polynomials of degree n. Then the degree-n monic polynomial with the smallest maximum absolute value on [−1, 1] is the modified Chebyshev polynomial T̃_n(x), and its maximum value on [−1, 1] is 1/2^{n−1}.
5.1 A Near-Minimax Approximation Method

For polynomial approximations to f(x), we consider using an interpolating polynomial. The most obvious choice is an evenly spaced set of interpolation node points on the interval a ≤ x ≤ b of interest. Unfortunately, this often gives an interpolating polynomial that is a very poor approximation to f(x). To keep things simple, choose the special approximation interval −1 ≤ x ≤ 1, and initially consider an approximating polynomial of degree n = 3. Let x_0, x_1, x_2, x_3 be the interpolation node points in [−1, 1], and let c_3(x) denote the polynomial of degree ≤ 3 that interpolates f(x) at x_0, x_1, x_2, x_3.
5.1 (cont'd)

It can be shown that the interpolation error is given by

    f(x) − c_3(x) = ((x − x_0)(x − x_1)(x − x_2)(x − x_3) / 4!) f^(4)(c_x)

for −1 ≤ x ≤ 1 and for some c_x in [−1, 1]. The nodes are to be chosen so that the maximum value of |f(x) − c_3(x)| on [−1, 1] is made as small as possible. The only quantity on the right side that we can use to influence the size of the error is the degree-4 polynomial

    ω(x) = (x − x_0)(x − x_1)(x − x_2)(x − x_3)
5.1 (cont'd)

We want to choose the interpolation points x_0, x_1, x_2, x_3 so that max_{−1 ≤ x ≤ 1} |ω(x)| is as small as possible. It is easy to see that

    ω(x) = x⁴ + lower-degree terms

so ω(x) is a monic polynomial of degree 4. By the theorem of the preceding section, the smallest possible value for the maximum is obtained with

    ω(x) = T̃_4(x) = T_4(x) / 2³ = x⁴ − x² + 1/8
5.1 (cont'd)

This choice of ω(x) implicitly defines the interpolation node points: they are the zeros of ω(x), which in turn are the zeros of T_4(x). Here T_4(x) = cos 4θ with x = cos θ, which is zero when

    4θ = ±π/2, ±3π/2, ±5π/2, ±7π/2, ...
    x = cos(π/8), cos(3π/8), cos(5π/8), cos(7π/8), ...

using cos(−θ) = cos θ. The first four values of x are distinct, but the successive values repeat the first four. Thus the nodes are approximately ±0.92388 and ±0.38268.
Example

Let f(x) = e^x on [−1, 1]. By evaluating c_3(x) at a large number of points, we find that max_{−1 ≤ x ≤ 1} |e^x − c_3(x)| is only slightly larger than the minimax error ρ_3(e^x); hence the name near-minimax approximation.
5.1 (cont'd)

The construction of c_3(x) generalizes to finding a degree-n near-minimax approximation to f(x) on [−1, 1]. The interpolation error is given by

    f(x) − c_n(x) = ((x − x_0) ... (x − x_n) / (n + 1)!) f^(n+1)(c_x),  −1 ≤ x ≤ 1

and we seek to minimize

    max_{−1 ≤ x ≤ 1} |(x − x_0) ... (x − x_n)|
5.1 (cont'd)

The polynomial being minimized is monic of degree n + 1. The minimum is attained by the monic polynomial

    T̃_{n+1}(x) = (1/2^n) T_{n+1}(x)

Thus the interpolation nodes are the zeros of T_{n+1}(x), which are given by

    x_i = cos((2i + 1)π / (2n + 2)),  i = 0, 1, ..., n

The near-minimax approximation c_n(x) of degree n is obtained by interpolating f(x) at these n + 1 nodes on [−1, 1].
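The node formula is a one-liner; a sketch that also checks the nodes really are zeros of T_{n+1} (the function name is mine):

```python
import math

def chebyshev_nodes(n):
    """Zeros of T_{n+1}: x_i = cos((2i+1)*pi/(2n+2)), i = 0..n."""
    return [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]
```

For n = 3 this returns the four nodes ±0.92388..., ±0.38268... used in the degree-3 construction above; substituting any of them into T_4(x) = 8x⁴ − 8x² + 1 gives 0 to rounding error.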
5.2 Odd and Even Functions

We say that f(x) is even if f(−x) = f(x) for all x; such functions have graphs symmetric about the y-axis. We say f(x) is odd if f(−x) = −f(x) for all x; such functions are said to be symmetric about the origin. If f(x) is even or odd on [−1, 1], then n should be chosen in a more restricted way: if f(x) is odd, choose n even; if f(x) is even, choose n odd. This will result in c_n(x) having degree only n − 1, but it will give an appropriate approximation.
6.1 Least Squares Approximation

We now turn to polynomial approximation with a small average error over the interval of approximation. If a function f(x) is approximated by p(x) on [a, b], then the average error is defined by

    E(p; f) = [ (1/(b − a)) ∫_a^b (f(x) − p(x))² dx ]^{1/2}

This is also called the root-mean-square error. Note that minimizing E(p; f) over different choices of p(x) is equivalent to minimizing ∫_a^b (f(x) − p(x))² dx.
Example

Let f(x) = e^x, and let p(x) = α_0 + α_1 x. We want to choose α_0, α_1 so as to minimize the integral

    g(α_0, α_1) = ∫_{−1}^{1} (e^x − α_0 − α_1 x)² dx

Its minimum can be found by solving

    ∂g/∂α_0 = 0 and ∂g/∂α_1 = 0

Solving this system of equations yields

    α_0 = (e − e⁻¹)/2 ≈ 1.1752 and α_1 = 3e⁻¹ ≈ 1.1036
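The two conditions are the normal equations; on [−1, 1] they decouple because ∫x dx = 0, giving 2α_0 = ∫e^x dx and (2/3)α_1 = ∫x e^x dx. A sketch that recovers the closed-form coefficients with simple trapezoidal quadrature (the quadrature choice and panel count are my own assumptions):

```python
import math

def trapz(f, a, b, m=4000):
    """Composite trapezoidal approximation of the integral of f on [a, b]."""
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b))
    for k in range(1, m):
        s += f(a + k * h)
    return s * h

# decoupled normal equations on [-1, 1]:
#   2*alpha0 = integral of e^x,  (2/3)*alpha1 = integral of x*e^x
alpha0 = trapz(math.exp, -1.0, 1.0) / 2.0
alpha1 = 1.5 * trapz(lambda x: x * math.exp(x), -1.0, 1.0)
```

The computed alpha0 ≈ (e − e⁻¹)/2 and alpha1 ≈ 3e⁻¹ match the values above to quadrature accuracy.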
Example (cont'd)

We denote the resulting linear approximation by l_1(x) = α_0 + α_1 x. It is called the best linear approximation to e^x in the sense of least squares. The error is max_{−1 ≤ x ≤ 1} |e^x − l_1(x)|.

[Table: maximum error and RMS error for the Taylor t_1(x), least squares l_1(x), Chebyshev c_1(x), and minimax m_1(x) approximations]
6.1 (cont'd)

Now consider E(p; f) for a general function f(x) on [a, b]. We seek a polynomial p(x) of degree ≤ n that minimizes E(p; f). Writing

    p(x) = α_0 + α_1 x + ... + α_n x^n

we define

    g(α_0, ..., α_n) = ∫_a^b (f(x) − α_0 − α_1 x − ... − α_n x^n)² dx

leading to a set of n + 1 equations that must be satisfied by a minimizing set α_0, α_1, ..., α_n for g.
6.1 (cont'd)

A minimizer for g(α_0, ..., α_n) can be found from the conditions

    ∂g/∂α_i = 0,  i = 0, 1, ..., n

For the special case [a, b] = [0, 1], the linear system is

    Σ_{j=0}^{n} α_j / (i + j + 1) = ∫_0^1 x^i f(x) dx,  i = 0, 1, ..., n

This linear system (whose coefficient matrix is the Hilbert matrix) is ill-conditioned and difficult to solve accurately, even for n as small as 5.
6.2 Legendre Polynomials

A better approach to minimizing E(p; f) requires the introduction of a special set of polynomials, the Legendre polynomials, defined by P_0(x) = 1 and

    P_n(x) = (1 / (n! 2^n)) dⁿ/dxⁿ [(x² − 1)ⁿ],  n = 1, 2, ...

For example,

    P_1(x) = x
    P_2(x) = (3x² − 1) / 2
    P_3(x) = (5x³ − 3x) / 2
    P_4(x) = (35x⁴ − 30x² + 3) / 8
6.2 (cont'd)

We introduce the inner product (f, g) = ∫_a^b f(x) g(x) dx. On [−1, 1], the Legendre polynomials have the following properties:
- deg P_n = n and P_n(1) = 1 for n ≥ 0.
- Triple recursion: P_{n+1}(x) = ((2n + 1)/(n + 1)) x P_n(x) − (n/(n + 1)) P_{n−1}(x), n ≥ 1.
- Orthogonality: (P_i, P_j) = 0 for i ≠ j, and (P_i, P_i) = 2/(2i + 1).
- All zeros of P_n(x) are simple and lie in (−1, 1).
- Every polynomial p(x) of degree ≤ n can be written as p(x) = Σ_{i=0}^{n} β_i P_i(x), with the coefficients β_0, β_1, ..., β_n uniquely determined by p(x).
6.3 Solving for the Least Squares Approximation

It is sufficient to solve the problem on the interval [−1, 1]. Writing p(x) = Σ_{i=0}^{n} β_i P_i(x), we seek to minimize

    g = (f − p, f − p) = (f − Σ_{i=0}^{n} β_i P_i, f − Σ_{i=0}^{n} β_i P_i)

Using the orthogonality of the P_i(x),

    g = (f, f) − 2 Σ_{i=0}^{n} β_i (f, P_i) + Σ_{i=0}^{n} β_i² (P_i, P_i)
      = (f, f) − Σ_{i=0}^{n} (f, P_i)² / (P_i, P_i) + Σ_{i=0}^{n} (P_i, P_i) [β_i − (f, P_i)/(P_i, P_i)]²
6.3 (cont'd)

Hence g is smallest when

    β_i = (f, P_i) / (P_i, P_i),  i = 0, 1, ..., n

and the minimum for this choice of coefficients is

    g = (f, f) − Σ_{i=0}^{n} (f, P_i)² / (P_i, P_i)

We call

    l_n(x) = Σ_{i=0}^{n} ((f, P_i) / (P_i, P_i)) P_i(x)

the least squares approximation of degree n to f(x) on [−1, 1].
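A sketch of the whole procedure: evaluate P_i(x) by the triple recursion from Section 6.2, approximate the inner products (f, P_i) numerically, and use (P_i, P_i) = 2/(2i + 1). The trapezoidal quadrature, its panel count, and the function names are my own assumptions:

```python
import math

def legendre_P(n, x):
    """Evaluate P_n(x) via (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x   # P_0 and P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def least_squares_coeffs(f, n, m=4000):
    """Coefficients beta_i = (f, P_i) / (P_i, P_i) on [-1, 1],
    with (f, P_i) approximated by the composite trapezoidal rule."""
    h = 2.0 / m
    betas = []
    for i in range(n + 1):
        s = 0.5 * (f(-1.0) * legendre_P(i, -1.0) + f(1.0) * legendre_P(i, 1.0))
        for k in range(1, m):
            x = -1.0 + k * h
            s += f(x) * legendre_P(i, x)
        # divide by (P_i, P_i) = 2 / (2i + 1)
        betas.append(s * h * (2 * i + 1) / 2.0)
    return betas
```

For f(x) = x the coefficients come out as β_0 ≈ 0, β_1 ≈ 1, β_2 ≈ 0, since x is already the Legendre polynomial P_1(x).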
Example

For f(x) = e^x on [−1, 1], the cubic least squares approximation is

    l_3(x) = Σ_{i=0}^{3} ((f, P_i) / (P_i, P_i)) P_i(x) = a_0 + a_1 x + a_2 x² + a_3 x³

with numerically computed coefficients.

[Table: maximum error and RMS error for the Taylor t_3(x), least squares l_3(x), Chebyshev c_3(x), and minimax m_3(x) approximations]
More informationMa 530 Power Series II
Ma 530 Power Series II Please note that there is material on power series at Visual Calculus. Some of this material was used as part of the presentation of the topics that follow. Operations on Power Series
More informationChapter 2 Interpolation
Chapter 2 Interpolation Experiments usually produce a discrete set of data points (x i, f i ) which represent the value of a function f (x) for a finite set of arguments {x 0...x n }. If additional data
More informationPower series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) a n z n. n=0
Lecture 22 Power series solutions for 2nd order linear ODE s (not necessarily with constant coefficients) Recall a few facts about power series: a n z n This series in z is centered at z 0. Here z can
More informationInterpolation and extrapolation
Interpolation and extrapolation Alexander Khanov PHYS6260: Experimental Methods is HEP Oklahoma State University October 30, 207 Interpolation/extrapolation vs fitting Formulation of the problem: there
More informationSome notes on Chapter 8: Polynomial and Piecewise-polynomial Interpolation
Some notes on Chapter 8: Polynomial and Piecewise-polynomial Interpolation See your notes. 1. Lagrange Interpolation (8.2) 1 2. Newton Interpolation (8.3) different form of the same polynomial as Lagrange
More informationInterpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning
76 Chapter 7 Interpolation 7.1 Interpolation Definition 7.1.1. Interpolation of a given function f defined on an interval [a,b] by a polynomial p: Given a set of specified points {(t i,y i } n with {t
More informationPolynomial Approximations and Power Series
Polynomial Approximations and Power Series June 24, 206 Tangent Lines One of the first uses of the derivatives is the determination of the tangent as a linear approximation of a differentiable function
More informationNumerical Analysis: Interpolation Part 1
Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,
More information8.2 Discrete Least Squares Approximation
Chapter 8 Approximation Theory 8.1 Introduction Approximation theory involves two types of problems. One arises when a function is given explicitly, but we wish to find a simpler type of function, such
More informationMA2501 Numerical Methods Spring 2015
Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive
More informationLecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 3: Interpolation and Polynomial Approximation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 10, 2015 2 Contents 1.1 Introduction................................ 3 1.1.1
More informationTaylor and Maclaurin Series. Approximating functions using Polynomials.
Taylor and Maclaurin Series Approximating functions using Polynomials. Approximating f x = e x near x = 0 In order to approximate the function f x = e x near x = 0, we can use the tangent line (The Linear
More informationInterpolation APPLIED PROBLEMS. Reading Between the Lines FLY ROCKET FLY, FLY ROCKET FLY WHAT IS INTERPOLATION? Figure Interpolation of discrete data.
WHAT IS INTERPOLATION? Given (x 0,y 0 ), (x,y ), (x n,y n ), find the value of y at a value of x that is not given. Interpolation Reading Between the Lines Figure Interpolation of discrete data. FLY ROCKET
More informationMathematics for Engineers. Numerical mathematics
Mathematics for Engineers Numerical mathematics Integers Determine the largest representable integer with the intmax command. intmax ans = int32 2147483647 2147483647+1 ans = 2.1475e+09 Remark The set
More informationi x i y i
Department of Mathematics MTL107: Numerical Methods and Computations Exercise Set 8: Approximation-Linear Least Squares Polynomial approximation, Chebyshev Polynomial approximation. 1. Compute the linear
More informationChapter 1 Numerical approximation of data : interpolation, least squares method
Chapter 1 Numerical approximation of data : interpolation, least squares method I. Motivation 1 Approximation of functions Evaluation of a function Which functions (f : R R) can be effectively evaluated
More informationLegendre s Equation. PHYS Southern Illinois University. October 18, 2016
Legendre s Equation PHYS 500 - Southern Illinois University October 18, 2016 PHYS 500 - Southern Illinois University Legendre s Equation October 18, 2016 1 / 11 Legendre s Equation Recall We are trying
More informationLecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 3: Polynomial Interpolation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 24, 2013 1.1 Introduction We first look at some examples. Lookup table for f(x) = 2 π x 0 e x2
More informationERROR IN LINEAR INTERPOLATION
ERROR IN LINEAR INTERPOLATION Let P 1 (x) be the linear polynomial interpolating f (x) at x 0 and x 1. Assume f (x) is twice continuously differentiable on an interval [a, b] which contains the points
More informationCS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation
Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80
More informationSPLINE INTERPOLATION
Spline Background SPLINE INTERPOLATION Problem: high degree interpolating polynomials often have extra oscillations. Example: Runge function f(x = 1 1+4x 2, x [ 1, 1]. 1 1/(1+4x 2 and P 8 (x and P 16 (x
More informationTaylor and Maclaurin Series. Approximating functions using Polynomials.
Taylor and Maclaurin Series Approximating functions using Polynomials. Approximating f x = e x near x = 0 In order to approximate the function f x = e x near x = 0, we can use the tangent line (The Linear
More information1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =
Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values
More informationMTH4101 CALCULUS II REVISION NOTES. 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) ax 2 + bx + c = 0. x = b ± b 2 4ac 2a. i = 1.
MTH4101 CALCULUS II REVISION NOTES 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) 1.1 Introduction Types of numbers (natural, integers, rationals, reals) The need to solve quadratic equations:
More information8.5 Taylor Polynomials and Taylor Series
8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:
More information3.4 Introduction to power series
3.4 Introduction to power series Definition 3.4.. A polynomial in the variable x is an expression of the form n a i x i = a 0 + a x + a 2 x 2 + + a n x n + a n x n i=0 or a n x n + a n x n + + a 2 x 2
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i
More informationCALCULUS JIA-MING (FRANK) LIOU
CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion
More informationAP Calculus Chapter 9: Infinite Series
AP Calculus Chapter 9: Infinite Series 9. Sequences a, a 2, a 3, a 4, a 5,... Sequence: A function whose domain is the set of positive integers n = 2 3 4 a n = a a 2 a 3 a 4 terms of the sequence Begin
More informationCHAPTER 4. Interpolation
CHAPTER 4 Interpolation 4.1. Introduction We will cover sections 4.1 through 4.12 in the book. Read section 4.1 in the book on your own. The basic problem of one-dimensional interpolation is this: Given
More informationExam 2. Average: 85.6 Median: 87.0 Maximum: Minimum: 55.0 Standard Deviation: Numerical Methods Fall 2011 Lecture 20
Exam 2 Average: 85.6 Median: 87.0 Maximum: 100.0 Minimum: 55.0 Standard Deviation: 10.42 Fall 2011 1 Today s class Multiple Variable Linear Regression Polynomial Interpolation Lagrange Interpolation Newton
More informationReview of Power Series
Review of Power Series MATH 365 Ordinary Differential Equations J. Robert Buchanan Department of Mathematics Fall 2018 Introduction In addition to the techniques we have studied so far, we may use power
More informationDepartment of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004
Department of Applied Mathematics and Theoretical Physics AMA 204 Numerical analysis Exam Winter 2004 The best six answers will be credited All questions carry equal marks Answer all parts of each question
More informationCurve Fitting and Approximation
Department of Physics Cotton College State University, Panbazar Guwahati 781001, Assam December 6, 2016 Outline Curve Fitting 1 Curve Fitting Outline Curve Fitting 1 Curve Fitting Topics in the Syllabus
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationPUTNAM PROBLEMS SEQUENCES, SERIES AND RECURRENCES. Notes
PUTNAM PROBLEMS SEQUENCES, SERIES AND RECURRENCES Notes. x n+ = ax n has the general solution x n = x a n. 2. x n+ = x n + b has the general solution x n = x + (n )b. 3. x n+ = ax n + b (with a ) can be
More informationyou expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form
Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)
More informationswapneel/207
Partial differential equations Swapneel Mahajan www.math.iitb.ac.in/ swapneel/207 1 1 Power series For a real number x 0 and a sequence (a n ) of real numbers, consider the expression a n (x x 0 ) n =
More informationPURE MATHEMATICS AM 27
AM SYLLABUS (2020) PURE MATHEMATICS AM 27 SYLLABUS 1 Pure Mathematics AM 27 (Available in September ) Syllabus Paper I(3hrs)+Paper II(3hrs) 1. AIMS To prepare students for further studies in Mathematics
More informationInterpolation and Approximation
Interpolation and Approximation The Basic Problem: Approximate a continuous function f(x), by a polynomial p(x), over [a, b]. f(x) may only be known in tabular form. f(x) may be expensive to compute. Definition:
More information1 Lecture 8: Interpolating polynomials.
1 Lecture 8: Interpolating polynomials. 1.1 Horner s method Before turning to the main idea of this part of the course, we consider how to evaluate a polynomial. Recall that a polynomial is an expression
More informationApproximation Theory
Approximation Theory Function approximation is the task of constructing, for a given function, a simpler function so that the difference between the two functions is small and to then provide a quantifiable
More information1 Roots of polynomials
CS348a: Computer Graphics Handout #18 Geometric Modeling Original Handout #13 Stanford University Tuesday, 9 November 1993 Original Lecture #5: 14th October 1993 Topics: Polynomials Scribe: Mark P Kust
More informationNovember 20, Interpolation, Extrapolation & Polynomial Approximation
Interpolation, Extrapolation & Polynomial Approximation November 20, 2016 Introduction In many cases we know the values of a function f (x) at a set of points x 1, x 2,..., x N, but we don t have the analytic
More informationSeries Solutions of Differential Equations
Chapter 6 Series Solutions of Differential Equations In this chapter we consider methods for solving differential equations using power series. Sequences and infinite series are also involved in this treatment.
More information13 Path Planning Cubic Path P 2 P 1. θ 2
13 Path Planning Path planning includes three tasks: 1 Defining a geometric curve for the end-effector between two points. 2 Defining a rotational motion between two orientations. 3 Defining a time function
More informationApplied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight
Applied Numerical Analysis (AE0-I) R. Klees and R.P. Dwight February 018 Contents 1 Preliminaries: Motivation, Computer arithmetic, Taylor series 1 1.1 Numerical Analysis Motivation..........................
More informationMath 4263 Homework Set 1
Homework Set 1 1. Solve the following PDE/BVP 2. Solve the following PDE/BVP 2u t + 3u x = 0 u (x, 0) = sin (x) u x + e x u y = 0 u (0, y) = y 2 3. (a) Find the curves γ : t (x (t), y (t)) such that that
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More information8.7 MacLaurin Polynomials
8.7 maclaurin polynomials 67 8.7 MacLaurin Polynomials In this chapter you have learned to find antiderivatives of a wide variety of elementary functions, but many more such functions fail to have an antiderivative
More informationInfinite Series. 1 Introduction. 2 General discussion on convergence
Infinite Series 1 Introduction I will only cover a few topics in this lecture, choosing to discuss those which I have used over the years. The text covers substantially more material and is available for
More informationQ 0 x if x 0 x x 1. S 1 x if x 1 x x 2. i 0,1,...,n 1, and L x L n 1 x if x n 1 x x n
. - Piecewise Linear-Quadratic Interpolation Piecewise-polynomial Approximation: Problem: Givenn pairs of data points x i, y i, i,,...,n, find a piecewise-polynomial Sx S x if x x x Sx S x if x x x 2 :
More informationFurther Mathematical Methods (Linear Algebra) 2002
Further Mathematical Methods (Linear Algebra) Solutions For Problem Sheet 9 In this problem sheet, we derived a new result about orthogonal projections and used them to find least squares approximations
More informationNumerical Methods of Approximation
Contents 31 Numerical Methods of Approximation 31.1 Polynomial Approximations 2 31.2 Numerical Integration 28 31.3 Numerical Differentiation 58 31.4 Nonlinear Equations 67 Learning outcomes In this Workbook
More informationQ1. Discuss, compare and contrast various curve fitting and interpolation methods
Q1. Discuss, compare and contrast various curve fitting and interpolation methods McMaster University 1 Curve Fitting Problem statement: Given a set of (n + 1) point-pairs {x i,y i }, i = 0,1,... n, find
More informationFunction approximation
Week 9: Monday, Mar 26 Function approximation A common task in scientific computing is to approximate a function. The approximated function might be available only through tabulated data, or it may be
More informationLecture 10 Polynomial interpolation
Lecture 10 Polynomial interpolation Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationn 1 f n 1 c 1 n+1 = c 1 n $ c 1 n 1. After taking logs, this becomes
Root finding: 1 a The points {x n+1, }, {x n, f n }, {x n 1, f n 1 } should be co-linear Say they lie on the line x + y = This gives the relations x n+1 + = x n +f n = x n 1 +f n 1 = Eliminating α and
More informationMath Review for Exam Answer each of the following questions as either True or False. Circle the correct answer.
Math 22 - Review for Exam 3. Answer each of the following questions as either True or False. Circle the correct answer. (a) True/False: If a n > 0 and a n 0, the series a n converges. Soln: False: Let
More informationInfinite series, improper integrals, and Taylor series
Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions
More informationChapter 3: Root Finding. September 26, 2005
Chapter 3: Root Finding September 26, 2005 Outline 1 Root Finding 2 3.1 The Bisection Method 3 3.2 Newton s Method: Derivation and Examples 4 3.3 How To Stop Newton s Method 5 3.4 Application: Division
More informationSOLVED PROBLEMS ON TAYLOR AND MACLAURIN SERIES
SOLVED PROBLEMS ON TAYLOR AND MACLAURIN SERIES TAYLOR AND MACLAURIN SERIES Taylor Series of a function f at x = a is ( f k )( a) ( x a) k k! It is a Power Series centered at a. Maclaurin Series of a function
More informationCompletion Date: Monday February 11, 2008
MATH 4 (R) Winter 8 Intermediate Calculus I Solutions to Problem Set #4 Completion Date: Monday February, 8 Department of Mathematical and Statistical Sciences University of Alberta Question. [Sec..9,
More informationThis ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0
Legendre equation This ODE arises in many physical systems that we shall investigate We choose We then have Substitution gives ( x 2 ) d 2 u du 2x 2 dx dx + ( + )u u x s a λ x λ a du dx λ a λ (λ + s)x
More informationLECTURE 16 GAUSS QUADRATURE In general for Newton-Cotes (equispaced interpolation points/ data points/ integration points/ nodes).
CE 025 - Lecture 6 LECTURE 6 GAUSS QUADRATURE In general for ewton-cotes (equispaced interpolation points/ data points/ integration points/ nodes). x E x S fx dx hw' o f o + w' f + + w' f + E 84 f 0 f
More informationMath Numerical Analysis Mid-Term Test Solutions
Math 400 - Numerical Analysis Mid-Term Test Solutions. Short Answers (a) A sufficient and necessary condition for the bisection method to find a root of f(x) on the interval [a,b] is f(a)f(b) < 0 or f(a)
More informationAP Calculus (BC) Chapter 9 Test No Calculator Section Name: Date: Period:
WORKSHEET: Series, Taylor Series AP Calculus (BC) Chapter 9 Test No Calculator Section Name: Date: Period: 1 Part I. Multiple-Choice Questions (5 points each; please circle the correct answer.) 1. The
More informationPreliminary Examination in Numerical Analysis
Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify
More informationCALCULUS: Math 21C, Fall 2010 Final Exam: Solutions. 1. [25 pts] Do the following series converge or diverge? State clearly which test you use.
CALCULUS: Math 2C, Fall 200 Final Exam: Solutions. [25 pts] Do the following series converge or diverge? State clearly which test you use. (a) (d) n(n + ) ( ) cos n n= n= (e) (b) n= n= [ cos ( ) n n (c)
More informationPower Series Solutions to the Legendre Equation
Department of Mathematics IIT Guwahati The Legendre equation The equation (1 x 2 )y 2xy + α(α + 1)y = 0, (1) where α is any real constant, is called Legendre s equation. When α Z +, the equation has polynomial
More informationPolynomial Interpolation Part II
Polynomial Interpolation Part II Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Exercise Session
More informationInterpolation. 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter
Key References: Interpolation 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter 6. 2. Press, W. et. al. Numerical Recipes in C, Cambridge: Cambridge University Press. Chapter 3
More informationTaylor and Maclaurin Series
Taylor and Maclaurin Series MATH 211, Calculus II J. Robert Buchanan Department of Mathematics Spring 2018 Background We have seen that some power series converge. When they do, we can think of them as
More informationInput: A set (x i -yy i ) data. Output: Function value at arbitrary point x. What for x = 1.2?
Applied Numerical Analysis Interpolation Lecturer: Emad Fatemizadeh Interpolation Input: A set (x i -yy i ) data. Output: Function value at arbitrary point x. 0 1 4 1-3 3 9 What for x = 1.? Interpolation
More informationFIXED POINT ITERATION
FIXED POINT ITERATION The idea of the fixed point iteration methods is to first reformulate a equation to an equivalent fixed point problem: f (x) = 0 x = g(x) and then to use the iteration: with an initial
More informationChapter 3a Topics in differentiation. Problems in differentiation. Problems in differentiation. LC Abueg: mathematical economics
Chapter 3a Topics in differentiation Lectures in Mathematical Economics L Cagandahan Abueg De La Salle University School of Economics Problems in differentiation Problems in differentiation Problem 1.
More informationTaylor and Maclaurin Series. Copyright Cengage Learning. All rights reserved.
11.10 Taylor and Maclaurin Series Copyright Cengage Learning. All rights reserved. We start by supposing that f is any function that can be represented by a power series f(x)= c 0 +c 1 (x a)+c 2 (x a)
More information14 Fourier analysis. Read: Boas Ch. 7.
14 Fourier analysis Read: Boas Ch. 7. 14.1 Function spaces A function can be thought of as an element of a kind of vector space. After all, a function f(x) is merely a set of numbers, one for each point
More informationCurve Fitting and Interpolation
Chapter 5 Curve Fitting and Interpolation 5.1 Basic Concepts Consider a set of (x, y) data pairs (points) collected during an experiment, Curve fitting: is a procedure to develop or evaluate mathematical
More informationSolution of Algebric & Transcendental Equations
Page15 Solution of Algebric & Transcendental Equations Contents: o Introduction o Evaluation of Polynomials by Horner s Method o Methods of solving non linear equations o Bracketing Methods o Bisection
More information