Numerical Analysis: Approximation of Functions


Mirko Navara, http://cmp.felk.cvut.cz/~navara/
Center for Machine Perception, Department of Cybernetics, FEE, CTU
Karlovo náměstí, building G, office 104a
October 13, 2016

Requirements for the assessment

Adequate presence at seminars (you may install Maple on your computer* and do the tasks elsewhere).
* Contact Aleš Němeček for the license.
Tasks: 6 tasks, 5 points each; 18 points suffice.

References

[Navara, Němeček] Navara, M., Němeček, A.: Numerical Methods (in Czech). 2nd edition, CTU, Prague.
[Knuth] Knuth, D.E.: Fundamental Algorithms. Vol. 1 of The Art of Computer Programming, 3rd ed., Addison-Wesley, Reading, MA.
[KJD] Kubíček, M., Janovská, D., Dubcová, M.: Numerical Methods and Algorithms. Institute of Chemical Technology, Prague.
[Num. Recipes] Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes (The Art of Scientific Computing). 2nd edition, Cambridge University Press, Cambridge.
[Stoer, Bulirsch] Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis. Springer-Verlag, New York.
[Handbook Lin. Alg.] Hogben, L. (ed.): Handbook of Linear Algebra. Chapman & Hall/CRC, Boca Raton/London/New York.

Motivation

Task: A polynomial can be factorized into a product of linear polynomials, but an explicit formula for the roots exists only for degrees up to 4. Numerical solution is possible, but it is imprecise, and division then leaves a remainder. This, too, is a subject of numerical analysis.

Task: Linear algebra: A system of linear equations Ax = b has a unique solution x = A⁻¹b if and only if det A ≠ 0.
Programming: Real numbers cannot be tested for equality!
Numerical analysis: Imprecision of the coefficients causes imprecision of the outputs, which for det A ≈ 0 exceeds all bounds. Such a task is called ill-conditioned.

Task: Linear algebra: det A is a sum of products

    det A = ∑_p (−1)^sign(p) a_{1,p(1)} a_{2,p(2)} ··· a_{n,p(n)} = ∑_p (−1)^sign(p) ∏_{i=1}^{n} a_{i,p(i)},

where we sum over all permutations p of the indices 1, …, n.
Numerical analysis: This represents approximately n · n! operations, which becomes astronomical already for moderate n (for n = 20, more than 10¹⁹ multiplications). Gaussian elimination has a complexity proportional to n³.

APPROXIMATION OF FUNCTIONS

Task: Estimate future currency exchange rates from the preceding ones.

Task: A random variable with the Gaussian (normal) distribution has the cumulative distribution function

    Φ(u) = (1/√(2π)) ∫_{−∞}^{u} exp(−t²/2) dt.

This is a transcendental function. Numerical integration gives an approximate result with a given precision. (In fact, even the exponential function is computed only numerically; only the 4 basic arithmetical operations are implemented in the processor.) Faster evaluation can be done with a precomputed table of values of the Gauss integral.

Task: Estimate the remaining capacity of a battery from its voltage. We use finitely many measured values, but we need an approximation on the whole interval. Moreover, we require monotonicity.

Task: A digital image is zoomed, rotated, etc. This changes the number of pixels and their orientation.

Task: Draw the V–A characteristic of a diode from measured data.

Motivating problems on approximation
1. Task 1: Dependence of an exchange rate on time.
2. Task 2: Fast estimation of the Gauss integral (using a precomputed table).
3. Task 3: V–A characteristic of a diode based on measured values.

4. Task 4: Temperatures from a meteorological station.

Basic task of approximation

Given:
  n different points x_0, …, x_{n−1} ∈ R (nodes),
  n values y_0, …, y_{n−1} ∈ R,
  a set of functions F, defined at least at the nodes x_0, …, x_{n−1}.
Task: Find a function ϕ ∈ F which minimizes the differences y_i − ϕ(x_i), i = 0, …, n−1.
Assumption: F is the linear hull of k known, so-called approximating functions ϕ_0, …, ϕ_{k−1}, k ≤ n:

    F = ⟨ϕ_0, …, ϕ_{k−1}⟩ = { ∑_{j<k} c_j ϕ_j : c_j ∈ R }.

It remains to determine the real coefficients c_j, j = 0, …, k−1, of the linear combination ϕ = ∑_{j<k} c_j ϕ_j.

Linearity of the procedure of approximation

We mostly assume linear dependence of the output on the inputs ("superposition principle"); this is related to the independence of the output of the chosen scale. Then any approximation depends linearly on the entries of the arithmetical vector ȳ = (y_0, …, y_{n−1}) ∈ Rⁿ. The resulting approximation is ϕ = ∑_{j<k} c_j ϕ_j, where

    ϕ̄ = (ϕ(x_0), …, ϕ(x_{n−1})) ∈ Rⁿ,   ϕ̄_j = (ϕ_j(x_0), …, ϕ_j(x_{n−1})) ∈ Rⁿ,   j = 0, …, k−1.

(No other values are used.)

Vector formulation of approximation: For a given vector ȳ, we look for the nearest vector ϕ̄ from the linear hull of the known vectors ϕ̄_j, j = 0, …, k−1. Vector ȳ has coordinates (y_0, …, y_{n−1}) w.r.t. the standard basis ψ̄_0 = (1, 0, …, 0), ψ̄_1 = (0, 1, 0, …, 0), …, ψ̄_{n−1} = (0, …, 0, 1).

1. For j = 0, …, n−1, we can put ȳ = ψ̄_j and find the corresponding approximating functions ϱ_j, represented by vectors ϱ̄_j.
2. For a general input vector ȳ, the approximation is a linear combination of the results of these special cases: ϕ = ∑_{j<n} y_j ϱ_j.

Functions ϱ_j, j = 0, …, n−1, give sufficient information about the solution for any ȳ = (y_0, …, y_{n−1}).

Interpolation

A special case of approximation where we require an exact match at the nodes.
Given:
  n different nodes x_0, …, x_{n−1} ∈ R,
  n values y_0, …, y_{n−1} ∈ R,
  a set of functions F, defined at least at the nodes x_0, …, x_{n−1}.
Task: Select a function ϕ ∈ F such that ϕ(x_i) = y_i, i = 0, …, n−1.

Simple interpolation

Assumption:

    F = ⟨ϕ_0, …, ϕ_{n−1}⟩ = { ∑_{j<n} c_j ϕ_j : c_j ∈ R }.

Substitution gives

    ∑_{j<n} c_j ϕ_j(x_i) = y_i,   i = 0, …, n−1,

which is a system of n linear equations for the unknowns c_j, j = 0, …, n−1. For uniqueness, we put k = n and require linear independence of the arithmetical vectors ϕ̄_j = (ϕ_j(x_0), …, ϕ_j(x_{n−1})), j = 0, …, k−1. The complexity is proportional to n³; we shall achieve smaller complexity for special tasks.

Polynomial interpolation

We interpolate by a polynomial of degree less than n, i.e., at most n−1. It has n coefficients, which is what is needed for existence and uniqueness of the solution. The interpolating functions can be chosen as ϕ_j(t) = t^j, j = 0, …, n−1. It may be advantageous to use other bases of the n-dimensional linear space of all polynomials of degree less than n. Polynomials are easy to integrate, differentiate, etc.

Advantages of polynomial interpolation

Solvability: Theorem: There is exactly one polynomial of degree less than n which solves the interpolation task for n nodes.
Universality: Weierstrass Theorem: Let f be a continuous function on a closed interval I and let ε > 0. Then there exists a polynomial ϕ such that ∀t ∈ I : |f(t) − ϕ(t)| ≤ ε.

Disadvantages of polynomial interpolation

The Weierstrass Theorem does not say anything about the degree of the polynomial, thus it may be useless in practice. Polynomial dependencies are rather rare in practice. Simple interpolation works, but we shall show better methods.

Lagrange construction of the interpolating polynomial

1. We solve the special cases when the jth entry of the vector ȳ is one and the others are zeros; this results in polynomials ϱ_j, j = 0, …, n−1, with

    ϱ_j(x_i) = δ_ij = 1 if i = j, 0 if i ≠ j

(δ_ij is the Kronecker delta). Polynomial ϱ_j of degree n−1 has the n−1 roots x_0, …, x_{j−1}, x_{j+1}, …, x_{n−1}; hence

    ϱ_j(t) = e_j (t − x_0) ··· (t − x_{j−1})(t − x_{j+1}) ··· (t − x_{n−1}) = e_j ∏_{i<n, i≠j} (t − x_i),

where e_j follows from the equality ϱ_j(x_j) = 1:

    e_j = 1 / ∏_{i<n, i≠j} (x_j − x_i),   ϱ_j(t) = ∏_{i<n, i≠j} (t − x_i)/(x_j − x_i).

2. The general solution is the linear combination ϕ = ∑_{j<n} y_j ϱ_j, i.e.,

    ϕ(t) = ∑_{j<n} y_j ϱ_j(t) = ∑_{j<n} y_j ∏_{i<n, i≠j} (t − x_i)/(x_j − x_i).

Check:

    ϕ(x_i) = ∑_{j<n} y_j ϱ_j(x_i) = ∑_{j<n} y_j δ_ij = y_i.

The complexity is proportional to n². The idea of the Lagrange construction of the interpolating polynomial is applicable to more general tasks.

Motivation for dissatisfaction: When approximating exchange rates, we continuously receive new data. Is it necessary to compute everything again from scratch? Some intermediate results can be reused, provided that we recorded them: the previous result can be kept and only corrected by a certain polynomial.

Newton construction of the interpolating polynomial

It is still the same (unique) polynomial.

The constant polynomial c_0 = y_0 has the right value at x_0. The resulting polynomial ϕ is the sum of c_0 and an appropriate polynomial vanishing at x_0, t ↦ (t − x_0) ω_0(t), where ω_0 is a polynomial of degree at most n−2:

    ϕ(t) = c_0 + (t − x_0) ω_0(t),   c_0 = y_0,   ω_0(t) = (ϕ(t) − c_0)/(t − x_0)   (the factor t − x_0 cancels out).

Values at the nodes:

    ω_0(x_i) = (y_i − c_0)/(x_i − x_0).

Induction: Polynomial ω_0 is of the form (ω_1 is a polynomial of degree at most n−3):

    ω_0(t) = c_1 + (t − x_1) ω_1(t),   c_1 = ω_0(x_1),   ω_1(t) = (ω_0(t) − c_1)/(t − x_1),
    ω_1(t) = c_2 + (t − x_2) ω_2(t),   c_2 = ω_1(x_2),   ω_2(t) = (ω_1(t) − c_2)/(t − x_2),
    ω_2(t) = c_3 + (t − x_3) ω_3(t),   c_3 = ω_2(x_3),   ω_3(t) = (ω_2(t) − c_3)/(t − x_3),
    …
    ω_{n−2}(t) = c_{n−1} = ω_{n−2}(x_{n−1}).
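The recursion above can be sketched in Python (a sketch of mine, not part of the notes; the function names are hypothetical). Each pass peels off one coefficient c_j and divides the remaining node values by (x_i − x_j), exactly as in the derivation:

```python
def newton_coeffs(x, y):
    """Coefficients c_0, ..., c_{n-1} of the Newton form.

    Follows the recursion above: c_j = omega_{j-1}(x_j), and the values
    of the next omega at the remaining nodes are obtained by subtracting
    c_j and dividing by (x_i - x_j).
    """
    n = len(x)
    c = []
    w = list(y)                      # w[i] holds omega_{j-1}(x[i]) for i >= j
    for j in range(n):
        c.append(w[j])
        for i in range(n - 1, j, -1):
            w[i] = (w[i] - c[j]) / (x[i] - x[j])
    return c

def newton_eval(c, x, t):
    """Back substitution: phi(t) = c_0 + (t - x_0)[c_1 + (t - x_1)[...]]."""
    r = c[-1]
    for j in range(len(c) - 2, -1, -1):
        r = c[j] + (t - x[j]) * r
    return r

# nodes of t**2; adding a new node would only append one more coefficient
x, y = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
c = newton_coeffs(x, y)              # [0.0, 1.0, 1.0], i.e. phi(t) = t + t(t-1)
```

The coefficient computation costs O(n²) and each evaluation O(n), matching the complexity stated below; a new data point only appends one coefficient, which addresses the "dissatisfaction" above.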

Back substitution gives

    ϕ(t) = c_0 + (t − x_0)[c_1 + (t − x_1)[c_2 + … + (t − x_{n−3})[c_{n−2} + (t − x_{n−2}) c_{n−1}]…]].

Expansion (not recommended!):

    ϕ(t) = c_0 + (t − x_0) c_1 + (t − x_0)(t − x_1) c_2 + … + c_{n−1} ∏_{i<n−1} (t − x_i) = ∑_{j<n} c_j ∏_{i<j} (t − x_i).

The complexity of the computation of the coefficients is proportional to n²; then each function value is obtained with complexity proportional to n.

Neville's algorithm

We need to evaluate the interpolating polynomial just for a single argument t; the coefficients are irrelevant.
One point (x_i, y_i) gives an approximation by the constant polynomial y_i. Two points (x_i, y_i), (x_{i+1}, y_{i+1}) are interpolated by a linear polynomial; its value at t is a linear combination of the values y_i, y_{i+1}, namely

    y_{i,i+1} = [(t − x_{i+1}) y_i + (x_i − t) y_{i+1}] / (x_i − x_{i+1}),

etc. Let
    y_{i,i+1,…,i+m−1} … the value at t of the interpolating polynomial with m nodes x_i, x_{i+1}, …, x_{i+m−1},
    y_{i+1,…,i+m−1,i+m} … the value at t of the interpolating polynomial with m nodes x_{i+1}, …, x_{i+m−1}, x_{i+m}.
The value at t of the interpolating polynomial with m+1 nodes x_i, x_{i+1}, …, x_{i+m} is

    y_{i,i+1,…,i+m} = [(t − x_{i+m}) y_{i,i+1,…,i+m−1} + (x_i − t) y_{i+1,…,i+m−1,i+m}] / (x_i − x_{i+m}).

We proceed from the values y_0, y_1, …, y_{n−1} to y_{0,1,…,n−1} = ϕ(t) = the result.

Numerical errors can be reduced if, instead of the polynomial values, we compute only their differences

    C_{i,m} = y_{i,…,i+m} − y_{i,…,i+m−1},   D_{i,m} = y_{i,…,i+m} − y_{i+1,…,i+m},

using the recurrent formulas

    C_{i,m} = (x_i − t)(C_{i+1,m−1} − D_{i,m−1}) / (x_i − x_{i+m}),
    D_{i,m} = (x_{i+m} − t)(C_{i+1,m−1} − D_{i,m−1}) / (x_i − x_{i+m}).

The result is, e.g.,

    y_{0,1,…,n−1} = y_0 + ∑_{m=1}^{n−1} C_{0,m}.

Error of approximation by the interpolating polynomial

At the nodes, the error is zero (only numerical errors occur). At other points, an error arises when we approximate a function f by the interpolating polynomial.

Derivation of the error of approximation by the interpolating polynomial

Assumptions: f has continuous derivatives up to order n on a closed interval I ⊇ {x_0, …, x_{n−1}}, and u ∈ I \ {x_0, …, x_{n−1}}.

We define the function

    g_u(t) = f(t) − ϕ(t) − P_u W(t),

where f(t) − ϕ(t) is the error of interpolation at t,

    P_u = (f(u) − ϕ(u)) / W(u)

is a suitable constant, and

    W(t) = ∏_{i<n} (t − x_i)

is the polynomial of degree n with roots x_0, …, x_{n−1} and unit coefficient at the highest power t^n (these conditions determine the polynomial uniquely). Its nth derivative is W⁽ⁿ⁾(t) = n!.

The function g_u = f − ϕ − P_u W has the roots u, x_0, …, x_{n−1} and the continuous nth derivative

    g_u⁽ⁿ⁾ = f⁽ⁿ⁾ − P_u n!.

Between every two roots of g_u, there is a root of its derivative g_u′. Between the n+1 roots of g_u, there are n roots of g_u′, n−1 roots of g_u″, …, 1 root of g_u⁽ⁿ⁾, say ξ:

    0 = g_u⁽ⁿ⁾(ξ) = f⁽ⁿ⁾(ξ) − P_u n! = f⁽ⁿ⁾(ξ) − n! (f(u) − ϕ(u))/W(u),
    f(u) − ϕ(u) = (f⁽ⁿ⁾(ξ)/n!) W(u).

We replace f⁽ⁿ⁾(ξ) by its upper estimate M_n, ∀t ∈ I : |f⁽ⁿ⁾(t)| ≤ M_n:

    |f(u) − ϕ(u)| ≤ (M_n/n!) |W(u)|.

Using an upper estimate W̄ of W, ∀t ∈ I : |W(t)| ≤ W̄, we get an upper estimate independent of u:

    |f(u) − ϕ(u)| ≤ (M_n/n!) W̄.
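As a sanity check (my own illustration, not from the notes), the bound (M_n/n!)|W(u)| can be compared with the actual interpolation error, here for f = sin on 5 equidistant nodes, where all derivatives are bounded by M_n = 1:

```python
import math

def lagrange_eval(x, y, t):
    """Value at t of the interpolating polynomial (Lagrange form)."""
    return sum(
        yj * math.prod((t - xi) / (xj - xi) for xi in x if xi != xj)
        for xj, yj in zip(x, y)
    )

# f = sin, n = 5 equidistant nodes on [0, pi]; every derivative of sin
# is bounded by M_n = 1 on this interval
x = [i * math.pi / 4 for i in range(5)]
y = [math.sin(t) for t in x]
u = 0.4                                    # a point between the nodes
W = math.prod(u - xi for xi in x)          # W(u) = prod (u - x_i)
err = abs(math.sin(u) - lagrange_eval(x, y, u))
bound = abs(W) / math.factorial(5)         # (M_5 / 5!) |W(u)| with M_5 = 1
assert err <= bound
```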

(Figure: red: upper estimate of the error; green: real error.)

Two cases:
Extrapolation: approximation outside the interval I(x_0, …, x_{n−1}) spanned by the nodes.
Interpolation (in the narrow sense): approximation inside I(x_0, …, x_{n−1}).

Recommendation for minimization of the error: cosine distribution of nodes. On the interval [−1, 1], choose the nodes

    z_i = cos(π(i + 1/2)/n),   i = 0, …, n−1;

on a general interval [a, b], choose

    x_i = (b+a)/2 + ((b−a)/2) z_i = (b+a)/2 + ((b−a)/2) cos(π(i + 1/2)/n).

Approximation error of polynomial interpolation for a cosine distribution of nodes

(Figure: red: upper estimate of the error; green: real error.)

Motivating problem: polynomial interpolation

(Figure: dependence of the interpolating polynomial on local changes of the data.)

Hermite interpolating polynomial: example

Given: x_0 = 0, x_1 = 1, and values y_{0,0}, y_{1,0}, y_{0,1}, y_{1,1} ∈ R.
Task: Find a polynomial ϕ of degree at most 3 such that

    ϕ(x_0) = y_{0,0},   ϕ(x_1) = y_{1,0},   ϕ′(x_0) = y_{0,1},   ϕ′(x_1) = y_{1,1}.

As in the Lagrange construction of the interpolating polynomial, we first find polynomials η, ϱ, σ, τ of degree at most 3 with the values

    ψ      ψ(0)   ψ(1)   ψ′(0)   ψ′(1)
    η       1      0      0       0
    ϱ       0      1      0       0
    σ       0      0      1       0
    τ       0      0      0       1

Polynomial η has a double root 1, thus it is of the form

    η(t) = (a_0 t + b_0)(t − 1)²,

where a_0, b_0 can be determined from the values at 0:

    η(0) = b_0 = 1,   η′(0) = a_0 − 2b_0 = 0   ⇒   a_0 = 2, b_0 = 1,   η(t) = (2t + 1)(t − 1)².

ϱ has a double root 0, thus it is of the form

    ϱ(t) = (a_1 t + b_1) t²,

where a_1, b_1 can be determined from the values at 1:

    ϱ(1) = a_1 + b_1 = 1,   ϱ′(1) = 3a_1 + 2b_1 = 0   ⇒   a_1 = −2, b_1 = 3,   ϱ(t) = (−2t + 3) t².

σ has a double root 1 and a simple root 0, thus it is of the form

    σ(t) = c_0 t (t − 1)²,

where c_0 can be determined from the derivative at 0:

    σ′(0) = c_0 = 1,   σ(t) = t(t − 1)².

τ has a double root 0 and a simple root 1, thus it is of the form

    τ(t) = c_1 t² (t − 1),

where c_1 can be determined from the derivative at 1:

    τ′(1) = c_1 = 1,   τ(t) = t²(t − 1).
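The four basis polynomials derived above can be checked numerically; the following snippet is my own illustration (function names hypothetical), verifying the table of values and derivatives:

```python
# the four cubic Hermite basis polynomials from the example above
def eta(t):   return (2*t + 1) * (t - 1)**2
def rho(t):   return (-2*t + 3) * t**2
def sigma(t): return t * (t - 1)**2
def tau(t):   return t**2 * (t - 1)

def deriv(f, t, h=1e-6):
    # central difference; accurate enough for checking cubic polynomials
    return (f(t + h) - f(t - h)) / (2 * h)

# each basis polynomial is 1 in exactly one of the four conditions
assert (eta(0), eta(1)) == (1, 0) and abs(deriv(eta, 0)) < 1e-8
assert (rho(0), rho(1)) == (0, 1) and abs(deriv(rho, 1)) < 1e-8
assert (sigma(0), sigma(1)) == (0, 0) and abs(deriv(sigma, 0) - 1) < 1e-8
assert (tau(0), tau(1)) == (0, 0) and abs(deriv(tau, 1) - 1) < 1e-8
```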

ϕ is then the linear combination of η, ϱ, σ, τ:

    ϕ = y_{0,0} η + y_{1,0} ϱ + y_{0,1} σ + y_{1,1} τ.

Approximation by a Taylor series (more exactly, by a Taylor polynomial)

A special case of the Hermite interpolating polynomial with one node, where the first n derivatives (including the 0th) are given.
Given: a node x_0 and n values y_{0,0}, y_{0,1}, …, y_{0,n−1} ∈ R.
Task: Find a polynomial ϕ of degree less than n such that

    ϕ⁽ʲ⁾(x_0) = y_{0,j},   j = 0, …, n−1.

The solution is of the form ϕ = ∑_{j<n} y_{0,j} ψ_j, where ψ_j⁽ⁱ⁾(x_0) = δ_ij, i.e., ψ_j(t) = (1/j!)(t − x_0)^j, hence

    ϕ(t) = ∑_{j<n} (y_{0,j}/j!) (t − x_0)^j.
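For instance (an illustration of mine, not from the notes), with the derivatives of exp at x_0 = 0 as the data y_{0,j}, the formula gives the classical Taylor polynomial, whose remainder is bounded by (M_n/n!)|u − x_0|ⁿ as derived next:

```python
import math

def taylor_exp(t, n):
    """Taylor polynomial of exp with center 0 and degree n-1 (n terms);
    here y_{0,j} = exp^{(j)}(0) = 1 for every j."""
    return sum(t**j / math.factorial(j) for j in range(n))

u, n = 0.5, 8
approx = taylor_exp(u, n)
err = abs(math.exp(u) - approx)
# on I = [0, u], the nth derivative of exp is bounded by M_n = e**u
bound = math.exp(u) / math.factorial(n) * abs(u)**n
assert err <= bound
```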

If y_{0,0}, y_{0,1}, …, y_{0,n−1} are the derivatives of some function f at x_0, i.e., y_{0,j} = f⁽ʲ⁾(x_0), then ϕ is a finite Taylor series with center x_0:

    ϕ(t) = ∑_{j<n} (f⁽ʲ⁾(x_0)/j!) (t − x_0)^j.

If f has a continuous nth derivative on the interval I(u, x_0), then

    f(u) − ϕ(u) = (f⁽ⁿ⁾(ξ)/n!) (u − x_0)ⁿ   for some ξ ∈ I(u, x_0),
    |f(u) − ϕ(u)| ≤ (M_n/n!) |u − x_0|ⁿ.

The only difference from the error of the interpolating polynomial is that the polynomial W(u) with roots x_0, …, x_{n−1} is replaced by the polynomial (u − x_0)ⁿ (of the same degree) with the single root x_0 of multiplicity n. Approximation by a Taylor polynomial is very precise in the neighborhood of x_0, but less precise at more distant points.

(Figure: approximations by Taylor polynomials of degrees 5, 10, and 15 for positive arguments.)

Approximation by Taylor polynomial

(Figure: approximations by Taylor polynomials of degrees 5, 10, and 15 for negative arguments.)

Approximation by Taylor polynomial

(Figure: approximations by Taylor polynomials of degrees 5, 10, and 15 for positive and negative arguments.)

Spline interpolation

Disadvantage of polynomial interpolation: a small change of the input can cause a big change of the output at distant points (the influence is not localized).
A spline is a piecewise polynomial function. (It coincides with polynomials of small degree on particular intervals.) The simplest case is the linear spline, which is the approximation by a piecewise linear function.

Cubic spline

Given: n nodes x_0, …, x_{n−1}, ordered increasingly, and n values y_0, …, y_{n−1}.
Task: Find a function ϕ, defined on the interval [x_0, x_{n−1}], such that:
  ϕ(x_i) = y_i, i = 0, …, n−1,
  on each interval [x_{i−1}, x_i], i = 1, …, n−1, the function ϕ coincides with a polynomial ϕ_i of degree at most 3,
  the first two derivatives of ϕ are continuous on the interval [x_0, x_{n−1}].
(We shall find it necessary to make this task more precise.)
It suffices to ensure continuity of the derivatives at the n−2 inner nodes x_1, …, x_{n−2}:

    ϕ_i′(x_i) = ϕ_{i+1}′(x_i),   i = 1, …, n−2,
    ϕ_i″(x_i) = ϕ_{i+1}″(x_i),   i = 1, …, n−2.

Suppose that the values ϕ′(x_i) = ϕ_i′(x_i) = ϕ_{i+1}′(x_i), i = 1, …, n−2, are known. Then ϕ′ is continuous.

The polynomials ϕ_i, i = 1, …, n−1, can be found just as the Hermite interpolating polynomial in the previous example; the only difference is that the nodes 0, 1 are replaced by x_{i−1}, x_i. The general case is obtained by an easy linear transformation of coordinates:

    t = x_{i−1} + (x_i − x_{i−1}) u,   u = (t − x_{i−1}) / (x_i − x_{i−1}).

On the interval [x_{i−1}, x_i], i = 1, …, n−1, we get

    ϕ_i(t) = y_{i−1} η_i(t) + y_i ϱ_i(t) + ϕ′(x_{i−1}) σ_i(t) + ϕ′(x_i) τ_i(t),

where η_i, ϱ_i, σ_i, τ_i are polynomials of degree at most 3 (computed as the polynomials η, ϱ, σ, τ in the example of the Hermite interpolating polynomial). It remains to find ϕ′(x_i), i = 0, …, n−1, from the continuity of the second derivatives:

    y_{i−1} η_i″(x_i) + y_i ϱ_i″(x_i) + ϕ′(x_{i−1}) σ_i″(x_i) + ϕ′(x_i) τ_i″(x_i) =
    = y_i η_{i+1}″(x_i) + y_{i+1} ϱ_{i+1}″(x_i) + ϕ′(x_i) σ_{i+1}″(x_i) + ϕ′(x_{i+1}) τ_{i+1}″(x_i),

where i = 1, …, n−2 (a system of n−2 linear equations with n unknowns). There remain 2 free parameters. The usual choice ϕ″(x_0) = ϕ″(x_{n−1}) = 0 leads to the so-called natural spline. This choice influences the result only near the endpoints of the interval [x_0, x_{n−1}].

The algorithm consists of 2 parts:
1. Computation of the coefficients ϕ′(x_i), i = 0, …, n−1.
2. Evaluation of the spline.

The matrix of the system of linear equations is tridiagonal (its particular properties allow more efficient solutions). Splines cannot be used for extrapolation! Still, many implementations allow it.
Comment: The choice of nodes has a smaller influence than in polynomial interpolation.

(Figure: influence of local changes on a spline.)
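The two-part algorithm can be sketched as follows (my own sketch, not the notes' code; function names hypothetical). The tridiagonal system for the slopes s_i = ϕ′(x_i) uses the standard natural boundary conditions, and the evaluation reuses the Hermite basis from the example above:

```python
def natural_cubic_spline(x, y):
    """Part 1: slopes s_i = phi'(x_i) of the natural cubic spline.

    Tridiagonal system a_i s_{i-1} + b_i s_i + c_i s_{i+1} = d_i; the
    first and last equations encode phi''(x_0) = phi''(x_{n-1}) = 0.
    """
    n = len(x)
    h = [x[i] - x[i - 1] for i in range(1, n)]          # interval lengths
    a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    b[0], c[0], d[0] = 2.0, 1.0, 3.0 * (y[1] - y[0]) / h[0]
    for i in range(1, n - 1):
        a[i] = 1.0 / h[i - 1]
        b[i] = 2.0 * (1.0 / h[i - 1] + 1.0 / h[i])
        c[i] = 1.0 / h[i]
        d[i] = 3.0 * ((y[i] - y[i - 1]) / h[i - 1] ** 2
                      + (y[i + 1] - y[i]) / h[i] ** 2)
    a[n - 1], b[n - 1] = 1.0, 2.0
    d[n - 1] = 3.0 * (y[n - 1] - y[n - 2]) / h[n - 2]
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    s = [0.0] * n
    s[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        s[i] = (d[i] - c[i] * s[i + 1]) / b[i]
    return s

def spline_eval(x, y, s, t):
    """Part 2: evaluate via the cubic Hermite basis on one interval."""
    i = max(j for j in range(len(x) - 1) if x[j] <= t) + 1
    hi = x[i] - x[i - 1]
    u = (t - x[i - 1]) / hi                  # local coordinate in [0, 1]
    eta = (2 * u + 1) * (u - 1) ** 2         # basis from the Hermite example
    rho = (-2 * u + 3) * u ** 2
    sig = u * (u - 1) ** 2
    tab = u ** 2 * (u - 1)
    # the factor hi rescales the derivatives to the local coordinate u
    return y[i - 1] * eta + y[i] * rho + hi * (s[i - 1] * sig + s[i] * tab)

# data on a straight line: the natural spline is that line, all slopes 1
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 3.0]
slopes = natural_cubic_spline(xs, ys)
```

In line with the warning above, `spline_eval` deliberately handles only arguments inside [x_0, x_{n−1}]; extrapolation is not supported.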

Motivating problem: spline
(Figure: spline interpolation of the motivating data.)

Approximation by the least squares method (LSM)

Given:
  n nodes x_0, …, x_{n−1},
  n values y_0, …, y_{n−1},
  k functions ϕ_0, …, ϕ_{k−1}, k ≤ n, defined at least at the nodes.
Task: Find coefficients c_0, …, c_{k−1} ∈ R of a linear combination of the functions ϕ_j, ϕ = ∑_{j<k} c_j ϕ_j, which minimizes

    H_2 = ∑_{i<n} (ϕ(x_i) − y_i)² = ∑_{i<n} ( ∑_{j<k} c_j ϕ_j(x_i) − y_i )².

Modified criteria:

    H_2w = ∑_{i<n} w_i (ϕ(x_i) − y_i)²   can be solved analogously;

    H_1 = ∑_{i<n} |ϕ(x_i) − y_i|

is not used, since the solution need not be unique;

    H_0 = max_{i<n} |ϕ(x_i) − y_i|

is more difficult to minimize (Chebyshev approximation).

Solution of approximation by LSM

We introduce an inner (= scalar) product on Rⁿ, defined for ū = (u_0, …, u_{n−1}), v̄ = (v_0, …, v_{n−1}) by

    ū · v̄ = ∑_{i<n} u_i v_i.

We have to approximate the vector ȳ by a linear combination ϕ̄ = ∑_{j<k} c_j ϕ̄_j; the criterion is

    H_2 = (ϕ̄ − ȳ) · (ϕ̄ − ȳ) = ‖ϕ̄ − ȳ‖².

Solution: the orthogonal projection; it satisfies the system of equations (for m = 0, …, k−1)

    (ϕ̄ − ȳ) ⊥ ϕ̄_m,   (ϕ̄ − ȳ) · ϕ̄_m = 0,   ϕ̄ · ϕ̄_m = ȳ · ϕ̄_m.

Vectors ϕ̄ and ȳ have the same inner products with the vectors ϕ̄_m:

    ( ∑_{j<k} c_j ϕ̄_j ) · ϕ̄_m = ȳ · ϕ̄_m,
    ∑_{j<k} c_j (ϕ̄_j · ϕ̄_m) = ȳ · ϕ̄_m,   m = 0, …, k−1;

a system of k linear equations for the unknowns c_0, …, c_{k−1}: the system of normal equations.

Special case: approximation by a polynomial of degree less than k; we may choose ϕ_j(t) = t^j, so that

    ϕ̄_j · ϕ̄_m = ∑_{i<n} x_i^j x_i^m = ∑_{i<n} x_i^{j+m}.

(Figure: motivating problem for the LSM.)
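The normal equations for the polynomial special case can be sketched as follows (my own sketch; function names hypothetical). The Gram matrix entries are exactly the sums ∑ x_i^{j+m} above, and the system is solved by plain Gaussian elimination:

```python
def lsm_poly(x, y, k):
    """Coefficients c_0, ..., c_{k-1} minimizing H_2 for phi_j(t) = t**j."""
    # Gram matrix G[m][j] = sum_i x_i**(j+m) and right-hand side y . phi_m
    G = [[sum(t ** (j + m) for t in x) for j in range(k)] for m in range(k)]
    rhs = [sum(yi * t ** m for t, yi in zip(x, y)) for m in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, k):
            f = G[r][col] / G[col][col]
            for cc in range(col, k):
                G[r][cc] -= f * G[col][cc]
            rhs[r] -= f * rhs[col]
    c = [0.0] * k
    for r in range(k - 1, -1, -1):
        c[r] = (rhs[r] - sum(G[r][j] * c[j] for j in range(r + 1, k))) / G[r][r]
    return c

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 7.0, 13.0, 21.0]        # values of t**2 + t + 1 at the nodes
c = lsm_poly(x, y, 3)                  # recovers [1, 1, 1] up to roundoff
```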

(Figure: approximation of the V–A characteristic of a diode by a linear combination of a constant and two exponentials with appropriate bases.)

In the linear subspace P = ⟨ϕ̄_0, …, ϕ̄_{k−1}⟩, we find an orthogonal basis (ψ̄_0, …, ψ̄_{k−1}), ψ̄_j · ψ̄_m = 0 for j ≠ m. Orthogonality depends not only on the functions ϕ_0, …, ϕ_{k−1}, but also on the choice of nodes! We look for a solution in the form ϕ̄ = ∑_{j<k} d_j ψ̄_j, where d_j, j = 0, …, k−1, are the coordinates w.r.t. the new basis. The matrix of the system of normal equations is then diagonal:

    d_j (ψ̄_j · ψ̄_j) = ȳ · ψ̄_j,   j = 0, …, k−1,
    d_j = (ȳ · ψ̄_j)/(ψ̄_j · ψ̄_j) = (ȳ · ψ̄_j)/‖ψ̄_j‖²,   j = 0, …, k−1.

Moreover, the vectors ψ̄_j can be chosen unit; then also the denominator is 1.

The Gram–Schmidt process constructs an orthogonal basis:

    ψ̄_0 = ϕ̄_0,
    ψ̄_1 = ϕ̄_1 + α_{10} ψ̄_0,
        ψ̄_1 · ψ̄_0 = 0:  ϕ̄_1 · ψ̄_0 + α_{10} ψ̄_0 · ψ̄_0 = 0,   α_{10} = −(ϕ̄_1 · ψ̄_0)/(ψ̄_0 · ψ̄_0),
    ψ̄_2 = ϕ̄_2 + α_{20} ψ̄_0 + α_{21} ψ̄_1,
        ψ̄_2 · ψ̄_0 = 0:  ϕ̄_2 · ψ̄_0 + α_{20} ψ̄_0 · ψ̄_0 + α_{21} ψ̄_1 · ψ̄_0 = 0  (the last term is 0),
        α_{20} = −(ϕ̄_2 · ψ̄_0)/(ψ̄_0 · ψ̄_0),

        ψ̄_2 · ψ̄_1 = 0:  ϕ̄_2 · ψ̄_1 + α_{20} ψ̄_0 · ψ̄_1 + α_{21} ψ̄_1 · ψ̄_1 = 0  (the second term is 0),
        α_{21} = −(ϕ̄_2 · ψ̄_1)/(ψ̄_1 · ψ̄_1),
    …
    ψ̄_j = ϕ̄_j + ∑_{m<j} α_{jm} ψ̄_m,
        for p < j:  ψ̄_j · ψ̄_p = 0 = ϕ̄_j · ψ̄_p + ∑_{m<j} α_{jm} ψ̄_m · ψ̄_p = ϕ̄_j · ψ̄_p + α_{jp} ψ̄_p · ψ̄_p,
        α_{jp} = −(ϕ̄_j · ψ̄_p)/(ψ̄_p · ψ̄_p).

Approximation by LSM with the choice of approximating functions

    1, cos(2πt/T), sin(2πt/T), cos(4πt/T), sin(4πt/T), …

For equidistant nodes x_i = a + iT/n, i = 0, …, n−1, on an interval of length T, the (at most n) vectors ϕ̄_j are orthogonal.

Chebyshev approximation

Given: a bounded interval I, a continuous function f on I, and k ∈ N.
Task: Find a polynomial ϕ of degree less than k minimizing

    H_0 = max_{t∈I} |ϕ(t) − f(t)|.

There are methods solving this task, but their complexity is much higher; instead, we usually apply a modified LSM. For simplicity, we take the interval I = [−1, 1]; generalization to any interval [a, b] is done by the linear transformation

    x = (b+a)/2 + ((b−a)/2) z,   with the inverse transformation  z = (x − (b+a)/2) / ((b−a)/2).

We choose n ≥ k nodes x_0, …, x_{n−1} ∈ [−1, 1] with the cosine distribution:

    z_i = cos(π(i + 1/2)/n),   i = 0, …, n−1.

For a basis of the linear space of all polynomials of degree less than k, we take Chebyshev polynomials:

Chebyshev polynomials

    ϕ_j(t) = cos(j arccos t),   j = 0, …, k−1,   k < n.

They can be computed by the recurrent formula

    ϕ_0(t) = 1,   ϕ_1(t) = t,   ϕ_j(t) = 2t ϕ_{j−1}(t) − ϕ_{j−2}(t),   j ≥ 2.

The range of the functions ϕ_j, j ≥ 1, on the interval I is [−1, 1]. The vectors ϕ̄_0, …, ϕ̄_{k−1} are orthogonal:

    ϕ̄_j · ϕ̄_m = 0 if j ≠ m,   n/2 if j = m > 0,   n if j = m = 0.

For k = n, we obtain the interpolating polynomial (with the recommended distribution of nodes). The solution for k < n differs by ignoring the summands of higher order. The coefficients c_k, …, c_{n−1} are usually small (they depend on higher derivatives of the approximated function!). An upper estimate of the error at the nodes is

    |ϕ(x_i) − f(x_i)| ≤ ∑_{j=k}^{n−1} |c_j|.

Remarks on Chebyshev approximation

We do not optimize the criterion H_0, but the result is not much different from the optimal solution. We cannot say much about the error at points other than the nodes; nevertheless, the method can be recommended. The recurrent formula is used not only for the computation of Chebyshev polynomials, but also for their direct evaluation at given points. It is not recommended to expand the result to the standard form ϕ(t) = ∑_{j<k} b_j t^j. Chebyshev approximation can be generalized to an approximation by a product of a known function and an unknown polynomial.
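The modified LSM described above can be sketched as follows (my own sketch; function names hypothetical). Thanks to the discrete orthogonality at the cosine nodes, each coefficient is a single inner product divided by ϕ̄_j · ϕ̄_j, with no linear system to solve:

```python
import math

def cheb_eval(j, t):
    """T_j(t) via the recurrence phi_j = 2 t phi_{j-1} - phi_{j-2}."""
    p0, p1 = 1.0, t
    if j == 0:
        return p0
    for _ in range(j - 1):
        p0, p1 = p1, 2.0 * t * p1 - p0
    return p1

def cheb_approx(f, k, n):
    """Coefficients c_0, ..., c_{k-1} of the modified LSM: the normal
    equations are diagonal, phi_j . phi_j = n for j = 0 and n/2 for j > 0."""
    z = [math.cos(math.pi * (i + 0.5) / n) for i in range(n)]
    c = []
    for j in range(k):
        dot = sum(f(t) * cheb_eval(j, t) for t in z)
        c.append(dot / (n if j == 0 else n / 2))
    return c

# f(t) = t**2 equals (T_0(t) + T_2(t))/2, so c comes out as [0.5, 0, 0.5, 0]
c = cheb_approx(lambda t: t * t, 4, 8)
```

As recommended in the remarks, the result is kept in the Chebyshev basis and evaluated through the recurrence rather than expanded into powers of t.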


More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

Four Point Gauss Quadrature Runge Kuta Method Of Order 8 For Ordinary Differential Equations

Four Point Gauss Quadrature Runge Kuta Method Of Order 8 For Ordinary Differential Equations International journal of scientific and technical research in engineering (IJSTRE) www.ijstre.com Volume Issue ǁ July 206. Four Point Gauss Quadrature Runge Kuta Method Of Order 8 For Ordinary Differential

More information

Function Approximation

Function Approximation 1 Function Approximation This is page i Printer: Opaque this 1.1 Introduction In this chapter we discuss approximating functional forms. Both in econometric and in numerical problems, the need for an approximating

More information

LECTURE 5: THE METHOD OF STATIONARY PHASE

LECTURE 5: THE METHOD OF STATIONARY PHASE LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

More information

2. Interpolation. Closing the Gaps of Discretization...

2. Interpolation. Closing the Gaps of Discretization... 2. Interpolation Closing the Gaps of Discretization... Numerisches Programmieren, Hans-Joachim Bungartz page 1 of 48 2.1. Preliminary Remark The Approximation Problem Problem: Approximate a (fully or partially

More information

Evaluation, transformation, and parameterization of epipolar conics

Evaluation, transformation, and parameterization of epipolar conics Evaluation, transformation, and parameterization of epipolar conics Tomáš Svoboda svoboda@cmp.felk.cvut.cz N - CTU CMP 2000 11 July 31, 2000 Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/svoboda/svoboda-tr-2000-11.pdf

More information

MODULE 13. Topics: Linear systems

MODULE 13. Topics: Linear systems Topics: Linear systems MODULE 13 We shall consider linear operators and the associated linear differential equations. Specifically we shall have operators of the form i) Lu u A(t)u where A(t) is an n n

More information

Introduction to Numerical Analysis

Introduction to Numerical Analysis J. Stoer R. Bulirsch Introduction to Numerical Analysis Second Edition Translated by R. Bartels, W. Gautschi, and C. Witzgall With 35 Illustrations Springer Contents Preface to the Second Edition Preface

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

1 Introduction. 2 Binary shift map

1 Introduction. 2 Binary shift map Introduction This notes are meant to provide a conceptual background for the numerical construction of random variables. They can be regarded as a mathematical complement to the article [] (please follow

More information

Chapter 3: Root Finding. September 26, 2005

Chapter 3: Root Finding. September 26, 2005 Chapter 3: Root Finding September 26, 2005 Outline 1 Root Finding 2 3.1 The Bisection Method 3 3.2 Newton s Method: Derivation and Examples 4 3.3 How To Stop Newton s Method 5 3.4 Application: Division

More information

NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS

NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS TWMS J Pure Appl Math V5, N2, 24, pp22-228 NUMERICAL SOLUTION OF DELAY DIFFERENTIAL EQUATIONS VIA HAAR WAVELETS S ASADI, AH BORZABADI Abstract In this paper, Haar wavelet benefits are applied to the delay

More information

Simple Iteration, cont d

Simple Iteration, cont d Jim Lambers MAT 772 Fall Semester 2010-11 Lecture 2 Notes These notes correspond to Section 1.2 in the text. Simple Iteration, cont d In general, nonlinear equations cannot be solved in a finite sequence

More information

Part IB - Easter Term 2003 Numerical Analysis I

Part IB - Easter Term 2003 Numerical Analysis I Part IB - Easter Term 2003 Numerical Analysis I 1. Course description Here is an approximative content of the course 1. LU factorization Introduction. Gaussian elimination. LU factorization. Pivoting.

More information

Polynomials. p n (x) = a n x n + a n 1 x n 1 + a 1 x + a 0, where

Polynomials. p n (x) = a n x n + a n 1 x n 1 + a 1 x + a 0, where Polynomials Polynomials Evaluation of polynomials involve only arithmetic operations, which can be done on today s digital computers. We consider polynomials with real coefficients and real variable. p

More information

November 20, Interpolation, Extrapolation & Polynomial Approximation

November 20, Interpolation, Extrapolation & Polynomial Approximation Interpolation, Extrapolation & Polynomial Approximation November 20, 2016 Introduction In many cases we know the values of a function f (x) at a set of points x 1, x 2,..., x N, but we don t have the analytic

More information

1 Review of Interpolation using Cubic Splines

1 Review of Interpolation using Cubic Splines cs412: introduction to numerical analysis 10/10/06 Lecture 12: Instructor: Professor Amos Ron Cubic Hermite Spline Interpolation Scribes: Yunpeng Li, Mark Cowlishaw 1 Review of Interpolation using Cubic

More information

PART II : Least-Squares Approximation

PART II : Least-Squares Approximation PART II : Least-Squares Approximation Basic theory Let U be an inner product space. Let V be a subspace of U. For any g U, we look for a least-squares approximation of g in the subspace V min f V f g 2,

More information

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration.

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration. Unit IV Numerical Integration and Differentiation Numerical integration and differentiation quadrature classical formulas for equally spaced nodes improper integrals Gaussian quadrature and orthogonal

More information

Orthogonal Polynomials and Gaussian Quadrature

Orthogonal Polynomials and Gaussian Quadrature Orthogonal Polynomials and Gaussian Quadrature 1. Orthogonal polynomials Given a bounded, nonnegative, nondecreasing function w(x) on an interval, I of the real line, we consider the Hilbert space L 2

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions We introduce a new class of numerical differentiation schemes constructed for the efficient solution of time-dependent PDEs that arise in wave phenomena. The schemes are constructed via the prolate spheroidal

More information

Numerical Methods. King Saud University

Numerical Methods. King Saud University Numerical Methods King Saud University Aims In this lecture, we will... Introduce the topic of numerical methods Consider the Error analysis and sources of errors Introduction A numerical method which

More information

CS 257: Numerical Methods

CS 257: Numerical Methods CS 57: Numerical Methods Final Exam Study Guide Version 1.00 Created by Charles Feng http://www.fenguin.net CS 57: Numerical Methods Final Exam Study Guide 1 Contents 1 Introductory Matter 3 1.1 Calculus

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

A Comparison between Solving Two Dimensional Integral Equations by the Traditional Collocation Method and Radial Basis Functions

A Comparison between Solving Two Dimensional Integral Equations by the Traditional Collocation Method and Radial Basis Functions Applied Mathematical Sciences, Vol. 5, 2011, no. 23, 1145-1152 A Comparison between Solving Two Dimensional Integral Equations by the Traditional Collocation Method and Radial Basis Functions Z. Avazzadeh

More information

Hermite Interpolation

Hermite Interpolation Jim Lambers MAT 77 Fall Semester 010-11 Lecture Notes These notes correspond to Sections 4 and 5 in the text Hermite Interpolation Suppose that the interpolation points are perturbed so that two neighboring

More information

Function approximation

Function approximation Week 9: Monday, Mar 26 Function approximation A common task in scientific computing is to approximate a function. The approximated function might be available only through tabulated data, or it may be

More information

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight

Applied Numerical Analysis (AE2220-I) R. Klees and R.P. Dwight Applied Numerical Analysis (AE0-I) R. Klees and R.P. Dwight February 018 Contents 1 Preliminaries: Motivation, Computer arithmetic, Taylor series 1 1.1 Numerical Analysis Motivation..........................

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider

More information

Numerical Methods I: Interpolation (cont ed)

Numerical Methods I: Interpolation (cont ed) 1/20 Numerical Methods I: Interpolation (cont ed) Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 30, 2017 Interpolation Things you should know 2/20 I Lagrange vs. Hermite interpolation

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS Victor S. Ryaben'kii Semyon V. Tsynkov Chapman &. Hall/CRC Taylor & Francis Group Boca Raton London New York Chapman & Hall/CRC is an imprint of the Taylor

More information

Expected heights in heaps. Jeannette M. de Graaf and Walter A. Kosters

Expected heights in heaps. Jeannette M. de Graaf and Walter A. Kosters Expected heights in heaps Jeannette M. de Graaf and Walter A. Kosters Department of Mathematics and Computer Science Leiden University P.O. Box 951 300 RA Leiden The Netherlands Abstract In this paper

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

1 Lecture 8: Interpolating polynomials.

1 Lecture 8: Interpolating polynomials. 1 Lecture 8: Interpolating polynomials. 1.1 Horner s method Before turning to the main idea of this part of the course, we consider how to evaluate a polynomial. Recall that a polynomial is an expression

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

Symmetric functions and the Vandermonde matrix

Symmetric functions and the Vandermonde matrix J ournal of Computational and Applied Mathematics 172 (2004) 49-64 Symmetric functions and the Vandermonde matrix Halil Oruç, Hakan K. Akmaz Department of Mathematics, Dokuz Eylül University Fen Edebiyat

More information

INTRODUCTION TO COMPUTATIONAL MATHEMATICS

INTRODUCTION TO COMPUTATIONAL MATHEMATICS INTRODUCTION TO COMPUTATIONAL MATHEMATICS Course Notes for CM 271 / AMATH 341 / CS 371 Fall 2007 Instructor: Prof. Justin Wan School of Computer Science University of Waterloo Course notes by Prof. Hans

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

MATHEMATICAL FORMULAS AND INTEGRALS

MATHEMATICAL FORMULAS AND INTEGRALS HANDBOOK OF MATHEMATICAL FORMULAS AND INTEGRALS Second Edition ALAN JEFFREY Department of Engineering Mathematics University of Newcastle upon Tyne Newcastle upon Tyne United Kingdom ACADEMIC PRESS A Harcourt

More information

MA1023-Methods of Mathematics-15S2 Tutorial 1

MA1023-Methods of Mathematics-15S2 Tutorial 1 Tutorial 1 the week starting from 19/09/2016. Q1. Consider the function = 1. Write down the nth degree Taylor Polynomial near > 0. 2. Show that the remainder satisfies, < if > > 0 if > > 0 3. Show that

More information

Document downloaded from:

Document downloaded from: Document downloaded from: http://hdl.handle.net/1051/56036 This paper must be cited as: Cordero Barbero, A.; Torregrosa Sánchez, JR.; Penkova Vassileva, M. (013). New family of iterative methods with high

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. (1998) 80: 39 59 Numerische Mathematik c Springer-Verlag 1998 Electronic Edition Numerical integration over a disc. A new Gaussian quadrature formula Borislav Bojanov 1,, Guergana Petrova,

More information

8 STOCHASTIC SIMULATION

8 STOCHASTIC SIMULATION 8 STOCHASTIC SIMULATIO 59 8 STOCHASTIC SIMULATIO Whereas in optimization we seek a set of parameters x to minimize a cost, or to maximize a reward function J( x), here we pose a related but different question.

More information

Math 365 Homework 5 Due April 6, 2018 Grady Wright

Math 365 Homework 5 Due April 6, 2018 Grady Wright Math 365 Homework 5 Due April 6, 018 Grady Wright 1. (Polynomial interpolation, 10pts) Consider the following three samples of some function f(x): x f(x) -1/ -1 1 1 1/ (a) Determine the unique second degree

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim. 56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Generalization Of The Secant Method For Nonlinear Equations

Generalization Of The Secant Method For Nonlinear Equations Applied Mathematics E-Notes, 8(2008), 115-123 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generalization Of The Secant Method For Nonlinear Equations Avram Sidi

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Lecture 4: Linear Algebra 1

Lecture 4: Linear Algebra 1 Lecture 4: Linear Algebra 1 Sourendu Gupta TIFR Graduate School Computational Physics 1 February 12, 2010 c : Sourendu Gupta (TIFR) Lecture 4: Linear Algebra 1 CP 1 1 / 26 Outline 1 Linear problems Motivation

More information

Fitting Linear Statistical Models to Data by Least Squares: Introduction

Fitting Linear Statistical Models to Data by Least Squares: Introduction Fitting Linear Statistical Models to Data by Least Squares: Introduction Radu Balan, Brian R. Hunt and C. David Levermore University of Maryland, College Park University of Maryland, College Park, MD Math

More information

1 Motivation for Newton interpolation

1 Motivation for Newton interpolation cs412: introduction to numerical analysis 09/30/10 Lecture 7: Newton Interpolation Instructor: Professor Amos Ron Scribes: Yunpeng Li, Mark Cowlishaw, Nathanael Fillmore 1 Motivation for Newton interpolation

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 35, pp. 148-163, 2009. Copyright 2009,. ISSN 1068-9613. ETNA SPHERICAL QUADRATURE FORMULAS WITH EQUALLY SPACED NODES ON LATITUDINAL CIRCLES DANIELA

More information

Numerical Methods with MATLAB

Numerical Methods with MATLAB Numerical Methods with MATLAB A Resource for Scientists and Engineers G. J. BÖRSE Lehigh University PWS Publishing Company I(T)P AN!NTERNATIONAL THOMSON PUBLISHING COMPANY Boston Albany Bonn Cincinnati

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

Spline Methods Draft. Tom Lyche and Knut Mørken

Spline Methods Draft. Tom Lyche and Knut Mørken Spline Methods Draft Tom Lyche and Knut Mørken 25th April 2003 2 Contents 1 Splines and B-splines an Introduction 3 1.1 Convex combinations and convex hulls..................... 3 1.1.1 Stable computations...........................

More information

Maximal Entropy for Reconstruction of Back Projection Images

Maximal Entropy for Reconstruction of Back Projection Images Maximal Entropy for Reconstruction of Back Projection Images Tryphon Georgiou Department of Electrical and Computer Engineering University of Minnesota Minneapolis, MN 5545 Peter J Olver Department of

More information