Function approximation


Week 9: Monday, Mar 26

Function approximation

A common task in scientific computing is to approximate a function. The function to be approximated might be available only through tabulated data, or it may be the output of some other numerical procedure, or it may be the solution to a differential equation. The approximating function is usually chosen because it is relatively simple to evaluate and analyze. Depending on the context, we might want an approximation that is accurate for a narrow range of arguments (like a Taylor series), or we might want guaranteed global accuracy over a wide range of arguments. We might want an approximation that preserves properties like monotonicity or positivity (e.g. when approximating a probability density). We might want to exactly match measurements at specified points, or we might want an approximation that smooths out noisy data. We might care a great deal about the cost of forming the approximating function if it is only used a few times, or we might care more about the cost of evaluating the approximation after it has been formed. There are a huge number of possible tradeoffs, and it is worth keeping these types of questions in mind in practice.

Though function approximation is a huge subject, we will mostly focus on approximation by polynomials and piecewise polynomials. In particular, we will concentrate on interpolation: finding (piecewise) polynomial approximating functions that exactly match a given function at specified points.

Polynomial interpolation

This is the basic polynomial interpolation problem: given data $\{(x_i, y_i)\}_{i=0}^d$ where all the $x_i$ are distinct, find a degree $d$ polynomial $p(x)$ such that $p(x_i) = y_i$ for each $i$. Such a polynomial always exists and is unique.

The Vandermonde approach

Maybe the most obvious way to approach this problem is to write
$$p(x) = \sum_{j=0}^d c_j x^j,$$
where the unknown coefficients $c_j$ are determined by the interpolation conditions
$$p(x_i) = \sum_{j=0}^d c_j x_i^j = y_i.$$
In matrix form, we can write the interpolation conditions as $Ac = y$, where $a_{ij} = x_i^j$ (and we are now thinking of the indices $i$ and $j$ as going from zero to $d$). The matrix $A$ is a Vandermonde matrix. The Vandermonde matrix is nonsingular, and we can solve Vandermonde systems using ordinary Gaussian elimination in $O(d^3)$ time. This is usually a bad way to compute things numerically. The problem is that the condition numbers of Vandermonde systems grow exponentially with the system size, yielding terribly ill-conditioned systems even for relatively small $d$.
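To make this concrete, here is a minimal MATLAB sketch of the Vandermonde approach. The nodes xi and data yi below are made up for illustration, and we assume a MATLAB recent enough to support implicit expansion:

d  = 9;
xi = linspace(0, 1, d+1)';     % d+1 distinct interpolation nodes
yi = cos(4*xi);                % sample data to interpolate
A  = xi.^(0:d);                % Vandermonde matrix: A(i,j+1) = xi(i)^j
c  = A \ yi;                   % power-basis coefficients by Gaussian elimination
px = polyval(flipud(c), 0.3);  % polyval expects coefficients in decreasing order
cond(A)                        % watch this grow rapidly as d increases

Even for this modest $d$, the reported condition number is already very large, and it deteriorates quickly as $d$ grows.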

The Lagrange approach

The problem with the Vandermonde matrix is not in the basic setup, but in how we chose to represent the space of degree $d$ polynomials. In general, we can write
$$p(x) = \sum_{j=0}^d c_j q_j(x),$$
where $\{q_j(x)\}$ is some other basis for the space of polynomials of degree at most $d$. The power basis $\{x^j\}$ just happens to be a poor choice from the perspective of conditioning. One alternative to the power basis is a basis of Lagrange polynomials:
$$L_i(x) = \frac{\prod_{j \neq i} (x - x_j)}{\prod_{j \neq i} (x_i - x_j)}.$$
The polynomial $L_i$ is characterized by the property
$$L_i(x_j) = \begin{cases} 1, & j = i, \\ 0, & \text{otherwise.} \end{cases}$$
Therefore, if we write the interpolating polynomial in the form
$$p(x) = \sum_{j=0}^d c_j L_j(x),$$
the interpolation conditions yield the linear system $Ic = y$; i.e. we simply have
$$p(x) = \sum_{j=0}^d y_j L_j(x).$$
It is trivial to find the coefficients in a representation of an interpolant via Lagrange polynomials. But what if we want to evaluate the Lagrange form of the interpolant at some point? The most obvious algorithm costs $O(d^2)$ per evaluation, which is more expensive than the $O(d)$ cost of evaluating a polynomial in the usual monomial basis using Horner's rule.
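To see where the $O(d^2)$ cost comes from, here is one possible sketch of the obvious evaluation algorithm; the function name lageval and its arguments are our own, not from the notes:

function px = lageval(xi, yi, x)
% Evaluate p(x) = sum_i yi(i) * L_i(x) directly from the Lagrange form.
% Each L_i costs O(d) to build, and there are d+1 of them: O(d^2) per point.
px = 0;
for i = 1:length(xi)
    Li = 1;
    for j = [1:i-1, i+1:length(xi)]
        Li = Li .* (x - xi(j)) / (xi(i) - xi(j));
    end
    px = px + yi(i) * Li;
end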

Horner's rule

There are typically two tasks in applications of polynomial interpolation. The first task is getting some representation of the polynomial; the second is to actually evaluate the polynomial. In the case of the power basis $\{x^j\}_{j=0}^d$, we would usually evaluate the polynomial in $O(d)$ time using Horner's method. You have likely seen this method before, but it is perhaps worth going through it one more time. Horner's scheme can be written in terms of a recurrence, writing $p(x)$ as $p_0(x)$, where
$$p_j(x) = c_j + x\,p_{j+1}(x) \quad \text{and} \quad p_d(x) = c_d.$$
For example, if we had three data points, we would write
$$\begin{aligned} p_2(x) &= c_2 \\ p_1(x) &= c_1 + x\,p_2(x) = c_1 + x c_2 \\ p_0(x) &= c_0 + x\,p_1(x) = c_0 + x c_1 + x^2 c_2. \end{aligned}$$
Usually, we would just write a loop:

function px = peval(c, x)
% Evaluate p(x) = c(1) + c(2)*x + ... + c(end)*x^(length(c)-1)
% by Horner's rule.
px = c(end);
for j = length(c)-1:-1:1
    px = c(j) + x.*px;
end

But even if we would usually write the loop with no particular thought to the recurrence, it is worth remembering how to write the recurrence. The idea of Horner's rule extends to other bases. For example, suppose we now write a quadratic as
$$c_0 q_0(x) + c_1 q_1(x) + c_2 q_2(x).$$
An alternate way to write this is
$$q_0 \left( c_0 + \frac{q_1}{q_0} \left( c_1 + c_2 \frac{q_2}{q_1} \right) \right);$$
more generally, we could write $p(x) = q_0(x) p_0(x)$, where
$$p_d(x) = c_d \quad \text{and} \quad p_j(x) = c_j + p_{j+1}(x) \frac{q_{j+1}(x)}{q_j(x)}.$$
In the case of the monomial basis, this is just Horner's rule, but the recurrence holds more generally.
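As an illustration of the general recurrence (with hypothetical names of our own choosing), we might write the evaluation loop so that the basis enters only through the ratio $q_{j+1}(x)/q_j(x)$ and through $q_0(x)$:

function px = geval(c, x, ratio, q0)
% Generalized Horner: ratio(j, x) returns q_{j+1}(x)/q_j(x) for
% 0-indexed j, and q0(x) returns q_0(x). Coefficient c(j+1)
% corresponds to c_j in the recurrence above.
px = c(end);                          % p_d(x) = c_d
for j = length(c)-2:-1:0
    px = c(j+1) + px .* ratio(j, x);  % p_j = c_j + p_{j+1} q_{j+1}/q_j
end
px = q0(x) .* px;                     % p(x) = q_0(x) p_0(x)

With the monomial basis, geval(c, x, @(j,x) x, @(x) ones(size(x))) reproduces peval above.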

The Newton approach

The Vandermonde approach to interpolation requires that we solve an ill-conditioned linear system (at a cost of $O(d^3)$) to find the interpolating polynomial. It then costs $O(d)$ per point to evaluate the polynomial. The Lagrange approach gives us a trivial linear system for the coefficients, but it then costs $O(d^2)$ per point to evaluate the resulting representation. Newton's form of the interpolant will give us a better balance: $O(d^2)$ time to find the coefficients, $O(d)$ time to evaluate the function. Newton's interpolation scheme uses the polynomial basis
$$q_0(x) = 1, \qquad q_j(x) = \prod_{k=0}^{j-1} (x - x_k), \quad j > 0.$$

If we write
$$p(x) = \sum_{j=0}^d c_j q_j(x),$$
the interpolating conditions have the form $Uc = y$, where $U$ is a lower triangular matrix with entries
$$u_{ij} = q_j(x_i) = \prod_{k=0}^{j-1} (x_i - x_k)$$
for $i = 0, \ldots, d$ and $j = 0, \ldots, d$ (note that $q_j(x_i) = 0$ whenever $i < j$). Because $U$ is triangular, we can compute the coefficients $c_j$ in $O(d^2)$ time by substitution; and we can use the relationship
$$q_j(x) = (x - x_{j-1})\, q_{j-1}(x)$$
as the basis for a Horner-like scheme to evaluate $p(x)$ in $O(d)$ time (this is part of a problem on HW 5).

In practice, we typically do not form the matrix $U$ in order to compute $c$. Instead, we express the components of $c$ in terms of divided differences. That is, we write $c_j = y[x_0, \ldots, x_j]$, where the coefficients $y[x_i, \ldots, x_j]$ are defined recursively by the relationship
$$y[x_i] = y_i,$$
$$y[x_i, x_{i+1}, \ldots, x_j] = \frac{y[x_i, x_{i+1}, \ldots, x_{j-1}] - y[x_{i+1}, \ldots, x_j]}{x_i - x_j}.$$
Evaluating the $c_j$ coefficients by divided differences turns out to be numerically preferable to forming $U$ and solving by substitution.
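Here is one possible MATLAB sketch of the divided-difference computation, using the usual in-place update (the function name newton_coeffs is our own); we leave the Horner-like evaluation of the Newton form as it is, since that is part of HW 5:

function c = newton_coeffs(xi, yi)
% Overwrite c so that c(j+1) = y[x_0, ..., x_j]; O(d^2) total work.
c = yi(:);
for k = 2:length(c)                % k-1 is the order of the divided difference
    for i = length(c):-1:k
        c(i) = (c(i) - c(i-1)) / (xi(i) - xi(i-k+1));
    end
end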