LEAST SQUARES APPROXIMATION


One more approach to approximating a function f(x) on an interval a ≤ x ≤ b is to seek an approximation p(x) with a small average error over the interval of approximation. A convenient definition of the average error of the approximation is

    E(p; f) = [ (1/(b - a)) ∫_a^b (f(x) - p(x))^2 dx ]^{1/2}    (1)

This is also called the root-mean-square error (denoted subsequently by RMSE) in the approximation of f(x) by p(x). Note first that choosing p(x) to minimize E(p; f) is equivalent to minimizing

    ∫_a^b (f(x) - p(x))^2 dx

thus dispensing with the square root and the multiplying fraction 1/(b - a); the minimizer p(x) is the same, although the minimum values are generally different. The minimization of (1) is called the least squares approximation problem.
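Definition (1) is easy to check numerically by applying a quadrature rule to the integral. A minimal sketch, assuming Python; the helper name `rmse` and the choice of the composite trapezoidal rule are illustrative, not part of the notes:

```python
import math

def rmse(f, p, a, b, n=2000):
    """Root-mean-square error E(p; f) on [a, b], estimated by applying
    the composite trapezoidal rule (n subintervals) to (f(x) - p(x))^2."""
    h = (b - a) / n
    total = 0.5 * ((f(a) - p(a)) ** 2 + (f(b) - p(b)) ** 2)
    for i in range(1, n):
        x = a + i * h
        total += (f(x) - p(x)) ** 2
    return math.sqrt(h * total / (b - a))

# Example: e^x versus its linear Taylor polynomial 1 + x on [-1, 1].
err = rmse(math.exp, lambda x: 1.0 + x, -1.0, 1.0)
print(round(err, 3))  # about 0.246
```

This value reappears below as the RMSE of the Taylor approximation t_1(x).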

Example. Let f(x) = e^x and p(x) = α_0 + α_1 x, with α_0 and α_1 unknown. Approximate f(x) over [-1, 1]. Choose α_0, α_1 to minimize

    g(α_0, α_1) = ∫_{-1}^{1} (e^x - α_0 - α_1 x)^2 dx    (2)

Expanding the integrand and integrating,

    g(α_0, α_1) = ∫_{-1}^{1} ( e^{2x} + α_0^2 + α_1^2 x^2 - 2 α_0 e^x - 2 α_1 x e^x + 2 α_0 α_1 x ) dx
                = c_1 α_0^2 + c_2 α_1^2 + c_3 α_0 α_1 + c_4 α_0 + c_5 α_1 + c_6

with constants {c_1, ..., c_6}, e.g. c_1 = 2 and c_6 = (e^2 - e^{-2})/2. Thus g is a quadratic polynomial in the two variables α_0, α_1. To find its minimum, solve the system

    ∂g/∂α_0 = 0,   ∂g/∂α_1 = 0

It is simpler to differentiate (2)

    g(α_0, α_1) = ∫_{-1}^{1} (e^x - α_0 - α_1 x)^2 dx

directly under the integral sign, obtaining

    ∂g/∂α_0 = ∫_{-1}^{1} 2 (e^x - α_0 - α_1 x)(-1) dx = 0
    ∂g/∂α_1 = ∫_{-1}^{1} 2 (e^x - α_0 - α_1 x)(-x) dx = 0

Using ∫_{-1}^{1} x dx = 0 and ∫_{-1}^{1} x^2 dx = 2/3, this simplifies to

    α_0 = (1/2) ∫_{-1}^{1} e^x dx = (e - e^{-1})/2 ≈ 1.1752
    α_1 = (3/2) ∫_{-1}^{1} x e^x dx = 3/e ≈ 1.1036

Using these values for α_0 and α_1, we denote the resulting linear approximation by

    l_1(x) = α_0 + α_1 x

It is called the best linear approximation to e^x in the sense of least squares. For the maximum error,

    max_{-1 ≤ x ≤ 1} |e^x - l_1(x)| ≈ 0.439
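The closed-form coefficients and the maximum error can be reproduced directly. A short sketch, assuming Python; the grid search for the maximum is an illustrative device:

```python
import math

# Least squares coefficients for f(x) = e^x on [-1, 1], derived above:
alpha0 = (math.e - 1.0 / math.e) / 2.0   # (e - e^-1)/2, about 1.1752
alpha1 = 3.0 / math.e                    # 3/e, about 1.1036

def l1(x):
    """Best linear least squares approximation to e^x on [-1, 1]."""
    return alpha0 + alpha1 * x

# Estimate the maximum error on a fine grid over [-1, 1].
max_err = max(abs(math.exp(x) - l1(x))
              for x in (i / 1000.0 for i in range(-1000, 1001)))
print(round(max_err, 3))  # about 0.439 (attained at x = 1)
```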

Errors in linear approximations of e^x:

    Approximation          Max Error   RMSE
    Taylor t_1(x)            0.718     0.246
    Least squares l_1(x)     0.439     0.162
    Chebyshev c_1(x)         0.372     0.184
    Minimax m_1(x)           0.279     0.190

Figure: The linear least squares approximation to e^x on [-1, 1]

THE GENERAL CASE

Approximate f(x) on [a, b], and let n ≥ 0. Seek

    p(x) = α_0 + α_1 x + ... + α_n x^n

to minimize the RMSE. Write

    g(α_0, α_1, ..., α_n) = ∫_a^b (f(x) - α_0 - α_1 x - ... - α_n x^n)^2 dx

and find coefficients α_0, α_1, ..., α_n to minimize this integral. The integral g(α_0, α_1, ..., α_n) is a quadratic polynomial in the n + 1 variables α_0, α_1, ..., α_n. To minimize it, invoke the conditions

    ∂g/∂α_i = 0,   i = 0, 1, ..., n

This yields a set of n + 1 equations that must be satisfied by a minimizing set α_0, α_1, ..., α_n for g. Manipulating this set of conditions leads to a simultaneous linear system.

To better understand the form of the linear system, consider the special case [a, b] = [0, 1]. Differentiating

    g(α_0, α_1, ..., α_n) = ∫_0^1 (f(x) - α_0 - α_1 x - ... - α_n x^n)^2 dx

with respect to each α_i, we obtain

    ∫_0^1 2 (f(x) - α_0 - ... - α_n x^n)(-1) dx = 0
    ∫_0^1 2 (f(x) - α_0 - ... - α_n x^n)(-x) dx = 0
    ...
    ∫_0^1 2 (f(x) - α_0 - ... - α_n x^n)(-x^n) dx = 0

Then the linear system is

    Σ_{j=0}^{n} α_j / (i + j + 1) = ∫_0^1 x^i f(x) dx,   i = 0, 1, ..., n

The coefficient matrix, with entries 1/(i + j + 1), is the Hilbert matrix. We will study the solution of simultaneous linear systems in Chapter 6. There we will see that this linear system is ill-conditioned and is difficult to solve accurately, even for moderately sized values of n such as n = 5. As a consequence, this is not a good approach to solving for a minimizer of g(α_0, α_1, ..., α_n). For a better way to solve the least squares approximation problem, we need Legendre polynomials.
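The ill-conditioning can be demonstrated concretely. The sketch below builds the n × n matrix with entries 1/(i + j + 1) in exact rational arithmetic and computes its infinity-norm condition number; the Gauss-Jordan inverse is an illustrative helper written for this demonstration:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def cond_inf(n):
    """Infinity-norm condition number of the n x n Hilbert matrix,
    H[i][j] = 1/(i + j + 1), computed exactly and returned as a float."""
    H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    norm = lambda A: max(sum(abs(v) for v in row) for row in A)
    return float(norm(H) * norm(inverse(H)))

for n in (3, 6, 9):
    print(n, f"{cond_inf(n):.2e}")   # grows explosively with n
```

Already at n = 6 the condition number exceeds 10^7, so small data errors are amplified enormously when the system is solved in floating point.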

LEGENDRE POLYNOMIALS

Define the Legendre polynomials as follows (for x ∈ [-1, 1]):

    P_n(x) = (1 / (2^n n!)) d^n/dx^n [ (x^2 - 1)^n ],   n = 0, 1, 2, ...

For example,

    P_0(x) = 1
    P_1(x) = x
    P_2(x) = (3x^2 - 1)/2
    P_3(x) = (5x^3 - 3x)/2
    P_4(x) = (35x^4 - 30x^2 + 3)/8

The Legendre polynomials have many special properties, and they are widely used in numerical analysis and applied mathematics.
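The formula above can be checked mechanically: expand (x^2 - 1)^n by the binomial theorem, differentiate n times, and scale by 1/(2^n n!). A sketch, with the illustrative helper `legendre_coeffs`:

```python
from fractions import Fraction
from math import comb, factorial

def legendre_coeffs(n):
    """Coefficients of P_n(x), constant term first, from
    P_n(x) = 1/(2^n n!) * d^n/dx^n [(x^2 - 1)^n]."""
    # (x^2 - 1)^n expanded: coefficient of x^(2k) is C(n, k) * (-1)^(n-k).
    c = [Fraction(0)] * (2 * n + 1)
    for k in range(n + 1):
        c[2 * k] = Fraction(comb(n, k) * (-1) ** (n - k))
    # Differentiate n times (power rule on the coefficient list).
    for _ in range(n):
        c = [i * c[i] for i in range(1, len(c))]
    return [v / (2 ** n * factorial(n)) for v in c]

# P_2(x) = (3x^2 - 1)/2 and P_4(x) = (35x^4 - 30x^2 + 3)/8:
print(legendre_coeffs(2))  # [Fraction(-1, 2), Fraction(0, 1), Fraction(3, 2)]
print(legendre_coeffs(4))
```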

Figure: Legendre polynomials of degrees 1, 2, 3, 4

PROPERTIES

Degree and normalization:

    deg P_n = n,   P_n(1) = 1,   n ≥ 0

Triple recursion relation:

    P_{n+1}(x) = ((2n + 1)/(n + 1)) x P_n(x) - (n/(n + 1)) P_{n-1}(x),   n ≥ 1

with P_0(x) = 1, P_1(x) = x.

Orthogonality and size:

    (P_i, P_j) = 0            for i ≠ j
    (P_j, P_j) = 2/(2j + 1)   for i = j

Here we use the notation

    (f, g) = ∫_{-1}^{1} f(x) g(x) dx

for general functions f(x) and g(x).
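The triple recursion and the orthogonality relation can be verified together: build P_0, ..., P_4 as exact coefficient lists with the recursion, then integrate products term by term. A sketch with illustrative helpers `legendre` and `inner`:

```python
from fractions import Fraction

def legendre(n):
    """P_0..P_n as coefficient lists (constant term first), built with
    the recursion P_{k+1} = ((2k+1) x P_k - k P_{k-1}) / (k+1)."""
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]
    for k in range(1, n):
        xPk = [Fraction(0)] + P[k]            # multiply P_k by x
        prev = P[k - 1] + [Fraction(0)] * 2   # pad P_{k-1} to same length
        P.append([((2 * k + 1) * a - k * b) / (k + 1)
                  for a, b in zip(xPk, prev)])
    return P[: n + 1]

def inner(p, q):
    """Exact (p, q) = integral of p(x) q(x) over [-1, 1]."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    # integral of x^k over [-1, 1] is 2/(k+1) for even k, 0 for odd k
    return sum(2 * c / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

P = legendre(4)
print(inner(P[2], P[3]))  # 0        (orthogonality, i != j)
print(inner(P[3], P[3]))  # 2/7      (equals 2/(2j+1) with j = 3)
```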

Zeroes: all zeroes of P_n(x) are located in [-1, 1], and all are simple roots of P_n(x).

Basis: every polynomial p(x) of degree ≤ n can be written in the form

    p(x) = Σ_{j=0}^{n} β_j P_j(x)

with the choice of β_0, β_1, ..., β_n uniquely determined from p(x):

    β_j = (p, P_j) / (P_j, P_j),   j = 0, 1, ..., n

FINDING THE LEAST SQUARES APPROXIMATION

Here we discuss the least squares approximation problem on only the interval [-1, 1]. Approximation problems on other intervals [a, b] can be handled using a linear change of variable. We seek a polynomial p(x) of degree ≤ n that minimizes

    (f - p, f - p)    (3)

This is equivalent to minimizing

    ∫_{-1}^{1} (f(x) - p(x))^2 dx

We begin by writing p(x) in the form

    p(x) = Σ_{j=0}^{n} β_j P_j(x)
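The linear change of variable mentioned above is x = ((b - a)t + (a + b))/2, which maps t ∈ [-1, 1] onto x ∈ [a, b]; fitting f on [a, b] then amounts to fitting f(x(t)) on [-1, 1]. A minimal sketch (the function names are illustrative):

```python
def to_interval(t, a, b):
    """Map t in [-1, 1] to x in [a, b] (linear change of variable)."""
    return ((b - a) * t + (a + b)) / 2.0

def from_interval(x, a, b):
    """Inverse map: x in [a, b] back to t in [-1, 1]."""
    return (2.0 * x - a - b) / (b - a)

# The endpoints map to endpoints, and the two maps are inverses.
print(to_interval(-1.0, 3.0, 7.0), to_interval(1.0, 3.0, 7.0))  # 3.0 7.0
print(from_interval(to_interval(0.5, 0.0, 2.0), 0.0, 2.0))      # 0.5
```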

Using p(x) = Σ_{j=0}^{n} β_j P_j(x), we obtain

    g(β_0, β_1, ..., β_n) = (f - p, f - p) = ( f - Σ_{j=0}^{n} β_j P_j , f - Σ_{i=0}^{n} β_i P_i )

Expanding by bilinearity and using orthogonality,

    g = (f, f) - 2 Σ_{j=0}^{n} β_j (f, P_j) + Σ_{j=0}^{n} β_j^2 (P_j, P_j)

Completing the square in each β_j, rewrite this as

    g = (f, f) - Σ_{j=0}^{n} (f, P_j)^2 / (P_j, P_j) + Σ_{j=0}^{n} (P_j, P_j) [ β_j - (f, P_j)/(P_j, P_j) ]^2

We see that g(β_0, β_1, ..., β_n) is smallest when

    β_j = (f, P_j) / (P_j, P_j),   j = 0, 1, ..., n

For the choice of coefficients

    β_j = (f, P_j) / (P_j, P_j),   j = 0, 1, ..., n

the minimum is

    g = (f, f) - Σ_{j=0}^{n} (f, P_j)^2 / (P_j, P_j)

We call

    l_n(x) = Σ_{j=0}^{n} [ (f, P_j) / (P_j, P_j) ] P_j(x)    (4)

the least squares approximation of degree ≤ n to f(x) on [-1, 1]. If β_n = 0, then its actual degree is less than n.

Example. Approximate f(x) = e^x on [-1, 1]. We use (4) with n = 3:

    l_3(x) = Σ_{j=0}^{3} β_j P_j(x),   β_j = (f, P_j) / (P_j, P_j)    (5)

The inner products (f, P_j) and the resulting coefficients β_j are as follows.

    j           0         1         2         3
    (f, P_j)    2.35040   0.73576   0.14313   0.02013
    β_j         1.17520   1.10364   0.35781   0.07046

Using (5) and simplifying,

    l_3(x) = 0.996294 + 0.997955 x + 0.536722 x^2 + 0.176139 x^3
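These numbers can be reproduced by approximating each inner product (f, P_j) with a quadrature rule and dividing by (P_j, P_j) = 2/(2j + 1). A sketch, assuming Python; the trapezoidal rule and the helper names are illustrative choices:

```python
import math

# Legendre polynomials P_0..P_3 on [-1, 1]
P = [lambda x: 1.0,
     lambda x: x,
     lambda x: (3 * x * x - 1) / 2,
     lambda x: (5 * x ** 3 - 3 * x) / 2]
norm2 = [2 / (2 * j + 1) for j in range(4)]    # (P_j, P_j) = 2/(2j+1)

def inner_with_exp(j, n=20000):
    """(f, P_j) for f(x) = e^x, composite trapezoidal rule on [-1, 1]."""
    h = 2.0 / n
    g = lambda x: math.exp(x) * P[j](x)
    s = 0.5 * (g(-1.0) + g(1.0)) + sum(g(-1.0 + i * h) for i in range(1, n))
    return h * s

beta = [inner_with_exp(j) / norm2[j] for j in range(4)]

def l3(x):
    """Cubic least squares approximation to e^x on [-1, 1]."""
    return sum(b * p(x) for b, p in zip(beta, P))

print([round(b, 5) for b in beta])   # about [1.1752, 1.10364, 0.35781, 0.07046]
print(round(math.e - l3(1.0), 4))    # endpoint error, about 0.0112
```

The endpoint error matches the maximum error reported for l_3(x) in the table below.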

Figure: Error in the cubic least squares approximation to e^x (the error varies between about -0.00460 and 0.0112 on [-1, 1])

The error in various cubic approximations to e^x:

    Approximation          Max Error   RMSE
    Taylor t_3(x)           0.0516     0.0145
    Least squares l_3(x)    0.0112     0.00334
    Chebyshev c_3(x)        0.00666    0.00384
    Minimax m_3(x)          0.00553    0.00388