GENG2140, S2, 2012 Week 7: Curve fitting


Curve fitting is the process of constructing a curve, or mathematical function f(x), that has the best fit to a series of data points. It involves fitting lines and polynomial curves to data points.

Polynomial f(x)             Order          Curve           Connects exactly
y = ax + b                  First order    Straight line   Two points
y = ax^2 + bx + c           Second order   Quadratic       Three points
y = ax^3 + bx^2 + cx + d    Third order    Cubic           Four points

Why get an approximate fit when we could just increase the degree of the polynomial and get an exact match?
- A divergent case (an exact fit cannot be calculated, or it might take too long).
- Averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly.
- Runge's phenomenon (oscillations at the edges of an interval that occur when interpolating between equidistant points with high-degree polynomials).
- Low-order polynomials tend to be smooth, while high-order polynomial curves tend to be "lumpy".
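As a quick illustration of the table above, a minimal sketch (with made-up data points) fits the unique quadratic through three points; a degree n - 1 polynomial gives an exact match through n points:

import numpy as np

# Three data points: a unique quadratic passes through them exactly.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 11.0])

# Degree n-1 polynomial through n points gives an exact fit.
coeffs = np.polyfit(x, y, deg=len(x) - 1)   # [a, b, c] for ax^2 + bx + c
print(coeffs)                               # -> [ 3. -1.  1.]
print(np.polyval(coeffs, x))                # reproduces y exactly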

Methods

1. LAGRANGE INTERPOLATION: The curve is made to pass through all the points exactly by a single polynomial.
Advantages:
- No need to solve linear equations.
- Less susceptible to round-off errors.
- Allows interpolation even when function values are expressed by symbols.
Disadvantages:
- Susceptible to Runge's phenomenon.
- Changing the interpolation points requires recalculating the entire interpolant.
(A code sketch of Lagrange interpolation follows the spline summary below.)

2. SPLINES: A series of low-degree polynomials is fitted piecewise, one between each pair of consecutive data points; the resulting piecewise polynomial is called a spline.
Advantages:
- Spline interpolation is often preferred over Lagrange interpolation because it gives a high level of smoothness.
- The interpolation error can be made small even when using low-degree polynomials for the spline.
- Avoids the problem of Runge's phenomenon.
Disadvantages:
- Computationally intensive.
- Rather slow.
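A minimal sketch of Lagrange interpolation in plain Python; the data points are hypothetical. Note that no linear system is solved: the interpolant is evaluated directly from the basis polynomials.

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Build the i-th basis polynomial L_i(x), which is 1 at xs[i]
        # and 0 at every other node.
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * Li
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 11.0]
print(lagrange_interp(xs, ys, 1.5))   # value of the interpolant at x = 1.5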

Linear spline: draws straight lines between consecutive data points.
Quadratic spline: draws a second-order polynomial instead of a straight line through consecutive data points.
Cubic spline: a series of unique cubic polynomials is fitted between each pair of data points.

Example: 100 random data points. Clearly, no correlation can be established between the 100 random data points. A Lagrange interpolation would require a 99th-order polynomial to fit these data points. However, a cubic spline can interpolate all 100 points without the drastic behavior a 99th-order polynomial would exhibit. (Figure: cubic spline curve through the random data.)
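A minimal cubic-spline sketch using SciPy's CubicSpline, mirroring the 100-random-points example above; the random seed and interval are arbitrary choices.

import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)          # 100 equally spaced sample locations
y = rng.random(100)                  # 100 random data values

cs = CubicSpline(x, y)               # piecewise cubic through every point
xf = np.linspace(0, 10, 1000)
yf = cs(xf)                          # smooth evaluation between the data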

3. LEAST SQUARES METHOD: Finds the parameters of a model that best fit the data by minimizing the residuals. A residual is defined as the difference between the actual value of the dependent variable and the value predicted by the model.
- Does not seek an exact fit, but fits the trend of the data.
- Applied when the data are not reliable.
(Figure: least-squares fit.)
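A least-squares sketch using NumPy's polyfit; the noisy data below are invented for illustration. The printed sum of squared residuals is the quantity the method minimizes.

import numpy as np

# Noisy data roughly following y = 2x + 1 (hypothetical example).
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

# Fit a straight line by minimizing the sum of squared residuals.
a, b = np.polyfit(x, y, deg=1)
residuals = y - (a * x + b)
print(a, b)                      # slope and intercept of the best-fit line
print(np.sum(residuals**2))     # the quantity least squares minimizes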

Week 8: Numerical methods for solving nonlinear algebraic equations (root finding)

Nonlinear problems: In mathematics, a nonlinear system is one that does not satisfy the superposition principle, or one whose output is not directly proportional to its input. Most physical systems are inherently nonlinear in nature. Nonlinear equations are difficult to solve, and it is often difficult to determine the existence or number of solutions. Whereas for a system of linear equations the number of solutions must be either zero, one, or infinitely many, nonlinear equations can have any number of solutions. Determining existence and uniqueness of solutions is more complicated for nonlinear equations than for linear equations. For example, e^x - x = 0 has no real solution, while e^x - x - 1 = 0 has a repeated root at x = 0.

Solution of nonlinear equations: two types
- Polynomial equations (algebraic equations)
- Transcendental equations (non-algebraic equations)

Polynomial equations: Formed by equating a polynomial to zero; that is, given a function f(x), we seek a value x for which f(x) = 0. The solution x is called a root of the equation, or a zero of the function f. Thus, the problem is known as root (or zero) finding. For example:

f(x) = x^2 - 5x + 6 = 0
(x - 2)(x - 3) = 0
x = 2 and x = 3 (zeros of the function)

Transcendental equations (non-algebraic equations): Made of transcendental functions, that is, functions not expressible as a finite combination of the algebraic operations of addition, subtraction, multiplication, division, raising to a power, and extracting a root. Examples: log x, sin x, cos x, e^x. Such equations cannot be solved for one factor in terms of another and are expressible in algebraic terms only as infinite series. Some methods of finding solutions to a transcendental equation use graphical or numerical approaches; the graphical method may be time-consuming, so numerical methods are preferred.

Direct methods:
1. Give the exact value of the roots (in the absence of round-off errors) in a finite number of steps.
2. Determine all the roots at the same time.

Numerical (iterative) methods:
1. Based on the idea of successive approximations: starting with one or more initial approximations to the root, we obtain a sequence of iterates which in the limit converges to the root.
2. Determine one or two roots at a time.

The questions we want to answer are: Does the problem have a solution? Is the solution unique? Is the iteration well-defined? Does the iteration converge to a limit? How quickly does the method converge?

Solution of nonlinear equations: find the value of x for which f(x) = 0. The heart of numerical analysis is iteration.

1. Bisection method (binary search): Repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. A convergence test is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. (Figure: y = f(x) with the bracket [x_l, x_u] and midpoint x_r.)

- Make initial guesses x_l and x_u that bracket the root.
- At each step, divide the interval in two by computing the midpoint x_r = (x_l + x_u)/2.
- If f(x_l) f(x_r) < 0, the root lies in the lower subinterval: set x_u = x_r.
- If f(x_l) f(x_r) > 0, the root lies in the upper subinterval: set x_l = x_r.
- If f(x_l) f(x_r) = 0, the root is x_r.
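A minimal sketch of the bisection loop above; the tolerance, iteration cap, and the quadratic test function are illustrative choices.

def bisection(f, xl, xu, tol=1e-8, max_iter=100):
    """Find a root of f in [xl, xu]; requires f(xl) and f(xu) to differ in sign."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must bracket a root")
    for _ in range(max_iter):
        xr = (xl + xu) / 2            # midpoint of the current bracket
        if f(xl) * f(xr) < 0:
            xu = xr                   # root lies in the lower subinterval
        elif f(xl) * f(xr) > 0:
            xl = xr                   # root lies in the upper subinterval
        else:
            return xr                 # landed exactly on the root
        if (xu - xl) / 2 < tol:       # bracket small enough: done
            break
    return (xl + xu) / 2

# Example: root of x^2 - 5x + 6 between 1.5 and 2.5 (expects x = 2).
print(bisection(lambda x: x**2 - 5*x + 6, 1.5, 2.5))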

In each step, the interval is reduced by 50%. The process is continued until the approximate error e_a is sufficiently small:

e_a = |(x_r_new - x_r_old) / x_r_new| < tolerance

The number of iterations n that the bisection method needs to converge to a root within a certain tolerance can be determined in advance, since the bracket halves each step:

(x_u - x_l) / 2^n < tolerance

Bisection is simple and robust but slow. It is therefore often used to obtain a rough approximation to the solution, which is then used as a starting point for more rapidly converging methods, such as the Newton-Raphson method.

2. Newton-Raphson method
An extremely powerful technique; in general the convergence is quadratic. It is the most widely used of all root-finding methods.

Method:
- Make an initial guess which is reasonably close to the true root.
- Take the derivative of the function at that point.
- Draw the tangent line from the point and extend it to the x-axis.
- This x-intercept will typically be a better approximation to the function's root than the original guess.
- Iterate the Newton-Raphson formula until the error falls below the tolerance:

x_(i+1) = x_i - f(x_i) / f'(x_i),    i = 0, 1, 2, ..., n

The method will usually converge, provided this initial guess is close enough to the unknown zero.
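A minimal Newton-Raphson sketch implementing the formula above; the test function, its derivative, and the starting guess are illustrative.

def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_(i+1) = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / dfdx(x)     # tangent-line update
        if abs(x_new - x) < tol:       # stop when the step is tiny
            return x_new
        x = x_new
    return x

# Example: root of x^2 - 5x + 6, starting from x = 1.
f = lambda x: x**2 - 5*x + 6
dfdx = lambda x: 2*x - 5
print(newton_raphson(f, dfdx, x0=1.0))   # converges to 2.0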

Disadvantages (cases where it performs poorly):
- Multiple roots (e.g. a multiple root at zero).
- May result in slow convergence due to the nature of certain functions.
- An analytical expression for the derivative may not be easily obtainable.
In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation results in something like the secant method, whose convergence is slower than that of Newton's method.

3. Secant method
The graphical interpretation is similar to the Newton-Raphson method, but instead of a tangent line we draw a secant line (a straight line which crosses the curve at two points). The derivative is approximated by a backward finite divided difference.

Secant formula:

x_(i+1) = x_i - f(x_i) (x_i - x_(i-1)) / (f(x_i) - f(x_(i-1)))

Since the calculation of x_(i+1) requires x_i and x_(i-1), two initial points must be prescribed at the beginning.
Advantage: the derivative of the function need not be calculated.
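A minimal secant-method sketch implementing the formula above; the transcendental test function e^x - 3x and the two starting points are illustrative.

import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: approximates f' by a backward finite divided difference."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                        # slide the two points forward
    return x1

# Example: root of e^x - 3x (no derivative needed), starting from 0 and 1.
print(secant(lambda x: math.exp(x) - 3*x, 0.0, 1.0))   # ~0.6191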

Week 10: One Dimensional Unconstrained Optimization

Most of the mathematical models we have dealt with to this point have been descriptive models; that is, they have been derived to simulate the behavior of an engineering device or system. In contrast, optimization typically deals with finding the best result, or optimum solution, of a problem. Engineers must continuously design devices that perform tasks in an efficient fashion. In doing so, they are constrained by the limitations of the physical world, and they must keep costs down. Thus, they are always confronting optimization problems that balance performance and limitations. Some optimization examples:
1. Design of aircraft for minimum weight and maximum strength.
2. Optimal trajectories of space vehicles.
3. Design of civil engineering structures for minimum cost.
4. Design of pumps and heat transfer equipment for maximum efficiency.
5. Shortest route of a salesperson visiting various cities during one sales trip.

Root location and optimization are related in the sense that both involve guessing and searching for a point on a function. Root finding seeks x where f(x) = 0; optimization seeks x where f'(x) = 0, with f''(x) < 0 at a maximum and f''(x) > 0 at a minimum. (Figure: roots, maximum and minimum of f(x).)

One-dimensional optimization: the minimum and maximum of a function f(x) of a single variable x.
Multi-dimensional optimization: the minimum and maximum of a function of two or more variables, f(x, y).

Unconstrained optimization problem: min_x f(x) or max_x f(x)
Constrained optimization problem: min_x f(x) or max_x f(x), subject to g(x) = 0

We will consider the one-dimensional unconstrained optimization problem only.

1. Golden section method
An incremental search for a max/min using the golden ratio. The search interval in each iteration is reduced by the golden ratio, GR = (sqrt(5) - 1)/2 ≈ 0.618.

Step 1. Choose two initial guesses, x_l and x_u.
Step 2. Choose two interior points x_1 and x_2 according to the golden ratio: d = GR (x_u - x_l), x_1 = x_l + d, x_2 = x_u - d.
Step 3. If f(x_1) > f(x_2), discard the region left of x_2: set x_l = x_2.
        If f(x_2) > f(x_1), discard the region right of x_1: set x_u = x_1.
Step 4. Iterate until the interval is less than the tolerance. (A code sketch follows the comparison below.)

Comparison of the bisection method and the golden section method

Bisection method:
- Method of incremental search.
- Used to find roots of f(x): find x for which f(x) = 0.
- Requires 2 points for the incremental search.
- Looks for a sign change of the function between the two points.
- The interval reduces by 0.5 in each iteration: (x_u - x_l)/2.

Golden section method:
- Method of incremental search.
- Used to find max and min values of f(x): find x for which f'(x) = 0, with f''(x) < 0 for a max point and f''(x) > 0 for a min point.
- Requires a minimum of 3 points (up to 4 points) for the incremental search.
- Looks for the higher/lower value of the function between the interior points.
- The interval reduces by the golden number 0.618 in each iteration: d = ((sqrt(5) - 1)/2)(x_u - x_l).
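A minimal golden-section sketch for a maximum, following Steps 1-4 above (a simple version that recomputes both interior points each iteration rather than reusing one); the test function is illustrative.

import math

def golden_section_max(f, xl, xu, tol=1e-8):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    gr = (math.sqrt(5) - 1) / 2            # golden ratio ~ 0.618
    while (xu - xl) > tol:
        d = gr * (xu - xl)
        x1, x2 = xl + d, xu - d            # two interior points, x2 < x1
        if f(x1) > f(x2):
            xl = x2                        # maximum lies right of x2
        else:
            xu = x1                        # maximum lies left of x1
    return (xl + xu) / 2

# Example: maximum of f(x) = 2 sin(x) - x^2/10 on [0, 4] (~ x = 1.4276).
print(golden_section_max(lambda x: 2*math.sin(x) - x**2/10, 0.0, 4.0))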

2. Quadratic (parabolic) interpolation
A second-order polynomial often provides a good approximation to the shape of f(x) near an optimum. Just as there is only one straight line connecting two points, there is only one quadratic (parabola) connecting three points. Thus, if we have three points (x_0, x_1, x_2) that jointly bracket an optimum, we can fit a parabola to the points, differentiate it, set the result equal to zero, and solve for an estimate of the optimal point x_3. It can be shown that:

x_3 = [f(x_0)(x_1^2 - x_2^2) + f(x_1)(x_2^2 - x_0^2) + f(x_2)(x_0^2 - x_1^2)] / [2 f(x_0)(x_1 - x_2) + 2 f(x_1)(x_2 - x_0) + 2 f(x_2)(x_0 - x_1)]

where x_0, x_1 and x_2 are the initial guesses, and x_3 is the optimum estimate obtained by the quadratic fit.

Advantages:
- Works well when the function is quite smooth in the interval.
- Convergence is almost quadratic (order approximately 1.324), superior to methods with only linear convergence (such as a line search).
- Does not require the computation of derivatives, making it a popular alternative to methods that do (such as gradient descent and Newton's method).

Disadvantages:
- Convergence (even to a local extremum) is not guaranteed, e.g. when the three points are collinear (the resulting parabola is degenerate and does not provide a new candidate point).
- Can get hung up, with just one end of the interval converging; convergence can then be slow.
- If function derivatives are available, Newton's method is applicable and exhibits quadratic convergence.
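A sketch of a single quadratic-interpolation step using the formula above; the test function and the three bracketing guesses are illustrative.

import math

def parabolic_step(f, x0, x1, x2):
    """One quadratic-interpolation step: vertex of the parabola through
    (x0, f0), (x1, f1), (x2, f2)."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = f0*(x1**2 - x2**2) + f1*(x2**2 - x0**2) + f2*(x0**2 - x1**2)
    den = 2*f0*(x1 - x2) + 2*f1*(x2 - x0) + 2*f2*(x0 - x1)
    return num / den     # x3, the estimated optimum

# Example: f(x) = 2 sin(x) - x^2/10, bracketed by x = 0, 1, 4.
f = lambda x: 2*math.sin(x) - x**2/10
print(parabolic_step(f, 0.0, 1.0, 4.0))   # ~1.5055, closer to the true optimum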

Week 10/11: Numerical Integration (Quadrature)

In numerical analysis, numerical integration (numerical quadrature) is used for calculating the numerical value of a definite integral. The basic problem considered by numerical integration is to compute an approximate solution to a definite integral of a function f(x):

A = ∫_a^b f(x) dx

which is the area under the curve (shaded) bounded by the limits a and b. The definite integral can be thought of as an infinite sum of rectangles of infinitesimal width.

There are several reasons for carrying out numerical integration:
- In real applications, some integrals may be too complex and cannot be found exactly.
- Some integrals may require special functions which are themselves a challenge to compute, or too slow to compute.
- The integrand f(x) may be known only at certain points, such as those obtained by sampling (observations at a certain number of points). In that case, we do not have a nice formula but only some data points.
- A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative, e.g. f(x) = exp(-x^2), whose antiderivative (the error function, times a constant) cannot be written in elementary form.

Applications
- Chemical engineering: an analytic function is integrated numerically to determine the heat required to raise the temperature of a material.
- Civil engineering: numerical integration is used to determine the total wind force acting on the mast of a racing sailboat. The total force exerted on the mast can be expressed as

  F = ∫ 200 [z / (5 + z)] e^(-2z/30) dz

  integrated over the height of the mast. This nonlinear integrand is difficult to evaluate analytically, so it is convenient to apply numerical integration such as Simpson's rule.
- Electrical engineering: determination of the root mean square (rms) current of an electric circuit.
- Mechanical engineering: calculation of the work required to move a block.

Methods
Newton-Cotes formulas: approximate a complicated function, tabulated at equally spaced intervals, by an approximating polynomial.
- Trapezoidal rule: if the approximating polynomial is first order (a straight line joining the two end points), the result is the trapezoidal rule.
- Simpson's rules: if there is an additional point available in between, the three points can be connected by a parabola (Simpson's 1/3 rule). If there are two additional equally spaced points in between, the four points can be connected by a third-order polynomial, which is more accurate (Simpson's 3/8 rule).
Gaussian quadrature: the sample points are not equally spaced.
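Minimal sketches of the composite trapezoidal and Simpson's 1/3 rules, applied to the exp(-x^2) integrand mentioned earlier; the interval and number of subintervals are illustrative choices.

import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Example: integrate exp(-x^2) from 0 to 1 (no elementary antiderivative).
f = lambda x: np.exp(-x**2)
print(trapezoid(f, 0.0, 1.0, 100))   # ~0.74682
print(simpson(f, 0.0, 1.0, 100))     # ~0.746824 (more accurate)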