Curve Fitting with the Least Square Method


WIKI Document Number 5: Interpolation with Least Squares — Curve Fitting with the Least Square Method. Matthieu Bultelle, Department of Bio-Engineering, Imperial College, London

Context

We wish to model the positive feedback of the production of AHL. For this purpose we need to interpolate a set of N experimental measurements (x_i, Y_i), i = 1, ..., N, with the function

    f(x) = \frac{A x^2}{B + x^2}

[Figure: the interpolation function plotted for A = 1 and B = 1, 2, 4, for x between 0 and 10.]

There are several ways to do the interpolation; some are more robust than others. We chose to use the least square method, that is, to minimize the expression

    \chi(A, B) = \sum_{i=1}^{N} \left( \frac{A x_i^2}{B + x_i^2} - Y_i \right)^2

The minimum of the function is obtained at a point (A, B) such that \partial\chi/\partial A = \partial\chi/\partial B = 0. We have therefore the following necessary conditions:

    (1)  \frac{\partial \chi}{\partial A} = 2 \sum_{i=1}^{N} \left( \frac{A x_i^2}{B + x_i^2} - Y_i \right) \frac{x_i^2}{B + x_i^2} = 0

    (2)  \frac{\partial \chi}{\partial B} = -2 \sum_{i=1}^{N} \left( \frac{A x_i^2}{B + x_i^2} - Y_i \right) \frac{A x_i^2}{(B + x_i^2)^2} = 0
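As a concrete sketch (in Python, whereas the text later recommends Matlab or C; the data below are synthetic, generated from the model itself with A = 2, B = 3, purely for illustration), the interpolation function and the least-square expression can be written as:

```python
def model(x, A, B):
    # Interpolation function f(x) = A * x^2 / (B + x^2)
    return A * x * x / (B + x * x)

def chi(A, B, xs, Ys):
    # Least-square expression chi(A, B): sum of squared residuals
    return sum((model(x, A, B) - y) ** 2 for x, y in zip(xs, Ys))

# Synthetic data generated from the model with A = 2, B = 3
xs = [0.5 * k for k in range(1, 21)]
Ys = [model(x, 2.0, 3.0) for x in xs]
print(chi(2.0, 3.0, xs, Ys))  # 0.0 at the true parameters
```

Minimizing chi over (A, B) is exactly the problem the rest of the note addresses.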

Equation (1) can be simplified to yield the condition

    (1')  A = F(B) = \frac{\sum_i Y_i \, x_i^2/(B + x_i^2)}{\sum_i x_i^4/(B + x_i^2)^2}

Likewise (2) can be modified into

    (2')  A = G(B) = \frac{\sum_i Y_i \, x_i^2/(B + x_i^2)^2}{\sum_i x_i^4/(B + x_i^2)^3}

The intersections of the curves A = F(B) and A = G(B) are potential extrema of the error function \chi(A, B). It is easy to prove that they actually are local minima. If the data are kind to us there is only one intersection; in the general case there is more than one. To determine which local minimum is the absolute minimum (the point we are after), we need to compute \chi(A, B) for all the candidates; the overall minimum is of course the point that returns the lowest value.

How many Local Minima are there?

There is no way to know in advance how many local minima there are, but it is easy to bound their number in the worst-case scenario. It is easy to prove that the equation F(B) = G(B) can be turned into a polynomial equation of degree 5N - 5. So there cannot be more than 5N - 5 intersection points, which can still be many.

Do we know where they are located?

Up to a point. We are only interested in the positive values of B, so we have 0 as a lower bound for B. Unfortunately we do not have a simple upper bound for B. An easy pragmatic solution is available to us, however: just plot G(B) - F(B)! It will be easy to identify a value of B (call it B_lim) which is sure to be an upper bound (the estimation does not have to be that precise!). A little physical sense also helps: if you have done your experiments properly you have acquired some data in the saturation phase. If this is the case you can be sure that x_max^2 is larger than B, and x_max^2 can therefore be used as an upper bound for B in the search for the overall minimum of \chi(A, B).
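Conditions (1') and (2') can be checked numerically. In this Python sketch (function names are mine) the data are synthetic, generated from the model with A = 2 and B = 3, so F and G should both return the true A when evaluated at the true B:

```python
def F(B, xs, Ys):
    # A = F(B): stationarity condition in A, equation (1')
    num = sum(y * x**2 / (B + x**2) for x, y in zip(xs, Ys))
    den = sum(x**4 / (B + x**2)**2 for x in xs)
    return num / den

def G(B, xs, Ys):
    # A = G(B): stationarity condition in B, equation (2')
    num = sum(y * x**2 / (B + x**2)**2 for x, y in zip(xs, Ys))
    den = sum(x**4 / (B + x**2)**3 for x in xs)
    return num / den

# Synthetic data from f(x) = 2 x^2 / (3 + x^2)
xs = [0.5 * k for k in range(1, 21)]
Ys = [2.0 * x**2 / (3.0 + x**2) for x in xs]
print(F(3.0, xs, Ys), G(3.0, xs, Ys))  # both equal 2.0 at the true B
```

With noisy real data F and G no longer agree everywhere, and their intersections are exactly the candidate minima discussed above.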

How do I find the solutions of G(B) = F(B)?

We now have a lower and an upper bound for B. For complex equations like the one we are interested in, I would recommend the following strategy (which is not brute force but is still computationally intensive). Implementation will require a few programming skills (I recommend Matlab or C as language).

1) Cut the segment [0, B_lim] into a large number (p + 1) of equally-spaced points B_i = i * B_lim / p. Ideally p is large (1000 or even better).
2) Compute F(B_i) - G(B_i) for all the points B_i = i * B_lim / p.
3) For i = 0 to p - 1, compare the sign of F(B_i) - G(B_i) with that of F(B_{i+1}) - G(B_{i+1}). If there has been a change of sign, then there is a zero of the function between B_i and B_{i+1}. Find this zero of the function by dichotomy.

Provided the initial sampling of [0, B_lim] has been fine enough (p large enough), we have found all the solutions of the equation.

Reminder: Finding zeros of a function by dichotomy

Dichotomy = cutting in two. Let us assume we have a function f continuous on [a, b] such that f(a) <= 0 and f(b) >= 0. (Note: if we have f(a) >= 0 and f(b) <= 0 instead, it does not matter — just switch f for -f!) It can be proven that f has a zero between a and b (f being continuous). Please note that there may be more than one zero between a and b; the method detailed below is going to yield one of them only, not all of them. To get all the zeros between a and b you need to resample [a, b] more finely.

Let us call epsilon the precision of the estimation of this zero. We want to return a value x such that |x - x_0| <= epsilon, where f(x_0) = 0. For this purpose we build two series (U_n) and (V_n) with the following rules:

Initialization: U_0 = a, V_0 = b.
Computing rank n + 1: compute f((U_n + V_n)/2). If it is positive, then V_{n+1} = (U_n + V_n)/2 and U_{n+1} = U_n; else U_{n+1} = (U_n + V_n)/2 and V_{n+1} = V_n.
Repeat the operation while |U_n - V_n| > epsilon.
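The scan-then-dichotomy strategy above can be sketched in Python (the function name is mine, and the quadratic used in the demonstration is a stand-in chosen only so the roots are known; in practice h would be F - G):

```python
def find_roots(h, b_lo, b_hi, p=1000, eps=1e-10):
    """Scan [b_lo, b_hi] with p+1 equally-spaced points; run a dichotomy
    (bisection) on every sub-interval where h changes sign."""
    roots = []
    step = (b_hi - b_lo) / p
    for i in range(p):
        a = b_lo + i * step
        b = a + step
        fa, fb = h(a), h(b)
        if fa == 0.0:
            roots.append(a)
            continue
        if fa * fb < 0.0:
            # Dichotomy: halve [a, b] until it is shorter than eps
            while b - a > eps:
                m = 0.5 * (a + b)
                if h(a) * h(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Demonstration on a function with known zeros at B = 3 and B = 7
roots = find_roots(lambda b: (b - 3.0) * (b - 7.0), 0.0, 10.0)
print(roots)  # two roots, near 3 and 7
```

As the reminder warns, a sign scan can miss pairs of zeros that fall inside one sub-interval, which is why p should be generous.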

A less complicated way to find the overall Minimum

Excel offers a way to implement the least square method with any interpolation function. All it needs is:
- the expression of the interpolation function
- the data (x_i, Y_i), i = 1, ..., N
- a starting point for the search (A_0, B_0)

However, the optimization algorithm is not as robust as we could hope, and therefore nothing ensures that the software will not return a local minimum instead of the overall minimum. We can use the previous results to get a more robust interpolation. The idea is to use a (potentially) large number of starting points and let Excel do the rest. We assume that the upper bound B_lim has been found.

1) Cut the segment [0, B_lim] into a large number (p + 1) of equally-spaced points B_i = i * B_lim / p. Ideally, for every value B_i we would associate a value A_i such that (A_i, B_i) is a good starting point. If your experiments were done properly then you did some measurements in the saturated phase of the curve; you can therefore use A_i = Y_max = max_i(Y_i) all the time.
2) For every value of B_i, run the Excel simulation with (Y_max, B_i) as starting point. We call the result (A_i*, B_i*).
3) Compute the error function \chi(A, B) = \sum_i ( A x_i^2/(B + x_i^2) - Y_i )^2 for each (A_i*, B_i*).

The point (A_i*, B_i*) that achieves the lowest value of \chi will be a good approximation of the minimum of \chi(A, B), if you have used enough points (p large enough).
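Excel's Solver cannot be scripted here, so as a rough stand-in this Python sketch applies the same multi-start idea: for every candidate B on the grid it takes the best A in closed form (the cost is quadratic in A for fixed B, which replaces the A = Y_max starting value) and keeps the pair with the lowest error. The data are synthetic and all names are mine:

```python
def fit(xs, Ys, b_lim, p=1000):
    # Multi-start search over B in (0, b_lim]: for each candidate B,
    # the optimal A is explicit (least squares in A alone), so each
    # "start" collapses to one evaluation of the error function.
    def model(x, A, B):
        return A * x**2 / (B + x**2)
    def best_A(B):
        num = sum(y * x**2 / (B + x**2) for x, y in zip(xs, Ys))
        den = sum(x**4 / (B + x**2)**2 for x in xs)
        return num / den
    def chi(A, B):
        return sum((model(x, A, B) - y)**2 for x, y in zip(xs, Ys))
    best = None
    for i in range(1, p + 1):          # skip B = 0
        B = i * b_lim / p
        A = best_A(B)
        c = chi(A, B)
        if best is None or c < best[0]:
            best = (c, A, B)
    return best[1], best[2]

# Synthetic data from f(x) = 2 x^2 / (3 + x^2)
xs = [0.5 * k for k in range(1, 21)]
Ys = [2.0 * x**2 / (3.0 + x**2) for x in xs]
A_hat, B_hat = fit(xs, Ys, b_lim=10.0)
print(A_hat, B_hat)  # close to the true A = 2, B = 3
```

The grid resolution b_lim / p plays the same role as the density of Excel starting points: too coarse and a narrow global minimum can be skipped.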