Lecture 20: Curve fitting II

Introduction

In the previous lecture we developed a method to solve the general linear least-squares problem. Given n samples (x_i, y_i), the coefficients c_j of a model

    y = f(x) = \sum_{j=1}^{m} c_j f_j(x)    (1)

are found which minimize the mean-squared error

    \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left[ y_i - f(x_i) \right]^2    (2)

The MSE is a quadratic function of the c_j, and the best-fit coefficients are the solution to a system of linear equations.

In this lecture we consider the non-linear least-squares problem. We have a model of the form

    y = f(x; c_1, c_2, \ldots, c_m)    (3)

where the c_j are general parameters of the function f, not necessarily coefficients. An example is fitting an arbitrary sine wave to data, where the model is

    y = f(x; c_1, c_2, c_3) = c_1 \sin(c_2 x + c_3)

The mean-squared error

    \mathrm{MSE}(c_1, \ldots, c_m) = \frac{1}{n} \sum_{i=1}^{n} \left[ y_i - f(x_i; c_1, \ldots, c_m) \right]^2    (4)

will no longer be a quadratic function of the c_j, and the best-fit c_j will no longer be given as the solutions of a linear system. Before we consider this general case, however, let's look at a special situation in which a non-linear model can be linearized.

Linearization

In some cases it is possible to transform a nonlinear problem into a linear problem. For example, the model

    y = c_1 e^{c_2 x}    (5)

is nonlinear in the parameter c_2. However, taking the logarithm of both sides gives us

    \ln y = \ln c_1 + c_2 x    (6)

If we define ŷ = ln y and ĉ_1 = ln c_1 then our model has the linear form

    ŷ = ĉ_1 + c_2 x    (7)
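Fitting the straight line ŷ = ĉ_1 + c_2 x is just the linear least-squares problem of the previous lecture. For reference, the best-fit slope and intercept have the standard closed form (these are the formulas the example code below applies), where an overbar denotes a sample mean:

    c_2 = \frac{\overline{x\hat{y}} - \bar{x}\,\bar{\hat{y}}}{\overline{x^2} - \bar{x}^2}, \qquad \hat{c}_1 = \bar{\hat{y}} - c_2\,\bar{x}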

Fig. 1: Dashed line: y = \pi e^{-2x}. Dots: ten samples with added noise. Solid line: fit of the model y = c_1 e^{c_2 x} obtained by fitting the linear model ln y = ĉ_1 + c_2 x and then calculating c_1 = e^{ĉ_1}.

Once we've solved for ĉ_1 we can calculate c_1 = e^{ĉ_1}.

Example. Noise was added to ten samples of y = \pi e^{-2x}, 0 ≤ x ≤ 2. The following code computed the fit of the linearized model.

    ylog = log(y);
    a = (mean(x.*ylog)-mean(x)*mean(ylog))/(mean(x.^2)-mean(x)^2);
    b = mean(ylog)-a*mean(x);
    c1 = exp(b);
    c2 = a;
    disp([c1,c2]);
      3.0157453  -1.2939429

Nonlinear least squares

The general least-squares problem is to find the c_j that minimize

    \mathrm{MSE}(c_1, \ldots, c_m) = \frac{1}{n} \sum_{i=1}^{n} \left[ y_i - f(x_i; c_1, \ldots, c_m) \right]^2    (8)

This is simply the optimization in n dimensions problem that we dealt with in a previous lecture. We can use any of those techniques, such as Powell's method, to solve this problem. It is convenient, however, to have a front-end function that forms the MSE given the data (x_i, y_i) and the function f(x; c_1, \ldots, c_m) and passes that to our minimization routine of choice. The function fitleastsquares in the Appendix is an example of such a front end.
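The same minimization can also be handed to a general-purpose optimizer. As a minimal sketch, Scilab's built-in fminsearch (a Nelder-Mead simplex routine) can minimize the MSE of the cosine model used in the example below directly; this assumes the data vectors x and y and an initial parameter guess c0 are already defined.

    // Sketch: minimize the MSE of y = c1*cos(c2*x+c3) with fminsearch.
    // Assumes x, y and an initial guess c0 exist in the calling scope.
    function w = fmse(c)
        w = mean((y-c(1)*cos(c(2)*x+c(3))).^2);  // eq. (8) for this model
    endfunction

    [cfit,mseMin] = fminsearch(fmse,c0);
    disp(cfit);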

The following example illustrates the use of fitleastsquares.

Example. Eleven data samples in the interval 0 ≤ x ≤ 1 of the function y = 2 cos(6x + 0.5) were generated. Normally distributed noise with standard deviation 0.1 was added to the y data. A fit of the model y = c_1 cos(c_2 x + c_3) gave c_1 = 1.94, c_2 = 5.91, c_3 = 0.53. The fit is shown in Fig. 2 and was generated with the following code.

    rand('seed',2);
    y = 2*cos(6*x+0.5)+rand(x,'normal')*0.1;  // x holds the 11 sample points

    function yf = fmod(x,c)
        yf = c(1)*cos(c(2)*x+c(3));
    endfunction

    [c,fctmin] = fitleastsquares(x,y,fmod,c0,0.01,1e-6);  // c0: initial guess
    disp(c);
      1.9846917  5.8614475  0.5453276

Fig. 2: Fit of the model y = c_1 cos(c_2 x + c_3) to 11 data points.
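Because the MSE in (8) is not a quadratic function of the parameters, it can have multiple local minima, and the result can depend on the initial guess c0. A simple safeguard, sketched below for the example above (the guess ranges are arbitrary, and x, y, fmod are assumed to be defined already), is to restart the search from several random initial guesses and keep the parameters giving the smallest MSE.

    // Sketch: multi-start search to reduce the risk of a local minimum.
    cBest = [];
    mseBest = %inf;
    for trial = 1:10
        c0 = [4*rand(); 10*rand(); 2*%pi*rand()];  // arbitrary guess ranges
        [c,fctmin] = fitleastsquares(x,y,fmod,c0,0.01,1e-6);
        if fctmin < mseBest then
            mseBest = fctmin;
            cBest = c;
        end
    end
    disp(cBest);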

The lsqrsolve (Scilab) and lsqcurvefit (Matlab) functions

The Scilab function lsqrsolve solves general least-squares problems. We create a function to calculate the residuals. The following code solves the problem in the previous example using lsqrsolve.

    rand('seed',2);
    y = 2*cos(6*x+0.5)+rand(x,'normal')*0.1;

    function r = residuals(c,m)
        r = zeros(m,1);
        for i = 1:m
            r(i) = y(i)-c(1)*cos(c(2)*x(i)+c(3));
        end
    endfunction

    c = lsqrsolve(c0,residuals,length(x));
    disp(c);

In Matlab the function lsqcurvefit can be used to implement a least-squares fit. The first step is to create a file specifying the model function in terms of the parameter vector c and the x data. In this example the file is named fmod.m

    function ymod = fmod(c,x)
    ymod = c(1)*cos(c(2)*x+c(3));

Then, in the main program, we pass the function fmod as the first argument to lsqcurvefit, along with the initial estimate of the parameter vector c0 and the x and y data.

    y = 2*cos(6*x+0.5)+randn(size(x))*0.1;
    c = lsqcurvefit(@fmod,c0,x,y);
    disp(c);
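Whichever routine produces the fit, it is worth a quick sanity check. A minimal check (assuming x, y and the fitted parameter vector c from the examples above) is to compare the mean-squared residual with the variance of the added noise, here 0.1^2 = 0.01:

    // Sketch: residuals of the fitted cosine model and their MSE.
    r = y-c(1)*cos(c(2)*x+c(3));
    disp(mean(r.^2));  // should be comparable to the noise variance 0.01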

Appendix

Scilab code

    //////////////////////////////////////////////////////////////////////
    // fitleastsquares.sci
    // 2014-11-11, Scott Hudson, for pedagogic purposes
    // Given data points x(i),y(i) and a function fct(x,c), where c is a
    // vector of m parameters, find the c values that minimize
    // sum over i of (y(i)-fct(x(i),c))^2 using Powell's method.
    // c0 is the initial guess for the parameters. cstep is the initial
    // step size for the parameter search.
    //////////////////////////////////////////////////////////////////////
    function [c,fctmin] = fitleastsquares(xdata,ydata,fct,c0,cstep,tol)
        nData = length(xdata);

        function w = fmse(ctest)
            w = 0;
            for i = 1:nData
                w = w+(ydata(i)-fct(xdata(i),ctest))^2;
            end
            w = w/nData;
        endfunction

        [c,fctmin] = optimpowell(fmse,c0,cstep,tol);
    endfunction