Least squares cubic splines without B-splines S.K. Lucas


School of Mathematics and Statistics, University of South Australia, Mawson Lakes SA 5095
e-mail: stephen.lucas@unisa.edu.au
Submitted to the Gazette of the Australian Mathematical Society May 2003. Accepted July 2003.

1. Justification

Readers of the Gazette will be familiar with the use of least squares to find the polynomial that best fits a set of data points. As with all curve fitting problems, there are many situations where one low order polynomial is inadequate and a high order polynomial fit is inappropriate. A least squares cubic spline would provide a better fit. There are thousands of articles published on variants of splines, including least squares cubic splines. One of the first least squares articles was de Boor and Rice [1], and a comprehensive explanatory textbook is Dierckx [2]. Unfortunately, every example in the literature and on the web of a least squares cubic spline makes use of B-splines. While B-splines have a certain elegance, they are sufficiently complex to be beyond the typical undergraduate level, and have the disadvantage of being more expensive to evaluate than traditional cubic splines. For example, Schumaker [4] points out that it is more efficient to convert a cubic B-spline to a traditional cubic spline and then evaluate if you require two or more function evaluations per interval. The only exception to using B-splines is Ferguson and Staley [3], who find a least squares cubic fit to data that enforces continuity of the function and first derivative, but not the second derivative. Thus, their formulation does not lead to a cubic spline. My colleague, Basil Benjamin, had been using what is essentially the same cubic fit as [3] when fitting smooth curves to train line data. This motivated me to seek the alternative that also enforces continuity of the second derivative. The aim of this note, then, is to show how least squares cubic splines can be formulated in a straightforward manner using a traditional cubic spline definition.
While we shall show that the system of linear equations to be solved is larger than when using B-splines, (3N+1) x (3N+1) given N intervals, the theory involved will be much more accessible to readers without advanced numerical analysis experience.

2. Derivation

Assume that we are given a set of points {(x_i, y_i)}_{i=1}^n for which we require a least squares fit. Let t_1 < t_2 < ... < t_{N+1} be N+1 nodes, where t_1 <= x_1 and t_{N+1} >= x_n. The rest of the t_i's do not need to be placed at data points. A piecewise cubic function f(x) is defined on the domain [t_1, t_{N+1}] such that f_i(x) = f(x) on [t_i, t_{i+1}] is a polynomial of degree at most three. We wish to find the coefficients of the various cubics such that we minimise the sum of the squares of the errors at the data points, with the constraints that the cubics have continuity of function, first and second derivative, so that f(x) is in fact a cubic spline. Our approach will be the standard one of Lagrange multipliers: to minimise f(x) with constraints g_i(x) = 0, we minimise f(x) + sum_i lambda_i g_i(x). Following Ferguson and Staley [3], we define our piecewise cubic as

    f(x) = (2u_i(x) + 1) v_i^2(x) z_i + (2v_i(x) + 1) u_i^2(x) z_{i+1}
           + h_i u_i(x) v_i^2(x) z'_i - h_i u_i^2(x) v_i(x) z'_{i+1},        (1)

for x in [t_i, t_{i+1}], i = 1, 2, ..., N, where

    h_i = t_{i+1} - t_i,   u_i(x) = (x - t_i)/h_i,   and   v_i(x) = 1 - u_i(x).        (2)

This ensures that f(t_i) = z_i and f'(t_i) = z'_i for i = 2, 3, ..., N. In other words, the function and its first derivative are already continuous. While not a standard way of writing a cubic, this form will make the following analysis far more straightforward. If you prefer a standard cubic for efficient nested evaluation, (1) can be rewritten on [t_i, t_{i+1}] as

    f(x) = z_i + z'_i (x - t_i) + [ (3/h_i^2)(z_{i+1} - z_i) - (1/h_i)(2z'_i + z'_{i+1}) ] (x - t_i)^2
           + [ (2/h_i^3)(z_i - z_{i+1}) + (1/h_i^2)(z'_i + z'_{i+1}) ] (x - t_i)^3.        (3)

The constraints that ensure f(x) is a cubic spline are continuity of the second derivative at t_i, i = 2, 3, ..., N. This reduces to

    h_i z'_{i-1} + 2(h_{i-1} + h_i) z'_i + h_{i-1} z'_{i+1} - (3h_{i-1}/h_i)(z_{i+1} - z_i) - (3h_i/h_{i-1})(z_i - z_{i-1}) = 0        (4)

for i = 2, 3, ..., N. Now let us assume that the data {(x_j, y_j)}_{j=1}^n is ordered on x so that we can easily identify how many points (n_i) are in each interval and where in the list (p_i) they begin. Under these conditions, the constrained minimisation problem is reduced to minimising

    S = sum_{i=1}^N sum_{j=p_i}^{p_i+n_i-1} ( y_j - alpha_{ij} z_i - beta_{ij} z_{i+1} - gamma_{ij} z'_i - delta_{ij} z'_{i+1} )^2
        + sum_{i=2}^N lambda_i [ h_i z'_{i-1} + 2(h_{i-1} + h_i) z'_i + h_{i-1} z'_{i+1} - (3h_{i-1}/h_i)(z_{i+1} - z_i) - (3h_i/h_{i-1})(z_i - z_{i-1}) ],        (5)

where

    alpha_{ij} = (2u_i(x_j) + 1) v_i^2(x_j),   beta_{ij} = (2v_i(x_j) + 1) u_i^2(x_j),
    gamma_{ij} = h_i u_i(x_j) v_i^2(x_j),      delta_{ij} = -h_i u_i^2(x_j) v_i(x_j).        (6)

The {lambda_i}_{i=2}^N are the Lagrange multipliers, and combined with the unknowns {z_i, z'_i}_{i=1}^{N+1} there are 3N+1 unknowns. As with any least squares problem, we can take the partial derivatives of S with respect to each unknown, being careful to recognise that each unknown occurs in several terms on the right hand side of (5), and equate to zero. This leads to the following linear system of equations:

    [ A   B^T  C^T ] [ z  ]   [ D ]
    [ B   E    F^T ] [ z' ] = [ G ]        (7)
    [ C   F    0   ] [ l  ]   [ 0 ]

where (with the abbreviation sum_k for sum_{j=p_k}^{p_k+n_k-1})

A is the (N+1) x (N+1) symmetric tridiagonal matrix with main diagonal sum_1 alpha_{1j}^2, (sum_2 alpha_{2j}^2 + sum_1 beta_{1j}^2), (sum_3 alpha_{3j}^2 + sum_2 beta_{2j}^2), ..., (sum_N alpha_{Nj}^2 + sum_{N-1} beta_{N-1,j}^2), sum_N beta_{Nj}^2, and codiagonal sum_1 alpha_{1j} beta_{1j}, sum_2 alpha_{2j} beta_{2j}, ..., sum_N alpha_{Nj} beta_{Nj},

B is the (N+1) x (N+1) tridiagonal matrix with main diagonal sum_1 alpha_{1j} gamma_{1j}, (sum_2 alpha_{2j} gamma_{2j} + sum_1 beta_{1j} delta_{1j}), (sum_3 alpha_{3j} gamma_{3j} + sum_2 beta_{2j} delta_{2j}), ..., (sum_N alpha_{Nj} gamma_{Nj} + sum_{N-1} beta_{N-1,j} delta_{N-1,j}), sum_N beta_{Nj} delta_{Nj}, upper diagonal sum_1 beta_{1j} gamma_{1j}, sum_2 beta_{2j} gamma_{2j}, ..., sum_N beta_{Nj} gamma_{Nj}, and lower diagonal sum_1 alpha_{1j} delta_{1j}, sum_2 alpha_{2j} delta_{2j}, ..., sum_N alpha_{Nj} delta_{Nj},

C is the (N-1) x (N+1) upper diagonal matrix with c_{i,i} = 3h_{i+1}/h_i, c_{i,i+1} = 3(h_i/h_{i+1} - h_{i+1}/h_i), and c_{i,i+2} = -3h_i/h_{i+1} for i = 1, 2, ..., N-1, all other terms zero,

D is the (N+1) vector with terms sum_1 y_j alpha_{1j}, (sum_2 y_j alpha_{2j} + sum_1 y_j beta_{1j}), (sum_3 y_j alpha_{3j} + sum_2 y_j beta_{2j}), ..., (sum_N y_j alpha_{Nj} + sum_{N-1} y_j beta_{N-1,j}), sum_N y_j beta_{Nj},

E is the (N+1) x (N+1) symmetric tridiagonal matrix with main diagonal sum_1 gamma_{1j}^2, (sum_2 gamma_{2j}^2 + sum_1 delta_{1j}^2), (sum_3 gamma_{3j}^2 + sum_2 delta_{2j}^2), ..., (sum_N gamma_{Nj}^2 + sum_{N-1} delta_{N-1,j}^2), sum_N delta_{Nj}^2, and codiagonal sum_1 gamma_{1j} delta_{1j}, sum_2 gamma_{2j} delta_{2j}, ..., sum_N gamma_{Nj} delta_{Nj},

F is the (N-1) x (N+1) upper diagonal matrix with f_{i,i} = h_{i+1}, f_{i,i+1} = 2(h_i + h_{i+1}), and f_{i,i+2} = h_i for i = 1, 2, ..., N-1, all other terms zero,

G is the (N+1) vector sum_1 y_j gamma_{1j}, (sum_2 y_j gamma_{2j} + sum_1 y_j delta_{1j}), (sum_3 y_j gamma_{3j} + sum_2 y_j delta_{2j}), ..., (sum_N y_j gamma_{Nj} + sum_{N-1} y_j delta_{N-1,j}), sum_N y_j delta_{Nj},

z = [z_1, z_2, ..., z_{N+1}]^T, z' = [z'_1, z'_2, ..., z'_{N+1}]^T, lambda = [lambda_2, lambda_3, ..., lambda_N]^T, the zero matrix on the left hand side is (N-1) x (N-1), and the zero vector on the right hand side is of length N-1.

3. Implementation

Setting up and solving (7) by Gaussian elimination is not difficult, particularly if appropriate intermediate variables are used. Equation (3) can then be used to output the solution as standard piecewise cubics. For N piecewise cubics we require at least 3N+1 data points, but there is no constraint on the position of these points. In fact it is perfectly acceptable (if unusual) to have intervals with no data points whatsoever. Pseudocode for a function that implements this formulation of a least squares cubic spline is listed in figure 1. Figure 2 shows two examples of least squares cubic spline fits to data using this algorithm. The first is the classic titanium heat data used to test curve fitting routines, where spline intervals have been chosen for a good fit. The appropriate positioning of interval endpoints continues to be an area of active research. The second is a simple fit to sinusoidal data with small Gaussian noise, where data is not available in each interval. As one would expect, the solutions are exactly those obtained using a B-spline approach.

References

[1] C. de Boor and J.R. Rice, Least squares cubic spline approximation I: fixed knots, Technical Report CSD-TR, Computer Sciences, Purdue University (1968). Also available at ftp://ftp.cs.wisc.edu/approx/tr.pdf
[2] P. Dierckx, Curve and surface fitting with splines, Oxford University Press, 1993.
[3] J. Ferguson and P.A. Staley, Least squares piecewise cubic curve fitting, Comm. ACM 16 (1973) 380-382.
[4] L.L. Schumaker, Spline functions: basic theory, John Wiley and Sons, New York, 1981.

Algorithm to find a least squares cubic spline fit to data (x_j, y_j)_{j=1}^n on intervals [t_i, t_{i+1}]_{i=1}^N.
Input: Data (x_j, y_j)_{j=1}^n and interval endpoints {t_i}_{i=1}^{N+1}, number of data points n and intervals N.
Output: Function {z_i}_{i=1}^{N+1} and derivative {z'_i}_{i=1}^{N+1} data at interval endpoints.

procedure least_squares_spline(x, y, n, t, N, z, z')
  sort(x, y)                          // sort data points on x coordinates
  for i = 1, N
    h_i = t_{i+1} - t_i
  p_1 = 1, pn_1 = 0, j = 1
  for i = 1, n                        // identify which data is in which interval
    if x_i <= t_{j+1}
      pn_j = pn_j + 1
    else
      j = j + 1, p_j = i, pn_j = 0
      while x_i > t_{j+1}             // deal with intervals with no points
        j = j + 1, p_j = i, pn_j = 0
      endwhile
      pn_j = 1
    endif
  Set taa, tab, tag, tad, tbb, tbg, tbd, tgg, tgd, tdd to zero vectors of length N
  Set D and G to zero vectors of length N+1
  for i = 1, N                        // set up intermediate sums and vectors
    for j = p_i, p_i + pn_i - 1
      u = (x_j - t_i)/h_i, v = 1 - u
      alpha = (2u + 1)v^2, beta = (2v + 1)u^2, gamma = h_i u v^2, delta = -h_i u^2 v
      taa_i = taa_i + alpha^2, tab_i = tab_i + alpha beta, tag_i = tag_i + alpha gamma, tad_i = tad_i + alpha delta, tbb_i = tbb_i + beta^2
      tbg_i = tbg_i + beta gamma, tbd_i = tbd_i + beta delta, tgg_i = tgg_i + gamma^2, tgd_i = tgd_i + gamma delta, tdd_i = tdd_i + delta^2
      D_i = D_i + y_j alpha, D_{i+1} = D_{i+1} + y_j beta, G_i = G_i + y_j gamma, G_{i+1} = G_{i+1} + y_j delta
  Set A, B, E to zero matrices of size (N+1) x (N+1)
  Set C, F to zero matrices of size (N-1) x (N+1)
  for i = 1, N                        // set up the intermediate matrices
    A_{i,i} = A_{i,i} + taa_i, A_{i+1,i+1} = tbb_i, A_{i,i+1} = tab_i, A_{i+1,i} = A_{i,i+1}
    B_{i,i} = B_{i,i} + tag_i, B_{i+1,i+1} = tbd_i, B_{i,i+1} = tbg_i, B_{i+1,i} = tad_i
    E_{i,i} = E_{i,i} + tgg_i, E_{i+1,i+1} = tdd_i, E_{i,i+1} = tgd_i, E_{i+1,i} = E_{i,i+1}
  for i = 1, N-1
    C_{i,i} = 3h_{i+1}/h_i, C_{i,i+2} = -3h_i/h_{i+1}, C_{i,i+1} = -C_{i,i} - C_{i,i+2}
    F_{i,i} = h_{i+1}, F_{i,i+1} = 2(h_i + h_{i+1}), F_{i,i+2} = h_i
  Set up and solve [A B^T C^T; B E F^T; C F 0] x = [D; G; 0]
  Extract z = {x_i}_{i=1}^{N+1}, z' = {x_i}_{i=N+2}^{2N+2}
end(least_squares_spline)

Figure 1: Pseudocode for the least squares cubic spline algorithm.
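For readers who want something directly executable, the construction of figure 1 can be sketched in plain Python. This is a sketch, not the author's code: the names lsq_cubic_spline, eval_spline and solve_dense are my own, and instead of the blocked assembly of figure 1 it builds the stationarity and constraint equations of (7) as one dense system, solved by Gaussian elimination with partial pivoting.

```python
import bisect

def solve_dense(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][k] * xs[k] for k in range(r + 1, n))
        xs[r] = s / M[r][r]
    return xs

def lsq_cubic_spline(x, y, t):
    """Least squares cubic spline on knots t, via Lagrange multipliers.
    Returns knot values z and knot derivatives zp (no B-splines used)."""
    data = sorted(zip(x, y))               # order the data on x
    N = len(t) - 1                         # number of intervals
    h = [t[i + 1] - t[i] for i in range(N)]
    n1 = N + 1
    nun = 2 * n1                           # unknowns z_1..z_{N+1}, z'_1..z'_{N+1}
    dim = nun + (N - 1)                    # plus N-1 Lagrange multipliers
    K = [[0.0] * dim for _ in range(dim)]
    rhs = [0.0] * dim
    for xj, yj in data:
        i = min(max(bisect.bisect_right(t, xj) - 1, 0), N - 1)
        u = (xj - t[i]) / h[i]
        v = 1.0 - u
        # weights of eq. (1) multiplying z_i, z_{i+1}, z'_i, z'_{i+1}
        w = [(2*u + 1)*v*v, (2*v + 1)*u*u, h[i]*u*v*v, -h[i]*u*u*v]
        cols = [i, i + 1, n1 + i, n1 + i + 1]
        for a, ca in zip(w, cols):         # accumulate 2 M^T M and 2 M^T y
            for b, cb in zip(w, cols):
                K[ca][cb] += 2 * a * b
            rhs[ca] += 2 * a * yj
    for i in range(1, N):                  # continuity of f'' at interior knots, eq. (4)
        r = nun + i - 1
        row = {i - 1: 3*h[i]/h[i-1], i + 1: -3*h[i-1]/h[i],
               n1 + i - 1: h[i], n1 + i: 2*(h[i-1] + h[i]), n1 + i + 1: h[i-1]}
        row[i] = -row[i - 1] - row[i + 1]
        for c, val in row.items():         # constraint rows and their transposes
            K[r][c] = val
            K[c][r] = val
    sol = solve_dense(K, rhs)
    return sol[:n1], sol[n1:nun]

def eval_spline(t, z, zp, xq):
    """Evaluate the Hermite form, eq. (1), at a point xq."""
    i = min(max(bisect.bisect_right(t, xq) - 1, 0), len(t) - 2)
    h = t[i + 1] - t[i]
    u = (xq - t[i]) / h
    v = 1.0 - u
    return ((2*u + 1)*v*v*z[i] + (2*v + 1)*u*u*z[i + 1]
            + h*u*v*v*zp[i] - h*u*u*v*zp[i + 1])
```

A quick way to exercise this: data sampled from a single cubic should be reproduced to rounding error, since one cubic is itself a cubic spline with zero residual.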

Figure 2: Examples of fitting least squares cubic splines to data; open squares are data points, closed circles are spline interval endpoints.
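As an independent sanity check on the formulas above, the following minimal Python sketch (the test cubic and node positions are arbitrary choices of mine, not from the note) confirms that the form (1) interpolates value and slope at the interval endpoints, and that condition (4) vanishes when the z_i and z'_i are sampled from a single cubic, which is trivially a cubic spline.

```python
def hermite(t0, t1, z0, z1, zp0, zp1, x):
    """The piecewise-cubic building block of eq. (1) on one interval [t0, t1]."""
    h = t1 - t0
    u = (x - t0) / h
    v = 1.0 - u
    return ((2*u + 1)*v*v*z0 + (2*v + 1)*u*u*z1
            + h*u*v*v*zp0 - h*u*u*v*zp1)

p  = lambda s: s**3 - 2*s + 1          # an arbitrary test cubic
dp = lambda s: 3*s**2 - 2              # its derivative
t  = [0.0, 0.4, 1.0]                   # two intervals, one interior node
z  = [p(tk) for tk in t]
zp = [dp(tk) for tk in t]

# eq. (1) reproduces the nodal values at the interval endpoints exactly ...
f = lambda x: hermite(t[0], t[1], z[0], z[1], zp[0], zp[1], x)
assert abs(f(t[0]) - z[0]) < 1e-12 and abs(f(t[1]) - z[1]) < 1e-12
# ... and its slope at the left endpoint matches z'_1 (forward difference)
eps = 1e-6
assert abs((f(t[0] + eps) - f(t[0])) / eps - zp[0]) < 1e-4

# condition (4) at the interior node holds for data taken from a single cubic
h1, h2 = t[1] - t[0], t[2] - t[1]
lhs = (h2*zp[0] + 2*(h1 + h2)*zp[1] + h1*zp[2]
       - 3*h1*(z[2] - z[1])/h2 - 3*h2*(z[1] - z[0])/h1)
assert abs(lhs) < 1e-10
```

Because a cubic is determined by its values and slopes at two points, the Hermite block also reproduces the test cubic everywhere inside the interval, not just at its endpoints.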