
Least Squares Fitting of Data

David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: July 15, 1999
Last Modified: December 12, 2017

Contents

1 Linear Fitting of 2D Points of the Form (x, f(x))
2 Linear Fitting of nD Points Using Orthogonal Regression
3 Planar Fitting of 3D Points of the Form (x, y, f(x, y))
4 Hyperplanar Fitting of nD Points Using Orthogonal Regression
5 kD Flat Fitting of nD Points Using Orthogonal Regression
6 Fitting a Circle to 2D Points
  6.1 Direct Least Squares Algorithm
  6.2 Estimation of Coefficients to a Quadratic Equation
7 Fitting a Sphere to 3D Points
  7.1 Direct Least Squares Algorithm
  7.2 Estimation of Coefficients to a Quadratic Equation
8 Fitting an Ellipse to 2D Points
9 Fitting an Ellipsoid to 3D Points
10 Fitting a Paraboloid to 3D Points of the Form (x, y, f(x, y))

This document describes some algorithms for fitting 2D or 3D point sets by linear or quadratic structures using least squares minimization.

1 Linear Fitting of 2D Points of the Form (x, f(x))

This is the usual introduction to least squares fitting by a line when the data represents measurements where the y-component is assumed to be functionally dependent on the x-component. Given a set of samples $\{(x_i, y_i)\}_{i=1}^m$, determine $A$ and $B$ so that the line $y = Ax + B$ best fits the samples in the sense that the sum of the squared errors between the $y_i$ and the line values $Ax_i + B$ is minimized. Note that the error is measured only in the y-direction.

Define the error function for the least squares minimization to be

  $E(A, B) = \sum_{i=1}^m [(Ax_i + B) - y_i]^2$  (1)

This function is nonnegative and its graph is a paraboloid whose vertex occurs when the gradient satisfies $\nabla E = (0, 0)$. This leads to a system of two linear equations in $A$ and $B$ which can be easily solved. Precisely,

  $(0, 0) = \nabla E = 2 \sum_{i=1}^m [(Ax_i + B) - y_i](x_i, 1)$  (2)

and so

  $\begin{bmatrix} \sum x_i^2 & \sum x_i \\ \sum x_i & \sum 1 \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} \sum x_i y_i \\ \sum y_i \end{bmatrix}$  (3)

where all sums run over $i = 1, \ldots, m$. The solution provides the least squares line $y = Ax + B$.

If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages $\bar{x} = \frac{1}{m}\sum_{i=1}^m x_i$ and $\bar{y} = \frac{1}{m}\sum_{i=1}^m y_i$ and subtract them from the data, $x_i \leftarrow x_i - \bar{x}$ and $y_i \leftarrow y_i - \bar{y}$. The fitted line is $y - \bar{y} = A(x - \bar{x})$, where

  $A = \frac{\sum_{i=1}^m (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^m (x_i - \bar{x})^2}$  (4)

An implementation is GteApprHeightLine2.h.
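The referenced implementation is C++; as a language-neutral illustration, here is a minimal NumPy sketch of the mean-centered fit of equation (4). The function name fit_height_line is ours, not the library's.

```python
import numpy as np

def fit_height_line(x, y):
    """Fit y = A*x + B, measuring errors only in y, via equation (4)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    dx, dy = x - xbar, y - ybar
    A = (dx @ dy) / (dx @ dx)    # slope from the mean-centered data
    B = ybar - A * xbar          # intercept recovered from y - ybar = A*(x - xbar)
    return A, B
```

Centering the data first keeps the computation well conditioned, as noted above.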

2 Linear Fitting of nD Points Using Orthogonal Regression

It is also possible to fit a line using least squares where the errors are measured orthogonally to the proposed line rather than measured vertically. The following argument holds for sample points and lines in n dimensions. Let the line be $L(t) = tD + A$, where $D$ is unit length. Define $X_i$ to be the sample points; then $X_i = A + d_i D + p_i D_i$, where $d_i = D \cdot (X_i - A)$ and $D_i$ is some unit-length vector perpendicular to $D$ with appropriate coefficient $p_i$. Define $Y_i = X_i - A$. The vector from $X_i$ to its projection onto the line is $Y_i - d_i D = p_i D_i$. The squared length of this vector is $p_i^2 = (Y_i - d_i D)^2$. The error function for the least squares minimization is $E(A, D) = \sum_{i=1}^m p_i^2$. Two alternate forms for this function are

  $E(A, D) = \sum_{i=1}^m Y_i^T \left[I - D D^T\right] Y_i$  (5)

and

  $E(A, D) = D^T \left[\left(\sum_{i=1}^m Y_i \cdot Y_i\right) I - \sum_{i=1}^m Y_i Y_i^T\right] D = D^T M(A) D$  (6)

Compute the derivative of equation (5) with respect to $A$ to obtain

  $\frac{\partial E}{\partial A} = -2 \left[I - D D^T\right] \sum_{i=1}^m Y_i$  (7)

The derivative is zero when $\sum_{i=1}^m Y_i = 0$, in which case $A = \frac{1}{m}\sum_{i=1}^m X_i$, the average of the sample points. In fact there are infinitely many solutions $A + sD$ for any scalar $s$. This is simply a statement that the average is on the best-fit line, but any other point on the line may serve as the origin for that line.

Equation (6) is a quadratic form $D^T M(A) D$ whose minimum is the smallest eigenvalue of $M(A)$, computed using standard eigensystem solvers. A corresponding unit-length eigenvector $D$ completes our construction of the least squares line.

The covariance matrix of the input points is $C = \sum_{i=1}^m Y_i Y_i^T$. Defining $\delta = \sum_{i=1}^m Y_i^T Y_i$, we see that $M = \delta I - C$, where $I$ is the identity matrix. Therefore, $M$ and $C$ have the same eigenspaces. The eigenspace corresponding to the minimum eigenvalue of $M$ is the same as the eigenspace corresponding to the maximum eigenvalue of $C$. In an implementation, it is sufficient to process $C$ and avoid the additional cost to compute $M$. An implementation for 2D is GteApprOrthogonalLine2.h and an implementation for 3D is GteApprOrthogonalLine3.h.
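As a sketch of the observation above (names ours, not the GTE code), the following NumPy fragment processes $C$ directly and takes the line direction from the eigenvector of the largest eigenvalue.

```python
import numpy as np

def fit_orthogonal_line(points):
    """Fit a line L(t) = A + t*D to n-dimensional points, minimizing
    orthogonal (perpendicular) distances.  Returns (A, D)."""
    X = np.asarray(points, dtype=float)   # shape (m, n)
    A = X.mean(axis=0)                    # the average lies on the best-fit line
    Y = X - A
    C = Y.T @ Y                           # C = sum_i Y_i Y_i^T
    evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    D = evecs[:, -1]                      # direction: eigenvector of largest eigenvalue
    return A, D
```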

3 Planar Fitting of 3D Points of the Form (x, y, f(x, y))

The assumption is that the z-component of the data is functionally dependent on the x- and y-components. Given a set of samples $\{(x_i, y_i, z_i)\}_{i=1}^m$, determine $A$, $B$, and $C$ so that the plane $z = Ax + By + C$ best fits the samples in the sense that the sum of the squared errors between the $z_i$ and the plane values $Ax_i + By_i + C$ is minimized. Note that the error is measured only in the z-direction.

Define the error function for the least squares minimization to be

  $E(A, B, C) = \sum_{i=1}^m [(Ax_i + By_i + C) - z_i]^2$  (8)

This function is nonnegative and its graph is a hyperparaboloid whose vertex occurs when the gradient satisfies $\nabla E = (0, 0, 0)$. This leads to a system of three linear equations in $A$, $B$, and $C$ which can be easily solved. Precisely,

  $(0, 0, 0) = \nabla E = 2 \sum_{i=1}^m [(Ax_i + By_i + C) - z_i](x_i, y_i, 1)$  (9)

and so

  $\begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & \sum 1 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{bmatrix}$  (10)

The solution provides the least squares plane $z = Ax + By + C$.

If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages $\bar{x} = \frac{1}{m}\sum x_i$, $\bar{y} = \frac{1}{m}\sum y_i$, and $\bar{z} = \frac{1}{m}\sum z_i$, and then subtract them from the data: $x_i \leftarrow x_i - \bar{x}$, $y_i \leftarrow y_i - \bar{y}$, and $z_i \leftarrow z_i - \bar{z}$. The fitted plane is $z - \bar{z} = A(x - \bar{x}) + B(y - \bar{y})$, where

  $\begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} \sum (x_i - \bar{x})^2 & \sum (x_i - \bar{x})(y_i - \bar{y}) \\ \sum (x_i - \bar{x})(y_i - \bar{y}) & \sum (y_i - \bar{y})^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum (x_i - \bar{x})(z_i - \bar{z}) \\ \sum (y_i - \bar{y})(z_i - \bar{z}) \end{bmatrix}$  (11)

An implementation is GteApprHeightPlane3.h.
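A minimal NumPy sketch of the mean-centered system of equation (11); the function name is illustrative, not the GTE interface.

```python
import numpy as np

def fit_height_plane(x, y, z):
    """Fit z = A*x + B*y + C by least squares, errors measured in z only,
    using the mean-centered 2x2 system of equation (11)."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    xbar, ybar, zbar = x.mean(), y.mean(), z.mean()
    dx, dy, dz = x - xbar, y - ybar, z - zbar
    M = np.array([[dx @ dx, dx @ dy],
                  [dx @ dy, dy @ dy]])
    rhs = np.array([dx @ dz, dy @ dz])
    A, B = np.linalg.solve(M, rhs)
    C = zbar - A * xbar - B * ybar    # recover the intercept
    return A, B, C
```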

4 Hyperplanar Fitting of nD Points Using Orthogonal Regression

It is also possible to fit a plane using least squares where the errors are measured orthogonally to the proposed plane rather than measured vertically. The following argument holds for sample points and hyperplanes in n dimensions. Let the hyperplane be $N \cdot (X - A) = 0$, where $N$ is a unit-length normal to the hyperplane and $A$ is a point on the hyperplane. Define $X_i$ to be the sample points; then $X_i = A + \lambda_i N + p_i N_i$, where $\lambda_i = N \cdot (X_i - A)$ and $N_i$ is some unit-length vector perpendicular to $N$ with appropriate coefficient $p_i$. Define $Y_i = X_i - A$. The vector from $X_i$ to its projection onto the hyperplane is $\lambda_i N$. The squared length of this vector is $\lambda_i^2 = (N \cdot Y_i)^2$. The error function for the least squares minimization is $E(A, N) = \sum_{i=1}^m \lambda_i^2$. Two alternate forms for this function are

  $E(A, N) = \sum_{i=1}^m Y_i^T \left[N N^T\right] Y_i$  (12)

and

  $E(A, N) = N^T \left(\sum_{i=1}^m Y_i Y_i^T\right) N = N^T C N$  (13)

where $C = \sum_{i=1}^m Y_i Y_i^T$.

Compute the derivative of equation (12) with respect to $A$ to obtain

  $\frac{\partial E}{\partial A} = -2 \left(N N^T\right) \sum_{i=1}^m Y_i$  (14)

The derivative is zero when $\sum_{i=1}^m Y_i = 0$, in which case $A = \frac{1}{m}\sum_{i=1}^m X_i$, the average of the sample points. In fact there are infinitely many solutions $A + W$, where $W$ is any vector perpendicular to $N$. This is simply a statement that the average is on the best-fit hyperplane, but any other point on the hyperplane may serve as the origin for that hyperplane.

Equation (13) is a quadratic form $N^T C N$ whose minimum is the smallest eigenvalue of $C$, computed using standard eigensystem solvers. A corresponding unit-length eigenvector $N$ completes our construction of the least squares hyperplane. An implementation for 3D is GteApprOrthogonalPlane3.h.
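A NumPy sketch of the hyperplane construction, again illustrative rather than the GTE code: the normal is the eigenvector of $C$ for the smallest eigenvalue.

```python
import numpy as np

def fit_orthogonal_hyperplane(points):
    """Fit a hyperplane N . (X - A) = 0 to n-dimensional points, minimizing
    orthogonal distances.  Returns (A, N) with |N| = 1."""
    X = np.asarray(points, dtype=float)   # shape (m, n)
    A = X.mean(axis=0)                    # the average lies on the best-fit hyperplane
    Y = X - A
    C = Y.T @ Y                           # matrix C of equation (13)
    evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    N = evecs[:, 0]                       # normal: eigenvector of smallest eigenvalue
    return A, N
```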

5 kD Flat Fitting of nD Points Using Orthogonal Regression

Orthogonal regression to fit n-dimensional points by a line or by a hyperplane may be generalized to fitting by a flat, or affine subspace. To emphasize the dimension of the flat, sometimes the terminology used is k-flat, where the dimension is $k$ with $0 \le k \le n$. A line is a 1-flat and a hyperplane is an $(n-1)$-flat. For dimension $n = 3$, we fit with flats that are either lines or planes. For dimensions $n \ge 4$ and flat dimensions $2 \le k \le n-2$, the generalization of orthogonal regression is the following.

The bases mentioned here are for the linear portion of the affine subspace; that is, the basis vectors are relative to an origin at a point $A$. The flat has an orthonormal basis $\{F_j\}_{j=1}^{k}$ and the orthogonal complement has an orthonormal basis $\{P_j\}_{j=1}^{n-k}$. The union of the two bases is an orthonormal basis for $\mathbb{R}^n$. Any input point $X_i$, $1 \le i \le m$, can be represented by

  $X_i = A + \left(\sum_{j=1}^{k} f_{ij} F_j\right) + \left(\sum_{j=1}^{n-k} p_{ij} P_j\right)$  (15)

The first parenthesized term is the portion of $X_i$ that lives in the flat and the second parenthesized term is the deviation of $X_i$ from the flat. The least squares problem is about choosing the two bases so that the sum of squared lengths of the deviations is as small as possible.

Define $Y_i = X_i - A$. The basis coefficients are $f_{ij} = F_j \cdot Y_i$ and $p_{ij} = P_j \cdot Y_i$. The squared length of the deviation is $\sum_{j=1}^{n-k} p_{ij}^2$. The error function for the least squares minimization is the sum of the squared lengths for all inputs, $E = \sum_{i=1}^m \sum_{j=1}^{n-k} p_{ij}^2$. Two alternate forms for this function are

  $E(A, P_1, \ldots, P_{n-k}) = \sum_{i=1}^m Y_i^T \left[\sum_{j=1}^{n-k} P_j P_j^T\right] Y_i$  (16)

and

  $E(A, P_1, \ldots, P_{n-k}) = \sum_{j=1}^{n-k} P_j^T \left(\sum_{i=1}^m Y_i Y_i^T\right) P_j = \sum_{j=1}^{n-k} P_j^T C P_j$  (17)

where $C = \sum_{i=1}^m Y_i Y_i^T$.

Compute the derivative of equation (16) with respect to $A$ to obtain

  $\frac{\partial E}{\partial A} = -2 \left[\sum_{j=1}^{n-k} P_j P_j^T\right] \sum_{i=1}^m Y_i$  (18)

The derivative is zero when $\sum_{i=1}^m Y_i = 0$, in which case $A = \frac{1}{m}\sum_{i=1}^m X_i$, the average of the sample points. In fact there are infinitely many solutions $A + W$, where $W$ is any vector in the orthogonal complement of the subspace spanned by the $P_j$; this subspace is the one spanned by the $F_j$. This is simply a statement that the average is on the best-fit flat, but any other point on the flat may serve as the origin for that flat.

Because we are choosing $A$ to be the average of the input points, the matrix $C$ is the covariance matrix for the input points. The last term in equation (17) is a sum of quadratic forms involving the matrix $C$ and the vectors $P_j$ that form a basis (unit length, mutually perpendicular). The minimum value of a single quadratic form is the smallest eigenvalue $\lambda_1$ of $C$, so we may choose $P_1$ to be a unit-length eigenvector of $C$ corresponding to $\lambda_1$. We must choose $P_2$ to be unit length and perpendicular to $P_1$. If the eigenspace for $\lambda_1$ is 1-dimensional, the next smallest value we can attain by the quadratic form is the smallest eigenvalue $\lambda_2$ for which $\lambda_1 < \lambda_2$, and $P_2$ is chosen to be a corresponding unit-length eigenvector. However, if $\lambda_1$ has an eigenspace of dimension larger than 1, we can choose $P_2$ in that eigenspace but perpendicular to $P_1$.

Generally, let $\{\lambda_l\}_{l=1}^r$ be the $r$ distinct eigenvalues of the covariance matrix $C$; we know $1 \le r \le n$ and $\lambda_1 < \lambda_2 < \cdots < \lambda_r$. Let the dimension of the eigenspace for $\lambda_l$ be $d_l \ge 1$; we know that $\sum_{l=1}^r d_l = n$. List the eigenvalues and eigenvectors in order of increasing eigenvalue, including repeated values:

  $\underbrace{\lambda_1, \ldots, \lambda_1}_{d_1 \text{ terms}},\ \underbrace{\lambda_2, \ldots, \lambda_2}_{d_2 \text{ terms}},\ \ldots,\ \underbrace{\lambda_r, \ldots, \lambda_r}_{d_r \text{ terms}} \quad \text{with eigenvectors} \quad \underbrace{V_1^1, \ldots, V_{d_1}^1}_{d_1 \text{ terms}},\ \underbrace{V_1^2, \ldots, V_{d_2}^2}_{d_2 \text{ terms}},\ \ldots,\ \underbrace{V_1^r, \ldots, V_{d_r}^r}_{d_r \text{ terms}}$  (19)

The list has $n$ items. The eigenvalue $\lambda_l$ has an eigenspace with orthonormal basis $\{V_j^l\}_{j=1}^{d_l}$. In this list, choose the first $n-k$ eigenvectors to be $P_1$ through $P_{n-k}$ and choose the last $k$ eigenvectors to be $F_1$ through $F_k$.

It is possible that one (or more) of the $P_j$ and one (or more) of the $F_j$ are in the eigenspace for the same eigenvalue. In this case, the fitted flat is not unique, and one should re-examine the choice of dimension $k$ for the fitted flat. This is analogous to the following situations in dimension $n = 3$.

The input points are nearly collinear but you are trying to fit those points with a plane. The covariance matrix likely has two distinct eigenvalues $\lambda_1 < \lambda_2$ with $d_1 = 2$ and $d_2 = 1$. The basis vectors are $P_1 = V_1^1$, $F_1 = V_2^1$, and $F_2 = V_1^2$. The first two of these are from the same eigenspace.

The input points are spread out over a portion of a plane (and are not nearly collinear) but you are trying to fit those points with a line. The covariance matrix likely has two distinct eigenvalues $\lambda_1 < \lambda_2$ with $d_1 = 1$ and $d_2 = 2$. The basis vectors are $P_1 = V_1^1$, $P_2 = V_1^2$, and $F_1 = V_2^2$. The last two of these are from the same eigenspace.

The input points are not well fit by a flat of any dimension; for example, the input points are uniformly distributed over a sphere. The covariance matrix likely has one distinct eigenvalue $\lambda_1$ of multiplicity $d_1 = 3$. Neither a line nor a plane is a good fit to the input points; in either case, a P-vector and an F-vector are in the same eigenspace.

The computational algorithm is to compute the average $A$ and the covariance matrix $C$ of the points. Use an eigensolver whose output eigenvalues are sorted in nondecreasing order. Choose the $F_j$ to be the last $k$ eigenvectors output by the eigensolver.
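A sketch of this computational algorithm in NumPy (the function name is ours); np.linalg.eigh returns eigenvalues in nondecreasing order, which matches the required eigensolver convention.

```python
import numpy as np

def fit_k_flat(points, k):
    """Fit a k-dimensional flat A + span{F_1, ..., F_k} to n-dimensional
    points by orthogonal regression.  Returns (A, F), where the columns of
    F are an orthonormal basis for the flat's linear part."""
    X = np.asarray(points, dtype=float)   # shape (m, n)
    A = X.mean(axis=0)                    # the average lies on the best-fit flat
    Y = X - A
    C = Y.T @ Y                           # covariance matrix (up to a 1/m factor)
    evals, evecs = np.linalg.eigh(C)      # eigenvalues in nondecreasing order
    F = evecs[:, -k:]                     # flat basis F_j: the last k eigenvectors
    return A, F
```

If the eigenvalues on either side of the split are nearly equal, the fitted flat is effectively non-unique, matching the caveat above.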

6 Fitting a Circle to 2D Points

There are several ways to define a least squares error function to try to fit a circle to points. The typical approach is to define a sum-of-squares function and use the Levenberg–Marquardt algorithm for iterative minimization. The methods proposed here do not follow this paradigm. The first subsection uses the sum of squares of the differences between the purported circle radius and the distances from the input points to the purported circle center; the algorithm is effectively a fixed-point iteration. The second subsection estimates the coefficients of a quadratic equation in two variables; it reduces to computing the eigenvector associated with the minimum eigenvalue of a matrix.

Regardless of the error function, in practice the fitting produces reasonable results when the input points are distributed over the circle. If the input points are restricted to a small arc of the circle, the fitting is typically not of good quality: small perturbations in the input points can lead to large changes in the estimates for the center and radius.

6.1 Direct Least Squares Algorithm

Given a set of points $\{(x_i, y_i)\}_{i=1}^m$, $m \ge 3$, fit them with a circle $(x - a)^2 + (y - b)^2 = r^2$, where $(a, b)$ is the circle center and $r$ is the circle radius. An assumption of this algorithm is that not all the points are collinear. The error function to be minimized is

  $E(a, b, r) = \sum_{i=1}^m (L_i - r)^2$  (20)

where $L_i = \sqrt{(x_i - a)^2 + (y_i - b)^2}$. Compute the partial derivative with respect to $r$ to obtain

  $\frac{\partial E}{\partial r} = -2 \sum_{i=1}^m (L_i - r)$  (21)

Setting the derivative equal to zero yields

  $r = \frac{1}{m} \sum_{i=1}^m L_i$  (22)

Compute the partial derivatives with respect to $a$ and $b$ to obtain

  $\frac{\partial E}{\partial a} = 2 \sum_{i=1}^m (L_i - r) \frac{\partial L_i}{\partial a} = -2 \sum_{i=1}^m \left((x_i - a) + r \frac{\partial L_i}{\partial a}\right), \qquad \frac{\partial E}{\partial b} = 2 \sum_{i=1}^m (L_i - r) \frac{\partial L_i}{\partial b} = -2 \sum_{i=1}^m \left((y_i - b) + r \frac{\partial L_i}{\partial b}\right)$  (23)

Setting these derivatives equal to zero yields

  $a = \frac{1}{m} \sum_{i=1}^m x_i + r\,\frac{1}{m} \sum_{i=1}^m \frac{\partial L_i}{\partial a}, \qquad b = \frac{1}{m} \sum_{i=1}^m y_i + r\,\frac{1}{m} \sum_{i=1}^m \frac{\partial L_i}{\partial b}$  (24)

Replacing $r$ from equation (22) and using $\partial L_i/\partial a = (a - x_i)/L_i$ and $\partial L_i/\partial b = (b - y_i)/L_i$, we obtain two nonlinear equations in $a$ and $b$,

  $a = \bar{x} + \bar{L}\,\bar{L}_a = F(a, b), \qquad b = \bar{y} + \bar{L}\,\bar{L}_b = G(a, b)$  (25)

where the last equality of each expression defines the functions $F(a, b)$ and $G(a, b)$ and where

  $\bar{x} = \frac{1}{m}\sum_{i=1}^m x_i, \quad \bar{y} = \frac{1}{m}\sum_{i=1}^m y_i, \quad \bar{L} = \frac{1}{m}\sum_{i=1}^m L_i, \quad \bar{L}_a = \frac{1}{m}\sum_{i=1}^m \frac{a - x_i}{L_i}, \quad \bar{L}_b = \frac{1}{m}\sum_{i=1}^m \frac{b - y_i}{L_i}$  (26)

Fixed-point iteration can be applied to solve these equations: $a_0 = \bar{x}$, $b_0 = \bar{y}$, and $a_{i+1} = F(a_i, b_i)$, $b_{i+1} = G(a_i, b_i)$ for $i \ge 0$. An implementation is GteApprCircle2.h.
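A NumPy sketch of the fixed-point iteration (names ours). It runs a fixed number of iterations for simplicity, where a production implementation would test convergence of $(a, b)$; it also assumes no sample coincides with the current center estimate, since each $L_i$ appears in a denominator.

```python
import numpy as np

def fit_circle_fixed_point(x, y, iterations=64):
    """Fit a circle (x-a)^2 + (y-b)^2 = r^2 by the fixed-point iteration of
    equation (25), seeded at the point average."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a, b = x.mean(), y.mean()
    for _ in range(iterations):
        L = np.hypot(x - a, y - b)    # distances L_i to the current center
        Lbar = L.mean()               # r estimate from equation (22)
        La = ((a - x) / L).mean()     # Lbar_a of equation (26)
        Lb = ((b - y) / L).mean()     # Lbar_b of equation (26)
        a = x.mean() + Lbar * La      # a = F(a, b)
        b = y.mean() + Lbar * Lb      # b = G(a, b)
    r = np.hypot(x - a, y - b).mean()
    return a, b, r
```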

6.2 Estimation of Coefficients to a Quadratic Equation

The quadratic equation that represents a circle is

  $0 = c_0 + c_1 x + c_2 y + c_3 (x^2 + y^2) = (c_0, c_1, c_2, c_3) \cdot (1, x, y, x^2 + y^2) = C \cdot V$  (27)

where the last equality defines $V$ and a unit-length vector $C = (c_0, c_1, c_2, c_3)$ (with $c_3 \ne 0$ for a nondegenerate circle). Given a set of input points $\{(x_i, y_i)\}_{i=1}^m$, the least squares error function is chosen to be

  $E(C) = \sum_{i=1}^m (C \cdot V_i)^2 = C^T \left(\sum_{i=1}^m V_i V_i^T\right) C = C^T M C$  (28)

where $V_i = (1, x_i, y_i, r_i^2)$ with $r_i^2 = x_i^2 + y_i^2$. The last equality defines the $4 \times 4$ symmetric matrix $M$,

  $M = \begin{bmatrix} \sum 1 & \sum x_i & \sum y_i & \sum r_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i r_i^2 \\ \sum y_i & \sum x_i y_i & \sum y_i^2 & \sum y_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum r_i^4 \end{bmatrix}$  (29)

The error function is a quadratic form that is minimized when $C$ is a unit-length eigenvector corresponding to the minimum eigenvalue of $M$. An eigensolver can be used to compute the eigenvector.

The resulting equation appears not to be invariant to translations of the input points. You should instead consider modifying the inputs by computing the averages $\bar{x} = \frac{1}{m}\sum x_i$ and $\bar{y} = \frac{1}{m}\sum y_i$ and then replacing $x_i \leftarrow x_i - \bar{x}$ and $y_i \leftarrow y_i - \bar{y}$. The $r_i^2$ are then computed from the adjusted $x_i$ and $y_i$ values. The matrix becomes

  $\hat{M} = \begin{bmatrix} \sum 1 & 0 & 0 & \sum r_i^2 \\ 0 & \sum x_i^2 & \sum x_i y_i & \sum x_i r_i^2 \\ 0 & \sum x_i y_i & \sum y_i^2 & \sum y_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum r_i^4 \end{bmatrix}$  (30)

The eigenvector $C$ is computed. The circle center in the original coordinate system is $(\bar{x}, \bar{y}) - (c_1, c_2)/(2 c_3)$ and the circle radius is $r = \sqrt{c_1^2 + c_2^2 - 4 c_0 c_3}/(2|c_3|)$. An implementation is template class ApprQuadraticCircle2 in GteApprQuadratic2.h.
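A NumPy sketch of the eigenvector construction using the mean-centered matrix of equation (30); the function name is ours, and a degenerate fit ($c_3$ near zero) is not guarded against.

```python
import numpy as np

def fit_circle_quadratic(x, y):
    """Fit a circle by estimating C = (c0, c1, c2, c3) in equation (27) as the
    eigenvector for the smallest eigenvalue of the matrix in equation (30)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    u, v = x - xbar, y - ybar                          # mean-centered inputs
    r2 = u * u + v * v
    V = np.column_stack([np.ones_like(u), u, v, r2])   # rows are the V_i
    M = V.T @ V                                        # matrix of equation (30)
    evals, evecs = np.linalg.eigh(M)
    c0, c1, c2, c3 = evecs[:, 0]                       # smallest-eigenvalue eigenvector
    center = np.array([xbar, ybar]) - np.array([c1, c2]) / (2.0 * c3)
    radius = np.sqrt(c1 * c1 + c2 * c2 - 4.0 * c0 * c3) / (2.0 * abs(c3))
    return center, radius
```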

7 Fitting a Sphere to 3D Points

There are several ways to define a least squares error function to try to fit a sphere to points. The typical approach is to define a sum-of-squares function and use the Levenberg–Marquardt algorithm for iterative minimization. The methods proposed here do not follow this paradigm. The first subsection uses the sum of squares of the differences between the purported sphere radius and the distances from the input points to the purported sphere center; the algorithm is effectively a fixed-point iteration. The second subsection estimates the coefficients of a quadratic equation in three variables; it reduces to computing the eigenvector associated with the minimum eigenvalue of a matrix.

Regardless of the error function, in practice the fitting produces reasonable results when the input points are distributed over the sphere. If the input points are restricted to a small solid angle, the fitting is typically not of good quality: small perturbations in the input points can lead to large changes in the estimates for the center and radius.

7.1 Direct Least Squares Algorithm

Given a set of points $\{(x_i, y_i, z_i)\}_{i=1}^m$, $m \ge 4$, fit them with a sphere $(x - a)^2 + (y - b)^2 + (z - c)^2 = r^2$, where $(a, b, c)$ is the sphere center and $r$ is the sphere radius. An assumption of this algorithm is that not all the points are coplanar. The error function to be minimized is

  $E(a, b, c, r) = \sum_{i=1}^m (L_i - r)^2$  (31)

where $L_i = \sqrt{(x_i - a)^2 + (y_i - b)^2 + (z_i - c)^2}$. Compute the partial derivative with respect to $r$ to obtain

  $\frac{\partial E}{\partial r} = -2 \sum_{i=1}^m (L_i - r)$  (32)

Setting the derivative equal to zero yields

  $r = \frac{1}{m} \sum_{i=1}^m L_i$  (33)

Compute the partial derivatives with respect to $a$, $b$, and $c$ to obtain

  $\frac{\partial E}{\partial a} = 2 \sum_{i=1}^m (L_i - r) \frac{\partial L_i}{\partial a} = -2 \sum_{i=1}^m \left((x_i - a) + r \frac{\partial L_i}{\partial a}\right), \quad \frac{\partial E}{\partial b} = 2 \sum_{i=1}^m (L_i - r) \frac{\partial L_i}{\partial b} = -2 \sum_{i=1}^m \left((y_i - b) + r \frac{\partial L_i}{\partial b}\right), \quad \frac{\partial E}{\partial c} = 2 \sum_{i=1}^m (L_i - r) \frac{\partial L_i}{\partial c} = -2 \sum_{i=1}^m \left((z_i - c) + r \frac{\partial L_i}{\partial c}\right)$  (34)

Setting these derivatives equal to zero yields

  $a = \frac{1}{m} \sum_{i=1}^m x_i + r\,\frac{1}{m} \sum_{i=1}^m \frac{\partial L_i}{\partial a}, \quad b = \frac{1}{m} \sum_{i=1}^m y_i + r\,\frac{1}{m} \sum_{i=1}^m \frac{\partial L_i}{\partial b}, \quad c = \frac{1}{m} \sum_{i=1}^m z_i + r\,\frac{1}{m} \sum_{i=1}^m \frac{\partial L_i}{\partial c}$  (35)

Replacing $r$ from equation (33) and using $\partial L_i/\partial a = (a - x_i)/L_i$, $\partial L_i/\partial b = (b - y_i)/L_i$, and $\partial L_i/\partial c = (c - z_i)/L_i$, we obtain three nonlinear equations in $a$, $b$, and $c$,

  $a = \bar{x} + \bar{L}\,\bar{L}_a = F(a, b, c), \quad b = \bar{y} + \bar{L}\,\bar{L}_b = G(a, b, c), \quad c = \bar{z} + \bar{L}\,\bar{L}_c = H(a, b, c)$  (36)

where the last equality of each expression defines the functions $F(a, b, c)$, $G(a, b, c)$, and $H(a, b, c)$ and where

  $\bar{x} = \frac{1}{m}\sum_{i=1}^m x_i, \quad \bar{y} = \frac{1}{m}\sum_{i=1}^m y_i, \quad \bar{z} = \frac{1}{m}\sum_{i=1}^m z_i, \quad \bar{L} = \frac{1}{m}\sum_{i=1}^m L_i, \quad \bar{L}_a = \frac{1}{m}\sum_{i=1}^m \frac{a - x_i}{L_i}, \quad \bar{L}_b = \frac{1}{m}\sum_{i=1}^m \frac{b - y_i}{L_i}, \quad \bar{L}_c = \frac{1}{m}\sum_{i=1}^m \frac{c - z_i}{L_i}$  (37)

Fixed-point iteration can be applied to solve these equations: $a_0 = \bar{x}$, $b_0 = \bar{y}$, $c_0 = \bar{z}$, and $a_{i+1} = F(a_i, b_i, c_i)$, $b_{i+1} = G(a_i, b_i, c_i)$, $c_{i+1} = H(a_i, b_i, c_i)$ for $i \ge 0$. An implementation is GteApprSphere3.h.
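The iteration mirrors the circle case; here is a vectorized NumPy sketch (names ours), with the same caveats about convergence testing and zero distances.

```python
import numpy as np

def fit_sphere_fixed_point(points, iterations=64):
    """Fit a sphere to 3D points by the fixed-point iteration of equation (36),
    seeded at the point average."""
    X = np.asarray(points, dtype=float)   # shape (m, 3)
    mean = X.mean(axis=0)
    center = mean.copy()
    for _ in range(iterations):
        D = X - center
        L = np.linalg.norm(D, axis=1)               # distances L_i
        Lbar = L.mean()                             # r estimate from equation (33)
        Lgrad = (-D / L[:, None]).mean(axis=0)      # (Lbar_a, Lbar_b, Lbar_c)
        center = mean + Lbar * Lgrad                # equation (36)
    r = np.linalg.norm(X - center, axis=1).mean()
    return center, r
```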

7.2 Estimation of Coefficients to a Quadratic Equation

The quadratic equation that represents a sphere is

  $0 = c_0 + c_1 x + c_2 y + c_3 z + c_4 (x^2 + y^2 + z^2) = (c_0, c_1, c_2, c_3, c_4) \cdot (1, x, y, z, x^2 + y^2 + z^2) = C \cdot V$  (38)

where the last equality defines $V$ and a unit-length vector $C = (c_0, c_1, c_2, c_3, c_4)$ (with $c_4 \ne 0$ for a nondegenerate sphere). Given a set of input points $\{(x_i, y_i, z_i)\}_{i=1}^m$, the least squares error function is chosen to be

  $E(C) = \sum_{i=1}^m (C \cdot V_i)^2 = C^T \left(\sum_{i=1}^m V_i V_i^T\right) C = C^T M C$  (39)

where $V_i = (1, x_i, y_i, z_i, r_i^2)$ with $r_i^2 = x_i^2 + y_i^2 + z_i^2$. The last equality defines the $5 \times 5$ symmetric matrix $M$,

  $M = \begin{bmatrix} \sum 1 & \sum x_i & \sum y_i & \sum z_i & \sum r_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i z_i & \sum x_i r_i^2 \\ \sum y_i & \sum x_i y_i & \sum y_i^2 & \sum y_i z_i & \sum y_i r_i^2 \\ \sum z_i & \sum x_i z_i & \sum y_i z_i & \sum z_i^2 & \sum z_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum z_i r_i^2 & \sum r_i^4 \end{bmatrix}$  (40)

The error function is a quadratic form that is minimized when $C$ is a unit-length eigenvector corresponding to the minimum eigenvalue of $M$. An eigensolver can be used to compute the eigenvector.

The resulting equation appears not to be invariant to translations of the input points. You should instead consider modifying the inputs by computing the averages $\bar{x} = \frac{1}{m}\sum x_i$, $\bar{y} = \frac{1}{m}\sum y_i$, and $\bar{z} = \frac{1}{m}\sum z_i$ and then replacing $x_i \leftarrow x_i - \bar{x}$, $y_i \leftarrow y_i - \bar{y}$, and $z_i \leftarrow z_i - \bar{z}$. The $r_i^2$ are then computed from the adjusted $x_i$, $y_i$, and $z_i$ values. The matrix becomes

  $\hat{M} = \begin{bmatrix} \sum 1 & 0 & 0 & 0 & \sum r_i^2 \\ 0 & \sum x_i^2 & \sum x_i y_i & \sum x_i z_i & \sum x_i r_i^2 \\ 0 & \sum x_i y_i & \sum y_i^2 & \sum y_i z_i & \sum y_i r_i^2 \\ 0 & \sum x_i z_i & \sum y_i z_i & \sum z_i^2 & \sum z_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum z_i r_i^2 & \sum r_i^4 \end{bmatrix}$  (41)

The eigenvector $C$ is computed. The sphere center in the original coordinate system is $(\bar{x}, \bar{y}, \bar{z}) - (c_1, c_2, c_3)/(2 c_4)$ and the sphere radius is $r = \sqrt{c_1^2 + c_2^2 + c_3^2 - 4 c_0 c_4}/(2|c_4|)$. An implementation is template class ApprQuadraticSphere3 in GteApprQuadratic3.h.
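The eigenvector construction is dimension-independent, so one NumPy sketch (names ours) covers both this section and Section 6.2; the rows of V below are the $V_i$ of equation (39), built from mean-centered inputs.

```python
import numpy as np

def fit_sphere_quadratic(points):
    """Fit a hypersphere by the eigenvector method of equations (38)-(41);
    the same code handles circles (2D input) and spheres (3D input)."""
    X = np.asarray(points, dtype=float)   # shape (m, n)
    mean = X.mean(axis=0)
    Y = X - mean                          # mean-center for translation invariance
    r2 = (Y * Y).sum(axis=1)
    V = np.column_stack([np.ones(len(Y)), Y, r2])   # rows are the V_i
    M = V.T @ V                                     # matrix of equation (41)
    evals, evecs = np.linalg.eigh(M)
    c = evecs[:, 0]                       # smallest-eigenvalue eigenvector
    c0, clin, cq = c[0], c[1:-1], c[-1]   # constant, linear, quadratic parts
    center = mean - clin / (2.0 * cq)
    radius = np.sqrt(clin @ clin - 4.0 * c0 * cq) / (2.0 * abs(cq))
    return center, radius
```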

8 Fitting an Ellipse to 2D Points

Given a set of points $\{X_i\}_{i=1}^m$, $m \ge 3$, fit them with an ellipse $(X - U)^T R^T D R (X - U) = 1$, where $U$ is the ellipse center, $R$ is an orthonormal matrix representing the ellipse orientation, and $D$ is a diagonal matrix whose diagonal entries are the reciprocals of the squares of the half-lengths of the axes of the ellipse. An axis-aligned ellipse with center at the origin has equation $(x/a)^2 + (y/b)^2 = 1$. In this setting, $U = (0, 0)$, $R = I$ (the identity matrix), and $D = \mathrm{diag}(1/a^2, 1/b^2)$. The error function to be minimized is

  $E(U, R, D) = \sum_{i=1}^m L_i^2$  (42)

where $L_i$ is the distance from $X_i$ to the ellipse with the given parameters.

This problem is more difficult than that of fitting circles. The distance $L_i$ is computed according to the algorithm described in Distance from a Point to an Ellipse, an Ellipsoid, or a Hyperellipsoid. The function $E$ is minimized iteratively using Powell's direction-set method to search for a minimum. An implementation is GteApprEllipse2.h.

9 Fitting an Ellipsoid to 3D Points

Given a set of points $\{X_i\}_{i=1}^m$, $m \ge 3$, fit them with an ellipsoid $(X - U)^T R^T D R (X - U) = 1$, where $U$ is the ellipsoid center and $R$ is an orthonormal matrix representing the ellipsoid orientation. The matrix $D$ is a diagonal matrix whose diagonal entries are the reciprocals of the squares of the half-lengths of the axes of the ellipsoid. An axis-aligned ellipsoid with center at the origin has equation $(x/a)^2 + (y/b)^2 + (z/c)^2 = 1$. In this setting, $U = (0, 0, 0)$, $R = I$ (the identity matrix), and $D = \mathrm{diag}(1/a^2, 1/b^2, 1/c^2)$. The error function to be minimized is

  $E(U, R, D) = \sum_{i=1}^m L_i^2$  (43)

where $L_i$ is the distance from $X_i$ to the ellipsoid with the given parameters.

This problem is more difficult than that of fitting spheres. The distance $L_i$ is computed according to the algorithm described in Distance from a Point to an Ellipse, an Ellipsoid, or a Hyperellipsoid. The function $E$ is minimized iteratively using Powell's direction-set method to search for a minimum. An implementation is GteApprEllipsoid3.h.
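A heavily hedged sketch of the ellipse case: it assumes SciPy's Powell minimizer, and it replaces the exact point-to-ellipse distance algorithm of the companion document with a crude dense-sampling approximation, so it illustrates the structure of the minimization rather than the GTE implementation. All names are ours, the parameterization (center, angle, half-axes) is one choice among several, and no positivity constraint is placed on the half-axes.

```python
import numpy as np
from scipy.optimize import minimize

def ellipse_fit_error(params, X, samples=256):
    """Approximate sum of squared point-to-ellipse distances for center
    (u0, u1), rotation angle theta, and half-axis lengths (a, b)."""
    u0, u1, theta, a, b = params
    c, s = np.cos(theta), np.sin(theta)
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    # Sampled ellipse boundary: U + R * (a cos t, b sin t).
    bx = u0 + a * np.cos(t) * c - b * np.sin(t) * s
    by = u1 + a * np.cos(t) * s + b * np.sin(t) * c
    d2 = (X[:, 0:1] - bx) ** 2 + (X[:, 1:2] - by) ** 2
    return d2.min(axis=1).sum()   # nearest boundary sample per point

def fit_ellipse(points):
    """Minimize the error with Powell's direction-set method, seeded from the
    mean and per-axis standard deviations of the points."""
    X = np.asarray(points, dtype=float)
    guess = [X[:, 0].mean(), X[:, 1].mean(), 0.0, X[:, 0].std(), X[:, 1].std()]
    result = minimize(ellipse_fit_error, guess, args=(X,), method="Powell")
    return result.x   # (u0, u1, theta, a, b)
```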

10 Fitting a Paraboloid to 3D Points of the Form (x, y, f(x, y))

Given a set of samples $\{(x_i, y_i, z_i)\}_{i=1}^m$ and assuming that the true values lie on a paraboloid

  $z = f(x, y) = p_1 x^2 + p_2 xy + p_3 y^2 + p_4 x + p_5 y + p_6 = \mathbf{P} \cdot \mathbf{Q}(x, y)$  (44)

where $\mathbf{P} = (p_1, p_2, p_3, p_4, p_5, p_6)$ and $\mathbf{Q}(x, y) = (x^2, xy, y^2, x, y, 1)$, select $\mathbf{P}$ to minimize the sum of squared errors

  $E(\mathbf{P}) = \sum_{i=1}^m (\mathbf{P} \cdot \mathbf{Q}_i - z_i)^2$  (45)

where $\mathbf{Q}_i = \mathbf{Q}(x_i, y_i)$. The minimum occurs when the gradient of $E$ is the zero vector,

  $\nabla E = 2 \sum_{i=1}^m (\mathbf{P} \cdot \mathbf{Q}_i - z_i)\mathbf{Q}_i = \mathbf{0}$  (46)

Some algebra converts this to a system of 6 equations in 6 unknowns:

  $\left(\sum_{i=1}^m \mathbf{Q}_i \mathbf{Q}_i^T\right) \mathbf{P} = \sum_{i=1}^m z_i \mathbf{Q}_i$  (47)

The product $\mathbf{Q}_i \mathbf{Q}_i^T$ is a product of the $6 \times 1$ matrix $\mathbf{Q}_i$ with the $1 \times 6$ matrix $\mathbf{Q}_i^T$, the result being a $6 \times 6$ matrix. Define the $6 \times 6$ symmetric matrix $A = \sum_{i=1}^m \mathbf{Q}_i \mathbf{Q}_i^T$ and the $6 \times 1$ vector $B = \sum_{i=1}^m z_i \mathbf{Q}_i$. The choice for $\mathbf{P}$ is the solution to the linear system of equations $A\mathbf{P} = B$. The entries of $A$ and $B$ indicate summations over the appropriate products of variables. For example, $s(x^3 y) = \sum_{i=1}^m x_i^3 y_i$:

  $\begin{bmatrix} s(x^4) & s(x^3 y) & s(x^2 y^2) & s(x^3) & s(x^2 y) & s(x^2) \\ s(x^3 y) & s(x^2 y^2) & s(x y^3) & s(x^2 y) & s(x y^2) & s(xy) \\ s(x^2 y^2) & s(x y^3) & s(y^4) & s(x y^2) & s(y^3) & s(y^2) \\ s(x^3) & s(x^2 y) & s(x y^2) & s(x^2) & s(xy) & s(x) \\ s(x^2 y) & s(x y^2) & s(y^3) & s(xy) & s(y^2) & s(y) \\ s(x^2) & s(xy) & s(y^2) & s(x) & s(y) & s(1) \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{bmatrix} = \begin{bmatrix} s(z x^2) \\ s(z x y) \\ s(z y^2) \\ s(z x) \\ s(z y) \\ s(z) \end{bmatrix}$  (48)

An implementation is GteApprParaboloid3.h.
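A NumPy sketch of the $6 \times 6$ solve (the function name is ours). For ill-conditioned data, applying np.linalg.lstsq directly to the rows $\mathbf{Q}_i$ is a numerically safer alternative to forming the normal equations.

```python
import numpy as np

def fit_paraboloid(x, y, z):
    """Fit z = p1*x^2 + p2*x*y + p3*y^2 + p4*x + p5*y + p6 by solving the
    normal-equation system A P = B of equations (47)-(48)."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    Q = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])  # rows Q_i
    A = Q.T @ Q                    # A = sum_i Q_i Q_i^T
    B = Q.T @ z                    # B = sum_i z_i Q_i
    return np.linalg.solve(A, B)   # P = (p1, ..., p6)
```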