Least Squares Fitting of Data

Least Squares Fitting of Data
David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: July 15, 1999
Last Modified: December 12, 2017

Contents

1 Linear Fitting of 2D Points of the Form (x, f(x))
2 Linear Fitting of nD Points Using Orthogonal Regression
3 Planar Fitting of 3D Points of the Form (x, y, f(x, y))
4 Hyperplanar Fitting of nD Points Using Orthogonal Regression
5 kD Flat Fitting of nD Points Using Orthogonal Regression
6 Fitting a Circle to 2D Points
  6.1 Direct Least Squares Algorithm
  6.2 Estimation of Coefficients to a Quadratic Equation
7 Fitting a Sphere to 3D Points
  7.1 Direct Least Squares Algorithm
  7.2 Estimation of Coefficients to a Quadratic Equation
8 Fitting an Ellipse to 2D Points
9 Fitting an Ellipsoid to 3D Points
10 Fitting a Paraboloid to 3D Points of the Form (x, y, f(x, y))

This document describes some algorithms for fitting 2D or 3D point sets by linear or quadratic structures using least squares minimization.

1 Linear Fitting of 2D Points of the Form (x, f(x))

This is the usual introduction to least squares fit by a line when the data represents measurements where the y-component is assumed to be functionally dependent on the x-component. Given a set of m samples {(x_i, y_i)}_{i=1}^{m}, determine A and B so that the line y = Ax + B best fits the samples in the sense that the sum of the squared errors between the y_i and the line values A x_i + B is minimized. Note that the error is measured only in the y-direction.

Define the error function for the least squares minimization to be

E(A, B) = \sum_{i=1}^{m} [(A x_i + B) - y_i]^2    (1)

This function is nonnegative and its graph is a paraboloid whose vertex occurs when the gradient satisfies \nabla E = (0, 0). This leads to a system of two linear equations in A and B which can be easily solved. Precisely,

(0, 0) = \nabla E = 2 \sum_{i=1}^{m} [(A x_i + B) - y_i] (x_i, 1)    (2)

and so

\begin{bmatrix} \sum_{i=1}^{m} x_i^2 & \sum_{i=1}^{m} x_i \\ \sum_{i=1}^{m} x_i & m \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{m} x_i y_i \\ \sum_{i=1}^{m} y_i \end{bmatrix}    (3)

The solution provides the least squares line y = Ax + B. If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages \bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i and \bar{y} = \frac{1}{m} \sum_{i=1}^{m} y_i and subtract them from the data, x_i \leftarrow x_i - \bar{x} and y_i \leftarrow y_i - \bar{y}. The fitted line is y - \bar{y} = A (x - \bar{x}), where

A = \frac{\sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{m} (x_i - \bar{x})^2}    (4)

An implementation is GteApprHeightLine2.h.
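As a concrete illustration, the mean-centered formula (4) translates to a few lines of numpy. This is a minimal sketch, not the C++ implementation in GteApprHeightLine2.h; the function name fit_height_line and the numpy dependency are assumptions of the sketch.

import numpy as np

def fit_height_line(x, y):
    # Fit y = A*x + B minimizing sum((A*x_i + B - y_i)^2), using the
    # mean-centered slope of equation (4) to avoid the ill conditioning
    # of the normal equations (3).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    dx, dy = x - xbar, y - ybar
    A = np.dot(dx, dy) / np.dot(dx, dx)  # slope from equation (4)
    B = ybar - A * xbar                  # the line passes through (xbar, ybar)
    return A, B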

2 Linear Fitting of nD Points Using Orthogonal Regression

It is also possible to fit a line using least squares where the errors are measured orthogonally to the proposed line rather than measured vertically. The following argument holds for sample points and lines in n dimensions. Let the line be L(t) = t D + A, where D is unit length. Define X_i to be the sample points; then

X_i = A + d_i D + p_i D_i

where d_i = D \cdot (X_i - A) and D_i is some unit length vector perpendicular to D with appropriate coefficient p_i. Define Y_i = X_i - A. The vector from X_i to its projection onto the line is

Y_i - d_i D = p_i D_i

The squared length of this vector is p_i^2 = (Y_i - d_i D)^2. The error function for the least squares minimization is E(A, D) = \sum_{i=1}^{m} p_i^2. Two alternate forms for this function are

E(A, D) = \sum_{i=1}^{m} Y_i^T [I - D D^T] Y_i    (5)

and

E(A, D) = D^T \left[ \left( \sum_{i=1}^{m} Y_i \cdot Y_i \right) I - \sum_{i=1}^{m} Y_i Y_i^T \right] D = D^T M(A) D    (6)

Compute the derivative of equation (5) with respect to A to obtain

\frac{\partial E}{\partial A} = -2 [I - D D^T] \sum_{i=1}^{m} Y_i    (7)

The derivative is zero when \sum_{i=1}^{m} Y_i = 0, in which case A = (1/m) \sum_{i=1}^{m} X_i, the average of the sample points. In fact there are infinitely many solutions A + s D for any scalar s. This is simply a statement that the average is on the best-fit line, but any other point on the line may serve as the origin for that line.

Equation (6) is a quadratic form D^T M(A) D whose minimum is the smallest eigenvalue of M(A), computed using standard eigensystem solvers. A corresponding unit length eigenvector D completes our construction of the least squares line.

The covariance matrix of the input points is C = \sum_{i=1}^{m} Y_i Y_i^T. Defining \delta = \sum_{i=1}^{m} Y_i^T Y_i, we see that M = \delta I - C, where I is the identity matrix. Therefore, M and C have the same eigenspaces. The eigenspace corresponding to the minimum eigenvalue of M is the same as the eigenspace corresponding to the maximum eigenvalue of C. In an implementation, it is sufficient to process C and avoid the additional cost to compute M. An implementation for 2D is GteApprOrthogonalLine2.h and an implementation for 3D is GteApprOrthogonalLine3.h.
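In code, the line origin is the average of the points and the direction is the eigenvector of C for its largest eigenvalue. A minimal numpy sketch (fit_orthogonal_line is a hypothetical name; the C++ versions are GteApprOrthogonalLine2.h and GteApprOrthogonalLine3.h):

import numpy as np

def fit_orthogonal_line(points):
    # Fit L(t) = A + t*D to an m-by-n array of points, minimizing the
    # sum of squared orthogonal distances (Section 2).
    X = np.asarray(points, dtype=float)
    A = X.mean(axis=0)                 # the average lies on the best-fit line
    Y = X - A
    C = Y.T @ Y                        # C = sum_i Y_i Y_i^T
    evals, evecs = np.linalg.eigh(C)   # eigenvalues in nondecreasing order
    D = evecs[:, -1]                   # unit eigenvector for the largest eigenvalue
    return A, D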

3 Planar Fitting of 3D Points of the Form (x, y, f(x, y))

The assumption is that the z-component of the data is functionally dependent on the x- and y-components. Given a set of m samples {(x_i, y_i, z_i)}_{i=1}^{m}, determine A, B, and C so that the plane z = Ax + By + C best fits the samples in the sense that the sum of the squared errors between the z_i and the plane values A x_i + B y_i + C is minimized. Note that the error is measured only in the z-direction.

Define the error function for the least squares minimization to be

E(A, B, C) = \sum_{i=1}^{m} [(A x_i + B y_i + C) - z_i]^2    (8)

This function is nonnegative and its graph is a hyperparaboloid whose vertex occurs when the gradient satisfies \nabla E = (0, 0, 0). This leads to a system of three linear equations in A, B, and C which can be easily solved. Precisely,

(0, 0, 0) = \nabla E = 2 \sum_{i=1}^{m} [(A x_i + B y_i + C) - z_i] (x_i, y_i, 1)    (9)

and so

\begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & m \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{bmatrix}    (10)

The solution provides the least squares plane z = Ax + By + C.

If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages \bar{x} = \frac{1}{m} \sum x_i, \bar{y} = \frac{1}{m} \sum y_i, and \bar{z} = \frac{1}{m} \sum z_i, and then subtract them from the data, x_i \leftarrow x_i - \bar{x}, y_i \leftarrow y_i - \bar{y}, and z_i \leftarrow z_i - \bar{z}. The fitted plane is z - \bar{z} = A (x - \bar{x}) + B (y - \bar{y}), where

\begin{bmatrix} \sum (x_i - \bar{x})^2 & \sum (x_i - \bar{x})(y_i - \bar{y}) \\ \sum (x_i - \bar{x})(y_i - \bar{y}) & \sum (y_i - \bar{y})^2 \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} \sum (x_i - \bar{x})(z_i - \bar{z}) \\ \sum (y_i - \bar{y})(z_i - \bar{z}) \end{bmatrix}    (11)

An implementation is GteApprHeightPlane3.h.
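A minimal numpy sketch of the mean-centered system (11) (fit_height_plane is a hypothetical name; the C++ implementation is GteApprHeightPlane3.h):

import numpy as np

def fit_height_plane(x, y, z):
    # Fit z = A*x + B*y + C by least squares in the z-direction, solving
    # the 2x2 mean-centered system (11) instead of the raw normal
    # equations (10), which can be ill conditioned.
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    xbar, ybar, zbar = x.mean(), y.mean(), z.mean()
    dx, dy, dz = x - xbar, y - ybar, z - zbar
    M = np.array([[dx @ dx, dx @ dy],
                  [dx @ dy, dy @ dy]])
    rhs = np.array([dx @ dz, dy @ dz])
    A, B = np.linalg.solve(M, rhs)
    C = zbar - A * xbar - B * ybar     # plane passes through the averages
    return A, B, C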

4 Hyperplanar Fitting of nD Points Using Orthogonal Regression

It is also possible to fit a plane using least squares where the errors are measured orthogonally to the proposed plane rather than measured vertically. The following argument holds for sample points and hyperplanes in n dimensions. Let the hyperplane be N \cdot (X - A) = 0, where N is a unit length normal to the hyperplane and A is a point on the hyperplane. Define X_i to be the sample points; then

X_i = A + \lambda_i N + p_i N_i

where \lambda_i = N \cdot (X_i - A) and N_i is some unit length vector perpendicular to N with appropriate coefficient p_i. Define Y_i = X_i - A. The vector from X_i to its projection onto the hyperplane is \lambda_i N. The squared length of this vector is \lambda_i^2 = (N \cdot Y_i)^2. The error function for the least squares minimization is E(A, N) = \sum_{i=1}^{m} \lambda_i^2. Two alternate forms for this function are

E(A, N) = \sum_{i=1}^{m} Y_i^T [N N^T] Y_i    (12)

and

E(A, N) = N^T \left( \sum_{i=1}^{m} Y_i Y_i^T \right) N = N^T C N    (13)

where C = \sum_{i=1}^{m} Y_i Y_i^T.

Compute the derivative of equation (12) with respect to A to obtain

\frac{\partial E}{\partial A} = -2 (N N^T) \sum_{i=1}^{m} Y_i    (14)

The derivative is zero when \sum_{i=1}^{m} Y_i = 0, in which case A = (1/m) \sum_{i=1}^{m} X_i, the average of the sample points. In fact there are infinitely many solutions A + W, where W is any vector perpendicular to N. This is simply a statement that the average is on the best-fit hyperplane, but any other point on the hyperplane may serve as the origin for that hyperplane.

Equation (13) is a quadratic form N^T C N whose minimum is the smallest eigenvalue of C, computed using standard eigensystem solvers. A corresponding unit length eigenvector N completes our construction of the least squares hyperplane. An implementation for 3D is GteApprOrthogonalPlane3.h.
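In code, the fit reduces to the mean of the points and the minimum-eigenvalue eigenvector of the covariance matrix. A numpy sketch (fit_orthogonal_hyperplane is a hypothetical name):

import numpy as np

def fit_orthogonal_hyperplane(points):
    # Fit N.(X - A) = 0 to an m-by-n array of points, minimizing the
    # sum of squared orthogonal distances (Section 4).
    X = np.asarray(points, dtype=float)
    A = X.mean(axis=0)
    Y = X - A
    C = Y.T @ Y                        # C = sum_i Y_i Y_i^T
    evals, evecs = np.linalg.eigh(C)
    N = evecs[:, 0]                    # unit eigenvector for the smallest eigenvalue
    return A, N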

5 kD Flat Fitting of nD Points Using Orthogonal Regression

Orthogonal regression to fit n-dimensional points by a line or by a hyperplane may be generalized to fitting by a flat, or affine subspace. To emphasize the dimension of the flat, sometimes the terminology used is k-flat, where the dimension is k with 0 \le k \le n. A line is a 1-flat and a hyperplane is an (n-1)-flat. For dimension n = 3, we fit with flats that are either lines or planes.

For dimensions n \ge 4 and flat dimensions 2 \le k \le n - 2, the generalization of orthogonal regression is the following. The bases mentioned here are for the linear portion of the affine subspace; that is, the basis vectors are relative to an origin at a point A. The flat has an orthonormal basis \{F_j\}_{j=1}^{k} and the orthogonal complement has an orthonormal basis \{P_j\}_{j=1}^{n-k}. The union of the two bases is an orthonormal basis for R^n. Any input point X_i, 1 \le i \le m, can be represented by

X_i = A + \sum_{j=1}^{k} f_{ij} F_j + \sum_{j=1}^{n-k} p_{ij} P_j = \left( A + \sum_{j=1}^{k} f_{ij} F_j \right) + \left( \sum_{j=1}^{n-k} p_{ij} P_j \right)    (15)

The left-parenthesized term is the portion of X_i that lives in the flat and the right-parenthesized term is the portion that is the deviation of X_i from the flat. The least squares problem is about choosing the two bases so that the sum of squared lengths of the deviations is as small as possible.

Define Y_i = X_i - A. The basis coefficients are f_{ij} = F_j \cdot Y_i and p_{ij} = P_j \cdot Y_i. The squared length of the deviation is \sum_{j=1}^{n-k} p_{ij}^2. The error function for the least squares minimization is the sum of the squared lengths for all inputs, E = \sum_{i=1}^{m} \sum_{j=1}^{n-k} p_{ij}^2. Two alternate forms for this function are

E(A, P_1, \ldots, P_{n-k}) = \sum_{i=1}^{m} Y_i^T \left[ \sum_{j=1}^{n-k} P_j P_j^T \right] Y_i    (16)

and

E(A, P_1, \ldots, P_{n-k}) = \sum_{j=1}^{n-k} P_j^T \left( \sum_{i=1}^{m} Y_i Y_i^T \right) P_j = \sum_{j=1}^{n-k} P_j^T C P_j    (17)

where C = \sum_{i=1}^{m} Y_i Y_i^T.

Compute the derivative of equation (16) with respect to A to obtain

\frac{\partial E}{\partial A} = -2 \left[ \sum_{j=1}^{n-k} P_j P_j^T \right] \sum_{i=1}^{m} Y_i    (18)

The derivative is zero when \sum_{i=1}^{m} Y_i = 0, in which case A = (1/m) \sum_{i=1}^{m} X_i, the average of the sample points. In fact there are infinitely many solutions A + W, where W is any vector in the orthogonal complement of the subspace spanned by the P_j; this subspace is that spanned by the F_j. This is simply a statement that the average is on the best-fit flat, but any other point on the flat may serve as the origin for that flat. Because we are choosing A to be the average of the input points, the matrix C is the covariance matrix for the input points.

The last term in equation (17) is a sum of quadratic forms involving the matrix C and the vectors P_j that form a basis (unit-length, mutually perpendicular). The minimum value of the quadratic form is the smallest eigenvalue \lambda_1 of C, so we may choose P_1 to be a unit-length eigenvector of C corresponding to \lambda_1. We must choose P_2 to be unit length and perpendicular to P_1. If the eigenspace for \lambda_1 is 1-dimensional, the next smallest value we can attain by the quadratic form is the smallest eigenvalue \lambda_2 for which \lambda_1 < \lambda_2; P_2 is chosen to be a corresponding unit-length eigenvector. However, if \lambda_1 has an eigenspace of dimension larger than 1, we can choose P_2 in that eigenspace but perpendicular to P_1.

Generally, let \{\lambda_l\}_{l=1}^{r} be the r distinct eigenvalues of the covariance matrix C; we know 1 \le r \le n and \lambda_1 < \lambda_2 < \cdots < \lambda_r. Let the dimension of the eigenspace for \lambda_l be d_l \ge 1; we know that \sum_{l=1}^{r} d_l = n. List the eigenvalues and eigenvectors in order of increasing eigenvalue, including repeated values,

\underbrace{\lambda_1, \ldots, \lambda_1}_{d_1 \text{ terms}} : V^1_1, \ldots, V^1_{d_1}; \quad \underbrace{\lambda_2, \ldots, \lambda_2}_{d_2 \text{ terms}} : V^2_1, \ldots, V^2_{d_2}; \quad \ldots; \quad \underbrace{\lambda_r, \ldots, \lambda_r}_{d_r \text{ terms}} : V^r_1, \ldots, V^r_{d_r}    (19)

The list has n items. The eigenvalue \lambda_l has an eigenspace with orthonormal basis \{V^l_j\}_{j=1}^{d_l}. In this list, choose the first n - k eigenvectors to be P_1 through P_{n-k} and choose the last k eigenvectors to be F_1 through F_k.

It is possible that one (or more) of the P_j and one (or more) of the F_j are in the eigenspace for the same eigenvalue. In this case, the fitted flat is not unique, and one should re-examine the choice of dimension k for the fitted flat. This is analogous to the following situations in dimension n = 3.

- The input points are nearly collinear but you are trying to fit those points with a plane. The covariance matrix likely has two distinct eigenvalues \lambda_1 < \lambda_2 with d_1 = 2 and d_2 = 1. The basis vectors are P_1 = V^1_1, F_1 = V^1_2, and F_2 = V^2_1. The first two of these are from the same eigenspace.

- The input points are spread out over a portion of a plane (and are not nearly collinear) but you are trying to fit those points with a line. The covariance matrix likely has two distinct eigenvalues \lambda_1 < \lambda_2 with d_1 = 1 and d_2 = 2. The basis vectors are P_1 = V^1_1, P_2 = V^2_1, and F_1 = V^2_2. The last two of these are from the same eigenspace.

- The input points are not well fit by a flat of any dimension. For example, the input points are uniformly distributed over a sphere. The covariance matrix likely has one distinct eigenvalue \lambda_1 of multiplicity d_1 = 3. Neither a line nor a plane is a good fit to the input points; in either case, a P-vector and an F-vector are in the same eigenspace.

The computational algorithm is to compute the average A and the covariance matrix C of the points. Use an eigensolver whose output eigenvalues are sorted in nondecreasing order. Choose the F_j to be the last k eigenvectors output by the eigensolver; a code sketch follows.
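The computational algorithm is a thin wrapper around a symmetric eigensolver. A numpy sketch (fit_flat is a hypothetical name); it returns both bases but does not test for the non-uniqueness discussed above:

import numpy as np

def fit_flat(points, k):
    # Fit a k-flat to an m-by-n array of points (Section 5). Returns the
    # origin A, an n-by-k basis F for the flat, and an n-by-(n-k) basis P
    # for the orthogonal complement.
    X = np.asarray(points, dtype=float)
    n = X.shape[1]
    A = X.mean(axis=0)
    Y = X - A
    C = Y.T @ Y                        # covariance matrix (up to scale)
    evals, evecs = np.linalg.eigh(C)   # eigenvalues in nondecreasing order
    P = evecs[:, :n - k]               # complement basis P_1, ..., P_{n-k}
    F = evecs[:, n - k:]               # flat basis F_1, ..., F_k
    return A, F, P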

6 Fitting a Circle to 2D Points

There are several ways to define a least squares error function to try to fit a circle to points. The typical approach is to define a sum of squares function and use the Levenberg-Marquardt algorithm for iterative minimization. The methods proposed here do not follow this paradigm. The first section uses the sum of squares of the differences between the purported circle radius and the distances from the input points to the purported circle center; the algorithm is effectively a fixed-point iteration. The second section estimates the coefficients of a quadratic equation of two variables; it reduces to computing the eigenvector associated with the minimum eigenvalue of a matrix.

Regardless of the error function, in practice the fitting produces reasonable results when the input points are distributed over the circle. If the input points are restricted to a small arc of the circle, the fitting is typically not of good quality; the problem is that small perturbations in the input points can lead to large changes in the estimates for the center and radius.

6.1 Direct Least Squares Algorithm

Given a set of points {(x_i, y_i)}_{i=1}^{m}, m \ge 3, fit them with a circle (x - a)^2 + (y - b)^2 = r^2, where (a, b) is the circle center and r is the circle radius. An assumption of this algorithm is that not all the points are collinear. The error function to be minimized is

E(a, b, r) = \sum_{i=1}^{m} (L_i - r)^2    (20)

where L_i = \sqrt{(x_i - a)^2 + (y_i - b)^2}. Compute the partial derivative with respect to r to obtain

\frac{\partial E}{\partial r} = -2 \sum_{i=1}^{m} (L_i - r)    (21)

Setting the derivative equal to zero yields

r = \frac{1}{m} \sum_{i=1}^{m} L_i    (22)

Compute the partial derivatives with respect to a and b to obtain

\frac{\partial E}{\partial a} = 2 \sum_{i=1}^{m} (L_i - r) \frac{\partial L_i}{\partial a} = -2 \sum_{i=1}^{m} \left( (x_i - a) + r \frac{\partial L_i}{\partial a} \right), \quad \frac{\partial E}{\partial b} = 2 \sum_{i=1}^{m} (L_i - r) \frac{\partial L_i}{\partial b} = -2 \sum_{i=1}^{m} \left( (y_i - b) + r \frac{\partial L_i}{\partial b} \right)    (23)

Setting these derivatives equal to zero yields

a = \frac{1}{m} \sum_{i=1}^{m} x_i + r \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L_i}{\partial a}, \quad b = \frac{1}{m} \sum_{i=1}^{m} y_i + r \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L_i}{\partial b}    (24)

Replacing r from equation (22) and using \partial L_i / \partial a = (a - x_i)/L_i and \partial L_i / \partial b = (b - y_i)/L_i, we obtain two nonlinear equations in a and b,

a = \bar{x} + \bar{L} \bar{L}_a = F(a, b), \quad b = \bar{y} + \bar{L} \bar{L}_b = G(a, b)    (25)

where the last equality of each expression defines the functions F(a, b) and G(a, b) and where

\bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i, \quad \bar{y} = \frac{1}{m} \sum_{i=1}^{m} y_i, \quad \bar{L} = \frac{1}{m} \sum_{i=1}^{m} L_i, \quad \bar{L}_a = \frac{1}{m} \sum_{i=1}^{m} \frac{a - x_i}{L_i}, \quad \bar{L}_b = \frac{1}{m} \sum_{i=1}^{m} \frac{b - y_i}{L_i}    (26)

Fixed-point iteration can be applied to solve these equations: a_0 = \bar{x}, b_0 = \bar{y}, and a_{i+1} = F(a_i, b_i) and b_{i+1} = G(a_i, b_i) for i \ge 0. An implementation is GteApprCircle2.h.
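A numpy sketch of the fixed-point iteration (fit_circle is a hypothetical name; a fixed iteration count stands in for whatever convergence test an implementation such as GteApprCircle2.h would use):

import numpy as np

def fit_circle(x, y, iterations=64):
    # Fixed-point iteration of equations (25)-(26); both averages in an
    # update use the previous center, as in the text.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    a, b = xbar, ybar                     # a_0 = xbar, b_0 = ybar
    for _ in range(iterations):
        L = np.hypot(x - a, y - b)        # L_i for the current center
        Lbar = L.mean()
        La = ((a - x) / L).mean()         # (1/m) sum (a - x_i)/L_i
        Lb = ((b - y) / L).mean()
        a = xbar + Lbar * La              # a = F(a, b)
        b = ybar + Lbar * Lb              # b = G(a, b)
    r = np.hypot(x - a, y - b).mean()     # r from equation (22)
    return a, b, r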

6.2 Estimation of Coefficients to a Quadratic Equation

The quadratic equation that represents a circle is

0 = c_0 + c_1 x + c_2 y + c_3 (x^2 + y^2) = (c_0, c_1, c_2, c_3) \cdot (1, x, y, x^2 + y^2) = C \cdot V    (27)

where the last equality defines V and a unit-length vector C = (c_0, c_1, c_2, c_3) (with c_3 \ne 0 for a nondegenerate circle). Given a set of input points {(x_i, y_i)}_{i=1}^{m}, the least squares error function is chosen to be

E(C) = \sum_{i=1}^{m} (C \cdot V_i)^2 = C^T \left( \sum_{i=1}^{m} V_i V_i^T \right) C = C^T M C    (28)

where V_i = (1, x_i, y_i, r_i^2) with r_i^2 = x_i^2 + y_i^2. The last equality defines the 4 \times 4 symmetric matrix M,

M = \begin{bmatrix} m & \sum x_i & \sum y_i & \sum r_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i r_i^2 \\ \sum y_i & \sum x_i y_i & \sum y_i^2 & \sum y_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum r_i^4 \end{bmatrix}    (29)

The error function is a quadratic form that is minimized when C is a unit-length eigenvector corresponding to the minimum eigenvalue of M. An eigensolver can be used to compute the eigenvector.

The resulting equation appears not to be invariant to translations of the input points. You should instead consider modifying the inputs by computing the averages \bar{x} = \frac{1}{m} \sum x_i and \bar{y} = \frac{1}{m} \sum y_i and then replacing x_i \leftarrow x_i - \bar{x} and y_i \leftarrow y_i - \bar{y}. The r_i^2 are then computed from the adjusted x_i and y_i values. The matrix becomes

\hat{M} = \begin{bmatrix} m & 0 & 0 & \sum r_i^2 \\ 0 & \sum x_i^2 & \sum x_i y_i & \sum x_i r_i^2 \\ 0 & \sum x_i y_i & \sum y_i^2 & \sum y_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum r_i^4 \end{bmatrix}    (30)

The eigenvector C is computed. The circle center in the original coordinate system is (\bar{x}, \bar{y}) - (c_1, c_2)/(2 c_3) and the circle radius is r = \sqrt{c_1^2 + c_2^2 - 4 c_0 c_3} / (2 |c_3|). An implementation is the template class ApprQuadraticCircle2 in GteApprQuadratic2.h.
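A numpy sketch of the eigenvector approach (fit_quadratic_circle is a hypothetical name; the C++ version is the template class ApprQuadraticCircle2):

import numpy as np

def fit_quadratic_circle(x, y):
    # Estimate C = (c0, c1, c2, c3) of equation (27) from mean-centered
    # data, using the minimum-eigenvalue eigenvector of M-hat in (30).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    u, v = x - xbar, y - ybar
    r2 = u * u + v * v
    V = np.column_stack([np.ones_like(u), u, v, r2])  # rows are the V_i
    M = V.T @ V                                       # M-hat = sum_i V_i V_i^T
    evals, evecs = np.linalg.eigh(M)
    c0, c1, c2, c3 = evecs[:, 0]                      # min-eigenvalue eigenvector
    center = np.array([xbar, ybar]) - np.array([c1, c2]) / (2.0 * c3)
    radius = np.sqrt(c1 * c1 + c2 * c2 - 4.0 * c0 * c3) / (2.0 * abs(c3))
    return center, radius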

7 Fitting a Sphere to 3D Points

There are several ways to define a least squares error function to try to fit a sphere to points. The typical approach is to define a sum of squares function and use the Levenberg-Marquardt algorithm for iterative minimization. The methods proposed here do not follow this paradigm. The first section uses the sum of squares of the differences between the purported sphere radius and the distances from the input points to the purported sphere center; the algorithm is effectively a fixed-point iteration. The second section estimates the coefficients of a quadratic equation of three variables; it reduces to computing the eigenvector associated with the minimum eigenvalue of a matrix.

Regardless of the error function, in practice the fitting produces reasonable results when the input points are distributed over the sphere. If the input points are restricted to a small solid angle, the fitting is typically not of good quality; the problem is that small perturbations in the input points can lead to large changes in the estimates for the center and radius.

7.1 Direct Least Squares Algorithm

Given a set of points {(x_i, y_i, z_i)}_{i=1}^{m}, m \ge 4, fit them with a sphere (x - a)^2 + (y - b)^2 + (z - c)^2 = r^2, where (a, b, c) is the sphere center and r is the sphere radius. An assumption of this algorithm is that not all the points are coplanar. The error function to be minimized is

E(a, b, c, r) = \sum_{i=1}^{m} (L_i - r)^2    (31)

where L_i = \sqrt{(x_i - a)^2 + (y_i - b)^2 + (z_i - c)^2}. Compute the partial derivative with respect to r to obtain

\frac{\partial E}{\partial r} = -2 \sum_{i=1}^{m} (L_i - r)    (32)

Setting the derivative equal to zero yields

r = \frac{1}{m} \sum_{i=1}^{m} L_i    (33)

Compute the partial derivatives with respect to a, b, and c to obtain

\frac{\partial E}{\partial a} = 2 \sum_{i=1}^{m} (L_i - r) \frac{\partial L_i}{\partial a} = -2 \sum_{i=1}^{m} \left( (x_i - a) + r \frac{\partial L_i}{\partial a} \right), \quad \frac{\partial E}{\partial b} = 2 \sum_{i=1}^{m} (L_i - r) \frac{\partial L_i}{\partial b} = -2 \sum_{i=1}^{m} \left( (y_i - b) + r \frac{\partial L_i}{\partial b} \right), \quad \frac{\partial E}{\partial c} = 2 \sum_{i=1}^{m} (L_i - r) \frac{\partial L_i}{\partial c} = -2 \sum_{i=1}^{m} \left( (z_i - c) + r \frac{\partial L_i}{\partial c} \right)    (34)

Setting these derivatives equal to zero yields

a = \frac{1}{m} \sum_{i=1}^{m} x_i + r \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L_i}{\partial a}, \quad b = \frac{1}{m} \sum_{i=1}^{m} y_i + r \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L_i}{\partial b}, \quad c = \frac{1}{m} \sum_{i=1}^{m} z_i + r \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L_i}{\partial c}    (35)

Replacing r from equation (33) and using \partial L_i / \partial a = (a - x_i)/L_i, \partial L_i / \partial b = (b - y_i)/L_i, and \partial L_i / \partial c = (c - z_i)/L_i, we obtain three nonlinear equations in a, b, and c,

a = \bar{x} + \bar{L} \bar{L}_a = F(a, b, c), \quad b = \bar{y} + \bar{L} \bar{L}_b = G(a, b, c), \quad c = \bar{z} + \bar{L} \bar{L}_c = H(a, b, c)    (36)

where the last equality of each expression defines the functions F(a, b, c), G(a, b, c), and H(a, b, c) and where

\bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i, \quad \bar{y} = \frac{1}{m} \sum_{i=1}^{m} y_i, \quad \bar{z} = \frac{1}{m} \sum_{i=1}^{m} z_i, \quad \bar{L} = \frac{1}{m} \sum_{i=1}^{m} L_i, \quad \bar{L}_a = \frac{1}{m} \sum_{i=1}^{m} \frac{a - x_i}{L_i}, \quad \bar{L}_b = \frac{1}{m} \sum_{i=1}^{m} \frac{b - y_i}{L_i}, \quad \bar{L}_c = \frac{1}{m} \sum_{i=1}^{m} \frac{c - z_i}{L_i}    (37)

Fixed-point iteration can be applied to solve these equations: a_0 = \bar{x}, b_0 = \bar{y}, c_0 = \bar{z}, and a_{i+1} = F(a_i, b_i, c_i), b_{i+1} = G(a_i, b_i, c_i), and c_{i+1} = H(a_i, b_i, c_i) for i \ge 0. An implementation is GteApprSphere3.h.
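The sphere iteration is the direct analog of the circle iteration. A numpy sketch (fit_sphere is a hypothetical name; a fixed iteration count again stands in for a convergence test):

import numpy as np

def fit_sphere(x, y, z, iterations=64):
    # Fixed-point iteration of equations (36)-(37); all three updates use
    # the previous center (a, b, c), as in the text.
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    xbar, ybar, zbar = x.mean(), y.mean(), z.mean()
    a, b, c = xbar, ybar, zbar
    for _ in range(iterations):
        L = np.sqrt((x - a)**2 + (y - b)**2 + (z - c)**2)
        Lbar = L.mean()
        La = ((a - x) / L).mean()
        Lb = ((b - y) / L).mean()
        Lc = ((c - z) / L).mean()
        a = xbar + Lbar * La              # a = F(a, b, c)
        b = ybar + Lbar * Lb              # b = G(a, b, c)
        c = zbar + Lbar * Lc              # c = H(a, b, c)
    r = np.sqrt((x - a)**2 + (y - b)**2 + (z - c)**2).mean()
    return a, b, c, r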

7.2 Estimation of Coefficients to a Quadratic Equation

The quadratic equation that represents a sphere is

0 = c_0 + c_1 x + c_2 y + c_3 z + c_4 (x^2 + y^2 + z^2) = (c_0, c_1, c_2, c_3, c_4) \cdot (1, x, y, z, x^2 + y^2 + z^2) = C \cdot V    (38)

where the last equality defines V and a unit-length vector C = (c_0, c_1, c_2, c_3, c_4) (with c_4 \ne 0 for a nondegenerate sphere). Given a set of input points {(x_i, y_i, z_i)}_{i=1}^{m}, the least squares error function is chosen to be

E(C) = \sum_{i=1}^{m} (C \cdot V_i)^2 = C^T \left( \sum_{i=1}^{m} V_i V_i^T \right) C = C^T M C    (39)

where V_i = (1, x_i, y_i, z_i, r_i^2) with r_i^2 = x_i^2 + y_i^2 + z_i^2. The last equality defines the 5 \times 5 symmetric matrix M,

M = \begin{bmatrix} m & \sum x_i & \sum y_i & \sum z_i & \sum r_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i z_i & \sum x_i r_i^2 \\ \sum y_i & \sum x_i y_i & \sum y_i^2 & \sum y_i z_i & \sum y_i r_i^2 \\ \sum z_i & \sum x_i z_i & \sum y_i z_i & \sum z_i^2 & \sum z_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum z_i r_i^2 & \sum r_i^4 \end{bmatrix}    (40)

The error function is a quadratic form that is minimized when C is a unit-length eigenvector corresponding to the minimum eigenvalue of M. An eigensolver can be used to compute the eigenvector.

The resulting equation appears not to be invariant to translations of the input points. You should instead consider modifying the inputs by computing the averages \bar{x} = \frac{1}{m} \sum x_i, \bar{y} = \frac{1}{m} \sum y_i, and \bar{z} = \frac{1}{m} \sum z_i and then replacing x_i \leftarrow x_i - \bar{x}, y_i \leftarrow y_i - \bar{y}, and z_i \leftarrow z_i - \bar{z}. The r_i^2 are then computed from the adjusted x_i, y_i, and z_i values. The matrix becomes

\hat{M} = \begin{bmatrix} m & 0 & 0 & 0 & \sum r_i^2 \\ 0 & \sum x_i^2 & \sum x_i y_i & \sum x_i z_i & \sum x_i r_i^2 \\ 0 & \sum x_i y_i & \sum y_i^2 & \sum y_i z_i & \sum y_i r_i^2 \\ 0 & \sum x_i z_i & \sum y_i z_i & \sum z_i^2 & \sum z_i r_i^2 \\ \sum r_i^2 & \sum x_i r_i^2 & \sum y_i r_i^2 & \sum z_i r_i^2 & \sum r_i^4 \end{bmatrix}    (41)

The eigenvector C is computed. The sphere center in the original coordinate system is (\bar{x}, \bar{y}, \bar{z}) - (c_1, c_2, c_3)/(2 c_4) and the sphere radius is r = \sqrt{c_1^2 + c_2^2 + c_3^2 - 4 c_0 c_4} / (2 |c_4|). An implementation is the template class ApprQuadraticSphere3 in GteApprQuadratic3.h.
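A numpy sketch (fit_quadratic_sphere is a hypothetical name; the C++ version is the template class ApprQuadraticSphere3):

import numpy as np

def fit_quadratic_sphere(points):
    # Estimate C = (c0, ..., c4) of equation (38) from mean-centered data,
    # using the minimum-eigenvalue eigenvector of M-hat in (41).
    X = np.asarray(points, dtype=float)
    mean = X.mean(axis=0)
    U = X - mean
    r2 = (U * U).sum(axis=1)
    V = np.column_stack([np.ones(len(U)), U[:, 0], U[:, 1], U[:, 2], r2])
    M = V.T @ V                          # M-hat = sum_i V_i V_i^T
    evals, evecs = np.linalg.eigh(M)
    c = evecs[:, 0]                      # min-eigenvalue eigenvector
    center = mean - c[1:4] / (2.0 * c[4])
    radius = np.sqrt(c[1]**2 + c[2]**2 + c[3]**2 - 4.0 * c[0] * c[4]) / (2.0 * abs(c[4]))
    return center, radius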

8 Fitting an Ellipse to 2D Points

Given a set of points {X_i}_{i=1}^{m}, m \ge 3, fit them with an ellipse (X - U)^T R^T D R (X - U) = 1, where U is the ellipse center, R is an orthonormal matrix representing the ellipse orientation, and D is a diagonal matrix whose diagonal entries are the reciprocals of the squares of the half-lengths of the axes of the ellipse. An axis-aligned ellipse with center at the origin has equation (x/a)^2 + (y/b)^2 = 1. In this setting, U = (0, 0), R = I (the identity matrix), and D = diag(1/a^2, 1/b^2). The error function to be minimized is

E(U, R, D) = \sum_{i=1}^{m} L_i^2    (42)

where L_i is the distance from X_i to the ellipse with the given parameters.

This problem is more difficult than that of fitting circles. The distance L_i is computed according to the algorithm described in Distance from a Point to an Ellipse, an Ellipsoid, or a Hyperellipsoid. The function E is minimized iteratively using Powell's direction-set method to search for a minimum. An implementation is GteApprEllipse2.h.

9 Fitting an Ellipsoid to 3D Points

Given a set of points {X_i}_{i=1}^{m}, m \ge 3, fit them with an ellipsoid (X - U)^T R^T D R (X - U) = 1, where U is the ellipsoid center and R is an orthonormal matrix representing the ellipsoid orientation. The matrix D is a diagonal matrix whose diagonal entries are the reciprocals of the squares of the half-lengths of the axes of the ellipsoid. An axis-aligned ellipsoid with center at the origin has equation (x/a)^2 + (y/b)^2 + (z/c)^2 = 1. In this setting, U = (0, 0, 0), R = I (the identity matrix), and D = diag(1/a^2, 1/b^2, 1/c^2). The error function to be minimized is

E(U, R, D) = \sum_{i=1}^{m} L_i^2    (43)

where L_i is the distance from X_i to the ellipsoid with the given parameters.

This problem is more difficult than that of fitting spheres. The distance L_i is computed according to the algorithm described in Distance from a Point to an Ellipse, an Ellipsoid, or a Hyperellipsoid. The function E is minimized iteratively using Powell's direction-set method to search for a minimum. An implementation is GteApprEllipsoid3.h.
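The following sketch conveys the structure of the iterative ellipse fit but is not the GteApprEllipse2.h algorithm: it approximates the point-to-ellipse distance by sampling the ellipse rather than computing it exactly, and it relies on scipy's Powell minimizer. The function name fit_ellipse, the parameterization, and the initial guess are all assumptions of the sketch.

import numpy as np
from scipy.optimize import minimize

def fit_ellipse(points, t_samples=256):
    # Parameters: center (u0, u1), rotation angle, half-axes (a, b).
    X = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 2.0 * np.pi, t_samples, endpoint=False)

    def energy(params):
        u0, u1, angle, a, b = params
        ca, sa = np.cos(angle), np.sin(angle)
        # sampled ellipse points, shape (t_samples, 2)
        E = np.column_stack([u0 + a * np.cos(t) * ca - b * np.sin(t) * sa,
                             u1 + a * np.cos(t) * sa + b * np.sin(t) * ca])
        # approximate squared distance: nearest sampled ellipse point
        d2 = ((X[:, None, :] - E[None, :, :])**2).sum(axis=2).min(axis=1)
        return d2.sum()

    # crude initial guess: axis-aligned ellipse sized by the standard deviations
    u = X.mean(axis=0)
    s = X.std(axis=0)
    x0 = np.array([u[0], u[1], 0.0, 2.0 * s[0], 2.0 * s[1]])
    return minimize(energy, x0, method="Powell").x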

10 Fitting a Paraboloid to 3D Points of the Form (x, y, f(x, y))

Given a set of samples {(x_i, y_i, z_i)}_{i=1}^{m} and assuming that the true values lie on a paraboloid

z = f(x, y) = p_1 x^2 + p_2 x y + p_3 y^2 + p_4 x + p_5 y + p_6 = P \cdot Q(x, y)    (44)

where P = (p_1, p_2, p_3, p_4, p_5, p_6) and Q(x, y) = (x^2, x y, y^2, x, y, 1), select P to minimize the sum of squared errors

E(P) = \sum_{i=1}^{m} (P \cdot Q_i - z_i)^2    (45)

where Q_i = Q(x_i, y_i). The minimum occurs when the gradient of E is the zero vector,

\nabla E = 2 \sum_{i=1}^{m} (P \cdot Q_i - z_i) Q_i = 0    (46)

Some algebra converts this to a system of 6 equations in 6 unknowns:

\left( \sum_{i=1}^{m} Q_i Q_i^T \right) P = \sum_{i=1}^{m} z_i Q_i    (47)

The product Q_i Q_i^T is a product of the 6 \times 1 matrix Q_i with the 1 \times 6 matrix Q_i^T, the result being a 6 \times 6 matrix. Define the 6 \times 6 symmetric matrix A = \sum_{i=1}^{m} Q_i Q_i^T and the 6 \times 1 vector B = \sum_{i=1}^{m} z_i Q_i. The choice for P is the solution to the linear system of equations A P = B. The entries of A and B indicate summations over the appropriate product of variables. For example, s(x^3 y) = \sum_{i=1}^{m} x_i^3 y_i:

\begin{bmatrix} s(x^4) & s(x^3 y) & s(x^2 y^2) & s(x^3) & s(x^2 y) & s(x^2) \\ s(x^3 y) & s(x^2 y^2) & s(x y^3) & s(x^2 y) & s(x y^2) & s(x y) \\ s(x^2 y^2) & s(x y^3) & s(y^4) & s(x y^2) & s(y^3) & s(y^2) \\ s(x^3) & s(x^2 y) & s(x y^2) & s(x^2) & s(x y) & s(x) \\ s(x^2 y) & s(x y^2) & s(y^3) & s(x y) & s(y^2) & s(y) \\ s(x^2) & s(x y) & s(y^2) & s(x) & s(y) & s(1) \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{bmatrix} = \begin{bmatrix} s(z x^2) \\ s(z x y) \\ s(z y^2) \\ s(z x) \\ s(z y) \\ s(z) \end{bmatrix}    (48)

An implementation is GteApprParaboloid3.h.
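A numpy sketch of the normal-equation solve (fit_paraboloid is a hypothetical name; the C++ implementation is GteApprParaboloid3.h):

import numpy as np

def fit_paraboloid(x, y, z):
    # Fit z = p1*x^2 + p2*x*y + p3*y^2 + p4*x + p5*y + p6 by building the
    # 6x6 system A P = B of equation (47); the entries of A are the s(.)
    # sums shown in equation (48).
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    Q = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])  # rows Q_i
    A = Q.T @ Q        # sum_i Q_i Q_i^T
    B = Q.T @ z        # sum_i z_i Q_i
    return np.linalg.solve(A, B)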
