Least-Squares Fitting of Model Parameters to Experimental Data


Least-Squares Fitting of Model Parameters to Experimental Data. Div. of Mathematical Sciences, Dept. of Engineering Sciences and Mathematics, LTU, room E193.

Outline of talk: science and scientific computing; linear least-squares; nonlinear least-squares; separable least-squares (the variable projection method); rigid body movements.

Scientific Computing. Goal: computational tools for obtaining accurate solutions in a short time.

Least-squares fitting. Given: data points $(t_i, y_i)$, $i = 1, \dots, m$, and a model function $\varphi_x(t)$ that depends on some unknown parameters $x \in \mathbb{R}^n$, with $m > n$. Task: find the parameters $x$ by solving $\min_x \sum_{i=1}^m (\varphi_x(t_i) - y_i)^2$.

[Figure: least-squares fit $\varphi_x(t) = x_1 t + x_2 \sin(t)$ to data points $(t_i, y_i)$ for $t \in [0, 20]$, with the residuals $\varphi_x(t_i) - y_i$ indicated.]

Least-squares fitting, cont. Nice statistical properties when $y_i = \varphi_x(t_i) + e_i$, where the errors $e_i$ come from the exponential family. Implies easy computations compared to other error measures. Sensitive to outliers. Many variants exist: errors in variables, total least squares, weighted least squares, constrained least squares, regularized least squares, ...

Formulated as a constrained problem: $\min_{x, \tilde{y}} \sum_{i=1}^m (\tilde{y}_i - y_i)^2$ subject to $\varphi_x(t_i) - \tilde{y}_i = 0$, $i = 1, \dots, m$.

Linear least-squares. $\varphi_x(t)$ depends linearly on $x$ (ex: $\varphi_x(t) = x_1 t + x_2 \sin t$). We solve $\min_x \|Ax - y\|_2^2$ (ex: $A = \begin{pmatrix} t_1 & \sin(t_1) \\ \vdots & \vdots \\ t_m & \sin(t_m) \end{pmatrix}$). The solution $\hat{x} = (A^T A)^{-1} A^T y = A^\dagger y$ satisfies $A\hat{x} = Py$ and $\|A\hat{x} - y\|_2 = \|r\|_2 = \|P^\perp y\|_2$, where $P$ and $P^\perp$ are the matrices that project onto $\mathcal{R}(A)$ and onto $\mathcal{R}(A)^\perp$, respectively.

Projections. Let $A = QR$, with $Q \in \mathbb{R}^{m \times n}$, $Q^T Q = I$, $R \in \mathbb{R}^{n \times n}$ (QR decomposition). Then $A\hat{x} = Py \Leftrightarrow QR\hat{x} = QQ^T y \Leftrightarrow R\hat{x} = Q^T y$. Numerically stable computations! (e.g., x = A\y in Matlab)

Linear least-squares in Matlab. How to solve $\min_x \|Ax - y\|_2$ in Matlab?
1. x = A\y (QR factorization of A)
2. x = pinv(A)*y (SVD of A)
3. x = (A'*A)\(A'*y) (Cholesky factorization of $A^T A$)
4. x = inv(A'*A)*(A'*y) (Avoid!!)
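
To make the options concrete, a minimal Matlab sketch using synthetic data for the model $\varphi_x(t) = x_1 t + x_2 \sin(t)$ from the figure above (the data and noise level are assumptions of the sketch, not from the talk):

    m = 50;
    t = linspace(0, 20, m)';                 % sample points
    x_true = [-1.5; 3];                      % parameters used to generate test data
    y = x_true(1)*t + x_true(2)*sin(t) + 0.5*randn(m, 1);   % noisy observations
    A = [t, sin(t)];                         % m-by-2 design matrix

    x_qr   = A \ y;                          % 1: via QR factorization (preferred)
    x_svd  = pinv(A) * y;                    % 2: via the SVD
    x_chol = (A'*A) \ (A'*y);                % 3: normal equations (squares the conditioning)

    r = y - A*x_qr;                          % residual, orthogonal to R(A)
    disp(norm(A'*r))                         % close to zero, as the theory predicts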

Nonlinear least-squares. $\varphi_x(t)$ depends nonlinearly on $x$ (e.g., $\varphi_x(t) = e^{x_1 t} + \sin(x_2 t)$). Define $f_i(x) = \varphi_x(t_i) - y_i$. We minimize $F(x) = \frac{1}{2} \sum_{i=1}^m f_i^2(x) = \frac{1}{2}\|f(x)\|_2^2$ using iterative methods, e.g., the Gauss-Newton method.

Newton's method. Newton's method for solving $\min_x F(x)$, i.e., $\nabla_x F(x) = 0$:

k = 0, guess $x^{(0)}$
Repeat:
  compute the Hessian matrix $H = \nabla^2 F(x^{(k)}) \in \mathbb{R}^{n \times n}$
  compute the gradient $g = \nabla F(x^{(k)}) \in \mathbb{R}^n$
  solve the linear system of equations $Hp = -g$
  compute a step length $\beta$
  $x^{(k+1)} = x^{(k)} + \beta p$
  k = k + 1
Until convergence.

Based on $F(x^{(k)} + p) \approx F(x^{(k)}) + p^T g + \frac{1}{2} p^T H p$.
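
A minimal Matlab sketch of this loop for caller-supplied gradient and Hessian handles; the fixed full step $\beta = 1$ is a simplification of the sketch (a practical code would choose $\beta$ by a line search):

    function x = newton_min(gradF, hessF, x0, tol, maxit)
    % Newton's method for min F(x), given handles for gradient and Hessian.
    x = x0;
    for k = 1:maxit
        g = gradF(x);                     % gradient at the current iterate
        if norm(g) < tol, break; end      % stationary point reached
        H = hessF(x);                     % Hessian at the current iterate
        p = -H \ g;                       % Newton direction: solve H*p = -g
        x = x + p;                        % full step, beta = 1
    end
    end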

[Figure: geometry of Newton's method: level curves of $F(x)$ together with level curves of the quadratic model $F(x^{(k)}) + p^T g + \frac{1}{2} p^T H p$.]

Gauss-Newton method. Here $F(x) = \frac{1}{2} \sum_{i=1}^m f_i^2(x) = \frac{1}{2}\|f(x)\|_2^2$, so $H = J^T J + \sum_i f_i \nabla_x^2 f_i$ and $g = J^T f \in \mathbb{R}^n$, where $J = \nabla f(x^{(k)}) \in \mathbb{R}^{m \times n}$ is the Jacobian matrix. Hence $p_{New} = -H^{-1} g = -(J^T J + \sum_i f_i \nabla_x^2 f_i)^{-1} J^T f$, while Gauss-Newton drops the second-order term: $p_{GN} = -(J^T J)^{-1} J^T f$. Note: $p_{GN}$ solves $\min_p \|f(x^{(k)}) + Jp\|_2$ (e.g., p = -J\f in Matlab).

Gauss-Newton method. Gauss-Newton's method for solving $\min_x \|f(x)\|_2^2$:

k = 0, guess $x^{(0)}$
Repeat:
  compute $J = \nabla f(x^{(k)}) \in \mathbb{R}^{m \times n}$
  compute $f = f(x^{(k)}) \in \mathbb{R}^m$
  solve the linear least squares problem $\min_p \|f(x^{(k)}) + Jp\|_2$
  compute a step length $\beta$
  $x^{(k+1)} = x^{(k)} + \beta p$
  k = k + 1
Until convergence.

Based on $\|f(x^{(k)} + p)\|_2 \approx \|f(x^{(k)}) + Jp\|_2$, i.e., $F(x^{(k)} + p) \approx F(x^{(k)}) + p^T J^T f + \frac{1}{2} p^T J^T J p$.
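
For concreteness, a minimal Matlab sketch of Gauss-Newton for the example model $\varphi_x(t) = e^{x_1 t} + \sin(x_2 t)$, with an analytic Jacobian; the data t, y, the start x0, and the full step $\beta = 1$ (no line search) are assumptions of the sketch:

    function x = gauss_newton(t, y, x0, tol, maxit)
    % Gauss-Newton for f_i(x) = exp(x1*t_i) + sin(x2*t_i) - y_i.
    x = x0;
    for k = 1:maxit
        f = exp(x(1)*t) + sin(x(2)*t) - y;        % residual vector (m-by-1)
        J = [t.*exp(x(1)*t), t.*cos(x(2)*t)];     % Jacobian (m-by-2)
        p = -J \ f;                               % GN step solves min_p ||f + J*p||_2
        x = x + p;                                % full step, beta = 1
        if norm(p) < tol*(1 + norm(x)), break; end
    end
    end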

Properties of the Gauss-Newton method. Only first-order derivatives are needed. $p_{GN}$ is a descent direction if $J$ has full rank. The asymptotic rate of convergence is linear (quadratic if $f(\hat{x}) = 0$): $\lim_{k \to \infty} \frac{\|x^{(k+1)} - \hat{x}\|}{\|x^{(k)} - \hat{x}\|} = \frac{\|f(\hat{x})\|}{\rho}$, where $\rho$ is the radius of curvature. Converges to local minima. Global properties?

Difficulties. How to find the initial guess $x^{(0)}$? How to deal with ill-conditioned Jacobians? Use $p_{LM} = -(J^T J + \lambda I)^{-1} J^T f$ (Levenberg-Marquardt), or solve $\min_x (1 - \lambda)\|f(x)\|_2^2 + \lambda \cdot (\text{regularization term})$. How to deal with a slow rate of convergence? Use higher-order derivatives (Newton's method).
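
As a sketch, the Levenberg-Marquardt step can replace the Gauss-Newton solve inside the iteration above; the damping value below is an illustrative assumption (in practice $\lambda$ is adapted: decreased after successful steps, increased after failed ones):

    % One Levenberg-Marquardt step, given residual f and Jacobian J at x:
    lambda = 1e-3;                            % damping parameter (assumed value)
    n = size(J, 2);
    p = -(J'*J + lambda*eye(n)) \ (J'*f);     % damped Gauss-Newton step
    x = x + p;
    % lambda -> 0 recovers p_GN; a large lambda gives a short step
    % in (approximately) the steepest-descent direction.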

Separable least-squares. Example: $\varphi_x(t) = x_1 e^{x_3 t} + x_2 e^{x_4 t} = a_1 e^{b_1 t} + a_2 e^{b_2 t}$. $\varphi$ depends linearly on $a_1, \dots, a_{n_1}$ and nonlinearly on $b_1, \dots, b_{n_2}$. The least-squares problem is $\min_{a,b} \frac{1}{2}\|y - A(b)a\|_2^2$, where in the example $A(b) = \begin{pmatrix} e^{b_1 t_1} & e^{b_2 t_1} \\ \vdots & \vdots \\ e^{b_1 t_m} & e^{b_2 t_m} \end{pmatrix}$.

The variable projection method. For a given vector $b$, the optimal $\hat{a}$ satisfies $\|y - A(b)\hat{a}\|_2 = \|P^\perp(b)y\|_2$, where $P^\perp(b)$ projects onto $\mathcal{R}(A(b))^\perp$. The original problem is thus transformed into the nonlinear projected problem $\min_b \frac{1}{2}\|P^\perp(b)y\|_2^2$.

The variable projection method, cont. $\min_b \frac{1}{2}\|P^\perp(b)y\|_2^2$. The dimension of the problem is $n_2$ instead of $n_1 + n_2$. Can be solved with standard methods (Gauss-Newton, Newton). How to obtain derivatives? The asymptotic convergence of Gauss-Newton is almost the same as when applying Gauss-Newton to the original problem (Ruhe, Wedin, 1980). Global properties?

How to find derivatives. Notation: $C_i = \frac{\partial C(b)}{\partial b_i}$, $C_{ij} = \frac{\partial^2 C(b)}{\partial b_i \partial b_j}$. First derivatives of the projector: $P_i = P^\perp A_i A^\dagger + (P^\perp A_i A^\dagger)^T$, $i = 1, \dots, n_2$ (Golub, Pereyra, 1973); the last term can be ignored. Second derivatives: $P_{ij} = B_{ij} + B_{ij}^T$, $i, j = 1, \dots, n_2$, where $B_{ij} = P^\perp_j A_i A^\dagger + P^\perp A_{ij} A^\dagger + P^\perp A_i A^\dagger ((A_j A^\dagger)^T - P^\perp A_j A^\dagger)$ (Borges, 2009).

The Jacobian matrix $J$ of $r = P^\perp(b)y$. Use $P^\perp_i \approx P^\perp A_i A^\dagger$ (dropping the transposed term), let $Q \in \mathbb{R}^{m \times n_1}$ be an orthogonal basis for $\mathcal{R}(A(b))$, and let $\hat{a} = A^\dagger y$. Then $J(:, i) = P^\perp_i y \approx P^\perp A_i A^\dagger y = (I - QQ^T) A_i \hat{a}$.
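
For concreteness, a minimal Matlab sketch of variable projection on the two-exponential example: for each b the linear part is eliminated by a least-squares solve, and the projected functional is minimized over b alone. Using the derivative-free fminsearch instead of the Gauss-Newton/Jacobian machinery above is a simplification of the sketch; t, y, and b0 are assumed inputs:

    % Variable projection for phi(t) = a1*exp(b1*t) + a2*exp(b2*t).
    Afun = @(b) [exp(b(1)*t), exp(b(2)*t)];                 % A(b), m-by-2
    proj = @(b) 0.5 * norm(y - Afun(b)*(Afun(b) \ y))^2;    % 0.5*||Pperp(b)*y||^2

    b = fminsearch(proj, b0);     % minimize over b only (dimension n2)
    a = Afun(b) \ y;              % recover the linear parameters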

Examples of separable nonlinear least-squares problems: sums of exponentials, NURBS, rigid body movements.

Sums of exponentials. $\varphi_{a,b}(t) = \sum_{i=0}^N a_i e^{b_i t}$. The literature reports global benefits from using the variable projection method (see, e.g., Golub, Pereyra, 2003, and references therein).

NURBS curves. $c(t) = \frac{\sum_{i=0}^n B_i(t) w_i p_i}{\sum_{i=0}^n B_i(t) w_i}$, i.e., of the form $\varphi_{a,b}(t) = \frac{\sum_{i=0}^n B_i(t) a_i}{\sum_{i=0}^n B_i(t) b_i}$. The variable projection method is most often slower than using the unseparated formulation (see Bergström, Söderkvist, 2012).

Rigid body movements. $\min_{R,d} \sum_{i=1}^m \|Rp_i + d - q_i\|_2^2$ subject to $R^T R = I$, $\det(R) = 1$. A separable least-squares problem with special structure: no standard iteration is needed. Applications in orthopedics, robotics, computer vision, reverse engineering, automatic shape verification, etc. See "Determining the movements of the skeleton using well-configured markers" by I. Söderkvist and P.-Å. Wedin, Journal of Biomechanics, 1993 (over 600 citations).

Rigid body movements, cont. The optimal translation is $\hat{d} = \bar{q} - R\bar{p}$, and $R$ solves the orthogonal Procrustes problem $\min_{R \in \Omega} \|RA - B\|_F$, where $A = (p_1 - \bar{p}, \dots, p_m - \bar{p})$ and $B = (q_1 - \bar{q}, \dots, q_m - \bar{q})$. Solved using the SVD $U\Sigma V^T = BA^T \in \mathbb{R}^{3 \times 3}$, giving $R = UV^T$ (and $R = U\,\mathrm{diag}(1, 1, -1)\,V^T$ if $\det(UV^T) = -1$). An efficient and stable computational solution procedure. Needs the coordinates of an ordered set of landmarks.
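
A minimal Matlab sketch of this SVD-based procedure, assuming P and Q are 3-by-m matrices whose columns are the corresponding landmark coordinates (implicit expansion, R2016b+, is used for the centering):

    function [R, d] = rigid_fit(P, Q)
    % Least-squares rigid body fit: R*P(:,i) + d ~ Q(:,i).
    pbar = mean(P, 2);  qbar = mean(Q, 2);   % landmark centroids
    A = P - pbar;  B = Q - qbar;             % centered coordinate matrices
    [U, ~, V] = svd(B * A');                 % SVD of the 3-by-3 matrix B*A'
    R = U * diag([1, 1, det(U*V')]) * V';    % det factor enforces det(R) = +1
    d = qbar - R * pbar;                     % optimal translation
    end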

Procrustes. PROCRUSTES, also called POLYPEMON or DAMASTES, in Greek legend, a robber dwelling in the neighbourhood of Eleusis, who was slain by Theseus (q.v.). He had an iron bed (or, according to some accounts, two beds) on which he compelled his victims to lie, stretching or cutting off their legs to make them fit the bed's length. The bed of Procrustes has become proverbial for inflexibility. (Encyclopædia Britannica, Vol. 18, 1964)

Condition numbers. Assume that $B = RA$, with $A, B \in \mathbb{R}^{3 \times n}$ and $R \in \Omega$. $A$ and $B$ have the singular values $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge 0$, where $\sigma_2 > 0$. Let the orthogonal matrix $R + \Delta R$ be the solution to the perturbed problem $\min_{(R + \Delta R) \in \Omega} \|(R + \Delta R)(A + \Delta A) - (B + \Delta B)\|$.

Condition numbers, cont. A first-order bound on $\Delta R$ is given by $\lim_{\varepsilon_A \to 0,\, \varepsilon_B \to 0}\ \sup_{\|\Delta A\| \le \varepsilon_A,\, \|\Delta B\| \le \varepsilon_B} \frac{\|\Delta R\|}{\|\Delta A\| + \|\Delta B\|} = \frac{2}{(\sigma_2^2 + \sigma_3^2)^{1/2}}$. The sum of squared distances of the landmarks to the closest straight line equals $\sigma_2^2 + \sigma_3^2$.
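
A short Matlab check of this conditioning measure, assuming P is a 3-by-m matrix of landmark coordinates (an illustrative helper, not from the talk):

    s = svd(P - mean(P, 2));              % singular values of the centered landmarks
    kappa = 2 / sqrt(s(2)^2 + s(3)^2);    % first-order bound from the slide
    % kappa blows up when the landmarks are nearly collinear (s(2), s(3) -> 0).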

Conclusions. Computational methods for solving least-squares problems are useful in modelling. Numerical optimization and linear algebra provide important basic tools. The structure of the problem can often be utilized to gain efficiency and robustness.

Thanks for your attention!