Introduction to nonlinear LS estimation

R. I. Hartley and A. Zisserman: Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd ed., 2004. After Chapter 5 and Appendix 6. We will use $x'$ instead of $y$, and $x$ as the measured coordinates, like in the book.

$n_1 = n$: nonrobust estimation.

The $n$ measurement vectors taken together give the vector $X \in \mathbb{R}^N$. The unknown parameter vector is $P \in \mathbb{R}^M$, with $M \leq N$. The model is
$$X = f(P) + \epsilon$$
and $\epsilon$ has to be minimized.

The estimated measurements $\hat{X}$ lie on a manifold $S_M$ inside $\mathbb{R}^N$. The estimated parameters $\hat{P} \in \mathbb{R}^M$ satisfy the model $f(\hat{P}) = \hat{X}$, with $f: \mathbb{R}^M \to \mathbb{R}^N$. Written implicitly, the model is a constraint $h(P, X) = 0$. When $P, X$ are equal to $P_o, X_o$, the true values, the constraint holds exactly. The estimation is equivalent to going from $h(P, X) \neq 0$, for the noisy measurements, to $h(\hat{P}, \hat{X}) = 0$.

The mapping $f(\hat{P})$ is not one-to-one; this ambiguity, due to the uncertainties, has to be eliminated. In the parameter space the mapping has only rank $d < M$, where $d$ is the number of essential parameters. The vector $(X - \hat{X})$ has nonzero elements only in the $(N - d)$-dimensional space normal to the tangent space of $S_M$. The vector $(X_o - \hat{X})$, lying in the tangent space, has nonzero elements only in $d$ dimensions; this is the estimation error. The covariance along these $d$ directions cannot be smaller than the initial covariance of the measurements $X$ along the same directions.

Homography between two 2D images. The homogeneous coordinates are $x_i = [x_i \; y_i \; 1]^\top$ and $x'_i = [x'_{hi} \; y'_{hi} \; w'_{hi}]^\top$. The $3 \times 3$ matrix $H$ has to be found such that
$$\hat{x}'_i = \hat{H}\hat{x}_i, \qquad i = 1, \ldots, n.$$
The measurement space is $\mathbb{R}^N = \mathbb{R}^{4n}$. The parameter space is $\mathbb{R}^M = \mathbb{R}^9$. The constraint $h^\top h = 1$ eliminates the ninth parameter, so the number of essential parameters is eight. In the parameter space, $\hat{P}$ lies on an eight-dimensional manifold, the unit sphere in $\mathbb{R}^9$. The null space is perpendicular to the unit sphere and changes for every $\hat{P}$. The 2D analogue is a circle and a line: $d = 1 < M = 2$.
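A minimal numerical sketch of this parameterization (my own illustration, not from the notes): the nine entries of $H$ are kept on the unit sphere in $\mathbb{R}^9$, $h^\top h = 1$, and points are mapped through the homography in homogeneous coordinates. The function names and the toy data are assumptions.

```python
# Sketch: unit-norm parameterization of a homography and point transfer.
import numpy as np

def normalize_h(h):
    """Project a 9-vector onto the unit sphere in R^9 (the parameter manifold)."""
    return h / np.linalg.norm(h)

def map_points(h, x):
    """Apply H (from the unit-norm 9-vector h) to inhomogeneous 2D points x (n x 2)."""
    H = h.reshape(3, 3)
    xh = np.hstack([x, np.ones((x.shape[0], 1))])   # homogeneous coordinates [x y 1]
    xp = xh @ H.T                                   # [x'_h y'_h w'_h]
    return xp[:, :2] / xp[:, 2:3]                   # back to inhomogeneous coordinates

# toy usage: a slight projective distortion applied to four points
h = normalize_h(np.array([1.0, 0.02, 5.0, -0.01, 1.0, -3.0, 1e-4, 2e-4, 1.0]))
x = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
print(map_points(h, x))
```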

General view. Covariance $\Sigma_X$. Taking into account also the estimated measurements, the two vectors in $P$ are
- $a$, the parameters, of dimension $M$;
- $b$, the estimated measurements, of dimension $N$;
$$P = \begin{pmatrix} a \\ b \end{pmatrix}.$$
Be aware that $P$ became parameters + estimated measurements. Do not confuse it with the previous notation $f(P)$. At each iteration $\hat{X}$ satisfies the model $f(\hat{P})$.

The Jacobian has block structure
$$J = \frac{\partial \hat{X}}{\partial P} = [A \;\; B], \qquad A = \frac{\partial \hat{X}}{\partial a}, \qquad B = \frac{\partial \hat{X}}{\partial b},$$
- $A$ is an $N \times M$ matrix;
- $B$ is an $N \times N$ matrix.

The quantity to be minimized is $\|X - f(P)\|^2_{\Sigma_X}$. The matrix below will be needed:
$$J^\top \Sigma_X^{-1} J = \begin{bmatrix} A^\top \Sigma_X^{-1} A & A^\top \Sigma_X^{-1} B \\ B^\top \Sigma_X^{-1} A & B^\top \Sigma_X^{-1} B \end{bmatrix},$$
of size [parameter + measurement] $\times$ [parameter + measurement], i.e. $(M+N) \times (M+N)$.
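A small sketch (my own, with arbitrary shapes and identity covariance) verifying that assembling $J = [A \; B]$ and forming $J^\top \Sigma_X^{-1} J$ reproduces the four blocks written above; the numpy variable names are assumptions.

```python
# Block structure of J^T Sigma_X^{-1} J when J = [A B],
# A = d(Xhat)/da (N x M) and B = d(Xhat)/db (N x N). Shapes are illustrative.
import numpy as np

N, M = 12, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((N, M))     # derivative w.r.t. the parameters a
B = rng.standard_normal((N, N))     # derivative w.r.t. the estimated measurements b
Sigma_X = np.eye(N)                 # measurement covariance (identity for the sketch)
W = np.linalg.inv(Sigma_X)          # weight matrix Sigma_X^{-1}

J = np.hstack([A, B])               # N x (M + N)
JtWJ = J.T @ W @ J                  # (M + N) x (M + N) normal matrix

# the four blocks named in the text
blocks = np.block([[A.T @ W @ A, A.T @ W @ B],
                   [B.T @ W @ A, B.T @ W @ B]])
print(np.allclose(JtWJ, blocks))    # True: same matrix, assembled block-wise
```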

Nonlinear least squares estimators

Always an iterative solution. The approximation depends on the method.

Gauss-Newton method

The objective function is taken as locally quasi-quadratic in the parameters.

The first iteration. Assume that $\hat{P}_0$ was solved with the algebraic distance, $X = f(\hat{P}_0) + \epsilon_0$, with $\epsilon_0$ being the residual, and $\hat{X}_0 = f(\hat{P}_0)$ was found by projection onto $S_M$. The first-order expansion for the next iteration is
$$f(P_1) \approx f(\hat{P}_0) + J_0 (P_1 - \hat{P}_0), \qquad \delta_1 = P_1 - \hat{P}_0,$$
with the Jacobian $J = \partial f / \partial P$ evaluated at $\hat{P}_0$. Then
$$X - f(P_1) \approx X - f(\hat{P}_0) - J_0 \delta_1 = \epsilon_0 - J_0 \delta_1.$$
In $\|\epsilon_0 - J_0 \delta_1\|^2_{\Sigma_X}$, set the gradient with respect to $\delta_1$ equal to zero:
$$J_0^\top \Sigma_X^{-1} J_0 \, \delta_1 = J_0^\top \Sigma_X^{-1} \epsilon_0,$$
from where $\hat{P}_1 = \hat{P}_0 + \delta_1$.

The $(t+1)$-th iteration is executed in a similar way, with $0$ replaced by $t$ and $1$ by $(t+1)$. The minimization of the scalar $e^2_{res}$,
$$e^2_{res} = \|X - f(P)\|^2_{\Sigma_X} \approx \|\epsilon_t - J_t \delta_{t+1}\|^2_{\Sigma_X} = (\epsilon_t - J_t \delta_{t+1})^\top \Sigma_X^{-1} (\epsilon_t - J_t \delta_{t+1}) = \epsilon_t^\top \Sigma_X^{-1} \epsilon_t - 2\epsilon_t^\top \Sigma_X^{-1} J_t \delta_{t+1} + \delta_{t+1}^\top J_t^\top \Sigma_X^{-1} J_t \delta_{t+1},$$
has the Hessian matrix approximately equal to $J_t^\top \Sigma_X^{-1} J_t$, a symmetric, positive semidefinite matrix. Setting the gradient with respect to $\delta_{t+1}$ equal to zero,
$$J_t^\top \Sigma_X^{-1} J_t \, \delta_{t+1} = J_t^\top \Sigma_X^{-1} \epsilon_t,$$
gives $\delta_{t+1}$ and, for the next iteration,
$$\hat{P}_{t+1} = \hat{P}_t + \delta_{t+1}.$$
The approach is called the Gauss-Newton method. It has an iterative solution and in general converges to a local minimum. It is strongly dependent on the initial estimate of the parameters.
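A minimal Gauss-Newton sketch built directly on these normal equations, not taken from the book: the Jacobian is approximated by forward differences, and the model, data and starting point are toy placeholders.

```python
# Gauss-Newton sketch for X = f(P) + eps with weight Sigma_X^{-1}.
import numpy as np

def gauss_newton(f, X, P0, Sigma_X, n_iter=20, h=1e-6):
    W = np.linalg.inv(Sigma_X)
    P = P0.astype(float).copy()
    for _ in range(n_iter):
        eps = X - f(P)                                 # residual eps_t
        J = np.empty((X.size, P.size))
        for k in range(P.size):                        # forward-difference Jacobian J_t
            dP = np.zeros_like(P); dP[k] = h
            J[:, k] = (f(P + dP) - f(P)) / h
        # normal equations: J^T W J delta = J^T W eps
        delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ eps)
        P = P + delta                                  # P_{t+1} = P_t + delta_{t+1}
        if np.linalg.norm(delta) < 1e-10:
            break
    return P

# toy usage: fit y = p0 * exp(p1 * t) to noisy data
t = np.linspace(0.0, 1.0, 10)
f = lambda P: P[0] * np.exp(P[1] * t)
X = f(np.array([2.0, -1.5])) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(gauss_newton(f, X, np.array([1.0, -1.0]), np.eye(t.size)))
```

With a reasonable starting point the iteration settles near the true parameters in a few steps; with a poor one it can diverge, consistent with the dependence on the initial estimate noted above.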

Gradient descent method
$$e^2_{res} = \|X - f(P)\|^2_{\Sigma_X} = \epsilon^\top \Sigma_X^{-1} \epsilon, \qquad \epsilon = X - f(P).$$
The steepest descent method updates the parameters in the downhill direction, using the negative of the gradient of the objective function. The gradient is
$$\frac{\partial e^2_{res}}{\partial \hat{P}_t} = -2 \left[ \frac{\partial f(\hat{P}_t)}{\partial \hat{P}_t} \right]^\top \Sigma_X^{-1} \epsilon_t = -2 J_t^\top \Sigma_X^{-1} \epsilon_t.$$
The length of the step $\gamma_t$ is found by line search, so that $\|X - f(P_{t+1})\|^2_{\Sigma_X}$ is quasi-minimum for iteration $(t+1)$:
$$\delta_{t+1} = \gamma_t \, J_t^\top \Sigma_X^{-1} \epsilon_t, \qquad \hat{P}_{t+1} = \hat{P}_t + \delta_{t+1}.$$
Similar methods, like conjugate gradient, also exist. The initial estimate is very important. Convergence to a local minimum is slow.
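A corresponding steepest-descent sketch, again on a toy model: the step direction is $J_t^\top \Sigma_X^{-1} \epsilon_t$ and $\gamma_t$ comes from a simple backtracking line search. The backtracking rule and the analytic Jacobian are my choices, not specified in the notes.

```python
# Steepest descent on the weighted cost e2 = (X - f(P))^T Sigma_X^{-1} (X - f(P)).
import numpy as np

def steepest_descent(f, jac, X, P0, Sigma_X, n_iter=500):
    W = np.linalg.inv(Sigma_X)
    cost = lambda P: (X - f(P)) @ W @ (X - f(P))
    P = P0.astype(float).copy()
    for _ in range(n_iter):
        eps = X - f(P)
        direction = jac(P).T @ W @ eps            # downhill direction J^T W eps
        gamma = 1.0
        while cost(P + gamma * direction) > cost(P) and gamma > 1e-12:
            gamma *= 0.5                          # backtracking line search for gamma_t
        P = P + gamma * direction
    return P

# toy usage: same exponential model, with an analytic Jacobian;
# note the slow convergence compared with Gauss-Newton.
t = np.linspace(0.0, 1.0, 10)
f = lambda P: P[0] * np.exp(P[1] * t)
jac = lambda P: np.column_stack([np.exp(P[1] * t), P[0] * t * np.exp(P[1] * t)])
X = f(np.array([2.0, -1.5]))
print(steepest_descent(f, jac, X, np.array([1.0, -1.0]), np.eye(t.size)))
```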

Different distances

The sum of squared Mahalanobis distances is the cost function. The weights come from the full-rank $N \times N$ covariance matrix of the measurements, $\Sigma_X$:
$$e^2_{res} = \|X - \hat{X}\|^2_{\Sigma_X} = (X - \hat{X})^\top \Sigma_X^{-1} (X - \hat{X}) = \sum_{i=1}^n (x_i - \hat{x}_i)^\top \Sigma_{x_i}^{-1} (x_i - \hat{x}_i), \qquad f(\hat{P}) = \hat{X}.$$
In the homography case, $\hat{x}'_i = \hat{H} \hat{x}_i$ is satisfied.

This is a geometric distance $d_{geom}$, the reprojection error, with $M + N$ unknowns: the parameters and the estimated measurements. It requires a nonlinear estimation.

We saw before the algebraic distance $d_{alg}$, with $M$ unknowns, the parameters. It is solved by linear TLS. The estimated measurements are only nuisance parameters. It can be the initial solution for the geometric distance.
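An illustrative sketch (my own, not from the notes) contrasting the two costs for a homography, with identity covariances: the algebraic residual $\|A h\|$ of the linear DLT/TLS system, and the geometric reprojection error, which also needs the estimated points $\hat{x}_i$ as nuisance variables.

```python
# Algebraic vs. geometric (reprojection) error for a homography.
import numpy as np

def algebraic_residual(h, x, xp):
    """d_alg: stack the two DLT equations per correspondence and evaluate ||A h||."""
    rows = []
    for (xi, yi), (xpi, ypi) in zip(x, xp):
        rows.append([0, 0, 0, -xi, -yi, -1, ypi * xi, ypi * yi, ypi])
        rows.append([xi, yi, 1, 0, 0, 0, -xpi * xi, -xpi * yi, -xpi])
    A = np.array(rows, dtype=float)
    return np.linalg.norm(A @ h)

def reprojection_error(h, x, xp, xhat):
    """d_geom: sum ||x_i - xhat_i||^2 + ||x'_i - H xhat_i||^2 over the correspondences."""
    H = h.reshape(3, 3)
    xhat_h = np.hstack([xhat, np.ones((xhat.shape[0], 1))])
    proj = xhat_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.sum((x - xhat) ** 2) + np.sum((xp - proj) ** 2)

# toy usage with an exact homography (a translation) and xhat_i taken equal to x_i:
# both errors evaluate to zero for noise-free data.
h = np.array([1.0, 0.0, 2.0, 0.0, 1.0, -1.0, 0.0, 0.0, 1.0])
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
xp = np.hstack([x, np.ones((4, 1))]) @ h.reshape(3, 3).T
xp = xp[:, :2] / xp[:, 2:3]
print(algebraic_residual(h, x, xp), reprojection_error(h, x, xp, x))
```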