Method 1: Geometric Error Optimization


Method 1: Geometric Error Optimization

We need to encode the constraints $\hat{y}_i^\top F \hat{x}_i = 0$ and $\operatorname{rank} F = 2$.

Idea: reconstruct the 3D point via equivalent projection matrices and use the reprojection error. The equivalent projection matrices are (see [H&Z, Sec. 9.5] for a complete characterization)

$$P_1 = [\, I \mid 0 \,], \qquad P_2 = [\, [e_2]_\times F + e_2 e_1^\top \mid e_2 \,].$$

H3; 2pt: Verify that $F$ is a fundamental matrix of $P_1$, $P_2$, for instance that $F \simeq Q_2^{-\top} Q_1^\top [e_1]_\times$.

1. Compute $F^{(0)}$ by the 7-point algorithm (Slide 81); construct camera $P_2^{(0)}$ from $F^{(0)}$.
2. Triangulate 3D points $\hat{X}_i^{(0)}$ from the correspondences $(x_i, y_i)$ for all $i = 1, \dots, k$ (Slide 85).
3. Express the energy function as the reprojection error
   $$W_i(x_i, y_i \mid \hat{X}_i, P_2) = \|x_i - \hat{x}_i\|^2 + \|y_i - \hat{y}_i\|^2, \quad \text{where } \hat{x}_i \simeq P_1 \hat{X}_i,\ \hat{y}_i \simeq P_2 \hat{X}_i.$$
4. Starting from $P_2^{(0)}$, $\hat{X}^{(0)}$, minimize
   $$(\hat{X}^*, P_2^*) = \arg\min_{P_2, \hat{X}} \sum_{i=1}^{k} W_i(x_i, y_i \mid \hat{X}_i, P_2).$$
5. Compute $F$ from $P_1$, $P_2^*$.

There are $3k + 12$ parameters to be found; latent: $\hat{X}_i$ for all $i$ (one per correspondence!), non-latent: $P_2$. A minimal representation has $3k + 7$ parameters, with $P_2 = P_2(F)$ (Slide 138).

There are pitfalls; this is essentially bundle adjustment; we will return to this later (Slide 131).

(p. 95/206)
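A minimal numpy sketch of the ingredients above, assuming $F$ and the pixel correspondences are already available; the function names and the use of numpy are my own choices, not part of the course code. The epipoles are taken as the null vectors of $F$ and $F^\top$, matching the convention $\hat{y}^\top F \hat{x} = 0$.

```python
import numpy as np

def camera_pair_from_F(F):
    """Equivalent cameras P1 = [I | 0], P2 = [[e2]x F + e2 e1^T | e2] for a rank-2 F."""
    e1 = np.linalg.svd(F)[2][-1]      # right null vector of F   (epipole in image 1)
    e2 = np.linalg.svd(F.T)[2][-1]    # right null vector of F^T (epipole in image 2)
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2x @ F + np.outer(e2, e1), e2.reshape(3, 1)])
    return P1, P2

def reprojection_error(P1, P2, X, x, y):
    """W_i = ||x - x_hat||^2 + ||y - y_hat||^2 for one correspondence.

    X is a homogeneous 4-vector, x and y are 2-vectors in pixels."""
    xh, yh = P1 @ X, P2 @ X
    xh, yh = xh[:2] / xh[2], yh[:2] / yh[2]
    return float(np.sum((x - xh) ** 2) + np.sum((y - yh) ** 2))
```

The triangulation of the initial $\hat{X}_i^{(0)}$ and the joint minimization itself (bundle adjustment) are not shown here.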

Method 2: First-Order Error Approximation

An elegant method for solving problems like (14): we get rid of the latent parameters [H&Z, p. 287], [Sampson 1982] and recycle the algebraic error $\varepsilon = y^\top F x$ from Slide 81.

Observations:

- Corrected correspondences $\hat{x}_i \leftrightarrow \hat{y}_i$ satisfy $\hat{y}_i^\top F \hat{x}_i = 0$, with $\hat{x}_i = (\hat{u}^1, \hat{v}^1, 1)$, $\hat{y}_i = (\hat{u}^2, \hat{v}^2, 1)$.
- This is a manifold $\mathcal{V}_F \subset \mathbb{R}^4$: the set of points $\hat{Z} = (\hat{u}^1, \hat{v}^1, \hat{u}^2, \hat{v}^2)$ consistent with $F$.
- Let $\hat{Z}_i$ be the closest point on $\mathcal{V}_F$ to the measurement $Z_i = (u^1, v^1, u^2, v^2)$; then (see (13))
  $$\|Z_i - \hat{Z}_i\|^2 = (u_i^1 - \hat{u}_i^1)^2 + (v_i^1 - \hat{v}_i^1)^2 + (u_i^2 - \hat{u}_i^2)^2 + (v_i^2 - \hat{v}_i^2)^2 = V_i(x_i, y_i \mid \hat{x}_i, \hat{y}_i, F) \stackrel{\text{def}}{=} \|e(\hat{Z}_i, Z_i)\|^2,$$
  which is what we needed in (14).
- Algebraic error: $\varepsilon(\hat{Z}_i) \stackrel{\text{def}}{=} \hat{y}_i^\top F \hat{x}_i \ (= 0)$.

Sampson's idea: linearize $\varepsilon(\hat{Z}_i)$ (with hat!) at $Z_i$ (no hat!) and estimate $e(\hat{Z}_i, Z_i)$ with it.

(p. 96/206)

Sampson's Idea

Linearize $\varepsilon(\hat{Z}_i)$ at $Z_i$ per correspondence and estimate $e(\hat{Z}_i, Z_i)$ with it. We have $\varepsilon(Z_i)$ and want $e(\hat{Z}_i, Z_i)$:

$$\varepsilon(\hat{Z}_i) \approx \varepsilon(Z_i) + \underbrace{\frac{\partial \varepsilon(Z_i)}{\partial Z_i}}_{J(Z_i)} \underbrace{(\hat{Z}_i - Z_i)}_{e(\hat{Z}_i, Z_i)} \stackrel{\text{def}}{=} \varepsilon(Z_i) + J(Z_i)\, e(\hat{Z}_i, Z_i) \stackrel{!}{=} 0$$

Illustration on circle fitting: we estimate the distance from a point $x$ to the circle $\mathcal{V}_C$ of radius $r$ in canonical position. The circle is $\varepsilon(x) = \|x\|^2 - r^2 = 0$. Then

$$\varepsilon(\hat{x}) \approx \varepsilon(x) + \underbrace{\frac{\partial \varepsilon(x)}{\partial x}}_{J(x) = 2x^\top} (\hat{x} - x) = 2\, x^\top \hat{x} - (r^2 + \|x\|^2) \stackrel{\text{def}}{=} \varepsilon_L(\hat{x}),$$

and $\varepsilon_L(\hat{x}) = 0$ is a line with normal $\dfrac{x}{\|x\|}$ and intercept $\dfrac{r^2 + \|x\|^2}{2\|x\|}$: it is not tangent to $\mathcal{V}_C$, it lies outside!

[Figure: left, the quadratic algebraic error $\varepsilon(\hat{x})$ over $\mathbb{R}^2$ and its linearization $\varepsilon_L(\hat{x})$; right, the circle $\mathcal{V}_C$, the measurement $x_i$, the line $\varepsilon_L(\hat{x}) = 0$, and the estimated correction $e(\hat{x}_i, x_i)$.]

(p. 97/206)
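The circle example can be checked numerically. The short sketch below is not from the lecture; it simply evaluates the first-order (Sampson) distance estimate and compares it with the exact distance, showing the small bias the slide points out.

```python
import numpy as np

def circle_sampson_distance(x, r):
    """First-order (Sampson) estimate of the distance from point x to the circle ||x|| = r."""
    eps = x @ x - r ** 2           # algebraic error  eps(x) = ||x||^2 - r^2
    J = 2.0 * x                    # gradient of eps at x (a 1x2 Jacobian)
    e = -J * eps / (J @ J)         # correction  e = -J^T (J J^T)^{-1} eps
    return np.linalg.norm(e)

x = np.array([3.0, 4.0])           # ||x|| = 5
print(circle_sampson_distance(x, r=4.0))   # prints 0.9, while the exact distance is 1.0
```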

Sampson Error Approximation

In general, the Taylor expansion is

$$\varepsilon(Z_i) + \underbrace{\frac{\partial \varepsilon(Z_i)}{\partial Z_i}}_{J_i(Z_i)} \underbrace{(\hat{Z}_i - Z_i)}_{e(\hat{Z}_i, Z_i)} = \underbrace{\varepsilon(Z_i)}_{\varepsilon_i \in \mathbb{R}^n} + \underbrace{J(Z_i)}_{J_i \in \mathbb{R}^{n,d}} \underbrace{e(\hat{Z}_i, Z_i)}_{e_i \in \mathbb{R}^d} \stackrel{!}{=} 0.$$

To find the $\hat{Z}_i$ closest to $Z_i$, we estimate $e_i$ from $\varepsilon_i$ by minimizing per correspondence

$$e_i^* = \arg\min_{e_i} \|e_i\|^2 \quad \text{subject to} \quad \varepsilon_i + J_i e_i = 0,$$

which gives a closed-form solution (P1; 1pt: derive $e_i^*$)

$$e_i^* = -J_i^\top (J_i J_i^\top)^{-1} \varepsilon_i, \qquad \|e_i^*\|^2 = \varepsilon_i^\top (J_i J_i^\top)^{-1} \varepsilon_i.$$

- Note that $J_i$ itself is not invertible!
- We often do not need $\hat{Z}_i$, just the squared distance $\|e_i\|^2$ (exception: triangulation, Slide 100).
- The unknown parameters $F$ are inside: $e_i = e_i(F)$, $\varepsilon_i = \varepsilon_i(F)$, $J_i = J_i(F)$.

(p. 98/206)
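A direct transcription of that closed form as a numpy helper, offered as a sketch only: it assumes $J_i$ has full row rank so that $J_i J_i^\top$ is invertible, which holds in the fundamental-matrix case below.

```python
import numpy as np

def sampson_correction(eps, J):
    """Minimizer of ||e||^2 subject to eps + J e = 0.

    eps : (n,) residual vector, J : (n, d) Jacobian with n < d and full row rank."""
    JJt = J @ J.T
    e = -J.T @ np.linalg.solve(JJt, eps)              # e* = -J^T (J J^T)^{-1} eps
    sq_dist = float(eps @ np.linalg.solve(JJt, eps))  # ||e*||^2 = eps^T (J J^T)^{-1} eps
    return e, sq_dist
```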

Sampson Error: Result for Fundamental Matrix Estimation

The fundamental matrix estimation problem becomes

$$F^* = \arg\min_{F,\ \operatorname{rank} F = 2} \sum_{i=1}^{k} e_i^2(F).$$

Let

$$F = [\, F^1\ F^2\ F^3 \,] \ \text{(per columns)} = \begin{bmatrix} (F_1)^\top \\ (F_2)^\top \\ (F_3)^\top \end{bmatrix} \ \text{(per rows)}, \qquad S = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$

then

$$\varepsilon_i = y_i^\top F x_i \in \mathbb{R} \qquad \text{(scalar algebraic error from Slide 81)},$$

$$J_i = \left[ \frac{\partial \varepsilon_i}{\partial u_i^1},\ \frac{\partial \varepsilon_i}{\partial v_i^1},\ \frac{\partial \varepsilon_i}{\partial u_i^2},\ \frac{\partial \varepsilon_i}{\partial v_i^2} \right] = \left[ (F^1)^\top y_i,\ (F^2)^\top y_i,\ (F_1)^\top x_i,\ (F_2)^\top x_i \right] \in \mathbb{R}^{1,4} \qquad \text{(derivatives over point coordinates)},$$

$$e_i^2(F) = \frac{\varepsilon_i^2}{\|J_i\|^2} = \frac{(y_i^\top F x_i)^2}{\|S F x_i\|^2 + \|S F^\top y_i\|^2}, \quad e_i \in \mathbb{R} \qquad \text{(Sampson error)}.$$

- The Sampson correction normalizes the algebraic error.
- It automatically copes with multiplicative factors $F \mapsto \lambda F$.
- The actual optimization is not yet covered (Slide 103).

(p. 99/206)
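A compact numpy evaluation of this squared Sampson error over a set of correspondences; a sketch under the assumption that x and y are (k, 2) arrays of pixel coordinates (the function and argument names are mine).

```python
import numpy as np

def sampson_error_sq(F, x, y):
    """Squared Sampson error e_i^2(F) for correspondences x[i] <-> y[i], both (k, 2) pixel arrays."""
    k = x.shape[0]
    xh = np.hstack([x, np.ones((k, 1))])       # homogeneous coordinates, last entry 1
    yh = np.hstack([y, np.ones((k, 1))])
    eps = np.einsum('ki,ij,kj->k', yh, F, xh)  # eps_i = y_i^T F x_i
    Fx, Fty = xh @ F.T, yh @ F                 # rows: F x_i and F^T y_i
    denom = Fx[:, 0]**2 + Fx[:, 1]**2 + Fty[:, 0]**2 + Fty[:, 1]**2   # ||S F x||^2 + ||S F^T y||^2
    return eps**2 / denom
```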

Back to Triangulation: The Golden Standard Method

We are given $P_1$, $P_2$ and a single correspondence $x \leftrightarrow y$, and we look for the 3D point $X$ projecting to $x$ and $y$ (Slide 85).

Idea:

1. Compute $F$ from $P_1$, $P_2$, e.g. $F = (Q_1 Q_2^{-1})^\top [\, q_1 - (Q_1 Q_2^{-1})\, q_2 \,]_\times$.
2. Correct the measurement by the linear estimate of the correction vector (Slide 98):
   $$\begin{bmatrix} \hat{u}^1 \\ \hat{v}^1 \\ \hat{u}^2 \\ \hat{v}^2 \end{bmatrix} \approx \begin{bmatrix} u^1 \\ v^1 \\ u^2 \\ v^2 \end{bmatrix} - \frac{\varepsilon}{\|J\|^2}\, J^\top = \begin{bmatrix} u^1 \\ v^1 \\ u^2 \\ v^2 \end{bmatrix} - \frac{y^\top F x}{\|S F x\|^2 + \|S F^\top y\|^2} \begin{bmatrix} (F^1)^\top y \\ (F^2)^\top y \\ (F_1)^\top x \\ (F_2)^\top x \end{bmatrix}.$$
3. Use the SVD algorithm with numerical conditioning (Slide 86).

[Figure, Ex (cont'd from Slide 89): cameras $C_1$, $C_2$ and the points $X_T$ (noiseless ground-truth position), $X_s$ (reprojection error minimizer), $X_a$ (Sampson-corrected algebraic error minimizer), with measurement $m$ ($m_T$ with noise in $v^2$); the detail views show the epipoles $e_1$, $e_2$ and the reprojections $m_a$, $m_s$, $m_T$, $m$ in the two images.]

(p. 100/206)
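A sketch of steps 2 and 3 in numpy, assuming $F$ has already been computed and leaving out the numerical conditioning of Slide 86; the names are illustrative.

```python
import numpy as np

def sampson_correct_match(F, x, y):
    """Step 2: first-order correction of one correspondence (x, y), both 2-vectors in pixels."""
    xh, yh = np.append(x, 1.0), np.append(y, 1.0)
    eps = yh @ F @ xh
    J = np.array([F[:, 0] @ yh, F[:, 1] @ yh,     # (F^1)^T y, (F^2)^T y
                  F[0] @ xh, F[1] @ xh])          # (F_1)^T x, (F_2)^T x
    corr = (eps / (J @ J)) * J
    return x - corr[:2], y - corr[2:]

def triangulate_dlt(P1, P2, x, y):
    """Step 3: linear (SVD) triangulation of the corrected correspondence."""
    A = np.vstack([x[0] * P1[2] - P1[0],
                   x[1] * P1[2] - P1[1],
                   y[0] * P2[2] - P2[0],
                   y[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]    # right null-space direction of A
    return X / X[3]                # normalize (assumes a finite point)
```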

Levenberg-Marquardt (LM) Iterative Estimation

Consider an error function $e_i(\theta) = f(x_i, y_i, \theta) \in \mathbb{R}^m$, with $x_i$, $y_i$ given and $\theta \in \mathbb{R}^q$ unknown ($\theta = F$, $q = 9$, $m = 1$ for fundamental matrix estimation).

Our goal: $\theta^* = \arg\min_\theta \sum_{i=1}^{k} \|e_i(\theta)\|^2$.

Idea 1 (Gauss-Newton approximation): proceed iteratively for $s = 0, 1, 2, \dots$

$$\theta^{s+1} := \theta^s + d_s, \quad \text{where} \quad d_s = \arg\min_d \sum_{i=1}^{k} \|e_i(\theta^s + d)\|^2, \qquad (15)$$

$$e_i(\theta^s + d) \approx e_i(\theta^s) + L_i\, d, \qquad (L_i)_{jl} = \frac{\partial (e_i(\theta))_j}{\partial (\theta)_l}, \quad L_i \in \mathbb{R}^{m,q} \ \text{typically a long matrix}.$$

Then the solution to Problem (15) is a set of normal equations

$$-\underbrace{\sum_{i=1}^{k} L_i^\top e_i(\theta^s)}_{e \in \mathbb{R}^{q,1}} = \underbrace{\left( \sum_{i=1}^{k} L_i^\top L_i \right)}_{L \in \mathbb{R}^{q,q}} d_s. \qquad (16)$$

- $d_s$ can be solved for using the Choleski decomposition of $L$ ($L$ is symmetric, and Choleski is almost twice as fast as plain Gaussian elimination); see the bundle adjustment slide 134.
- Such updates do not lead to stable convergence, hence the ideas of Levenberg and Marquardt.

(p. 101/206)
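One Gauss-Newton step written out in numpy as a sketch of (16): it accumulates the normal equations from per-correspondence residuals and Jacobians and solves them with a Choleski factorization. The callback interface is my own choice, not the course's.

```python
import numpy as np

def gauss_newton_step(theta, residuals, jacobians):
    """One update theta := theta + d_s with d_s from the normal equations (16).

    residuals(theta) -> list of e_i, each of shape (m,)
    jacobians(theta) -> list of L_i, each of shape (m, q)"""
    q = theta.size
    A, b = np.zeros((q, q)), np.zeros(q)
    for e_i, L_i in zip(residuals(theta), jacobians(theta)):
        A += L_i.T @ L_i
        b -= L_i.T @ e_i
    C = np.linalg.cholesky(A)                          # A = C C^T, valid when A is positive definite
    d = np.linalg.solve(C.T, np.linalg.solve(C, b))    # forward, then back substitution
    return theta + d
```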

LM (cont'd)

Idea 2 (Levenberg): replace $\sum_i L_i^\top L_i$ with $\sum_i L_i^\top L_i + \lambda I$ for some damping factor $\lambda \geq 0$.

Idea 3 (Marquardt): replace $\lambda I$ with $\lambda \sum_i \operatorname{diag}(L_i^\top L_i)$ to adapt to the local curvature:

$$-\sum_{i=1}^{k} L_i^\top e_i(\theta^s) = \left( \sum_{i=1}^{k} \left( L_i^\top L_i + \lambda\, \operatorname{diag}(L_i^\top L_i) \right) \right) d_s.$$

Idea 4 (Marquardt): adaptive $\lambda$; small $\lambda$ gives Gauss-Newton, large $\lambda$ gives gradient descent.

1. Choose $\lambda \approx 10^{-3}$ and compute $d_s$.
2. If $\sum_i \|e_i(\theta^s + d_s)\|^2 < \sum_i \|e_i(\theta^s)\|^2$, accept $d_s$ and set $\lambda := \lambda / 10$, $s := s + 1$.
3. Otherwise set $\lambda := 10\lambda$ and recompute $d_s$.

- Sometimes different constants are needed in place of the 10 and the $10^{-3}$.
- Note that $L_i \in \mathbb{R}^{m,q}$ (a long matrix), but each contribution $L_i^\top L_i$ is a square, singular $q \times q$ matrix (the sum is always singular for $k < q$).
- The error can be made robust to outliers, see the trick on Slide 106.
- We have approximated the least-squares Hessian by ignoring the second derivatives of the error function (Gauss-Newton approximation); see [Triggs et al. 1999, Sec. 4.3].
- $\lambda$ helps avoid the consequences of gauge freedom (Slide 136).

(p. 102/206)
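The adaptive-$\lambda$ loop of Ideas 2 to 4, sketched in numpy around the same callback interface as above; the constants 10 and $10^{-3}$ come from the slide, while the iteration count and stopping rule are assumptions of mine.

```python
import numpy as np

def levenberg_marquardt(theta0, residuals, jacobians, n_iter=50, lam=1e-3):
    """LM iterations with Marquardt's diagonal damping and adaptive lambda."""
    theta = theta0.copy()
    cost = lambda t: sum(float(e @ e) for e in residuals(t))
    for _ in range(n_iter):
        A, b = np.zeros((theta.size, theta.size)), np.zeros(theta.size)
        for e_i, L_i in zip(residuals(theta), jacobians(theta)):
            A += L_i.T @ L_i
            b -= L_i.T @ e_i
        while True:
            d = np.linalg.solve(A + lam * np.diag(np.diag(A)), b)
            if cost(theta + d) < cost(theta):
                theta, lam = theta + d, lam / 10.0   # success: move towards Gauss-Newton
                break
            lam *= 10.0                              # failure: move towards gradient descent
            if lam > 1e12:                           # give up when the damping explodes
                return theta
    return theta
```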

LM with Sampson Error for Fundamental Matrix Estimation

Sampson error (derived by linearization over the point coordinates $u^1, v^1, u^2, v^2$):

$$e_i^2(F) = \frac{\varepsilon_i^2}{\|J_i\|^2} = \frac{(y_i^\top F x_i)^2}{\|S F x_i\|^2 + \|S F^\top y_i\|^2}, \qquad S = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

LM (by linearization over the parameters $F$):

$$L_i = \frac{\partial e_i(F)}{\partial F} = \frac{1}{2\|J_i\|} \left[ \left( y_i - \frac{2 e_i}{\|J_i\|}\, S F x_i \right) x_i^\top + y_i \left( x_i - \frac{2 e_i}{\|J_i\|}\, S F^\top y_i \right)^\top \right].$$

- $L_i$ is a $3 \times 3$ matrix and must be reshaped to a dimension-9 vector.
- $x_i$ and $y_i$ in the Sampson error are normalized to unit homogeneous coordinate.
- Reinforce $\operatorname{rank} F = 2$ after each LM update to stay on the fundamental-matrix manifold, and $\|F\| = 1$ to avoid gauge freedom (by SVD, see Slide 104).
- The LM linearization could also be done by numerical differentiation (the dimension is small).

(p. 103/206)
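Following the last bullet, a numerical-differentiation sketch of $L_i$: the signed Sampson residual is evaluated on a 9-vector parameterization of $F$ and differentiated by central differences. This is an illustration under my own parameter layout (row-major), not the course implementation.

```python
import numpy as np

def sampson_residuals(f9, x, y):
    """Signed Sampson errors e_i(F); f9 is F as a row-major 9-vector, x and y are (k, 2) arrays."""
    F = f9.reshape(3, 3)
    k = x.shape[0]
    xh = np.hstack([x, np.ones((k, 1))])
    yh = np.hstack([y, np.ones((k, 1))])
    eps = np.einsum('ki,ij,kj->k', yh, F, xh)
    Fx, Fty = xh @ F.T, yh @ F
    return eps / np.sqrt(Fx[:, 0]**2 + Fx[:, 1]**2 + Fty[:, 0]**2 + Fty[:, 1]**2)

def numeric_L(f9, x, y, h=1e-7):
    """L_i = d e_i / d F by central differences; returns one 9-vector row per correspondence."""
    L = np.zeros((x.shape[0], 9))
    for j in range(9):
        dp = np.zeros(9); dp[j] = h
        L[:, j] = (sampson_residuals(f9 + dp, x, y) - sampson_residuals(f9 - dp, x, y)) / (2 * h)
    return L
```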

Local Optimization for Fundamental Matrix Estimation

Given a set $\{(x_i, y_i)\}_{i=1}^{k}$ of $k > 7$ inlier correspondences, compute an efficient estimate of the fundamental matrix $F$.

1. Find the conditioned (Slide 88) 7-point $F_0$ (Slide 81) from a suitable 7-tuple.
2. Improve $F_0$ using the LM optimization (Slides 101-102) with the Sampson error (Slide 103) on all inliers; reinforce the rank-2, unit-norm $F_k$ after each LM iteration using SVD.

- If there are no wrong matches (outliers), this gives a local optimum.
- Contamination of the (inlier) correspondences by outliers may wreak havoc with this algorithm.
- The full problem involves finding the inliers!
- In addition, we need a mechanism for jumping out of local minima (and exploring the space of all fundamental matrices).

(p. 104/206)
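The SVD projection used in step 2, as a small numpy helper; a sketch of the standard construction (zero the smallest singular value, then rescale to unit Frobenius norm), not tied to the course code.

```python
import numpy as np

def project_to_fundamental(F):
    """Reinforce rank F = 2 and ||F|| = 1 after an LM update."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                          # force rank 2: drop the smallest singular value
    F2 = U @ np.diag(s) @ Vt
    return F2 / np.linalg.norm(F2)      # unit Frobenius norm removes the scale (gauge) freedom
```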

The Full Problem of Matching and Fundamental Matrix Estimation

Problem: Given two sets of image points $X = \{x_i\}_{i=1}^{m}$ and $Y = \{y_j\}_{j=1}^{n}$ and their descriptors $D$, find the most probable

1. inliers $S_X \subseteq X$, $S_Y \subseteq Y$,
2. one-to-one perfect matching $M : S_X \to S_Y$ (perfect matching: a 1-factor of the bipartite graph),
3. fundamental matrix $F$ such that $\operatorname{rank} F = 2$,
4. such that for each $x_i \in S_X$ and $y_j = M(x_i)$ it is probable that
   a. the image descriptor $D(x_i)$ is similar to $D(y_j)$, and
   b. the total geometric error $\sum_{ij} e_{ij}^2(F)$ is small (note a slight change in notation: $e_{ij}$),
5. and inlier-outlier and outlier-outlier matches are improbable.

$$(M^*, F^*) = \arg\max_{M, F}\ p(M, F \mid X, Y, D) \qquad (17)$$

- The probabilistic model is an efficient language for task formulation.
- (17) is a p.d.f. over all the involved variables (there is a constant number of variables!).
- Binary matching table $M_{ij} \in \{0, 1\}$ of fixed size $m \times n$; each row/column contains at most one unity; zero rows/columns correspond to unmatched points $x_i$ / $y_j$.

[Figure: the two point sets as a bipartite graph with the matching $M$, and the corresponding binary matching table.]

(p. 105/206)
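As a small illustration of the matching-table representation (my own sketch, not part of the lecture), a check that a binary table encodes a valid one-to-one partial matching:

```python
import numpy as np

def is_valid_matching(M):
    """True iff the binary m x n table has at most one unity per row and per column."""
    M = np.asarray(M, dtype=int)
    return (np.all((M == 0) | (M == 1))
            and np.all(M.sum(axis=1) <= 1)       # each x_i matched at most once
            and np.all(M.sum(axis=0) <= 1))      # each y_j matched at most once

M = np.zeros((4, 5), dtype=int)
M[0, 2] = M[2, 0] = 1                            # x_0 <-> y_2, x_2 <-> y_0; rows 1 and 3 unmatched
print(is_valid_matching(M))                      # True
```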

Thank You
