
Vision 3D artificielle Session 2: Essential and fundamental matrices, their computation, RANSAC algorithm Pascal Monasse monasse@imagine.enpc.fr IMAGINE, École des Ponts ParisTech

Contents Some useful rules of vector calculus Essential and fundamental matrices Singular Value Decomposition Computation of E and F RANSAC algorithm


Compact matrix multiplication formulas

Block matrix multiplication, with one factor split into blocks:
$A \begin{pmatrix} B_1 & B_2 \end{pmatrix} = \begin{pmatrix} AB_1 & AB_2 \end{pmatrix}$, and in general $A \begin{pmatrix} B_1 & \cdots & B_n \end{pmatrix} = \begin{pmatrix} AB_1 & \cdots & AB_n \end{pmatrix}$
$\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} B = \begin{pmatrix} A_1 B \\ A_2 B \end{pmatrix}$, and $\begin{pmatrix} A_1^T \\ \vdots \\ A_m^T \end{pmatrix} B = \begin{pmatrix} A_1^T B \\ \vdots \\ A_m^T B \end{pmatrix}$
Both matrices split into blocks:
$\begin{pmatrix} A_1 & A_2 \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} = A_1 B_1 + A_2 B_2$, and $\begin{pmatrix} A_1 & \cdots & A_k \end{pmatrix} \begin{pmatrix} B_1^T \\ \vdots \\ B_k^T \end{pmatrix} = A_1 B_1^T + \cdots + A_k B_k^T$
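As a quick sanity check, here is a small numpy verification of the last identity (an illustration added here, not part of the original slides):

```python
import numpy as np

# Check (A1 A2)(B1; B2) = A1 B1 + A2 B2 on random blocks.
rng = np.random.default_rng(0)
A1, A2 = rng.random((3, 2)), rng.random((3, 4))
B1, B2 = rng.random((2, 5)), rng.random((4, 5))

lhs = np.hstack([A1, A2]) @ np.vstack([B1, B2])
rhs = A1 @ B1 + A2 @ B2
assert np.allclose(lhs, rhs)
```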

Vector product

Definition: for $a = (x, y, z)^T$ and $b = (x', y', z')^T$,
$a \times b = [a]_\times b = \begin{pmatrix} yz' - zy' \\ zx' - xz' \\ xy' - yx' \end{pmatrix}$, with $[a]_\times = \begin{pmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{pmatrix}$
Properties: bilinear, antisymmetric.
Link with determinant: $a^T (b \times c) = \det\begin{pmatrix} a & b & c \end{pmatrix}$
Composition: $(a \times b) \times c = (a^T c)\, b - (b^T c)\, a$
Composition with an isomorphism $M$:
$(Ma) \times (Mb) = \det(M)\, M^{-T} (a \times b)$, hence $[Ma]_\times = \det(M)\, M^{-T} [a]_\times M^{-1}$
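These identities are easy to check numerically. The sketch below (an addition, not from the slides) implements $[a]_\times$ and verifies the isomorphism rule on random data:

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]_x with cross_matrix(a) @ b == np.cross(a, b)."""
    x, y, z = a
    return np.array([[0, -z, y],
                     [z, 0, -x],
                     [-y, x, 0.0]])

rng = np.random.default_rng(0)
a, b = rng.random(3), rng.random(3)
M = rng.random((3, 3))  # generic, hence invertible with probability 1

assert np.allclose(cross_matrix(a) @ b, np.cross(a, b))
# (Ma) x (Mb) = det(M) M^{-T} (a x b)
assert np.allclose(np.cross(M @ a, M @ b),
                   np.linalg.det(M) * np.linalg.inv(M).T @ np.cross(a, b))
```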

Contents Some useful rules of vector calculus Essential and fundamental matrices Singular Value Decomposition Computation of E and F RANSAC algorithm

Triangulation

Fundamental principle of stereo vision:
$\frac{z}{h} = \frac{B}{H - h} \approx \frac{B}{H}$, with $z = \frac{dH}{f}$
$f$: focal length. $H$: distance from the optical center to the ground. $B$: distance between the optical centers (baseline).
Goal: given two rectified images, matching points and computing their apparent shift (the disparity $d$) gives information about the relative depth of the scene.

Epipolar constraints

Rays from matching points must intersect in space: the vectors $Cx$, $C'x'$ and $T$ are coplanar. Writing this in camera 1's coordinate frame, $x$, $Rx'$ and $T$ are coplanar, which we can write $x^T (T \times Rx') = 0$. Noting $[T]_\times x = T \times x$, we get the equation (Longuet-Higgins 1981):
$x^T E x' = 0$ with $E = [T]_\times R$
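A minimal numerical check of this constraint (an illustration with made-up rotation, translation and 3D point, not code from the course):

```python
import numpy as np

def cross_matrix(a):
    x, y, z = a
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0.0]])

theta = 0.1  # small rotation about the y axis between the two cameras
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
T = np.array([1.0, 0.0, 0.2])  # baseline
E = cross_matrix(T) @ R

X = np.array([0.3, -0.2, 5.0])    # 3D point, in camera 2's coordinate frame
x2 = X / X[2]                     # normalized image point in camera 2
x1 = R @ X + T; x1 = x1 / x1[2]   # the same point seen by camera 1

assert abs(x1 @ E @ x2) < 1e-12   # x^T E x' = 0
```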

Epipolar constraints

$E$ is the essential matrix, but it deals with points expressed in the camera coordinate frames. Converting from pixel coordinates requires multiplying by the inverse of the camera calibration matrix $K$: $x_{cam} = K^{-1} x_{image}$. We can rewrite the epipolar constraint as:
$x^T F x' = 0$ with $F = K^{-T} E K'^{-1} = K^{-T} [T]_\times R K'^{-1}$ (Faugeras 1992)
$F$ is the fundamental matrix. This is important progress: we can constrain the match without calibrating the cameras! It can easily be derived formally, by expressing everything in camera 2's coordinate frame:
$\lambda x = K(RX + T)$, $\lambda' x' = K' X$
We eliminate the 5 unknowns $X$, $\lambda$ and $\lambda'$ from the system: $\lambda K^{-1} x = \lambda' R K'^{-1} x' + T$, take the vector product with $T$ to get $\lambda\, T \times (K^{-1} x) = \lambda' [T]_\times R K'^{-1} x'$, followed by the scalar product with $K^{-1} x$.
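A sketch of how $F$ follows from $E$, with assumed calibration matrices K and K2 and, for brevity, an assumed pure translation between the cameras (again an added illustration):

```python
import numpy as np

def cross_matrix(a):
    x, y, z = a
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0.0]])

R, T = np.eye(3), np.array([1.0, 0.0, 0.2])   # assume pure translation
E = cross_matrix(T) @ R

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])    # camera 1 (assumed)
K2 = np.array([[750, 0, 300], [0, 750, 260], [0, 0, 1.0]])   # camera 2 (assumed)
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K2)               # F = K^{-T} E K'^{-1}

X = np.array([0.3, -0.2, 5.0])   # 3D point in camera 2's frame
u2 = K2 @ X                      # homogeneous pixel coordinates in image 2
u1 = K @ (R @ X + T)             # homogeneous pixel coordinates in image 1
assert abs(u1 @ F @ u2) < 1e-6   # x^T F x' = 0 in pixel coordinates
```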

Anatomy of the fundamental matrix

Glossary:
$e = KT$ satisfies $e^T F = 0$: it is the left epipole.
$e' = K' R^{-1} T$ satisfies $F e' = 0$: it is the right epipole.
$F x'$ is the epipolar line (in the left image) associated with $x'$.
$F^T x$ is the epipolar line (in the right image) associated with $x$.
Observe that if $T = 0$ we get $F = 0$, that is, no constraint: without displacement of the optical center, there is no 3D information. The constraint is important: it is enough to look for the match of a point $x$ along its associated epipolar line (a 1D search).

Theorem: A $3 \times 3$ matrix is a fundamental matrix iff it has rank 2.
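Since $F$ has rank 2, both epipoles can be read off its SVD as the left and right null vectors; a small sketch (added illustration, using $[T]_\times$ as a convenient rank-2 example):

```python
import numpy as np

def epipoles(F):
    """Left and right null vectors of F: e^T F = 0 and F e' = 0."""
    U, _, Vt = np.linalg.svd(F)
    return U[:, 2], Vt[2, :]

def cross_matrix(a):
    x, y, z = a
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0.0]])

F = cross_matrix([1.0, 0.0, 0.2])   # any rank-2 matrix works as an example
e, e2 = epipoles(F)
assert np.allclose(F.T @ e, 0) and np.allclose(F @ e2, 0)
```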

Example: [figure showing epipolar geometry on a pair of images, Image 1 and Image 2]

Contents Some useful rules of vector calculus Essential and fundamental matrices Singular Value Decomposition Computation of E and F RANSAC algorithm

Singular Value Decomposition

Theorem (SVD). Let $A$ be an $m \times n$ matrix. We can decompose $A$ as:
$A = U \Sigma V^T = \sum_{i=1}^{\min(m,n)} \sigma_i U_i V_i^T$
with $\Sigma$ a diagonal $m \times n$ matrix, $\sigma_i = \Sigma_{ii} \ge 0$, and $U$ ($m \times m$) and $V$ ($n \times n$) composed of orthonormal columns.
The rank of $A$ is the number of non-zero $\sigma_i$.
An orthonormal basis of the kernel of $A$ is composed of $\{V_i : \sigma_i = 0\} \cup \{V_i : \min(m,n) < i \le n\}$.

Theorem (Thin SVD). If $m \ge n$, $U$ can be taken of size $m \times n$ and $A = U\, \mathrm{diag}(\sigma_1, \dots, \sigma_n)\, V^T$. If $m \le n$, $V$ can be taken of size $n \times m$ and $A = U\, \mathrm{diag}(\sigma_1, \dots, \sigma_m)\, V^T$.
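In numpy, the rank and the kernel read directly off np.linalg.svd (an added illustration of the theorem, not course code):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1: second row is twice the first
U, s, Vt = np.linalg.svd(A)         # full SVD: U is 2x2, Vt is 3x3

tol = 1e-10 * s[0]
rank = int(np.sum(s > tol))         # number of non-zero singular values
kernel = Vt[rank:, :]               # V_i with sigma_i = 0, plus i > min(m, n)

assert rank == 1
assert np.allclose(A @ kernel.T, 0)
```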

Singular Value Decomposition

Proof:
1. Orthonormal diagonalization of $A^T A = V \Sigma^2 V^T$.
2. Write $U_i = A V_i / \sigma_i$ if $\sigma_i \ne 0$.
3. Check that $U_i^T U_j = \delta_{ij}$.
4. Complete the $U_i$ with orthonormal vectors.
5. Check $A = U \Sigma V^T$ by comparison on the basis formed by the $V_i$.
Implementation: an efficient algorithm exists, but: "As much as we dislike the use of black-box routines, we need to ask you to accept this one, since it would take us too far afield to cover its necessary background material here." (Numerical Recipes)

Contents Some useful rules of vector calculus Essential and fundamental matrices Singular Value Decomposition Computation of E and F RANSAC algorithm

Computation of F

The 8-point method (actually 8+) is the simplest, as it is linear. We write the epipolar constraint for the 8 correspondences:
$x_i^T F x_i' = 0 \iff A_i^T f = 0$ with $f = (f_{11}\ f_{12}\ f_{13}\ f_{21}\ \dots\ f_{33})^T$
Each correspondence gives one linear equation in the unknown $f$, which has 8 independent parameters since the scale is indifferent. We impose the constraint $\|f\| = 1$:
$\min \|Af\|^2$ subject to $\|f\|^2 = 1$, with $A = (A_1\ \dots\ A_8)^T$
Solution: $f$ is an eigenvector of $A^T A$ associated with its smallest eigenvalue (it can be recovered from the SVD of $A$).
Constraint: to enforce rank 2 of $F$, we can decompose it by SVD, set $\sigma_3 = 0$ and recompose.
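A minimal sketch of this method (a hypothetical helper, not the practical session's ComputeF; x1 and x2 are assumed to be (n, 3) arrays of homogeneous points, n >= 8):

```python
import numpy as np

def compute_F_8pt(x1, x2):
    # One equation per correspondence: x1_i^T F x2_i = 0 is linear in f.
    A = np.stack([np.outer(a, b).ravel() for a, b in zip(x1, x2)])
    # Minimizer of |Af|^2 with |f| = 1: last right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: zero the smallest singular value and recompose.
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0
    return U @ np.diag(s) @ Vt
```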

Computation of F

Enforcing the constraint $\det F = 0$ after the minimization is not optimal; the 7-point method imposes it from the start. We get a linear system $Af = 0$ with $A$ of size $7 \times 9$. Let $F_1$, $F_2$ be two free vectors of the kernel of $A$ (from the SVD), and look for a solution $F_1 + x F_2$ with $\det F = 0$. Since $\det(F_1 + x F_2) = P(x)$ with $P$ a polynomial of degree 3, we get 1 or 3 solutions (a sketch of this step follows below). The main interest is not computing $F$ with fewer points (in general we have many more, which is anyway better for precision), but that with fewer points we have fewer chances of selecting false correspondences. By the way, how can we ensure we did not incorporate bad correspondences in the equations?
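A sketch of the cubic resolution step (an added illustration; $F_1$, $F_2$ are the two kernel vectors reshaped as 3x3 matrices). The degree-3 polynomial is interpolated from four sample values of the determinant:

```python
import numpy as np

def solve_7pt_cubic(F1, F2):
    """Return the matrices F1 + x*F2 with det = 0 (1 or 3 real solutions)."""
    xs = np.array([-1.0, 0.0, 1.0, 2.0])
    ys = np.array([np.linalg.det(F1 + x * F2) for x in xs])
    coeffs = np.polyfit(xs, ys, 3)        # coefficients of P, degree 3
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return [F1 + x * F2 for x in real]
```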

Normalization

The 8-point algorithm as is yields very imprecise results; see Hartley (1997): In Defense of the Eight-Point Algorithm. Explanation: the scales of the coefficients multiplying the entries of $F$ are very different. $F_{11}$, $F_{12}$, $F_{21}$ and $F_{22}$ are multiplied by $x_i x_i'$, $x_i y_i'$, $y_i x_i'$ and $y_i y_i'$, which can reach $10^6$. On the contrary, $F_{13}$, $F_{23}$, $F_{31}$ and $F_{32}$ are multiplied by $x_i$, $y_i$, $x_i'$ and $y_i'$, which are of order $10^3$; $F_{33}$ is multiplied by 1. The scales being so different, $A$ is badly conditioned. Solution: normalize the points so that coordinates are of order 1:
$N = \mathrm{diag}(10^{-3}, 10^{-3}, 1)$, $\tilde{x}_i = N x_i$, $\tilde{x}_i' = N x_i'$
We find $\tilde{F}$ for the points $(\tilde{x}_i, \tilde{x}_i')$, then $F = N^T \tilde{F} N$.
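A sketch of the normalization wrapper, assuming a compute_F_8pt like the one above (hypothetical helper names):

```python
import numpy as np

N = np.diag([1e-3, 1e-3, 1.0])   # bring pixel coordinates to order 1

def compute_F_normalized(x1, x2, compute_F_8pt):
    """x1, x2: (n, 3) homogeneous pixel points; each row becomes N @ x_i."""
    F_tilde = compute_F_8pt(x1 @ N.T, x2 @ N.T)
    return N.T @ F_tilde @ N     # de-normalize: F = N^T F~ N
```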

Computation of E

$E$ depends on 5 parameters (3 for $R$ + 3 for $T$ $-$ 1 for scale). A $3 \times 3$ matrix $E$ is essential iff its singular values are 0 and two equal positive values. Equivalently:
$2 E E^T E - \mathrm{tr}(E E^T)\, E = 0$, $\det E = 0$
5-point algorithm (Nistér, 2004): we have $Ae = 0$ with $A$ of size $5 \times 9$, so we get a solution of the form
$E = x X + y Y + z Z + W$
with $X, Y, Z, W$ a basis of the kernel of $A$ (SVD). The constraints yield 10 polynomial equations of degree 3 in $x, y, z$. Three resolution strategies:
1. Gauss pivoting to eliminate the terms of degree 2+ in $x, y$, then $B(z)\, (x\ y\ 1)^T = 0$, that is $\det B(z) = 0$, of degree 10.
2. Gröbner bases.
3. $C(z)\, (1\ x\ y\ x^2\ xy\ \dots\ y^3)^T = 0$ and $\det C(z) = 0$.
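The characterization of essential matrices is easy to verify numerically (an added illustration with a made-up $R$ and $T$, not part of the 5-point solver):

```python
import numpy as np

def cross_matrix(a):
    x, y, z = a
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0.0]])

theta = 0.3  # rotation about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta), np.cos(theta), 0],
              [0, 0, 1.0]])
E = cross_matrix([1.0, 0.5, 0.2]) @ R   # essential by construction

s = np.linalg.svd(E, compute_uv=False)
assert np.isclose(s[0], s[1]) and np.isclose(s[2], 0)          # (s, s, 0)
assert np.allclose(2 * E @ E.T @ E - np.trace(E @ E.T) * E, 0)  # cubic constraint
```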

Contents Some useful rules of vector calculus Essential and fundamental matrices Singular Value Decomposition Computation of E and F RANSAC algorithm

RANSAC algorithm

How do we estimate parameters in the presence of outliers? This is the framework of robust estimation. Example: the regression line of plane points $(x_i, y_i)$ where, for certain $i$, the data is bad (not simply imprecise). Correct data are called inliers, incorrect data outliers. Hypothesis: inliers are coherent while outliers are random. RANdom SAmple Consensus (Fischler & Bolles, 1981):
1. Select $k$ samples out of the $n$, $k$ being the minimal number to estimate a unique model.
2. Compute the model and count the samples among the $n$ explained by the model at precision $\sigma$.
3. If this number is larger than the best one so far, keep the model.
4. Go back to 1 if iterations remain.
Example: $k = 2$ for a plane regression line (see the sketch below).
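A minimal sketch of RANSAC on the line-fitting example, with $k = 2$ points defining the model (a hypothetical helper, not the practical session's code):

```python
import numpy as np

def ransac_line(pts, sigma=1.0, n_iter=500, seed=0):
    """pts: (n, 2) array; returns the inlier mask of the best line found."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]  # minimal sample
        normal = np.array([q[1] - p[1], p[0] - q[0]])       # perpendicular to q - p
        if np.linalg.norm(normal) < 1e-12:
            continue                                        # degenerate sample
        normal /= np.linalg.norm(normal)
        inliers = np.abs((pts - p) @ normal) < sigma        # distance to the line
        if inliers.sum() > best.sum():                      # most coherent so far
            best = inliers
    return best
```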

RANSAC for the fundamental matrix

Choose $k = 7$ or $k = 8$. Classify $(x_i, x_i')$ as inlier/outlier as a function of the distance of $x_i'$ to the epipolar line associated with $x_i$ ($F^T x_i$). $k = 7$ is better, because we have fewer chances to select an outlier; in that case we can get 3 models per sample, and we test all 3.

RANSAC: number of iterations

Suppose there are $m$ inliers among the $n$ data. The probability of drawing an uncontaminated sample of $k$ inliers is $(m/n)^k$. We require the probability that all $N_{iter}$ samples are bad to be lower than $\beta = 1\%$:
$\left(1 - (m/n)^k\right)^{N_{iter}} \le \beta$
Therefore we need
$N_{iter} \ge \frac{\log \beta}{\log\left(1 - (m/n)^k\right)}$
$m$ is unknown, but the best number of inliers found so far is a lower bound: recompute $N_{iter}$ each time a better model is found.
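The adaptive update translates into a few lines (hypothetical helper names, added here as an illustration):

```python
import numpy as np

def n_iterations(m, n, k, beta=0.01):
    """Iterations needed so that P(all samples are contaminated) <= beta."""
    p_good = (m / n) ** k          # probability of an uncontaminated sample
    if p_good <= 0:
        return np.inf
    if p_good >= 1:
        return 1
    return int(np.ceil(np.log(beta) / np.log(1 - p_good)))

# e.g. 50% inliers with k = 8 already requires about 1177 iterations:
print(n_iterations(50, 100, 8))
```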

Conclusion

Epipolar constraint:
1. Essential matrix $E$ (calibrated case)
2. Fundamental matrix $F$ (uncalibrated case)
$F$ can be computed with the 7- or 8-point algorithm; the computation of $E$ is much more complicated (5-point algorithm). Outliers are removed with the RANSAC algorithm.

Practical session: RANSAC algorithm for F computation

Objective: fundamental matrix computation with the RANSAC algorithm.
Get the initial program from the website.
Write the core of the function ComputeF: use the RANSAC algorithm (update $N_{iter}$ dynamically), based on the 8-point algorithm, and solve the linear system estimating $F$ from 8 matches. Do not forget the normalization! Hint: it is easier to use SVD with a square matrix; for that, add a 9th equation $0^T f = 0$.
After RANSAC, refine the resulting $F$ with a least-squares minimization based on all inliers.
Write the core of displayEpipolar: when the user clicks, find in which image (left or right), display this point and show the associated epipolar line in the other image.