Computer Vision


Computer Vision: Epipolar Geometry. Frédéric Devernay, with slides by Marc Pollefeys.

Epipolar geometry

The underlying structure in a set of matches for rigid scenes: $m_2^T F m_1 = 0$, where $F$ is the fundamental matrix (a 3×3, rank-2 matrix).

(Figure: scene point M, camera centers C1 and C2, image points m1 and m2, epipolar lines l1 and l2, epipoles e1 and e2.)

Canonical representation: $P = [I \mid 0]$, $P' = [[e']_\times F + e' v^T \mid \lambda e']$.

The fundamental matrix:
1. is computable from corresponding points,
2. simplifies matching,
3. allows detection of wrong matches,
4. is related to calibration.

Epipolar geometry (from projection matrices)

The epipolar line in image 1 through $x_1$ and the epipole is $l_1 = e_1 \times x_1 = [e_1]_\times x_1$. The epipolar plane it back-projects to is $\Pi = P_1^T l_1 = P_2^T l_2$, hence $l_2 = P_2^{+T} P_1^T l_1$. Since $x_2$ lies on $l_2$ ($l_2^T x_2 = 0$):

$x_2^T \,\underbrace{P_2^{+T} P_1^T [e_1]_\times}_{F}\, x_1 = 0$

with $F$ the fundamental matrix (3×3, rank-2).
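As a concrete illustration, here is a minimal NumPy sketch (the function name is ours) that builds F from two projection matrices using the equivalent closed form $F = [e_2]_\times P_2 P_1^+$ from Hartley & Zisserman:

```python
import numpy as np

def fundamental_from_cameras(P1, P2):
    """F from two 3x4 projection matrices via F = [e2]_x P2 P1^+ (H&Z)."""
    # camera 1's center is the null vector of P1 (homogeneous 4-vector)
    C1 = np.linalg.svd(P1)[2][-1]
    e2 = P2 @ C1                       # epipole in image 2
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    return e2x @ P2 @ np.linalg.pinv(P1)
```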

The projective reconstruction theorem: if a set of point correspondences in two views determines the fundamental matrix uniquely, then the scene and the cameras may be reconstructed from these correspondences alone, and any two such reconstructions are projectively equivalent. This allows reconstruction from a pair of uncalibrated images!

Properties of the fundamental matrix

Computation of F:
- Linear (8-point)
- Minimal (7-point)
- Robust (RANSAC)
- Non-linear refinement (MLE, ...)
- Practical approach

Epipolar geometry: basic equation

$x'^T F x = 0$, i.e.

$x'x f_{11} + x'y f_{12} + x' f_{13} + y'x f_{21} + y'y f_{22} + y' f_{23} + x f_{31} + y f_{32} + f_{33} = 0$

Separate the known from the unknown:

$[x'x,\; x'y,\; x',\; y'x,\; y'y,\; y',\; x,\; y,\; 1]\;[f_{11}, f_{12}, f_{13}, f_{21}, f_{22}, f_{23}, f_{31}, f_{32}, f_{33}]^T = 0$ (data · unknowns, linear)

Stacking one row per correspondence:

$\begin{bmatrix} x'_1x_1 & x'_1y_1 & x'_1 & y'_1x_1 & y'_1y_1 & y'_1 & x_1 & y_1 & 1 \\ \vdots & & & & & & & & \vdots \\ x'_nx_n & x'_ny_n & x'_n & y'_nx_n & y'_ny_n & y'_n & x_n & y_n & 1 \end{bmatrix} f = 0, \qquad Af = 0$
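A minimal NumPy sketch of this linear step (names are ours; normalization and the rank-2 constraint come on the next slides):

```python
import numpy as np

def linear_f_estimate(x1, x2):
    """Build A from n >= 8 correspondences (n x 2 arrays of pixel coords)
    and solve Af = 0 in the least-squares sense."""
    x, y = x1[:, 0], x1[:, 1]
    xp, yp = x2[:, 0], x2[:, 1]
    A = np.column_stack([xp * x, xp * y, xp,
                         yp * x, yp * y, yp,
                         x, y, np.ones(len(x1))])
    # minimizer of ||Af|| subject to ||f|| = 1: last right singular vector
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```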

The NOT normalized 8-point algorithm

In pixel coordinates, the columns of the data matrix have typical magnitudes

$[\;\sim\!10^4,\ \sim\!10^4,\ \sim\!10^2,\ \sim\!10^4,\ \sim\!10^4,\ \sim\!10^2,\ \sim\!10^2,\ \sim\!10^2,\ 1\;]$

Orders of magnitude of difference between the columns of the data matrix ⇒ least-squares yields poor results.

The normalized 8-point algorithm

Transform image coordinates to approximately $[-1,1] \times [-1,1]$; for a 700×500 image:

$T = \begin{bmatrix} 2/700 & 0 & -1 \\ 0 & 2/500 & -1 \\ 0 & 0 & 1 \end{bmatrix}$

maps the corners $(0,0), (700,0), (0,500), (700,500)$ to $(-1,-1), (1,-1), (-1,1), (1,1)$. Normalized least squares yields good results (Hartley, PAMI'97).
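A sketch of this box normalization, assuming a known image size (function name is ours); after estimating $\hat F$ from normalized points, F is recovered by denormalizing:

```python
import numpy as np

def box_normalization(width, height):
    """Homography mapping [0,w] x [0,h] pixel coords to [-1,1] x [-1,1]."""
    return np.array([[2.0 / width, 0.0, -1.0],
                     [0.0, 2.0 / height, -1.0],
                     [0.0, 0.0, 1.0]])

# T1 = box_normalization(700, 500); T2 likewise for the second image.
# If F_hat is estimated from (T1 @ x, T2 @ x'), then F = T2.T @ F_hat @ T1,
# since x'^T T2^T F_hat T1 x = 0 for all correspondences.
```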

The singularity constraint

$e'^T F = 0, \quad Fe = 0, \quad \det F = 0, \quad \operatorname{rank} F = 2$

The linearly computed $F$ generally has rank 3. Take its SVD,

$F = U \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3)\, V^T = \sigma_1 U_1 V_1^T + \sigma_2 U_2 V_2^T + \sigma_3 U_3 V_3^T,$

and compute the closest rank-2 approximation, $\min_{F'} \|F - F'\|_F$:

$F' = U \operatorname{diag}(\sigma_1, \sigma_2, 0)\, V^T = \sigma_1 U_1 V_1^T + \sigma_2 U_2 V_2^T.$
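The corresponding NumPy sketch (zeroing σ₃ gives the closest rank-2 matrix in Frobenius norm):

```python
import numpy as np

def enforce_rank2(F):
    """Project F onto the rank-2 matrices: zero the smallest singular value."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```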

F vs. F'

The minimum case: 7 point correspondences

$\begin{bmatrix} x'_1x_1 & x'_1y_1 & x'_1 & y'_1x_1 & y'_1y_1 & y'_1 & x_1 & y_1 & 1 \\ \vdots & & & & & & & & \vdots \\ x'_7x_7 & x'_7y_7 & x'_7 & y'_7x_7 & y'_7y_7 & y'_7 & x_7 & y_7 & 1 \end{bmatrix} f = 0$

$A = U_{7\times7} \operatorname{diag}(\sigma_1, \ldots, \sigma_7, 0, 0)\, V_{9\times9}^T, \qquad A\,[V_8\ V_9] = 0_{9\times2}$

Writing $F_1$, $F_2$ for the matrices built from the null-space vectors $V_8$, $V_9$:

$x_i'^T (F_1 + \lambda F_2)\, x_i = 0, \quad \forall i = 1, \ldots, 7$

gives a one-parameter family of solutions, but $F_1 + \lambda F_2$ is not automatically rank 2.

The minimum case: impose rank 2 ($\sigma_3 = 0$), obtaining 1 or 3 solutions:

$\det(F_1 + \lambda F_2) = a_3 \lambda^3 + a_2 \lambda^2 + a_1 \lambda + a_0 = 0$ (cubic equation)

Using $\det(AB) = \det(A)\det(B)$:

$\det(F_1 + \lambda F_2) = \det F_2 \, \det(F_2^{-1} F_1 + \lambda I) = 0,$

so the possible $\lambda$ can be computed as the eigenvalues of $-F_2^{-1} F_1$ (only real solutions are potential solutions).

Minimal solution for calibrated cameras: 5 points.
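A compact NumPy sketch of the 7-point algorithm (names are ours; the cubic is recovered by interpolating $\det(F_1 + \lambda F_2)$ at four values of $\lambda$ rather than by expanding it symbolically or forming $F_2^{-1}F_1$):

```python
import numpy as np

def seven_point(x1, x2):
    """7-point algorithm sketch: returns 1 or 3 real candidate F matrices."""
    x, y = x1[:, 0], x1[:, 1]
    xp, yp = x2[:, 0], x2[:, 1]
    A = np.column_stack([xp * x, xp * y, xp, yp * x, yp * y, yp,
                         x, y, np.ones(7)])
    _, _, Vt = np.linalg.svd(A)
    F1, F2 = Vt[-2].reshape(3, 3), Vt[-1].reshape(3, 3)   # 2D null space
    # det(F1 + lam*F2) is a cubic in lam: 4 samples determine it exactly
    d = lambda lam: np.linalg.det(F1 + lam * F2)
    coeffs = np.polyfit([-1.0, 0.0, 1.0, 2.0],
                        [d(-1.0), d(0.0), d(1.0), d(2.0)], 3)
    return [F1 + lam.real * F2
            for lam in np.roots(coeffs) if abs(lam.imag) < 1e-9]
```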

Robust estimation: what if the set of matches contains gross outliers? (To keep things simple, let's consider line fitting first.)

RANSAC (RANdom SAmple Consensus)

Objective: robust fit of a model to a data set S which contains outliers.

Algorithm:
i) Randomly select a sample of s data points from S and instantiate the model from this subset.
ii) Determine the set of data points S_i that are within a distance threshold t of the model. S_i is the consensus set of the sample and defines the inliers of S.
iii) If the size of S_i is greater than some threshold T, re-estimate the model using all the points in S_i and terminate.
iv) If the size of S_i is less than T, select a new subset and repeat the above.
v) After N trials the largest consensus set S_i is selected, and the model is re-estimated using all the points in S_i.
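A minimal RANSAC sketch for the line-fitting warm-up, assuming NumPy (the parameter names s, t, T, N mirror the slide; everything else is ours):

```python
import numpy as np

def ransac_line(points, t, s=2, N=1000, T=None, rng=None):
    """Robust line fit to n x 2 points; returns (normal, offset, inlier mask)."""
    rng = np.random.default_rng() if rng is None else rng
    best = np.zeros(len(points), dtype=bool)
    for _ in range(N):
        i, j = rng.choice(len(points), size=s, replace=False)
        d = points[j] - points[i]
        n = np.array([-d[1], d[0]])              # normal to the sample line
        if not np.linalg.norm(n):
            continue                             # degenerate sample
        n /= np.linalg.norm(n)
        c = -n @ points[i]
        inliers = np.abs(points @ n + c) < t     # distance-threshold test
        if inliers.sum() > best.sum():
            best = inliers
            if T is not None and best.sum() >= T:
                break                            # consensus set large enough
    # re-estimate on the consensus set by total least squares
    P = points[best]
    centroid = P.mean(axis=0)
    n = np.linalg.svd(P - centroid)[2][-1]       # normal = smallest sing. vector
    return n, -n @ centroid, best
```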

Distance threshold

Choose t so that the probability of accepting an inlier is α (e.g. 0.95); often chosen empirically. For zero-mean Gaussian noise with standard deviation σ, the squared distance $d_\perp^2$ follows a $\chi^2_m$ distribution with m = codimension of the model (dimension + codimension = dimension of the space):

Codimension   Model     t²
1             line, F   3.84 σ²
2             H, P      5.99 σ²
3             T         7.81 σ²
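The t² values in the table are the 95% quantiles of the $\chi^2_m$ distribution scaled by σ²; they can be reproduced with SciPy (a sketch, assuming scipy is available):

```python
from scipy.stats import chi2

sigma = 1.0   # noise standard deviation (assumed known)
for m, model in [(1, "line, F"), (2, "H, P"), (3, "T")]:
    t2 = chi2.ppf(0.95, df=m) * sigma**2
    print(f"codim {m} ({model}): t^2 = {t2:.2f} sigma^2")  # 3.84, 5.99, 7.81
```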

How many samples?

Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99):

$(1 - (1 - e)^s)^N = 1 - p \quad\Rightarrow\quad N = \log(1 - p) \,/\, \log(1 - (1 - e)^s)$

        proportion of outliers e
s    5%   10%   20%   25%   30%   40%    50%
2     2    3     5     6     7    11     17
3     3    4     7     9    11    19     35
4     3    5     9    13    17    34     72
5     4    6    12    17    26    57    146
6     4    7    16    24    37    97    293
7     4    8    20    33    54   163    588
8     5    9    26    44    78   272   1177

Note: this assumes that a model estimated from inliers allows the other inliers to be identified.
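A one-liner to reproduce the table entries (a sketch; e.g. s = 7, e = 0.5 gives 588):

```python
import math

def num_samples(p, e, s):
    """Samples needed so that, with probability p, at least one is outlier-free."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

print(num_samples(0.99, 0.5, 7))  # -> 588, matching the table
```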

Acceptable consensus set? Typically, terminate when the size of the consensus set reaches the expected number of inliers: $T = (1 - e)\,n$.

Adaptively determining the number of samples

e is often unknown a priori, so pick the worst case (e.g. 50% outliers) and adapt if more inliers are found; e.g. 80% inliers would yield e = 0.2.

N = ∞, sample_count = 0
while N > sample_count:
    choose a sample and count the number of inliers
    set e = 1 − (number of inliers)/(total number of points)
    recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s)
    increment sample_count by 1
terminate
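The loop above translates directly into code; a sketch assuming NumPy, with `fit` and `inliers_of` supplied by the caller (all names are ours):

```python
import math
import numpy as np

def adaptive_ransac(data, fit, inliers_of, s, p=0.99, rng=None):
    """Adaptive RANSAC: re-estimates the required N from the observed
    inlier ratio. Assumes some sample eventually yields inliers."""
    rng = np.random.default_rng() if rng is None else rng
    N, sample_count = float("inf"), 0
    best_model, best_mask = None, np.zeros(len(data), dtype=bool)
    while sample_count < N:
        sample = data[rng.choice(len(data), size=s, replace=False)]
        model = fit(sample)
        mask = inliers_of(model, data)
        if mask.sum() > best_mask.sum():
            best_model, best_mask = model, mask
            e = 1.0 - mask.sum() / len(data)     # current outlier-ratio estimate
            denom = 1.0 - (1.0 - e) ** s
            N = 0 if denom <= 0 else math.log(1 - p) / math.log(denom)
        sample_count += 1
    return best_model, best_mask
```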

Other robust algorithms: RANSAC maximizes the number of inliers; LMedS minimizes the median error. Not recommended: case deletion, iterative least-squares, etc.

Non-linear refinement

Geometric distance:
- Gold standard
- Symmetric epipolar distance

Gold standard: Maximum Likelihood Estimation (= least-squares for Gaussian noise)

$\min \sum_i d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2 \quad \text{subject to} \quad \hat{x}'^T_i F\, \hat{x}_i = 0$

Parameterize: $P = [I \mid 0]$, $P' = [M \mid t]$, $\hat{x}_i = P X_i$, $\hat{x}'_i = P' X_i$ (overparameterized).

Initialize: normalized 8-point, $(P, P')$ from F, reconstruct the $X_i$.

Minimize the cost using Levenberg-Marquardt (preferably sparse LM, e.g. see H&Z).
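A dense (non-sparse) sketch of this refinement using SciPy's Levenberg-Marquardt; the camera pair comes from the canonical form $P' = [[e']_\times F \mid e']$, triangulation is linear DLT, and all function names are ours. The "lm" method needs at least 12 correspondences here, so that residuals outnumber parameters:

```python
import numpy as np
from scipy.optimize import least_squares

def camera_from_F(F):
    """Canonical camera pair for F (H&Z): P1 = [I|0], P2 = [[e']_x F | e']."""
    e2 = np.linalg.svd(F.T)[2][-1]               # left null vector: F^T e' = 0
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    return np.hstack([e2x @ F, e2[:, None]])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of each correspondence; returns n x 3."""
    X = []
    for (u, v), (up, vp) in zip(x1, x2):
        A = np.vstack([u * P1[2] - P1[0], v * P1[2] - P1[1],
                       up * P2[2] - P2[0], vp * P2[2] - P2[1]])
        Xi = np.linalg.svd(A)[2][-1]
        X.append(Xi[:3] / Xi[3])
    return np.asarray(X)

def gold_standard(F0, x1, x2):
    """Refine F by minimizing reprojection error over P2 and the X_i
    (overparameterized; dense LM, not the sparse LM the slide suggests)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = camera_from_F(F0)
    X0 = triangulate(P1, P2, x1, x2)

    def residuals(params):
        P2 = params[:12].reshape(3, 4)
        X = np.hstack([params[12:].reshape(-1, 3), np.ones((len(x1), 1))])
        h1, h2 = X @ P1.T, X @ P2.T
        return np.concatenate([(h1[:, :2] / h1[:, 2:] - x1).ravel(),
                               (h2[:, :2] / h2[:, 2:] - x2).ravel()])

    sol = least_squares(residuals, np.concatenate([P2.ravel(), X0.ravel()]),
                        method="lm")
    M, t = sol.x[:12].reshape(3, 4)[:, :3], sol.x[:12].reshape(3, 4)[:, 3]
    tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
    return tx @ M                                # F = [t]_x M for P1 = [I|0]
```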

Gold standard (continued)

Alternative: a minimal parameterization of F (with a = 1); note that $(x, y, 1)$ and $(x', y', 1)$ are the epipoles.

Problems:
- a = 0: pick the largest of a, b, c, d to fix to 1;
- epipole at infinity: pick the largest of $(x, y, w)$ and of $(x', y', w')$.

4×3×3 = 36 parameterizations! Reparameterize at every iteration, to be safe.

Symmetric epipolar error

$\sum_i d(x'_i, F x_i)^2 + d(x_i, F^T x'_i)^2 = \sum_i (x'^T_i F x_i)^2 \left( \frac{1}{(F x_i)_1^2 + (F x_i)_2^2} + \frac{1}{(F^T x'_i)_1^2 + (F^T x'_i)_2^2} \right)$
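In code, with points as n×3 homogeneous arrays (a sketch; the function name is ours):

```python
import numpy as np

def symmetric_epipolar_error(F, x1, x2):
    """Sum over i of d(x2_i, F x1_i)^2 + d(x1_i, F^T x2_i)^2."""
    Fx1 = x1 @ F.T                 # epipolar lines in image 2
    Ftx2 = x2 @ F                  # epipolar lines in image 1
    num = np.einsum("ij,ij->i", x2, Fx1) ** 2      # (x2^T F x1)^2
    return np.sum(num * (1.0 / (Fx1[:, 0]**2 + Fx1[:, 1]**2)
                       + 1.0 / (Ftx2[:, 0]**2 + Ftx2[:, 1]**2)))
```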

Some experiments (result figures not reproduced). Residual error reported:

$\sum_i d(x'_i, F x_i)^2 + d(x_i, F^T x'_i)^2$ (for all points!)

Recommendations:
1. Do not use unnormalized algorithms.
2. Quick and easy to implement: normalized 8-point.
3. Better: enforce the rank-2 constraint during minimization.
4. Best: Maximum Likelihood Estimation (minimal parameterization, sparse implementation).

Automatic computation of F

Step 1. Extract features.
Step 2. Compute a set of potential matches.
Step 3. do:
    Step 3.1 select a minimal sample (i.e. 7 matches)   } generate hypothesis
    Step 3.2 compute the solution(s) for F
    Step 3.3 determine the inliers                      } verify hypothesis
until Γ(#inliers, #samples) > 95%, where

$\Gamma = 1 - \left(1 - \left(\frac{\#\text{inliers}}{\#\text{matches}}\right)^7\right)^{\#\text{samples}}$

Step 4. Compute F based on all inliers.
Step 5. Look for additional matches.
Step 6. Refine F based on all correct matches.

#inliers    90%   80%   70%   60%   50%
#samples     5    13    35   106   382
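The termination test is cheap to evaluate; a sketch reproducing the table (the function name is ours):

```python
def gamma(n_inliers, n_matches, n_samples, s=7):
    """Confidence that at least one of the 7-point samples was all-inlier."""
    return 1 - (1 - (n_inliers / n_matches) ** s) ** n_samples

print(gamma(50, 100, 382))  # ~0.95: with 50% inliers, 382 samples suffice
```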

Two-view geometry: the geometric relations between two views are fully described by the recovered 3×3 fundamental matrix F.