Multiview Geometry and Bundle Adjustment. CSE P576 David M. Rosen


Recap
Previously: image formation; feature extraction + matching; two-view (epipolar) geometry.
Today: add some geometry, statistics, and optimization, and turn it up from two views to N!

Motivating example: Photogrammetry, the science of measurement using cameras.

Application: Remote Sensing
Mars Reconnaissance Orbiter: launched 12 Aug 2005, entered orbit 10 Mar 2006; ~112-minute orbital period, ~12.8 orbits per (Earth) day.
Sensors: High Resolution Imaging Science Experiment (HiRISE), Context Camera, Mars Color Imager.
Image credit: NASA/JPL

[Image slides omitted. Image credits: NASA/JPL-Caltech/MSSS; NASA/JPL/University of Arizona/USGS; NASA/JPL-Caltech/UA/Kevin M. Gill. Video credit: NASA/JPL/AU/Doug Ellison.]

Application: 3D Reconstruction
Goal: build a 3D model of a scene from a collection of images.
[S. Agarwal et al., Building Rome in a Day, Communications of the ACM, 2011]


Application: Robotics
[Video slides; credit: MIT DRC team]


In this lecture
Photogrammetry: the problem of measurement using imagery.
Maximum-likelihood estimation and bundle adjustment: solving the photogrammetry problem.
Practicalities: problem scale, robust estimation, representation of rotations.

Photogrammetry: The Problem
Given: a collection of images.
Estimate: the 3D positions of the imaged points, the poses of the imaging cameras, and the intrinsic parameters of the imaging cameras.

Photogrammetry: Generative model
Q: How are the variables of estimation (3D point positions $p_j$, camera poses $x_i = (t_i, R_i)$, and camera intrinsics $K_i$) related to the images?
$$u_{ij} = f(x_i, K_i, p_j),$$
where $f$ is the camera projection function (from Lecture 1).

Photogrammetry: Estimation procedure
Main idea: given a set of images,
1. Extract features
2. Match features (identify the set of 3D points)
3. Estimate parameters so that $u_{ij} = f(x_i, K_i, p_j)$ for all point projections $u_{ij}$.

The problem of measurement noise
We want to find $x_i, K_i, p_j$ so that $u_{ij} = f(x_i, K_i, p_j)$ (i.e. our model matches the data) for all $u_{ij}$.
But: all real-world measurements have errors. What we actually measure is
$$u_{ij} = f(x_i, K_i, p_j) + \varepsilon_{ij}.$$
We cannot find parameters $x_i, K_i, p_j$ that fit the measured projections $u_{ij}$ exactly.

Example: Linear regression [plot]

Maximum likelihood estimation
MLE is a method for fitting parameters $\theta$ to noisy data $y = (y_1, \ldots, y_n)$, given a sampling model $y \sim p(\cdot \mid \theta)$.
Basic idea: choose the $\theta$ that best fits the data $y$. But: how can we measure goodness of fit?
Likelihood function: $L(\theta) \triangleq p(y \mid \theta)$, which measures how likely the data $y$ is for each choice of $\theta$.
MLE principle: best fit = maximum likelihood. Pick the $\theta$ that maximizes the likelihood of the data $y$:
$$\hat{\theta} = \operatorname{argmax}_{\theta} \; L(\theta).$$

Example: Regression under Gaussian noise
Consider fitting a function $f(\cdot\,; \theta)$ to data $D = \{(x_i, y_i)\}_{i=1}^{N}$ under the model
$$y_i = f(x_i; \theta) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \Sigma_i).$$
For each choice of $\theta$ and each $(x_i, y_i)$, the error is $\varepsilon_i = y_i - f(x_i; \theta)$, with pdf
$$p(\varepsilon_i) = \det(2\pi\Sigma_i)^{-1/2} \exp\!\left(-\tfrac{1}{2}\, \varepsilon_i^T \Sigma_i^{-1} \varepsilon_i\right).$$
The likelihood of the $i$th data point $(x_i, y_i)$ is
$$p(x_i, y_i \mid \theta) = \det(2\pi\Sigma_i)^{-1/2} \exp\!\left(-\tfrac{1}{2}\, (y_i - f(x_i; \theta))^T \Sigma_i^{-1} (y_i - f(x_i; \theta))\right),$$
and the likelihood for the entire dataset $D$ is
$$L(D \mid \theta) \propto \exp\!\left(-\tfrac{1}{2} \sum_{i=1}^{N} (y_i - f(x_i; \theta))^T \Sigma_i^{-1} (y_i - f(x_i; \theta))\right).$$

Example: Regression under Gaussian noise (continued)
Taking the logarithm of the dataset likelihood:
$$\log L(D \mid \theta) = c - \frac{1}{2} \sum_{i=1}^{N} \| y_i - f(x_i; \theta) \|_{\Sigma_i}^2,$$
where $\| v \|_{\Sigma}^2 \triangleq v^T \Sigma^{-1} v$ is the squared Mahalanobis norm. MLE under additive Gaussian noise is therefore a nonlinear least-squares problem:
$$\hat{\theta} = \operatorname{argmin}_{\theta} \sum_{i=1}^{N} \| y_i - f(x_i; \theta) \|_{\Sigma_i}^2.$$
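To make this concrete, here is a minimal sketch of Gaussian MLE as (whitened) nonlinear least squares, using scipy.optimize.least_squares on a hypothetical scalar model; all numerical values are illustrative assumptions, not from the lecture.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical scalar model f(x; theta) = theta0 * exp(theta1 * x).
def f(x, theta):
    return theta[0] * np.exp(theta[1] * x)

def whitened_residuals(theta, x, y, sigma):
    # Dividing by sigma whitens the noise, so the squared 2-norm of this
    # vector equals sum_i ||y_i - f(x_i; theta)||^2_{Sigma_i} when
    # Sigma_i = sigma^2.
    return (y - f(x, theta)) / sigma

# Simulated data under the Gaussian model (assumed values).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
sigma = 0.1
y = f(x, [2.0, -0.8]) + rng.normal(0.0, sigma, x.size)

# Minimizing the sum of squared whitened residuals is exactly the MLE.
fit = least_squares(whitened_residuals, x0=[1.0, 0.0], args=(x, y, sigma))
print(fit.x)  # should be close to (2.0, -0.8)
```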

Exercise: Linear regression
Consider fitting a linear function to the following data under the model $y_i = a x_i + b + \varepsilon_i$, $\varepsilon_i \sim N(0, 0.35^2)$:

x: 4.75, 5.30, 5.20, 2.75, 4.59, 1.20, 4.94, 0.22, 1.94, 0.32, 0.68, 5.76
y: 9.39, 9.95, 10.52, 5.27, 8.96, 3.15, 9.71, 1.69, 4.19, 1.74, 2.23, 11.22

Exercise: Linear regression (solution)
Model: $y_i = a x_i + b$. Estimated: $a = 1.73$, $b = 1.06$. True: $a = 1.70$, $b = 1.20$.
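As a quick check, with iid noise ($\Sigma_i = 0.35^2$) the MLE reduces to ordinary least squares, and the exercise can be reproduced in a few lines of NumPy (data taken from the table above):

```python
import numpy as np

x = np.array([4.75, 5.30, 5.20, 2.75, 4.59, 1.20,
              4.94, 0.22, 1.94, 0.32, 0.68, 5.76])
y = np.array([9.39, 9.95, 10.52, 5.27, 8.96, 3.15,
              9.71, 1.69, 4.19, 1.74, 2.23, 11.22])

A = np.column_stack([x, np.ones_like(x)])      # design matrix for a*x + b
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)  # the slide reports a = 1.73, b = 1.06
```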

Bundle adjustment
Recall: given a set of point projections $u_{ij}$, we want to estimate the 3D point positions $p_j$, the camera poses $x_i$, and the camera intrinsics $K_i$.
Assuming the measurement model
$$u_{ij} = f(x_i, K_i, p_j) + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \Sigma_{ij}),$$
the maximum-likelihood estimate is
$$(\hat{x}_i, \hat{K}_i, \hat{p}_j) = \operatorname{argmin}_{x_i, K_i, p_j} \sum_{i,j} \| u_{ij} - f(x_i, K_i, p_j) \|_{\Sigma_{ij}}^2:$$
minimize the (weighted) reprojection error.
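A minimal sketch of how this objective can be set up and minimized numerically is shown below. To keep it short, it assumes a bare pinhole camera with one fixed focal length instead of full intrinsics $K_i$, identity noise covariances, and axis-angle rotations (Rodrigues' formula, covered later in this lecture); a production bundle adjuster would use analytic Jacobians and sparse solvers.

```python
import numpy as np
from scipy.optimize import least_squares

FOCAL = 500.0  # assumed fixed focal length

def rodrigues(theta):
    """Rotation matrix for the axis-angle vector theta."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    e = theta / angle
    E = np.array([[0.0, -e[2], e[1]],
                  [e[2], 0.0, -e[0]],
                  [-e[1], e[0], 0.0]])
    return np.eye(3) + np.sin(angle) * E + (1.0 - np.cos(angle)) * (E @ E)

def project(pose, p):
    """Pinhole projection u = f(x_i, p_j); pose = (theta, t)."""
    x_cam = rodrigues(pose[:3]) @ p + pose[3:]
    return FOCAL * x_cam[:2] / x_cam[2]

def ba_residuals(params, n_cams, n_pts, observations):
    # params packs 6 pose values per camera, then 3 coordinates per point.
    poses = params[:6 * n_cams].reshape(n_cams, 6)
    points = params[6 * n_cams:].reshape(n_pts, 3)
    # observations: list of (camera index i, point index j, measured u_ij)
    return np.concatenate([project(poses[i], points[j]) - u
                           for i, j, u in observations])

# Tiny synthetic instance: 3 cameras, 8 points, perfect observations.
rng = np.random.default_rng(1)
true_poses = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                       [0.0, 0.1, 0.0, -1.0, 0.0, 0.0],
                       [0.1, 0.0, 0.0, 0.0, -1.0, 0.0]])
true_points = rng.uniform(-1.0, 1.0, (8, 3)) + [0.0, 0.0, 8.0]
observations = [(i, j, project(true_poses[i], true_points[j]))
                for i in range(3) for j in range(8)]

# Start from perturbed points; for real-scale problems, exploit the
# Jacobian sparsity via least_squares' jac_sparsity option.
x0 = np.concatenate([true_poses.ravel(), (true_points + 0.1).ravel()])
fit = least_squares(ba_residuals, x0, args=(3, 8, observations))
print(fit.cost)  # should drive the reprojection error to ~0
```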

Photogrammetry and Bundle Adjustment
Given a set of images:
1. Extract features
2. Match features (identify 3D points)
3. Bundle adjust (minimize reprojection error):
$$(\hat{x}_i, \hat{K}_i, \hat{p}_j) = \operatorname{argmin}_{x_i, K_i, p_j} \sum_{i,j} \| u_{ij} - f(x_i, K_i, p_j) \|_{\Sigma_{ij}}^2$$

Special Case: Perspective-n-Point (PnP)
Given: known point positions $p_j$ and known camera intrinsics $K$.
Estimate: the camera pose $x = (R, t)$:
$$\hat{x} = \operatorname{argmin}_{x} \sum_{j=1}^{N} \| u_j - f(x, K, p_j) \|_{\Sigma_j}^2$$
Image credit: OpenCV
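OpenCV exposes this special case directly as cv2.solvePnP. A sketch on synthetic data (the intrinsics and ground-truth pose below are assumed values, used only to generate consistent image points):

```python
import numpy as np
import cv2

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
rvec_true = np.array([0.1, -0.2, 0.05])   # axis-angle rotation vector
tvec_true = np.array([0.3, -0.1, 5.0])

# Project known 3D points through the ground-truth pose...
object_points = np.random.uniform(-1.0, 1.0, (8, 3))
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# ...then recover that pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(rvec.ravel(), tvec.ravel())  # should recover rvec_true, tvec_true
```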

Special Case: Camera calibration
Given: known point positions $p_j$ (on a calibration object).
Estimate: camera poses $x_i$ and intrinsic parameters $K$:
$$(\hat{x}_i, \hat{K}) = \operatorname{argmin}_{x_i, K} \sum_{i,j} \| u_{ij} - f(x_i, K, p_j) \|_{\Sigma_{ij}}^2$$
Image credit: OpenCV
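Likewise, cv2.calibrateCamera solves this special case. A synthetic sketch with an assumed ground-truth K and poses; in practice the image points would come from a corner detector such as cv2.findChessboardCorners:

```python
import numpy as np
import cv2

K_true = np.array([[500., 0., 320.],
                   [0., 500., 240.],
                   [0., 0., 1.]])
# Planar calibration grid (z = 0), as on a checkerboard.
grid = np.array([[i, j, 0.0] for i in range(6) for j in range(6)],
                dtype=np.float32)

# Synthesize several views by projecting the grid through assumed poses.
objpoints, imgpoints = [], []
for rvec, tvec in [((0.1, 0.2, 0.0), (0.0, 0.0, 8.0)),
                   ((-0.3, 0.1, 0.1), (1.0, 0.0, 9.0)),
                   ((0.2, -0.2, 0.05), (0.0, 1.0, 7.0))]:
    uv, _ = cv2.projectPoints(grid, np.array(rvec), np.array(tvec), K_true, None)
    objpoints.append(grid)
    imgpoints.append(uv.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (640, 480), None, None)
print(K)  # should be close to K_true
```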

Special Case: Stereovision
Given: a pair of cameras with known poses $x_1, x_2$ and known intrinsics $K_1, K_2$.
Estimate: 3D point positions $p_j$, independently for each $j$:
$$\hat{p}_j = \operatorname{argmin}_{p_j} \| u_{1j} - f(x_1, K_1, p_j) \|_{\Sigma_{1j}}^2 + \| u_{2j} - f(x_2, K_2, p_j) \|_{\Sigma_{2j}}^2$$
Image credit: MIT Space Systems Laboratory
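OpenCV's cv2.triangulatePoints solves a linear (DLT) version of this per-point problem, which is a common initialization for the ML objective above. A sketch with assumed camera matrices $P_i = K [R_i \mid t_i]$:

```python
import numpy as np
import cv2

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # camera 2, offset baseline

p_true = np.array([0.5, 0.2, 10.0, 1.0])   # homogeneous 3D point
u1 = (P1 @ p_true)[:2] / (P1 @ p_true)[2]  # projections into each image
u2 = (P2 @ p_true)[:2] / (P2 @ p_true)[2]

p_h = cv2.triangulatePoints(P1, P2, u1.reshape(2, 1), u2.reshape(2, 1))
print((p_h[:3] / p_h[3]).ravel())          # ~ [0.5, 0.2, 10.0]
```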

Practicality: Problem Scale
Piazza San Marco reconstruction: ~14,000 images, ~4.5 million points.
Assuming each point is observed 20 times, what is the size of the BA problem?

Practicality: Problem Scale
Camera variables (zero skew, equal pixel scaling): 14,000 × (6 + 3) = 126,000
Point variables: 4,500,000 × 3 = 13,500,000
Point observations: 4,500,000 × 20 × 2 = 180,000,000
Totals: a 13,626,000-dimensional state vector and a 180,000,000-dimensional residual vector. This is a huge optimization problem!
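The counts above are easy to sanity-check:

```python
# Back-of-envelope size of the Piazza San Marco BA problem.
n_cams, n_pts, views_per_pt = 14_000, 4_500_000, 20
state_dim = n_cams * (6 + 3) + n_pts * 3   # poses + intrinsics + points
residual_dim = n_pts * views_per_pt * 2    # one 2D reprojection error each
print(state_dim, residual_dim)             # 13626000 180000000
```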

Practicality: Feature mismatches
Recall: we construct the bundle adjustment problem
$$(\hat{x}_i, \hat{K}_i, \hat{p}_j) = \operatorname{argmin}_{x_i, K_i, p_j} \sum_{i,j} \| u_{ij} - f(x_i, K_i, p_j) \|_{\Sigma_{ij}}^2$$
using estimated feature matches. But: what happens if these are mis-estimated?
Image credit: Tian Zhou

Example: Contaminated linear regression
Consider fitting the linear model $y_i = a x_i + b + \varepsilon_i$, $\varepsilon_i \sim N(0, 0.35^2)$ to a contaminated data set:

x: 4.75, 5.30, 5.20, 2.75, 4.59, 1.20, 4.94, 0.22, 1.94, 0.32, 0.68, 5.76, 5.70
y: 9.39, 9.95, 10.52, 5.27, 8.96, 3.15, 9.71, 1.69, 4.19, 1.74, 2.23, 11.22, 2.10

Exercise: Contaminated linear regression (solution)
Model: $y_i = a x_i + b$. Estimated: $a = 1.37$, $b = 1.58$. True: $a = 1.70$, $b = 1.20$. The single outlier pulls the fit well away from the truth.

The problem of outliers
Recall: we assumed additive Gaussian image noise,
$$u_{ij} = f(x_i, K_i, p_j) + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \Sigma_{ij}),$$
and obtained a nonlinear least-squares problem:
$$(\hat{x}_i, \hat{K}_i, \hat{p}_j) = \operatorname{argmin}_{x_i, K_i, p_j} \sum_{i,j} \| u_{ij} - f(x_i, K_i, p_j) \|_{\Sigma_{ij}}^2$$
NB: the quadratic loss weights extreme errors very heavily. This estimator is not robust!

Robust loss functions
One solution: replace the quadratic loss with a function that attenuates gross errors.
Tradeoff: more robust to outliers, but (slightly) less statistical power.
Examples: Quadratic, Huber, Cauchy, Geman-McClure.

Example: Contaminated linear regression, least-squares loss vs. Huber loss [comparison plots]
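This comparison is easy to reproduce: scipy.optimize.least_squares accepts robust losses (including 'huber' and 'cauchy') directly. A sketch using the contaminated data from the exercise above:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.array([4.75, 5.30, 5.20, 2.75, 4.59, 1.20, 4.94,
              0.22, 1.94, 0.32, 0.68, 5.76, 5.70])
y = np.array([9.39, 9.95, 10.52, 5.27, 8.96, 3.15, 9.71,
              1.69, 4.19, 1.74, 2.23, 11.22, 2.10])

def residuals(theta):
    a, b = theta
    return (y - (a * x + b)) / 0.35   # whitened residuals

fit_quadratic = least_squares(residuals, x0=[1.0, 0.0])
fit_huber = least_squares(residuals, x0=[1.0, 0.0], loss='huber')
print(fit_quadratic.x)  # pulled toward the outlier (slide: a = 1.37, b = 1.58)
print(fit_huber.x)      # should be close to the true (1.70, 1.20)
```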

Photogrammetry and Bundle Adjustment: Summary
Given a set of images:
1. Extract features
2. Match features (identify 3D points)
3. Bundle adjust using a robust loss function $\rho$:
$$(\hat{x}_i, \hat{K}_i, \hat{p}_j) = \operatorname{argmin}_{x_i, K_i, p_j} \sum_{i,j} \rho\!\left( \| u_{ij} - f(x_i, K_i, p_j) \|_{\Sigma_{ij}} \right)$$

Practicality: Representing rotations
So far, we have represented rotations using rotation matrices:
$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}, \qquad R^T R = I_3, \quad \det R = +1.$$
Pro: trivial point operations.
Con: over-parameterized, and not super convenient for optimization (requires constraints).

Euler's Rotation Theorem
Theorem: every rotation of 3D space has a fixed axis $e$.
We can therefore describe a rotation using an axis (a unit vector $e$) and a (right-handed) rotation angle $\theta$. This is the axis-angle parameterization of rotations. We can also combine these into a single axis-angle vector $\boldsymbol{\theta} = \theta e$.

Rodrigues' Formula
How does a rotation parameterized as $\boldsymbol{\theta} = \theta e$ act on points? Rodrigues' formula: given a vector $v$,
$$v_{\mathrm{rot}} = v \cos\theta + \sin\theta \,(e \times v) + (1 - \cos\theta)(e \cdot v)\, e.$$
NB: any 3D vector $\boldsymbol{\theta}$ determines a valid rotation, and Rodrigues' formula is differentiable in $\boldsymbol{\theta}$. Axis-angle is much more convenient for use in optimization!

Rodrigues' Formula (matrix form)
The same rotation can be written as $v_{\mathrm{rot}} = R(\boldsymbol{\theta})\, v$, where
$$R(\boldsymbol{\theta}) = I + \sin\theta\, E + (1 - \cos\theta)\, E^2, \qquad E = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}.$$
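Both forms are a few lines of NumPy. A sketch, transcribing the two formulas above directly:

```python
import numpy as np

def rotate(theta_vec, v):
    """Rodrigues' formula: rotate v by the axis-angle vector theta_vec."""
    angle = np.linalg.norm(theta_vec)
    if angle < 1e-12:
        return np.array(v, dtype=float)   # near the identity rotation
    e = theta_vec / angle
    return (np.cos(angle) * v
            + np.sin(angle) * np.cross(e, v)
            + (1.0 - np.cos(angle)) * np.dot(e, v) * e)

def rotation_matrix(theta_vec):
    """Matrix form: R = I + sin(angle) E + (1 - cos(angle)) E^2."""
    angle = np.linalg.norm(theta_vec)
    if angle < 1e-12:
        return np.eye(3)
    e = theta_vec / angle
    E = np.array([[0.0, -e[2], e[1]],
                  [e[2], 0.0, -e[0]],
                  [-e[1], e[0], 0.0]])
    return np.eye(3) + np.sin(angle) * E + (1.0 - np.cos(angle)) * (E @ E)

theta = np.array([0.0, 0.0, np.pi / 2])   # 90 degrees about the z-axis
v = np.array([1.0, 0.0, 0.0])
print(rotate(theta, v))                   # ~ [0, 1, 0]
print(rotation_matrix(theta) @ v)         # same result
```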

Photogrammetry and Bundle Adjustment: Summary
Given a set of images: extract features, match features (identify 3D points), and bundle adjust using a robust loss function $\rho$, as in the summary above.