CS4495/6495 Introduction to Computer Vision. 3C-L3 Calibrating cameras

Finally (last time): Camera parameters. Projection equation, the cumulative effect of all parameters, collected into the 3x4 matrix M:

$$
M = \underbrace{\begin{bmatrix} f & s & x'_c \\ 0 & af & y'_c \\ 0 & 0 & 1 \end{bmatrix}}_{\text{intrinsics}}
\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{\text{projection}}
\underbrace{\begin{bmatrix} R_{3\times3} & 0_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}}_{\text{rotation}}
\underbrace{\begin{bmatrix} I_{3\times3} & T_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}}_{\text{translation}}
$$

Finally (last time): Camera parameters. Projection equation, the cumulative effect of all parameters:

$$
\begin{bmatrix} sx \\ sy \\ s \end{bmatrix} =
\begin{bmatrix} * & * & * & * \\ * & * & * & * \\ * & * & * & * \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad x = M\,X
$$
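To make the equation concrete, here is a minimal Python/numpy sketch, with made-up parameter values (not from the lecture), that composes M from the four factors above and projects one homogeneous 3D point, recovering the image coordinates by dividing out the scale s:

```python
import numpy as np

# Hypothetical intrinsics: focal length f, skew, aspect a, center (x'_c, y'_c).
f, a, skew = 500.0, 1.0, 0.0
xc, yc = 320.0, 240.0
K = np.array([[f, skew, xc],
              [0.0, a * f, yc],
              [0.0, 0.0, 1.0]])                          # intrinsics (3x3)

proj = np.hstack([np.eye(3), np.zeros((3, 1))])          # canonical projection (3x4)

R = np.eye(3)                                            # rotation (identity here)
T = np.array([[0.0], [0.0], [-5.0]])                     # translation (3x1)
rot = np.block([[R, np.zeros((3, 1))],
                [np.zeros((1, 3)), np.ones((1, 1))]])    # rotation factor (4x4)
trans = np.block([[np.eye(3), T],
                  [np.zeros((1, 3)), np.ones((1, 1))]])  # translation factor (4x4)

M = K @ proj @ rot @ trans                               # cumulative 3x4 matrix

X = np.array([1.0, 2.0, 10.0, 1.0])                      # homogeneous (X, Y, Z, 1)
sx, sy, s = M @ X                                        # [sx, sy, s]^T = M X
print(sx / s, sy / s)                                    # the projected image point
```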

Calibration. How to determine M?

Calibration using known points. Place a known object in the scene, identify the correspondences between image and scene, and compute the mapping from scene to image.

Resectioning: estimating the camera matrix from known 3D points. Projective camera matrix: $p = K[R \; t]\,P = M\,P$:

$$
\begin{bmatrix} wu \\ wv \\ w \end{bmatrix} =
\begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$

Direct linear calibration - homogeneous. One pair of equations for each point:

$$
\begin{bmatrix} wu_i \\ wv_i \\ w \end{bmatrix} =
\begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03} \\ m_{10} & m_{11} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{22} & m_{23} \end{bmatrix}
\begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}
$$

$$
u_i = \frac{m_{00}X_i + m_{01}Y_i + m_{02}Z_i + m_{03}}{m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}},
\qquad
v_i = \frac{m_{10}X_i + m_{11}Y_i + m_{12}Z_i + m_{13}}{m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}}
$$

Direct linear calibration - homogeneous. One pair of equations for each point; multiplying through by the denominator:

$$
u_i\,(m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}) = m_{00}X_i + m_{01}Y_i + m_{02}Z_i + m_{03}
$$

$$
v_i\,(m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}) = m_{10}X_i + m_{11}Y_i + m_{12}Z_i + m_{13}
$$

Direct linear calibration - homogeneous. One pair of equations for each point, rewritten as two rows of a linear system in the twelve unknowns $m_{00}, \dots, m_{23}$:

$$
\begin{bmatrix}
X_i & Y_i & Z_i & 1 & 0 & 0 & 0 & 0 & -u_iX_i & -u_iY_i & -u_iZ_i & -u_i \\
0 & 0 & 0 & 0 & X_i & Y_i & Z_i & 1 & -v_iX_i & -v_iY_i & -v_iZ_i & -v_i
\end{bmatrix}
\begin{bmatrix} m_{00} \\ m_{01} \\ \vdots \\ m_{23} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
$$

Direct linear calibration - homogeneous. This is a homogeneous set of equations. When overconstrained, it defines a least squares problem: minimize $\|Am\|$. Since $m$ is only defined up to scale, solve for a unit vector $m^*$. Solution: $m^*$ is the eigenvector of $A^TA$ with the smallest eigenvalue. Works with 6 or more points.

Direct linear calibration - homogeneous. Stacking the two rows for each of the $n$ points:

$$
\underbrace{\begin{bmatrix}
X_1 & Y_1 & Z_1 & 1 & 0 & 0 & 0 & 0 & -u_1X_1 & -u_1Y_1 & -u_1Z_1 & -u_1 \\
0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 1 & -v_1X_1 & -v_1Y_1 & -v_1Z_1 & -v_1 \\
 & & & & & \vdots & & & & & & \\
X_n & Y_n & Z_n & 1 & 0 & 0 & 0 & 0 & -u_nX_n & -u_nY_n & -u_nZ_n & -u_n \\
0 & 0 & 0 & 0 & X_n & Y_n & Z_n & 1 & -v_nX_n & -v_nY_n & -v_nZ_n & -v_n
\end{bmatrix}}_{A \;(2n \times 12)}
\underbrace{\begin{bmatrix} m_{00} \\ m_{01} \\ \vdots \\ m_{23} \end{bmatrix}}_{m \;(12 \times 1)}
= \underbrace{\mathbf{0}}_{(2n \times 1)}
$$

The SVD (singular value decomposition) trick. Find the $m$ that minimizes $\|Am\|$ subject to $\|m\| = 1$. Let $A = UDV^T$ (the singular value decomposition; $D$ diagonal, $U$ and $V$ orthogonal). We are therefore minimizing $\|UDV^Tm\|$. But $\|UDV^Tm\| = \|DV^Tm\|$ and $\|m\| = \|V^Tm\|$, since multiplying by an orthogonal matrix preserves the norm. Thus: minimize $\|DV^Tm\|$ subject to $\|V^Tm\| = 1$.

The SVD (singular value decomposition) trick. Thus: minimize $\|DV^Tm\|$ subject to $\|V^Tm\| = 1$. Let $y = V^Tm$; now minimize $\|Dy\|$ subject to $\|y\| = 1$. But $D$ is diagonal, with decreasing values, so $\|Dy\|$ is minimized when $y = (0, 0, \dots, 0, 1)^T$. Since $y = V^Tm$ and $V$ is orthogonal, $m = Vy$. Thus $m = Vy$ is the last column of $V$.

The SVD (singular value decomposition) trick. Thus $m = Vy$ is the last column of $V$. And the singular values of $A$ are the square roots of the eigenvalues of $A^TA$, and the columns of $V$ are the eigenvectors. (Show this? Nah.) Recap: given $Am = 0$, find the eigenvector of $A^TA$ with the smallest eigenvalue; that is $m$.
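The whole homogeneous recipe fits in a few lines of numpy. A minimal sketch (the function name is mine, not the lecture's): build the $2n \times 12$ matrix $A$ from the correspondences and take the right singular vector of the smallest singular value as $m$:

```python
import numpy as np

def calibrate_dlt(pts3d, pts2d):
    """Direct linear calibration, homogeneous version: stack two rows
    per correspondence into A (2n x 12), then apply the SVD trick.
    Needs 6 or more points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.array(rows)
    # numpy returns singular values in decreasing order, so the last row
    # of Vt is the last column of V from the slides.
    _, _, Vt = np.linalg.svd(A)
    m = Vt[-1]                       # unit-norm minimizer of ||Am||
    return m.reshape(3, 4)           # M, defined only up to scale
```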

Direct linear calibration - inhomogeneous. Another approach: put a 1 in the lower right-hand corner, leaving 11 degrees of freedom:

$$
\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} =
\begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03} \\ m_{10} & m_{11} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{22} & 1 \end{bmatrix}
\begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}
$$

Direct linear calibration - inhomogeneous. Dangerous if $m_{23}$ is really (near) zero!
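For contrast, a hedged sketch of the inhomogeneous variant (again, the function name is an assumption): fixing $m_{23} = 1$ turns the system into an ordinary, inhomogeneous least squares problem in the remaining 11 unknowns:

```python
import numpy as np

def calibrate_inhomogeneous(pts3d, pts2d):
    """Solve for the 11 unknowns with m23 fixed to 1. As noted above,
    this goes badly wrong when the true m23 is (near) zero."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # u*(m20 X + m21 Y + m22 Z) moves to the left-hand side; the
        # fixed m23 = 1 leaves u on the right (and likewise for v).
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        b.append(v)
    m11, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(m11, 1.0).reshape(3, 4)   # put the 1 back in the corner
```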

Direct linear calibration (transformation). Advantages: Very simple to formulate and solve; it can be done, say, on a problem set. These methods are referred to as algebraic error minimization.

Direct linear calibration (transformation). Disadvantages: Doesn't directly tell you the camera parameters (more in a bit). Approximate: e.g., it doesn't model radial distortion. Hard to impose constraints (e.g., known focal length). Mostly: it doesn't minimize the right error function.

Direct linear calibration (transformation). For these reasons, we prefer nonlinear methods: Define an error function E between the projected 3D points and the image positions; E is a nonlinear function of the intrinsics, extrinsics, and radial distortion. Minimize E using nonlinear optimization techniques, e.g., variants of Newton's method (such as Levenberg-Marquardt).

Geometric error. Minimize the distance between the measured image locations $x_i$ and the predicted image locations $\hat{x}_i = MX_i$:

$$
\min_M E = \sum_i d(x_i, \hat{x}_i) = \min_M \sum_i d(x_i, MX_i)
$$
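As one illustration, the sketch below minimizes this geometric (reprojection) error over the 12 entries of M directly, starting from a linear estimate, using SciPy's Levenberg-Marquardt solver. A full calibration would instead parameterize E by intrinsics, extrinsics, and distortion, as noted above; the helper names here are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(m, pts3d, pts2d):
    """Per-point residuals between observed and predicted image locations."""
    M = m.reshape(3, 4)
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous 3D points
    proj = Xh @ M.T                                    # rows are (sx, sy, s)
    pred = proj[:, :2] / proj[:, 2:3]                  # perspective divide
    return (pred - pts2d).ravel()

def refine_geometric(M0, pts3d, pts2d):
    # Levenberg-Marquardt ('lm') needs at least as many residuals as
    # unknowns: 2n >= 12, i.e. the same 6-point minimum as before.
    res = least_squares(reprojection_residuals, M0.ravel(),
                        args=(np.asarray(pts3d), np.asarray(pts2d)),
                        method='lm')
    return res.x.reshape(3, 4)
```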

Gold Standard algorithm (Hartley and Zisserman). Objective: Given $n \ge 6$ 3D-to-2D point correspondences $\{X_i \leftrightarrow x_i\}$, determine the Maximum Likelihood Estimate of M.

Gold Standard algorithm (Hartley and Zisserman). Algorithm: (i) Linear solution: (a) (Optional) Normalization: $\tilde{X}_i = UX_i$, $\tilde{x}_i = Tx_i$; (b) Direct Linear Transformation minimization. (ii) Minimize geometric error: using the linear estimate as a starting point, minimize the geometric error $\min_{\tilde{M}} \sum_i d(\tilde{x}_i, \tilde{M}\tilde{X}_i)$.

Gold Standard algorithm (Hartley and Zisserman). (iii) Denormalization: $M = T^{-1}\tilde{M}U$.
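The slides don't spell out the normalizing transforms; a common choice, following Hartley and Zisserman, is a similarity transform that centers each point set on the origin and scales its RMS distance to $\sqrt{2}$ (2D, giving T) or $\sqrt{3}$ (3D, giving U). A sketch under that assumption:

```python
import numpy as np

def normalizing_transform(pts, target_rms):
    """Similarity transform that moves the centroid of pts to the origin
    and scales the RMS distance from the origin to target_rms.
    Use sqrt(2) on the image points x_i (T), sqrt(3) on the 3D points X_i (U)."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    rms = np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())
    scale = target_rms / rms
    d = pts.shape[1]                  # 2 for image points, 3 for scene points
    T = np.eye(d + 1)
    T[:d, :d] *= scale
    T[:d, d] = -scale * centroid
    return T                          # apply to homogeneous points: p' = T p
```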

Finding the 3D camera center from M. M encodes all the parameters, so we should be able to find things like the camera center from M. Two ways: a pure way and an easy way.

Finding the 3D camera center from M. Slight change in notation. Let M be the 3x4 matrix and b its last column, so M = [Q | b]. The center C is the null space of the camera projection matrix. So if we find a C such that MC = 0, that will be the center.

Finding the 3D camera center from M. Proof: Let X be somewhere between any point P and C:

$$ X = \lambda P + (1 - \lambda)C $$

Finding the 3D camera center from M. Proof: Let X be somewhere between any point P and C, and take the projection:

$$ X = \lambda P + (1 - \lambda)C, \qquad x = MX = \lambda MP + (1 - \lambda)MC $$

Finding the 3D camera center from M. Proof: Let X be somewhere between any point P and C, and take the projection:

$$ X = \lambda P + (1 - \lambda)C, \qquad x = MX = \lambda MP + (1 - \lambda)MC $$

For any P, all points on the ray PC project onto the image of P; therefore MC must be zero. So the camera center has to be in the null space.

Finding the 3D camera center from M. Now the easy way. A formula! If M = [Q | b], then:

$$ C = \begin{pmatrix} -Q^{-1}b \\ 1 \end{pmatrix} $$
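Both routes take only a few lines of numpy; this sketch computes the center the pure way (the null space of M, via SVD) and the easy way (the formula above). The two should agree up to numerical precision:

```python
import numpy as np

def camera_center(M):
    # Pure way: the right singular vector for the smallest singular
    # value of M is the (homogeneous) null vector C with MC = 0.
    _, _, Vt = np.linalg.svd(M)
    C_h = Vt[-1]
    C_null = C_h[:3] / C_h[3]             # de-homogenize

    # Easy way: M = [Q | b] gives C = -Q^{-1} b.
    Q, b = M[:, :3], M[:, 3]
    C_formula = -np.linalg.inv(Q) @ b

    return C_null, C_formula
```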

Alternative: multi-plane calibration. Images courtesy Jean-Yves Bouguet, Intel Corp.

Alternative: multi-plane calibration. Advantages: Only requires a plane; you don't have to know the positions/orientations; good code is available online! OpenCV library. Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html Zhengyou Zhang's web site: http://research.microsoft.com/~zhang/calib/
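To close the loop, a minimal sketch of multi-plane (checkerboard) calibration with the OpenCV library mentioned above; the image file names and board size are assumptions for illustration:

```python
import numpy as np
import cv2

board = (9, 6)                               # inner corners per row and column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # plane at Z = 0

obj_pts, img_pts = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:  # hypothetical images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:                                # plane pose per view is recovered for us
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the RMS reprojection error, intrinsics K, distortion coefficients,
# and the per-view extrinsics (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```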