Introduction to pinhole cameras

Introduction to pinhole cameras

Lesson given by Sébastien Piérard in the course "Introduction aux techniques audio et vidéo" (ULg, Pr JJ Embrechts)
INTELSIG, Montefiore Institute, University of Liège, Belgium
October 30, 2013
1 / 21

Optics: light travels along straight line segments

Light travels along straight lines if one assumes a single material (air, glass, etc.) and a single frequency. Otherwise, the rays are bent by refraction and dispersion.

Optics: behavior of a lens

Under the Gauss assumptions, all light rays coming from the same direction converge to a unique point on the focal plane.

Optics: an aperture

Optics: a pinhole camera

A pinhole camera is a camera with a very small aperture. The lens becomes completely useless: the camera is just a small hole. Long exposure times are needed because of the limited amount of light received.

What is a camera?

A camera is a function mapping a point in the 3D world to a pixel:

    (x y z) → (x y z)^(c) → (u v)^(f) → (u v)

- (x y z): in the world 3D Cartesian coordinate system
- (x y z)^(c): in the 3D Cartesian coordinate system located at the camera's optical center (the hole), with the axis z^(c) along its optical axis and the axis y^(c) pointing upwards
- (u v)^(f): in the 2D Cartesian coordinate system located in the focal plane
- (u v): in the 2D coordinate system of the screen or image

From (x y z) to (x y z)^(c)

[figure: world frame (x, y, z) and camera frame (x^(c), y^(c), z^(c)), related by a rotation R and a translation t]

    [x^(c)]             [x]   [t_x]
    [y^(c)] = R_{3×3} · [y] + [t_y]
    [z^(c)]             [z]   [t_z]

From (x y z) to (x y z)^(c)

    [x^(c)]             [x]   [t_x]
    [y^(c)] = R_{3×3} · [y] + [t_y]
    [z^(c)]             [z]   [t_z]

R is the matrix that gives the rotation from the world coordinate system to the camera coordinate system. The columns of R are the basis vectors of the world coordinate system expressed in the camera coordinate system. In the following, we will assume that the two coordinate systems are orthonormal. In this case, we have R^T R = I and R^{-1} = R^T.
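The change of coordinates above can be checked in a few lines (a minimal sketch; the function name and the example rotation are mine, not from the slides):

```python
import numpy as np

def world_to_camera(X_w, R, t):
    """Map a world point to camera coordinates: X_c = R @ X_w + t."""
    return R @ X_w + t

# Example: a 90-degree rotation about the z axis, no translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.zeros(3)
X_c = world_to_camera(np.array([1.0, 0.0, 0.0]), R, t)

# For an orthonormal R, R.T @ R == I, hence R^-1 == R.T.
assert np.allclose(R.T @ R, np.eye(3))
```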

From (x y z)^(c) to (u v)^(f)

[figure: perspective projection of a camera-frame point through the optical center onto the focal plane located at distance f along z^(c)]

From (x y z)^(c) to (u v)^(f)

We suppose that the image plane is orthogonal to the optical axis. We have:

    u^(f) = f x^(c) / z^(c)
    v^(f) = f y^(c) / z^(c)
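The projection equations can be evaluated directly (a small sketch; the function name and the numeric values are mine):

```python
import numpy as np

def project_to_focal_plane(X_c, f):
    """Perspective projection: u_f = f * x_c / z_c, v_f = f * y_c / z_c."""
    x_c, y_c, z_c = X_c
    return f * x_c / z_c, f * y_c / z_c

# A point at depth 4 m, seen through a 50 mm focal length.
u_f, v_f = project_to_focal_plane((2.0, 1.0, 4.0), f=0.05)
# Doubling the depth z_c halves the focal-plane coordinates.
```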

From (u v)^(f) to (u v)

[figure: the pixel grid, with scale factors k_u and k_v, skew angle θ between the axes, and principal point (u_0, v_0)]

From (u v)^(f) to (u v)

We have:

    u = (u^(f) - v^(f) tan θ) k_u + u_0
    v = v^(f) k_v + v_0

The parameters k_u and k_v are scaling factors, and (u_0, v_0) are the coordinates of the point where the optical axis crosses the image plane. We pose s_uv = -k_u tan θ and obtain:

    [su]   [k_u  s_uv  u_0] [su^(f)]
    [sv] = [0    k_v   v_0] [sv^(f)]
    [s ]   [0    0     1  ] [s     ]

Often, the grid of photosensitive cells can be considered as nearly rectangular; the parameter s_uv is then neglected and considered as 0.

The complete pinhole camera model

With homogeneous coordinates, the pinhole model can be written as a linear relation:

    [su]   [k_u  s_uv  u_0] [f 0 0 0] [            ] [x]
    [sv] = [0    k_v   v_0] [0 f 0 0] [ R_{3×3}  t ] [y]
    [s ]   [0    0     1  ] [0 0 1 0] [ 0  0  0  1 ] [z]
                                                     [1]

           [α_u  S_uv  u_0  0] [            ] [x]
         = [0    α_v   v_0  0] [ R_{3×3}  t ] [y]
           [0    0     1    0] [ 0  0  0  1 ] [z]
                                              [1]

           [m_11 m_12 m_13 m_14] [x]
         = [m_21 m_22 m_23 m_24] [y]
           [m_31 m_32 m_33 m_34] [z]
                                 [1]

with t = (t_x, t_y, t_z)^T, α_u = f k_u, α_v = f k_v, and S_uv = f s_uv.
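Putting intrinsics and extrinsics together, the full model can be sketched as follows (all names and numeric values are illustrative, not from the slides):

```python
import numpy as np

def pinhole_matrix(alpha_u, alpha_v, S_uv, u0, v0, R, t):
    """Compose the 3x4 projection matrix M = K @ [R | t]."""
    K = np.array([[alpha_u, S_uv,    u0],
                  [0.0,     alpha_v, v0],
                  [0.0,     0.0,     1.0]])
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 extrinsic matrix [R | t]
    return K @ Rt

def project(M, X_w):
    """Project a world point: (su, sv, s) = M @ (x, y, z, 1), then divide by s."""
    su, sv, s = M @ np.append(X_w, 1.0)
    return su / s, sv / s

M = pinhole_matrix(800.0, 800.0, 0.0, 320.0, 240.0, np.eye(3), np.zeros(3))
u, v = project(M, np.array([0.0, 0.0, 5.0]))   # a point on the optical axis
# It lands on the principal point (u_0, v_0), as expected.
```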

Questions

Question: What is the physical meaning of the homogeneous coordinates? Think of the lines concurrent at the origin: a homogeneous coordinate vector represents such a line, and its first three elements can be normalized so that (x y z) lies on a sphere.

Question: How many degrees of freedom has M_{3×4}? 11: M has 12 entries, but (m_31 m_32 m_33) = (r_31 r_32 r_33) is a unit vector, which removes one degree of freedom.

The calibration step: finding the matrix M_{3×4}

The calibration step: finding the matrix M_{3×4}

    [su]   [m_11 m_12 m_13 m_14] [x]
    [sv] = [m_21 m_22 m_23 m_24] [y]
    [s ]   [m_31 m_32 m_33 m_34] [z]
                                 [1]

therefore

    u = (m_11 x + m_12 y + m_13 z + m_14) / (m_31 x + m_32 y + m_33 z + m_34)
    v = (m_21 x + m_22 y + m_23 z + m_24) / (m_31 x + m_32 y + m_33 z + m_34)

and

    (m_11 - u m_31) x + (m_12 - u m_32) y + (m_13 - u m_33) z + (m_14 - u m_34) = 0
    (m_21 - v m_31) x + (m_22 - v m_32) y + (m_23 - v m_33) z + (m_24 - v m_34) = 0

The calibration step: finding the matrix M 3 4 x 1 y 1 z 1 1 0 0 0 0 u 1 x 1 u 1 y 1 u 1 z 1 u 1 x 2 y 2 z 2 1 0 0 0 0 u 2 x 2 u 2 y 2 u 2 z 2 u 2 x n y n z n 1 0 0 0 0 u nx n u ny n u nz n u n 0 0 0 0 x 1 y 1 z 1 1 v 1 x 1 v 1 y 1 v 1 z 1 v 1 0 0 0 0 x 2 y 2 z 2 1 v 2 x 2 v 2 y 2 v 2 z 2 v 2 0 0 0 0 x n y n z n 1 v nx n v ny n v nz n v n m 11 m 12 m 13 m 14 m 21 m 22 m 23 m 24 m 31 m 32 m 33 m 34 = 0 17 / 21

Questions

Question: What is the minimum value for n? 6: each point gives two equations and M_{3×4} has 11 degrees of freedom, so 2n ≥ 11, i.e. n ≥ 5.5.

Question: How can we solve the homogeneous system of the previous slide? Apply an SVD and use the column of the matrix V corresponding to the smallest singular value.

Question: Is there a condition on the set of 3D points used for calibrating the camera? Yes: they must not be coplanar.
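The answers above can be combined into a short sketch of the calibration procedure (the function name and the synthetic data are mine; a real setup would use measured correspondences):

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Build the 2n x 12 system row by row and return the 3x4 matrix M
    given by the right singular vector of the smallest singular value."""
    rows = []
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # null-space direction, up to scale

# Synthetic check: project 7 non-coplanar points with a known M, recover it.
M_true = np.array([[800.0,   0.0, 320.0, 10.0],
                   [  0.0, 800.0, 240.0, 20.0],
                   [  0.0,   0.0,   1.0,  2.0]])
pts3 = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
                 [2, 1, 4], [1, 2, 5], [3, 0, 2]], dtype=float)
proj = (M_true @ np.vstack([pts3.T, np.ones(len(pts3))])).T
pts2 = proj[:, :2] / proj[:, 2:]
M_est = calibrate_dlt(pts3, pts2)
M_est *= M_true[2, 3] / M_est[2, 3]      # fix the arbitrary scale and sign
```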

Intrinsic and extrinsic parameters

    [m_11 m_12 m_13 m_14]   [α_u  S_uv  u_0]
    [m_21 m_22 m_23 m_24] = [0    α_v   v_0] [ R_{3×3}  t ]
    [m_31 m_32 m_33 m_34]   [0    0     1  ]

where the first factor contains the intrinsic parameters and [R_{3×3} t] the extrinsic parameters, with t = (t_x, t_y, t_z)^T. The decomposition can be achieved via the orthonormalization theorem of Gram-Schmidt, or via a QR decomposition, since

    [m_11 m_12 m_13]   [α_u  S_uv  u_0] [r_11 r_12 r_13]
    [m_21 m_22 m_23] = [0    α_v   v_0] [r_21 r_22 r_23]
    [m_31 m_32 m_33]   [0    0     1  ] [r_31 r_32 r_33]

and, reversing the order of both the rows and the columns of its transpose,

    [m_33 m_23 m_13]   [r_33 r_23 r_13] [1  v_0  u_0 ]
    [m_32 m_22 m_12] = [r_32 r_22 r_12] [0  α_v  S_uv]
    [m_31 m_21 m_11]   [r_31 r_21 r_11] [0  0    α_u ]

i.e. the product of an orthogonal matrix by an upper triangular one.
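The flip-and-QR trick can be sketched as follows (names and numeric values are mine; a practical implementation would also resolve the global sign of a scaled M):

```python
import numpy as np

def decompose_projection(M):
    """Recover K, R, t from M = K @ [R | t] by QR on the flipped transpose."""
    P = np.eye(3)[::-1]                   # anti-identity: reverses order
    Q, U = np.linalg.qr(P @ M[:3, :3].T @ P)
    D = np.diag(np.sign(np.diag(U)))      # QR signs are arbitrary: force
    Q, U = Q @ D, D @ U                   # a positive diagonal on U
    K = P @ U.T @ P                       # undo the flips; K[2,2] is 1
    R = P @ Q.T @ P
    t = np.linalg.solve(K, M[:, 3])       # the last column of M is K @ t
    return K, R, t

# Synthetic check with known intrinsics and extrinsics.
a = 0.3                                   # rotation angle about the z axis
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
K_true = np.array([[800.0,   2.0, 320.0],
                   [  0.0, 780.0, 240.0],
                   [  0.0,   0.0,   1.0]])
t_true = np.array([0.5, -0.2, 3.0])
M = K_true @ np.hstack([R_true, t_true.reshape(3, 1)])
K, R, t = decompose_projection(M)
```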

Questions

Question: What is the minimum number of points to use for recalibrating a camera that has moved? 3, since there are 6 degrees of freedom in the extrinsic parameters and each point gives two equations.

Question: What is this? [figure not included in the transcription]

Bibliography

- D. Forsyth and J. Ponce, Computer Vision: A Modern Approach. Prentice Hall, 2003.
- R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
- Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.