Visual Object Recognition
1 Visual Object Recognition, Lecture 2: Image Formation. Per-Erik Forssén, docent, Computer Vision Laboratory, Department of Electrical Engineering, Linköping University.
2 Lecture 2: Image Formation. Pin-hole and thin-lens cameras; projective geometry, lens distortion, vignetting, intensity, colour; geometric and photometric invariance: colour constancy, colour spaces, affine illumination model, homographies, epipolar geometry, canonical frames.
3 The Pin-Hole Camera. A brightly illuminated scene is projected onto the wall opposite the pin-hole. The image is rotated 180º.
4 The Pin-Hole Camera. A scene point $(X, Y, Z)$ is projected through the optical centre onto the image plane at distance $f$. From similar triangles we get:
$$x = f\frac{X}{Z}, \qquad y = f\frac{Y}{Z}, \qquad \text{i.e.} \quad \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{Z}\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
5 The Pin-Hole Camera. More generally, we write:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{Z}\begin{bmatrix} f & s & c_x \\ 0 & fa & c_y \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
Here $f$ is the focal length, $s$ the skew, $a$ the aspect ratio, and $\mathbf{c} = (c_x, c_y)$ the projection of the optical centre.
6 The Pin-Hole Camera. Motivation for the parameters: the figure relates the optical centre, the image plane at distance $f$, the horizontal and vertical fields of view, and an image grid of width $w$ and height $h$ with the principal point $\mathbf{c} = (c_x, c_y)$.
7 The Pin-Hole Camera. In homogeneous coordinates:
$$\underbrace{\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}}_{\mathbf{x}} \sim \underbrace{\begin{bmatrix} f & s & c_x \\ 0 & fa & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{\mathbf{K}} \underbrace{\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}}_{\mathbf{X}}, \qquad \text{i.e.} \quad \mathbf{x} \sim \mathbf{K}\mathbf{X}$$
8 The Pin-Hole Camera. We also define normalized image coordinates:
$$\mathbf{u} = \begin{bmatrix} X/Z \\ Y/Z \\ 1 \end{bmatrix} \sim \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \mathbf{X}, \qquad \mathbf{x} = \mathbf{K}\mathbf{u}, \qquad \mathbf{u} = \mathbf{K}^{-1}\mathbf{x}$$
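A minimal sketch of these relations in numpy (the calibration values below are made-up examples, not from the slides):

```python
import numpy as np

# Calibration matrix K (example values): focal length f, skew s,
# aspect ratio a, principal point (cx, cy).
f, s, a, cx, cy = 800.0, 0.0, 1.0, 320.0, 240.0
K = np.array([[f,   s,     cx],
              [0.0, f * a, cy],
              [0.0, 0.0,   1.0]])

X = np.array([0.5, -0.2, 4.0])                 # 3D point in the camera frame
u = np.array([X[0] / X[2], X[1] / X[2], 1.0])  # normalized coordinates
x = K @ u                                      # pixel coordinates, x = K u
u_back = np.linalg.inv(K) @ x                  # and back: u = K^{-1} x
assert np.allclose(u, u_back)
```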
9 The Pin-Hole Camera. For a general position of the world coordinate system (WCS) we have:
$$\mathbf{u} \sim \underbrace{\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}}_{[\mathbf{R}\;\mathbf{t}]} \underbrace{\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}}_{\mathbf{X}}$$
10 The Pin-Hole Camera. In shorthand: $\mathbf{u} \sim [\mathbf{R}\;\mathbf{t}]\,\mathbf{X}$.
11 The Pin-Hole Camera. And thus $\mathbf{x} \sim \mathbf{K}[\mathbf{R}\;\mathbf{t}]\,\mathbf{X}$.
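As a sketch, the full projection chain $\mathbf{x} \sim \mathbf{K}[\mathbf{R}\;\mathbf{t}]\mathbf{X}$ in numpy (the pose and point values are arbitrary examples):

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a world point via x ~ K [R t] X (homogeneous coordinates)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 camera matrix
    x = P @ np.append(X_world, 1.0)           # homogeneous pixel coordinates
    return x[:2] / x[2]                       # divide out the projective scale

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = np.deg2rad(10.0)                         # example: 10 degrees about z
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
t = np.array([0.1, 0.0, 2.0])
print(project(K, R, t, np.array([0.3, 0.1, 5.0])))
```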
12 Epipolar geometry. The epipolar geometry of two cameras: a scene point $\mathbf{X} = (X, Y, Z)$ and the two optical centres $\mathbf{o}_1, \mathbf{o}_2$ span the epipolar plane, which intersects each image in an epipolar line through the projections $\mathbf{x}_1, \mathbf{x}_2$. The baseline joins $\mathbf{o}_1$ and $\mathbf{o}_2$ and meets the image planes in the epipoles $\mathbf{e}_1, \mathbf{e}_2$.
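The slide gives no formulas, but one standard consequence (stated here as a sketch, not as the lecture's derivation) is that each epipole is the image of the other camera's optical centre, e.g. $\mathbf{e}_2 \sim \mathbf{P}_2 \mathbf{C}_1$:

```python
import numpy as np

def optical_centre_h(R, t):
    """Homogeneous optical centre: C = -R^T t, since R C + t = 0."""
    return np.append(-R.T @ t, 1.0)

def epipole(P_other, R, t):
    e = P_other @ optical_centre_h(R, t)
    return e / e[2]

K = np.eye(3)                                     # identity calibration for simplicity
R1, t1 = np.eye(3), np.zeros(3)                   # camera 1 at the WCS origin
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.5])    # camera 2: example pose
P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
print(epipole(P2, R1, t1))                        # epipole e2 in camera 2
```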
13 Homographies. For a planar object, we can imagine a world coordinate system fixed to the object, with the object plane at $Z = 0$:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \mathbf{K}\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}\begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix}$$
14 Homographies. Since $Z = 0$ on the object plane, the third column of $\mathbf{R}$ drops out and the projection collapses to a 3×3 homography:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \mathbf{K}\begin{bmatrix} r_{11} & r_{12} & t_1 \\ r_{21} & r_{22} & t_2 \\ r_{31} & r_{32} & t_3 \end{bmatrix}\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}}_{\mathbf{H}}\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$
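A sketch of constructing $\mathbf{H}$ from $\mathbf{K}$ and the plane pose (example values; only the first two columns of $\mathbf{R}$ enter):

```python
import numpy as np

def plane_homography(K, R, t):
    """H = K [r1 r2 t]: maps plane coordinates (X, Y, 1) to image pixels."""
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                        # example pose: plane parallel to image
t = np.array([0.0, 0.0, 2.0])
H = plane_homography(K, R, t)
x = H @ np.array([0.1, 0.2, 1.0])    # project the plane point (0.1, 0.2)
print(x[:2] / x[2])
```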
15 Homographies. Projections into two cameras: the plane maps into camera 1 via $\mathbf{H}_1$ and into camera 2 via $\mathbf{H}_2$. As a homography is invertible, we can now map from camera 2 to the object and on to camera 1: $\mathbf{x}_1 \sim \mathbf{H}_1\mathbf{H}_2^{-1}\mathbf{x}_2$.
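A sketch of this chained mapping, reusing `plane_homography` and `K` from the sketch above (poses are arbitrary examples):

```python
import numpy as np

# Two cameras observe the same plane: H1 maps plane -> camera 1,
# H2 maps plane -> camera 2.
H1 = plane_homography(K, np.eye(3), np.array([0.0, 0.0, 2.0]))
R2 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg
H2 = plane_homography(K, R2, np.array([0.1, 0.0, 3.0]))

H21 = H1 @ np.linalg.inv(H2)          # camera 2 -> plane -> camera 1
x2 = H2 @ np.array([0.1, 0.2, 1.0])   # a plane point as seen in camera 2
x1 = H21 @ x2                         # the same point mapped into camera 1
print(x1[:2] / x1[2])
```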
16 Epipolar Geometry. So in general, two-view geometry only tells us that a corresponding point lies somewhere along a line. In practice we often know more, since objects often have planar or near-planar surfaces, i.e. we are close to the homography case. Also: if the views have a short relative baseline, we can use even simpler models.
17 Thin Lens Camera. An actual pin-hole lets in too little light, and a bigger hole blurs the picture. Real cameras instead use lenses to obtain a sharp image while gathering more light.
18 Thin Lens Camera. A thin lens is a (positive) lens with thickness $d \ll f$. Parallel rays converge at the focal points, and rays through the optical centre are not refracted.
19 Thin Lens Camera. With object depth $Z$ and image-plane distance $l$, the thin-lens relation (from similar triangles) is:
$$\frac{1}{f} = \frac{1}{Z} + \frac{1}{l}$$
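A tiny worked example (made-up focal length), solving the thin-lens relation for the image-plane distance $l$; note how $l \to f$ as $Z \to \infty$:

```python
# Thin-lens relation: 1/f = 1/Z + 1/l  =>  l = 1 / (1/f - 1/Z).
f = 0.05                                   # 50 mm focal length (example)
for Z in (0.5, 2.0, 100.0):                # object depths in metres
    l = 1.0 / (1.0 / f - 1.0 / Z)
    print(f"Z = {Z:6.1f} m -> image plane at l = {l * 1000:.2f} mm")
```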
20 Thin Lens Camera. A thin lens focuses at one depth only; objects at other depths are blurred.
21 Thin Lens Camera. Adding an aperture increases the depth-of-field, the depth range that appears sharp in the image. This is a compromise between the pin-hole and the thin lens.
22 Lens distortion. Figure: a correct image, the same image with barrel distortion, and with pin-cushion distortion. For zoom lenses: barrel distortion at wide FoV, pin-cushion distortion at narrow FoV.
23 Lens distortion. Modelling maps the correct image to the distorted image: $\mathbf{x} \sim \mathbf{K} f(\mathbf{u}, \theta)$, where $\theta$ holds the distortion parameters. This is used in optimisation, such as bundle adjustment (BA).
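The slide does not specify $f$; a common assumption is a polynomial radial model, sketched here with made-up coefficients:

```python
import numpy as np

def distort(u, theta):
    """One common radial model (an assumption, not the lecture's choice):
    f(u, theta) scales normalized coordinates by 1 + k1*r^2 + k2*r^4."""
    k1, k2 = theta
    r2 = u[0] ** 2 + u[1] ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.array([u[0] * scale, u[1] * scale, 1.0])

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
u = np.array([0.2, -0.1])                  # normalized coordinates
x = K @ distort(u, theta=(-0.3, 0.1))      # x ~ K f(u, theta)
print(x[:2] / x[2])
```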
24 Lens distortion. Rectification maps the distorted image back to the correct image: $\mathbf{u} \sim f^{-1}(\mathbf{K}^{-1}\mathbf{x}, \theta)$. This is used in dense stereo.
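For the radial model sketched above, $f^{-1}$ has no closed form; one common trick (again an assumption, not from the slide) is fixed-point iteration:

```python
import numpy as np

def undistort(u_d, theta, iters=20):
    """Invert the radial model by fixed-point iteration: start from the
    distorted coordinates and repeatedly divide out the radial factor."""
    k1, k2 = theta
    u = u_d.copy()
    for _ in range(iters):
        r2 = u[0] ** 2 + u[1] ** 2
        u = u_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return u

theta = (-0.3, 0.1)
u = np.array([0.2, -0.1])
u_d = distort(u, theta)[:2]        # distort() from the previous sketch
print(undistort(u_d, theta))       # recovers approximately (0.2, -0.1)
```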
25 Lens Effects. Compared to the correct image, vignetting and the $\cos^4$ law darken the periphery; this is more severe on wide-angle cameras.
26 Lens Effects. Vignetting and the $\cos^4$ law: intensity is dampened by a factor $\cos^4(\omega)$, where $\omega$ is the angle off the optical axis.
27 Image intensity. Sensor activation is linear:
$$a(\mathbf{x}) = \int s(\lambda)\, r(\lambda, \mathbf{x})\, e(\lambda)\, d\lambda$$
where $s$ is the sensor absorption spectrum, $r$ the reflectance spectrum of the object, and $e$ the emission spectrum of the light source (attenuated by the atmosphere).
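A sketch evaluating the activation integral as a discrete sum over sampled spectra (all three curves below are invented for illustration):

```python
import numpy as np

lam = np.linspace(400.0, 700.0, 61)              # wavelengths [nm]
s = np.exp(-((lam - 550.0) / 40.0) ** 2)         # sensor absorption spectrum
r = 0.5 + 0.4 * np.sin(lam / 60.0)               # object reflectance spectrum
e = np.ones_like(lam)                            # flat illuminant spectrum

a = np.trapz(s * r * e, lam)                     # a(x) = integral of s*r*e dlambda
print(a)
```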
28 Image intensity. Sensor activation is linear, as above. However, most images are gamma corrected: $i(\mathbf{x}) = a(\mathbf{x})^{\gamma}$.
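A sketch of gamma correction and its inverse ($\gamma = 1/2.2$ is a typical value; it is not given on the slide):

```python
import numpy as np

gamma = 1.0 / 2.2
a = np.linspace(0.0, 1.0, 5)       # linear sensor activations
i = a ** gamma                     # gamma-corrected image values, i = a^gamma
a_linear = i ** (1.0 / gamma)      # undo the gamma before photometric reasoning
assert np.allclose(a, a_linear)
```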
29 Image intensity. The HVS handles a huge dynamic range. Exposure time and aperture size also scale the activation. Unless we know all aspects of image formation, we cannot trust the absolute intensity value.
30 Colour. Colour perception is done using three different activation functions.
31 Colour. Colour perception is done using three different activation functions. Sensor activation is not colour: colour is an object property, i.e. a representation of the reflectance $r(\lambda, \mathbf{x})$. In order to estimate colour, we need to somehow compensate for the illumination.
32 Invariance Transformations. There are two categories of nuisance factors for recognition/matching: varying illumination and varying camera pose.
33 Invariance Transformations. For matching we need either to know the changes, or an invariance transformation. Ideally, an invariance transformation should keep information intrinsic to the object, but remove all influence from the imaging process.
34 Invariance Transformations. Photometric invariance gives robustness to illumination changes; geometric invariance gives robustness to view changes.
35-36 Colour Constancy (example images).
37 Colour Constancy. The colours we perceive are not the activations of cones in the retina. Colour constancy is an attempt by the HVS to transform the retinal activation into a normalized (white) reference illumination. It is a complex process that takes place at many levels (retina, V2, ...) and uses high-level information (e.g. known object colour). White balancing in cameras is a low-level technical equivalent.
38 Colour Constancy. Object-based colour transfer: if you know what you are looking at, you also know something about the illumination. Forssén & Moe, "View Matching with Blob Features", JIVC 09.
39 Photometric invariance. Input: $I(\mathbf{x}) = k_1 I_0(\mathbf{x})$ and $J(\mathbf{x}) = k_2 I_0(\mathbf{x})$. If the illumination changes, direct matching fails:
$$\sum_{\mathbf{x}} |I(\mathbf{x}) - J(\mathbf{x})| = \text{large}$$
40 Photometric invariance. We seek a function $f$ that is invariant to scalings:
$$\sum_{\mathbf{x}} |f(I(\mathbf{x})) - f(J(\mathbf{x}))| = \text{small}$$
41 Photometric invariance. For cameras with a non-linear radiometric response (and e.g. gamma correction), or if two different cameras are used, we may use the affine model: $I(\mathbf{x}) = k_1 I_0(\mathbf{x}) + k_0$. How should we choose $f$? We want: $\sum_{\mathbf{x}} |f(I(\mathbf{x})) - f(J(\mathbf{x}))| = \text{small}$.
42 Photometric invariance. Mean subtraction, derivatives, and other DC-free linear filters remove a constant offset in intensity. Normalising a patch by e.g. the standard deviation removes scalings of the intensity. Affine invariance is achieved by combining both:
$$f(I(\mathbf{x})) = (I(\mathbf{x}) - \mu_I)/\sigma_I$$
where $\mu_I$ is the mean and $\sigma_I$ the standard deviation of the patch.
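A minimal sketch of this affine normalization; the constructed affine intensity change cancels exactly:

```python
import numpy as np

def normalize(patch, eps=1e-8):
    """f(I) = (I - mean) / std: invariant to I -> k1 * I + k0 (k1 > 0)."""
    return (patch - patch.mean()) / (patch.std() + eps)

rng = np.random.default_rng(0)
I = rng.random((16, 16))                            # example patch
J = 2.5 * I + 0.3                                   # affine illumination change
print(np.abs(normalize(I) - normalize(J)).sum())    # approximately zero
```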
43 Photometric invariance. Illustration: two input images $I(\mathbf{x})$ and $J(\mathbf{x})$ under different illumination, shown together with their gradients, normalised gradients, and normalised inputs.
44 Other colour spaces. Transform each pixel separately to move intensity changes into one dimension, e.g. HSV space. In matching, V is then downweighted (or discarded).
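A sketch of a per-pixel HSV descriptor with a downweighted V channel (the weight 0.1 is an arbitrary choice, and the naive hue difference ignores that hue wraps around):

```python
import colorsys
import numpy as np

def hsv_descriptor(rgb, v_weight=0.1):
    """Map an (r, g, b) pixel in [0, 1] to (h, s, w*v), so that intensity
    changes mostly affect the downweighted third dimension."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return np.array([h, s, v_weight * v])

bright = hsv_descriptor((0.80, 0.20, 0.10))
dark = hsv_descriptor((0.40, 0.10, 0.05))      # same colour, half intensity
print(np.linalg.norm(bright - dark))           # small despite intensity change
```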
45 Other colour spaces. Other popular choices are the CIE Lab space and the Yuv space. Note that colour-space changes do not handle changes in the illumination colour.
46 Geometric invariance. The geometric invariances we use make a locally planar assumption (see also today's paper). They can thus be described using homographies.
47 Geometric invariance. A homography is a transformation between points $\mathbf{x}$ on one plane and points $\mathbf{y}$ on another:
$$\begin{bmatrix} y_1 \\ y_2 \\ 1 \end{bmatrix} \sim \mathbf{H}\begin{bmatrix} x_1 \\ x_2 \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ 1 \end{bmatrix}$$
It has at most 8 degrees-of-freedom (dof), as $\mathbf{H}$ and $k\mathbf{H}$, $k \in \mathbb{R} \setminus \{0\}$, define the same transformation. See e.g. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision.
48 Geometric invariance. A hierarchy of transformations:
- scale+translation (3 dof): $\begin{bmatrix} s & 0 & t_1 \\ 0 & s & t_2 \\ 0 & 0 & 1 \end{bmatrix}$
- similarity (4 dof; scale+translation+rotation): $\begin{bmatrix} s\cos\theta & -s\sin\theta & t_1 \\ s\sin\theta & s\cos\theta & t_2 \\ 0 & 0 & 1 \end{bmatrix}$
- affine (6 dof; similarity+skew): $\begin{bmatrix} a_{11} & a_{12} & t_1 \\ a_{21} & a_{22} & t_2 \\ 0 & 0 & 1 \end{bmatrix}$
- plane projective (8 dof; affine+foreshortening): $\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$
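A sketch constructing the first three classes as 3×3 matrices acting on homogeneous 2D points (the parameter values would normally come from a fitted model; here they are placeholders):

```python
import numpy as np

def scale_translation(s, t):                      # 3 dof
    return np.array([[s,   0.0, t[0]],
                     [0.0, s,   t[1]],
                     [0.0, 0.0, 1.0]])

def similarity(s, theta, t):                      # 4 dof: adds rotation
    c, n = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c,  -n,  t[0]],
                     [n,   c,  t[1]],
                     [0.0, 0.0, 1.0]])

def affine(A, t):                                 # 6 dof: any 2x2 block A
    return np.vstack([np.hstack([A, np.reshape(t, (2, 1))]),
                      [0.0, 0.0, 1.0]])

# A plane projective transform (8 dof) is any 3x3 H, defined up to scale.
x = np.array([1.0, 2.0, 1.0])
print(similarity(2.0, np.deg2rad(30.0), (5.0, -1.0)) @ x)
```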
49 Canonical Frames. Also known as covariant frames and invariant frames. Resample patches to a canonical frame. Points come from e.g. the Harris detector, or DoG maxima.
50 Canonical Frames. After resampling, matching is much easier!
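A sketch of the resampling step using scipy (an assumption; any interpolation routine would do): warp the image region defined by a similarity frame to a fixed-size canonical patch.

```python
import numpy as np
from scipy.ndimage import affine_transform

def to_canonical(img, centre, scale, theta, size=32):
    """Sample a size x size canonical patch around `centre` (row, col),
    undoing the keypoint's scale and orientation."""
    c, s = np.cos(theta), np.sin(theta)
    A = scale * np.array([[c, -s], [s, c]])      # canonical -> image coords
    offset = np.asarray(centre) - A @ np.array([size / 2.0, size / 2.0])
    # affine_transform maps output coords o to input coords A @ o + offset
    return affine_transform(img, A, offset=offset, output_shape=(size, size))

img = np.random.rand(240, 320)
patch = to_canonical(img, centre=(120.0, 160.0), scale=1.5,
                     theta=np.deg2rad(20.0))
print(patch.shape)                               # (32, 32)
```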
51 Canonical Frames. Combinatorial issues: from Harris or DoG we get images full of keypoints.
52 Canonical Frames. Using the points, we want to generate frames in both reference and query view and match them. We don't want to miss a combination in one of the images, but we don't want to generate too many combinations either.
53 Canonical Frames. Solutions: use each point as a reference point; restrict frame construction to the k nearest neighbours in scale space (or the image plane); remove duplicate groupings and reflections.
54 Summary. Photometric invariance is needed to handle changes in illumination; common approaches are colour constancy, other colour spaces, and normalisation. Geometric invariance handles changes in object pose; a locally planar assumption is very useful for geometric invariance.
55 Discussion. Questions/comments on the paper: M. Brown, D. Lowe, "Invariant Features from Interest Point Groups", BMVC 2002. PhD students only, round-robin scheduling.