Introduction to motion correspondence
Slide 1: Introduction to motion correspondence. IPAM - UCLA, July 24, 2013. Iasonas Kokkinos, Center for Visual Computing, Ecole Centrale Paris / INRIA Saclay.
Slide 2: Why estimate visual motion? Tracking, segmentation, structure from motion, action recognition.
Slides 3-5: Action recognition and motion. G. Johansson, "Visual Perception of Biological Motion and a Model for Its Analysis", Perception and Psychophysics 14, 1973.
Slide 6: Motion Field. P(t) is a moving 3D point; the velocity of the scene point is V = dP/dt. p(t) = (x(t), y(t)) is the projection of P in the image. The apparent velocity v in the image has components u = dx/dt and v = dy/dt. Taking dt = 1: p(t+1) = p(t) + (u, v). Motion estimation task: estimate (u, v). Slide credit: S. Lazebnik.
Slide 7: Brightness constancy constraint. The image is a projection of the 3D environment: each pixel is the projection of a surface patch, and its intensity is determined by the 3D surface, the incident light, and the camera. Assumption: the intensity of a surface patch remains constant over time. Slide credit: Georg Langs.
Slide 8: Brightness constancy constraint. Optical flow estimation: the point (x, y) at time t-1 moves to (x + u(x, y), y + v(x, y)) at time t. Constraint: I(x, y, t-1) = I(x + u(x, y), y + v(x, y), t). Taylor expansion of the right-hand side: I(x + u, y + v, t) ≈ I(x, y, t) + I_x u(x, y) + I_y v(x, y). Substituting into the constraint gives the Brightness Constancy Constraint: I_x u + I_y v + I_t = 0, where I_t = I(x, y, t) - I(x, y, t-1).
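The constraint can be checked numerically. Below is a small numpy sketch (my own illustration, not from the slides): frame 2 is frame 1 shifted by a known flow (u, v) = (1, 0), and the linearized residual I_x u + I_y v + I_t stays much smaller than I_t itself.

```python
import numpy as np

# Synthetic pair: frame 2 is frame 1 translated right by 1 pixel,
# so the true flow is (u, v) = (1, 0).
H, W = 64, 64
y, x = np.mgrid[0:H, 0:W]
I1 = np.sin(0.2 * x) * np.cos(0.15 * y)        # frame at time t-1
I2 = np.sin(0.2 * (x - 1)) * np.cos(0.15 * y)  # same pattern shifted by 1 px

# Central-difference spatial derivatives and the temporal derivative
Ix = np.gradient(I2, axis=1)
Iy = np.gradient(I2, axis=0)
It = I2 - I1

u, v = 1.0, 0.0
residual = Ix * u + Iy * v + It
# Interior residual is an order of magnitude below |It|,
# confirming the linearized constraint for this small motion.
print(np.abs(residual[2:-2, 2:-2]).max())
```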
Slide 9: Optical flow estimation, from 2D to 1D. Two input images. Slide credit: Georg Langs.
Slide 10: Optical flow estimation, 1D case. How can we estimate the displacement? Slide credit: Georg Langs.
Slide 11: Optical flow estimation, 1D case. Known: the gradient, and the difference of intensities at x - d. Unknown: the displacement d. How about 2D? Slide credit: Georg Langs.
Slide 12: The brightness constraint is not enough for 2D! I_x u + I_y v + I_t = 0 is a single equation in two unknowns: if (u, v) satisfies ∇I · (u, v) + I_t = 0, then so does any (u', v') = (u, v) + c(-I_y, I_x), since (-I_y, I_x) is orthogonal to ∇I. The flow component parallel to the edge remains unknown.
Slide 13: The Aperture Problem. Perceived motion.
Slide 14: The Aperture Problem. Actual motion.
Slide 15: Optical flow uncertainty.
Slide 16: Overcoming the aperture effect. Brightness constancy constraint at each pixel p_i: I_x(p_i) u_i + I_y(p_i) v_i + I_t(p_i) = 0.
Slide 17: Overcoming the aperture effect: "united we move". Assume neighbouring pixels share the same flow, so the constraint becomes I_x(p_i) u + I_y(p_i) v + I_t(p_i) = 0 for all pixels p_i in a window.
Slide 18: Lucas-Kanade. Stack the constraints I_x(p_i) u + I_y(p_i) v + I_t(p_i) = 0 for all 25 pixels of a 5x5 window into a linear system A u = b, where row i of A is (I_x,i, I_y,i), u = (u, v)^T, and b_i = -I_t,i: 25 equations, 2 unknowns. Cost: ||A u - b||^2 = u^T A^T A u - 2 u^T A^T b + b^T b. Minimization gives the normal equations A^T A u = A^T b. B. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, IJCAI 1981.
Slide 19: Lucas-Kanade, continued. A^T A u = A^T b. Is A^T A invertible? With rows (I_x,i, I_y,i) in A, A^T A = [ Σ_i I_x,i^2, Σ_i I_x,i I_y,i ; Σ_i I_x,i I_y,i, Σ_i I_y,i^2 ]. B. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, IJCAI 1981.
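As a concrete sketch of the normal equations above, the following numpy code (my own illustration; function and variable names are invented) builds A and b from a 5x5 window and solves A^T A u = A^T b for a synthetically translated pattern.

```python
import numpy as np

def lk_flow_at(Ix, Iy, It, cy, cx, r=2):
    """Least-squares flow for the (2r+1)x(2r+1) window centred at (cy, cx)."""
    sl = (slice(cy - r, cy + r + 1), slice(cx - r, cx + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)  # 25 x 2
    b = -It[sl].ravel()                                     # 25 constraints
    AtA = A.T @ A  # the 2x2 second moment matrix
    # Solvable only when AtA is well conditioned, i.e. a textured window.
    return np.linalg.solve(AtA, A.T @ b)

# Synthetic check: pattern translated by (u, v) = (0.3, -0.2)
H, W = 32, 32
y, x = np.mgrid[0:H, 0:W].astype(float)
def frame(dx, dy):
    return np.sin(0.5 * (x - dx)) + np.cos(0.4 * (y - dy))
I1, I2 = frame(0, 0), frame(0.3, -0.2)
Ix = np.gradient(I2, axis=1)
Iy = np.gradient(I2, axis=0)
It = I2 - I1
u, v = lk_flow_at(Ix, Iy, It, 16, 16)
print(u, v)  # approximately (0.3, -0.2)
```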
Slide 20: Second Moment Matrix. Distribution of gradients over a window around (x, y): J = [ Σ_{x',y'} I_x^2, Σ_{x',y'} I_x I_y ; Σ_{x',y'} I_x I_y, Σ_{x',y'} I_y^2 ].
Slide 21: Second Moment Matrix. Equivalently, J = Σ_{x',y'} (I_x, I_y)^T (I_x, I_y).
Slide 22: Second Moment Matrix. Computing the gradients with Gaussian-derivative filters: J = Σ_{x',y'} (∇g * u)^T (∇g * u), where u is the image and * denotes convolution.
Slide 23: Second Moment Matrix. Replacing the window sum with Gaussian weighting: J = G * [(∇g * u)^T (∇g * u)].
Slide 24: J = G * [(∇g * u)^T (∇g * u)]. Eigenvectors: directions of maximal and minimal variation of u. Eigenvalues: the corresponding amounts of maximal and minimal variation.
Slide 25: Local structure and motion estimation. Three cases: λ1 ≈ 0, λ2 ≈ 0; λ1 large, λ2 ≈ 0; λ1 and λ2 both large.
Slide 26: Interpreting the eigenvalues. Classification of image points using the eigenvalues of the second moment matrix: corner if λ1 and λ2 are both large with λ1 ~ λ2; edge if λ1 >> λ2 or λ2 >> λ1; flat region if λ1 and λ2 are both small. Slide credit: K. Grauman.
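This classification is easy to reproduce in a few lines of numpy (a sketch of my own; the threshold is an arbitrary choice and would normally depend on image scale and noise):

```python
import numpy as np

def classify(patch, thresh=0.1):
    """Label a patch corner/edge/flat from the eigenvalues of its
    second moment matrix J."""
    Ix = np.gradient(patch, axis=1)
    Iy = np.gradient(patch, axis=0)
    J = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    l1, l2 = np.linalg.eigvalsh(J)  # ascending: l1 <= l2
    if l2 < thresh:
        return "flat"    # both eigenvalues small
    if l1 < thresh:
        return "edge"    # one large, one small
    return "corner"      # both large

y, x = np.mgrid[0:15, 0:15].astype(float)
print(classify(np.zeros((15, 15))))                 # flat
print(classify(np.sin(0.8 * x)))                    # edge: gradients along x only
print(classify(np.sin(0.8 * x) * np.sin(0.8 * y)))  # corner-like texture
```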
Slide 27: SSD measure. Sum of squared differences.
Slide 28: High-texture region. Gradients are different and have large magnitude: large λ1, large λ2.
Slide 29: Edge. Gradients are very large or very small: large λ1, small λ2.
Slide 30: Low-texture region. Gradients have small magnitude: small λ1, small λ2.
Slide 31: Shi-Tomasi feature tracker. 1. Find good features (large minimum eigenvalue of the 2x2 second moment matrix A^T A). 2. Use Lucas-Kanade to track with pure translation. 3. Use affine registration with the first feature patch. 4. Terminate tracks whose dissimilarity gets too large. 5. Start new tracks when needed.
Slide 32: Tracking example. J. Shi and C. Tomasi, Good Features to Track, CVPR 1994.
Slide 33: Lucas-Kanade and the aperture effect. Brightness constancy plus the assumption that neighbouring pixels have the same (u, v): I_x(p_i) u + I_y(p_i) v + I_t(p_i) = 0. A 5x5 window gives 25 equations in 2 unknowns; solve the system around each pixel separately.
Slide 34: Dense optical flow: Lucas-Kanade vs. Horn-Schunck vs. anisotropic regularization.
Slide 35: Horn-Schunck and the aperture effect. Brightness constancy plus flow smoothness: minimize J(u, v) = (1/2) ∫∫ [I_t + ∇I · (u, v)]^2 + α (|∇u|^2 + |∇v|^2) dx dy with respect to u(x, y) and v(x, y). Setting the Euler-Lagrange derivative to zero gives the minimum condition. B.K.P. Horn and B.G. Schunck, "Determining Optical Flow", Artificial Intelligence, 1981.
Slide 36: Horn-Schunck results.
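A minimal Horn-Schunck iteration can be sketched in numpy as follows (my own toy implementation: the averaging kernel, alpha, and iteration count are arbitrary choices, and boundaries are handled by wrap-around, which suits the periodic test pattern):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Fixed-point iteration u <- u_bar - Ix(Ix u_bar + Iy v_bar + It)/(alpha + Ix^2 + Iy^2),
    where u_bar, v_bar are local flow averages standing in for the Laplacian term."""
    Ix = np.gradient(I2, axis=1)
    Iy = np.gradient(I2, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    def avg(f):  # 4-neighbour average, periodic boundaries
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        common = (Ix * ub + Iy * vb + It) / (alpha + Ix**2 + Iy**2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v

# Global translation by (0.4, 0) on a periodic pattern
y, x = np.mgrid[0:32, 0:32].astype(float)
I1 = np.sin(2 * np.pi * x / 16) * np.sin(2 * np.pi * y / 16)
I2 = np.sin(2 * np.pi * (x - 0.4) / 16) * np.sin(2 * np.pi * y / 16)
u, v = horn_schunck(I1, I2, alpha=0.5, n_iter=200)
print(u.mean(), v.mean())  # mean flow close to (0.4, 0)
```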
Slide 37: Lucas-Kanade revisited. General expression for the LK criterion and the LK flow. How does this compare with the Horn-Schunck criterion?
Slide 38: Lucas-Kanade meets Horn-Schunck. Introduce w = (u, v, 1)^T, ∇_3 f = (f_x, f_y, f_t)^T, |∇w|^2 = |∇u|^2 + |∇v|^2, and J_ρ = G_ρ * (∇_3 f ∇_3 f^T). Then:
Lucas-Kanade: E_LK(w) = ∫ w^T J_ρ w dx dy.
Horn-Schunck: E_HS(w) = ∫ w^T J_0 w + α |∇w|^2 dx dy.
Bruhn-Weickert-Schnoerr: E_BWS(w) = ∫ w^T J_ρ w + α |∇w|^2 dx dy.
A. Bruhn, J. Weickert, C. Schnörr, "Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods", IJCV 2005.
Slide 39: Horn-Schunck.
Slide 40: Bruhn-Weickert-Schnoerr.
Slide 41: Bruhn-Weickert-Schnoerr, continued. Starting from E_BWS(w) = ∫ w^T J_ρ w + α |∇w|^2 dx dy, add spatio-temporal regularization over [0, T]: E'_BWS(w) = ∫ w^T J_ρ w + α |∇_3 w|^2 dx dy dt. Introduce robust norms: E''_BWS(w) = ∫ ψ_1(w^T J_ρ w) + α ψ_2(|∇_3 w|^2) dx dy dt.
Slide 42: Bruhn-Weickert-Schnoerr.
Slide 43: Bruhn-Weickert-Schnoerr, anisotropic.
Slide 44: Bruhn-Weickert-Schnoerr flow regularization.
Slide 45: Numerical solutions. The Euler-Lagrange equations, with a numerical approximation to the Laplacian, yield a sparse linear system in u, v. Solvers: Gauss-Seidel, Successive Over-Relaxation (SOR), multigrid (12 fps, 2006).
- A. Brandt, Multi-level adaptive solutions to boundary-value problems, Math. Comput.
- D. Terzopoulos, Image Analysis Using Multigrid Relaxation Methods, PAMI, 1986.
- A. Kenigsberg, R. Kimmel, and I. Yavneh, A Multigrid Approach for Fast Geodesic Active Contours, 2001.
- G. Papandreou and P. Maragos, "Multigrid Geometric Active Contour Models", TIP 2007.
- A. Bruhn, J. Weickert, T. Kohlberger, C. Schnörr, A multigrid platform for real-time motion computation with discontinuity-preserving variational methods, IJCV 2006.
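The Gauss-Seidel/SOR step can be illustrated on a toy system (my own example; the matrix here is a small 1D Laplacian rather than an actual flow system, and omega = 1.5 is an unoptimized choice):

```python
import numpy as np

def sor(A, b, omega=1.5, n_iter=200):
    """Successive Over-Relaxation sweeps for A x = b (omega = 1 is Gauss-Seidel)."""
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(n_iter):
        for i in range(len(b)):
            # off-diagonal contribution with the freshest values of x
            sigma = A[i] @ x - D[i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / D[i]
    return x

# 1D Laplacian system: -x_{i-1} + 2 x_i - x_{i+1} = h^2 f_i
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n) / n**2
x = sor(A, b)
print(np.linalg.norm(A @ x - b))  # residual is driven close to zero
```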
Slides 46-49: Optical Flow: Iterative Estimation. Estimate, then update, and repeat: initial guess x_0, refined estimate x (using d for the displacement here instead of u).
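The estimate/update loop can be written out for the 1D case (a sketch with invented names; I resample I2 analytically here, whereas a real implementation would interpolate a sampled signal):

```python
import numpy as np

# Linearise around the current displacement d, solve for the increment,
# warp, and repeat (Gauss-Newton / Lucas-Kanade style update).
x = np.linspace(0, 4 * np.pi, 400)
true_d = 0.35
I1 = np.sin(x)            # signal at time t
I2 = np.sin(x - true_d)   # same signal displaced by true_d

d = 0.0
for _ in range(5):
    I2_warp = np.sin((x + d) - true_d)    # I2 resampled at x + d
    g = np.gradient(I2_warp, x)           # derivative of the warped signal
    err = I1 - I2_warp                    # remaining intensity error
    d += np.sum(g * err) / np.sum(g * g)  # least-squares increment
print(d)  # converges to ~0.35
```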
Slide 50: Large displacements: reduce resolution!
Slide 51: Coarse-to-fine optical flow estimation. Gaussian pyramids of image 1 and image 2: a displacement of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels.
Slide 52: Coarse-to-fine optical flow estimation. Run iterative optical flow at the coarsest level, then warp and upsample; run iterative optical flow at the next level, and repeat down the Gaussian pyramids of the two images.
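An end-to-end sketch of this scheme (my own construction: global x-translation only, a 2x2 block-mean pyramid instead of a proper Gaussian one, and crude edge handling in the warp):

```python
import numpy as np

def downsample(I):
    # 2x2 block mean: a cheap stand-in for a Gaussian pyramid level
    return 0.25 * (I[::2, ::2] + I[1::2, ::2] + I[::2, 1::2] + I[1::2, 1::2])

def translation_lk(I1, I2, u0=0.0):
    """One linearised LK step for a global x-shift, around the estimate u0."""
    xs = np.arange(I2.shape[1])
    # warp I2 by u0 via row-wise linear interpolation (edges are clamped)
    I2w = np.stack([np.interp(xs + u0, xs, row) for row in I2])
    Ix = np.gradient(I2w, axis=1)
    It = I2w - I1
    return u0 - np.sum(Ix * It) / np.sum(Ix * Ix)

H, W = 64, 64
xs_full = np.arange(W, dtype=float)
shift = 6.0  # far too large for a single linearised step at full resolution
I1 = np.tile(np.sin(2 * np.pi * xs_full / 32), (H, 1))
I2 = np.tile(np.sin(2 * np.pi * (xs_full - shift) / 32), (H, 1))

levels = [(I1, I2)]
for _ in range(3):  # build 4-level pyramids
    levels.append((downsample(levels[-1][0]), downsample(levels[-1][1])))

u = 0.0
for Ia, Ib in reversed(levels):   # coarsest to finest
    u = translation_lk(Ia, Ib, 2 * u)  # double the coarse estimate, refine
print(u)  # close to the true shift of 6 pixels
```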
Slide 53: Beyond the brightness constancy constraint. SIFT flow: dense correspondence across different scenes, C. Liu, J. Yuen, A. Torralba, J. Sivic, W. Freeman, ECCV 2008. T. Brox, J. Malik, Large displacement optical flow: descriptor matching in variational motion estimation, PAMI 2011.
Slide 54: Beyond the brightness constancy constraint. E. Trulls, I. Kokkinos, A. Sanfeliu, and F. Moreno, Dense Segmentation-Aware Descriptors, CVPR 2013.
Slides 55-58: Motion-based action recognition. Discovering Discriminative Action Parts from Mid-Level Video Representations, M. Raptis, I. Kokkinos and S. Soatto, CVPR.
Slide 59: Motion-based segmentation. Unsupervised segmentation incorporating colour, texture, and motion, T. Brox, M. Rousson, R. Deriche, J. Weickert, IVC 2010.
Slide 60: Further literature pointers.
Layered motion, parametric models, motion segmentation:
- Layered Representation for Motion Analysis, J. Wang and E. Adelson, CVPR.
- M.J. Black and P. Anandan, The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields, CVIU 1996.
- D. Cremers, S. Soatto, Motion Competition: A Variational Approach to Piecewise Parametric Motion Segmentation, IJCV 2004.
- Unsupervised segmentation incorporating colour, texture, and motion, T. Brox, M. Rousson, R. Deriche, J. Weickert, IVC 2010 (wait for R. Szeliski's talk).
Learning:
- S. Roth and M.J. Black, On the spatial statistics of optical flow, IJCV 74, 2007.
Discrete optimization (W. Freeman's talk today):
- Fusion Moves for Markov Random Field Optimization, V. Lempitsky, C. Rother, S. Roth, and A. Blake, PAMI.
- B. Glocker, N. Komodakis, G. Tziritas, N. Navab, N. Paragios, Dense Image Registration through MRFs and Efficient Linear Programming, MIA 2008.
Slide 61: Code.
- Secrets of optical flow estimation and their principles, D. Sun, S. Roth, and M. Black, CVPR 2010.
- T. Brox, J. Malik, Large displacement optical flow: descriptor matching in variational motion estimation, PAMI 2011.
Action recognition & descriptor works:
Registration service:
Mutual information for multi-modal, discontinuity-preserving image registration Giorgio Panin German Aerospace Center (DLR) Institute for Robotics and Mechatronics Münchner Straße 20, 82234 Weßling Abstract.
More informationLecture 7: Finding Features (part 2/2)
Lecture 7: Finding Features (part 2/2) Dr. Juan Carlos Niebles Stanford AI Lab Professor Fei- Fei Li Stanford Vision Lab 1 What we will learn today? Local invariant features MoPvaPon Requirements, invariances
More informationStochastic uncertainty models for the luminance consistency assumption
JOURNAL OF L A TEX CLASS FILES, VOL., NO., MONTH 1 Stochastic uncertainty models for the luminance consistency assumption Thomas Corpetti 1 and Etienne Mémin 2 1 : CNRS/LIAMA, Institute of Automation,
More informationUncertainty Models in Quasiconvex Optimization for Geometric Reconstruction
Uncertainty Models in Quasiconvex Optimization for Geometric Reconstruction Qifa Ke and Takeo Kanade Department of Computer Science, Carnegie Mellon University Email: ke@cmu.edu, tk@cs.cmu.edu Abstract
More informationing a minimal set of layers, each describing a coherent motion from the scene, which yield a complete description of motion when superposed. Jepson an
The Structure of Occlusion in Fourier Space S. S. Beauchemin Dept. of Computer Science The University of Western Ontario London, Canada N6A 5B7 e-mail: beau@csd.uwo.ca J. L. Barron Dept. of Computer Science
More informationCorner detection: the basic idea
Corner detection: the basic idea At a corner, shifting a window in any direction should give a large change in intensity flat region: no change in all directions edge : no change along the edge direction
More informationT H E S I S. Computer Engineering May Smoothing of Matrix-Valued Data. Thomas Brox
D I P L O M A Computer Engineering May 2002 T H E S I S Smoothing of Matrix-Valued Data Thomas Brox Computer Vision, Graphics, and Pattern Recognition Group Department of Mathematics and Computer Science
More informationDifferential Motion Analysis
Differential Motion Analysis Ying Wu Electrical Engineering and Computer Science Northwestern University, Evanston, IL 60208 yingwu@ece.northwestern.edu http://www.eecs.northwestern.edu/~yingwu July 19,
More informationAchieving scale covariance
Achieving scale covariance Goal: independently detect corresponding regions in scaled versions of the same image Need scale selection mechanism for finding characteristic region size that is covariant
More informationGalilean-diagonalized spatio-temporal interest operators
Galilean-diagonalized spatio-temporal interest operators Tony Lindeberg, Amir Akbarzadeh and Ivan Laptev Computational Vision and Active Perception Laboratory (CVAP) Department of Numerical Analysis and
More informationA Background Layer Model for Object Tracking through Occlusion
A Background Layer Model for Object Tracking through Occlusion Yue Zhou and Hai Tao UC-Santa Cruz Presented by: Mei-Fang Huang CSE252C class, November 6, 2003 Overview Object tracking problems Dynamic
More informationCSCI 250: Intro to Robotics. Spring Term 2017 Prof. Levy. Computer Vision: A Brief Survey
CSCI 25: Intro to Robotics Spring Term 27 Prof. Levy Computer Vision: A Brief Survey What Is Computer Vision? Higher-order tasks Face recognition Object recognition (Deep Learning) What Is Computer Vision?
More informationFantope Regularization in Metric Learning
Fantope Regularization in Metric Learning CVPR 2014 Marc T. Law (LIP6, UPMC), Nicolas Thome (LIP6 - UPMC Sorbonne Universités), Matthieu Cord (LIP6 - UPMC Sorbonne Universités), Paris, France Introduction
More informationCOURSE INTRODUCTION. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception
COURSE INTRODUCTION COMPUTATIONAL MODELING OF VISUAL PERCEPTION 2 The goal of this course is to provide a framework and computational tools for modeling visual inference, motivated by interesting examples
More informationA Generative Perspective on MRFs in Low-Level Vision Supplemental Material
A Generative Perspective on MRFs in Low-Level Vision Supplemental Material Uwe Schmidt Qi Gao Stefan Roth Department of Computer Science, TU Darmstadt 1. Derivations 1.1. Sampling the Prior We first rewrite
More informationInstance-level l recognition. Cordelia Schmid & Josef Sivic INRIA
nstance-level l recognition Cordelia Schmid & Josef Sivic NRA nstance-level recognition Particular objects and scenes large databases Application Search photos on the web for particular places Find these
More information