Methods in Computer Vision: Introduction to Optical Flow

Oren Freifeld
Computer Science, Ben-Gurion University
March 22 and March 26, 2017

A Preliminary Discussion: Example and Flow Visualizations
Example: two frames from the RubberWhale sequence.

A Preliminary Discussion: Optical Flow
Figure from Deqing Sun's thesis, 2013.

A Preliminary Discussion: Motion Field versus Optical Flow
Motion field: a 2D motion field describes the 3D motion projected onto the 2D image plane.
Figure from Michael Black's thesis, 1992.

A Preliminary Discussion: Optical Flow is the Apparent Motion
Examples:
Barber's pole illusion
Aperture problem
A single-color rotating sphere
An array of lights blinking in a specific order

A Preliminary Discussion: Setting and (Loosely-Defined) Goal
Two digital grayscale images: 2D arrays whose range is, say, {0, 1, ..., 255}. It will be useful, however, at least at the beginning, to view the images as real-valued functions defined over the continuum:
$$I_1 : \Omega \to \mathbb{R}, \qquad I_2 : \Omega \to \mathbb{R}, \qquad (1)$$
where $\Omega \subset \mathbb{R}^2$ is a rectangle.
Find a spatial transformation connecting $I_1$ and $I_2$.

A Preliminary Discussion: Gradient-Based Optimization
In our first formulation, we will define a cost function to be minimized w.r.t. the transformation. The optimization method will be based on gradients. The gradient of $f(\mathbf{x})$, $f : \mathbb{R}^n \to \mathbb{R}$, is the (row) vector
$$\begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_n} \end{bmatrix}.$$
Example: the gradient of
$$f(x, y, z) = x^2 + 2xy + \sin(z) \qquad (2)$$
is
$$\begin{bmatrix} 2x + 2y & 2x & \cos(z) \end{bmatrix}.$$

A Preliminary Discussion: Displacement Field $\mathbf{u} = (u, v)$
Write the (optical-flow) transformation, from $\Omega$ to $\mathbb{R}^2$, as
$$\mathbf{x} = (x, y) \mapsto (x + u(\mathbf{x}),\, y + v(\mathbf{x})) = \mathbf{x} + \mathbf{u}(\mathbf{x}) \qquad (3)$$
where
$$u : \Omega \to \mathbb{R}, \qquad v : \Omega \to \mathbb{R}, \qquad \mathbf{u} = (u, v) : \Omega \to \mathbb{R}^2.$$
On a computer, $u$ and $v$ are 2D arrays of the same size as $I_1$ (or $I_2$).

A Preliminary Discussion: Classical Formulation of Optical Flow
Goal: given $I_1$ and $I_2$, and viewing $(u, v)$ as discretely defined, find good values of $u$ and $v$ at every pixel $\mathbf{x} = (x, y)$; i.e., we want
$$I_1(\mathbf{x}) \approx I_2(x + u(\mathbf{x}),\, y + v(\mathbf{x})). \qquad (4)$$
Our yet-to-be-defined per-pixel cost function will be optimized w.r.t. $(u(\mathbf{x}), v(\mathbf{x}))$:
$$(\hat{u}(\mathbf{x}), \hat{v}(\mathbf{x})) = \arg\min_{u(\mathbf{x}),\, v(\mathbf{x})} E\big(u(\mathbf{x}), v(\mathbf{x}), I_1(\mathbf{x}), I_2(\cdot)\big). \qquad (5)$$
$E$ depends on the value of $I_1$ only at $\mathbf{x}$, but on the entirety of $I_2$.

A Preliminary Discussion: Simplification via Linearization
$E$ will be hard to work with due to the nonlinear way $I_2(x + u(\mathbf{x}), y + v(\mathbf{x}))$ depends on its arguments, leading to nasty equations (one per pixel). We will simplify things via a per-pixel linear approximation. This means more assumptions. The simplification will lead to linear equations (still one per pixel).

A Preliminary Discussion: The Problem is Ill-Posed
Problem: each equation will have two unknowns and only one constraint.

A Preliminary Discussion: Two Main Popular Approaches for a Solution
Global methods (adding smoothness/regularization)
Patch-based methods (adding constraints)
In both cases, there is also a probabilistic take on all this.

A Preliminary Discussion: On Adding Constraints or Smoothness/Regularization
A good idea even if we had enough equations:
1D signals (1 equation, 1 unknown)
2-channel images (2 equations, 2 unknowns)
RGB images (3 equations, 2 unknowns)
This is partly because such measurements tend to be correlated, and mostly because some solutions are better than others even if the latter are better supported by the data.

A Preliminary Discussion: The Assumptions are Wrong
All the assumptions alluded to are wrong, but still useful. That said, we will see how some of them can be improved (to better-but-still-wrong ones).

Technical Issue: Implementing Image Warping
For a nominal flow, $u, v$, and a nominal image $I$, there is the technical issue of how, on a computer, we can evaluate $I(x + u(\mathbf{x}), y + v(\mathbf{x}))$. For now, we will defer this discussion and assume we have a method that accomplishes it.
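One concrete way to implement the deferred warping step is bilinear interpolation. Below is a minimal sketch, assuming float-valued grayscale images; the function name, the use of scipy.ndimage.map_coordinates, and the boundary handling (mode='nearest') are choices of this sketch, not prescribed by the slides.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(I, u, v):
    """Evaluate I(x + u(x), y + v(x)) at every pixel via bilinear interpolation."""
    h, w = I.shape
    y, x = np.mgrid[0:h, 0:w]
    # map_coordinates expects sample locations in (row, col) = (y, x) order
    return map_coordinates(I, [y + v, x + u], order=1, mode='nearest')
```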

Brightness Constancy
$I_1 : \Omega \to \mathbb{R}$ and $I_2 : \Omega \to \mathbb{R}$, where $\Omega$ is a rectangle. Introduce a time variable, $t$:
$$I_1(\mathbf{x}) = I(\mathbf{x}, t) \qquad (6)$$
$$I_2(\mathbf{x}) = I(\mathbf{x}, t + 1) \qquad (7)$$
where $\mathbf{x} = (x, y) \in \Omega$ is the (pixel) location and $t$ is time. The brightness constancy assumption is:
$$I_1(\mathbf{x}) = I(\mathbf{x}, t) = I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) = I_2(x + u(\mathbf{x}),\, y + v(\mathbf{x})). \qquad (8)$$

Brightness Constancy
May work well for salient features; ambiguity in homogeneous regions.
Figure from Deqing Sun's thesis, 2013.

Brightness Constancy
Occlusions and disocclusions violate the assumption.
Figure from Deqing Sun's thesis, 2013.

Brightness Constancy
Can also break for many other reasons: illumination changes, objects changing colors or leaving/entering the scene, etc.

Brightness Constancy: A Straightforward Choice for a Cost Function
Brightness constancy:
$$I(\mathbf{x}, t) = I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) \qquad (9)$$
Define a per-pixel squared error:
$$\varepsilon^2(u(\mathbf{x}), v(\mathbf{x})) = \big(I(\mathbf{x}, t) - I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1)\big)^2 \qquad (10)$$

Brightness Constancy: First Problem: Nonlinearity
$$\varepsilon^2(u(\mathbf{x}), v(\mathbf{x})) = \big(I(\mathbf{x}, t) - I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1)\big)^2$$
In general, quadratic errors are easy, but here the dependency on the unknowns is nonlinear; moreover, the nonlinearity depends on the function $I$.

Brightness Constancy: Reminder: Taylor Expansion
For $f : \mathbb{R} \to \mathbb{R}$:
$$f(x + \Delta x) = f(x) + \Delta x\, \frac{df}{dx}(x) + \text{H.O.T.} \qquad (11)$$
For $f : \mathbb{R}^3 \to \mathbb{R}$, $\mathbf{x} = (x, y, z)$:
$$f(x + \Delta x,\, y + \Delta y,\, z + \Delta z) = f(\mathbf{x}) + \Delta x\, f_x(\mathbf{x}) + \Delta y\, f_y(\mathbf{x}) + \Delta z\, f_z(\mathbf{x}) + \text{H.O.T.} \qquad (12)$$
Equivalently:
$$f(x + \Delta x,\, y + \Delta y,\, z + \Delta z) = f(\mathbf{x}) + \begin{bmatrix} f_x(\mathbf{x}) & f_y(\mathbf{x}) & f_z(\mathbf{x}) \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} + \text{H.O.T.} \qquad (13)$$
H.O.T. = higher-order terms; $f_x$ is short for $\partial f / \partial x$, etc.

Brightness Constancy: Simplifying $I(x + u(\mathbf{x}), y + v(\mathbf{x}), t + 1)$
Assuming differentiability:
$$I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) = I(\mathbf{x}, t) + \begin{bmatrix} I_x(\mathbf{x}, t) & I_y(\mathbf{x}, t) & I_t(\mathbf{x}, t) \end{bmatrix} \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \\ 1 \end{bmatrix} + \text{H.O.T.} \qquad (14)$$
Assuming small motions:
$$I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) \approx I(\mathbf{x}, t) + \nabla_{\mathbf{x}} I(\mathbf{x}, t) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + I_t(\mathbf{x}, t)$$
where $\nabla_{\mathbf{x}} I(\mathbf{x}, t) = \begin{bmatrix} I_x(\mathbf{x}, t) & I_y(\mathbf{x}, t) \end{bmatrix}$ and $I_t(\mathbf{x}, t)$ are the spatial and temporal partial derivatives, respectively. We can also write:
$$I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) \approx I(\mathbf{x}, t) + I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x}) + I_t(\mathbf{x}, t). \qquad (15, 16)$$

Brightness Constancy: The Gradient-Constraint Equation
$$I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1) \approx I(\mathbf{x}, t) + \nabla_{\mathbf{x}} I(\mathbf{x}, t) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + I_t(\mathbf{x}, t) \qquad (17)$$
Now bring in brightness constancy and neglect higher-order terms:
$$\nabla_{\mathbf{x}} I(\mathbf{x}, t) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + I_t(\mathbf{x}, t) = 0. \qquad (18)$$
Equivalently:
$$I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x}) + I_t(\mathbf{x}, t) = 0. \qquad (19)$$
This is the gradient-constraint equation: the spatio-temporal gradient constrains the values of the flow.

Brightness Constancy: Second Problem: One Linear Equation, Two Unknowns
$$I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x}) + I_t(\mathbf{x}, t) = 0. \qquad (20)$$
Unknowns: $u(\mathbf{x})$ and $v(\mathbf{x})$.
Knowns (i.e., observed/approximated): $I_x(\mathbf{x}, t)$, $I_y(\mathbf{x}, t)$, and $I_t(\mathbf{x}, t)$. One way to approximate these is via first-order finite differences; e.g., $I_t(\mathbf{x}) \approx I(\mathbf{x}, t + 1) - I(\mathbf{x}, t)$.
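As an illustration of the finite-difference approximation just mentioned, here is a minimal sketch, assuming two float-valued images of equal size; the forward-difference choice and the function name are this sketch's, and better derivative filters are mentioned later in the slides.

```python
import numpy as np

def spatiotemporal_derivatives(I1, I2):
    """Crude first-order finite-difference approximations of I_x, I_y, I_t."""
    Ix = np.zeros_like(I1)
    Iy = np.zeros_like(I1)
    Ix[:, :-1] = I1[:, 1:] - I1[:, :-1]   # horizontal forward difference
    Iy[:-1, :] = I1[1:, :] - I1[:-1, :]   # vertical forward difference
    It = I2 - I1                          # temporal difference I(x, t+1) - I(x, t)
    return Ix, Iy, It
```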

Brightness Constancy: Digression: In 1D, One Linear Equation, One Unknown
$I(x, t)$, $x \in \mathbb{R}$ (instead of $I(\mathbf{x}, t)$, $\mathbf{x} \in \Omega \subset \mathbb{R}^2$):
$$I_x(x, t)u(x) + I_t(x, t) = 0 \qquad (21)$$
$$u(x) = -\frac{I_t(x, t)}{I_x(x, t)}. \qquad (22)$$

Brightness Constancy: From the Total Derivative to the Gradient-Constraint Equation
The gradient-constraint equation can also be derived from brightness constancy in a different way, if we view $(u, v)$ as the velocity of a time-dependent location, $(x, y) = (x(t), y(t))$:
$$0 \overset{\text{B.C.}}{=} \frac{d}{dt} I(x, y, t) = \frac{\partial I}{\partial x}\frac{dx}{dt} + \frac{\partial I}{\partial y}\frac{dy}{dt} + \frac{\partial I}{\partial t}\underbrace{\frac{dt}{dt}}_{1} = I_x(x, y, t)u(x, y) + I_y(x, y, t)v(x, y) + I_t(x, y, t) \qquad (23)$$

Brightness Constancy: From the Total Derivative to the Gradient-Constraint Equation
Notationally suppressing $(x, y, t)$ and $(x, y)$:
$$0 \overset{\text{B.C.}}{=} \frac{dI}{dt} = I_x\frac{dx}{dt} + I_y\frac{dy}{dt} + I_t\underbrace{\frac{dt}{dt}}_{1} = I_x u + I_y v + I_t \qquad (24)$$

Brightness Constancy: A Simpler Cost Function
Old:
$$\varepsilon^2(u(\mathbf{x}), v(\mathbf{x})) = \big(I(\mathbf{x}, t) - I(x + u(\mathbf{x}),\, y + v(\mathbf{x}),\, t + 1)\big)^2 \qquad (25)$$
New:
$$\varepsilon^2(u(\mathbf{x}), v(\mathbf{x})) = \big(I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x}) + I_t(\mathbf{x}, t)\big)^2 \qquad (26)$$

Brightness Constancy: Connection to a Gaussian Model
Minimizing $\varepsilon^2(u(\mathbf{x}), v(\mathbf{x}))$ (w.r.t. $u(\mathbf{x})$ and $v(\mathbf{x})$) is equivalent to maximizing the likelihood
$$p\big(-I_t(\mathbf{x}, t);\, u(\mathbf{x}), v(\mathbf{x}), I_x(\mathbf{x}, t), I_y(\mathbf{x}, t)\big), \qquad (27)$$
assuming
$$-I_t(\mathbf{x}, t) \sim \mathcal{N}(\mu, \sigma^2) \qquad (28)$$
where $\mu = I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x})$.

Brightness Constancy: Connection to a Gaussian Model
Notationally suppressing the dependencies on $\mathbf{x}$ and $t$: minimizing $\varepsilon^2(u, v)$ (w.r.t. $u$ and $v$) is equivalent to maximizing the likelihood
$$p(-I_t;\, u, v, I_x, I_y), \qquad (29)$$
assuming
$$-I_t \sim \mathcal{N}(\mu, \sigma^2) \qquad (30)$$
where $\mu = I_x u + I_y v$.

Brightness Constancy: Connection Between a Quadratic Error and a Gaussian Model
Proof: let $x, \mu \in \mathbb{R}$, $\sigma > 0$, and
$$x \sim \mathcal{N}(\mu, \sigma^2). \qquad (31)$$
Then
$$\arg\max_{\mu} \mathcal{N}(x; \mu, \sigma^2) = \arg\max_{\mu} \log \mathcal{N}(x; \mu, \sigma^2) = \arg\max_{\mu} \left[ -\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2} + \log \operatorname{const}(\sigma) \right] = \arg\max_{\mu} -\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2} = \arg\min_{\mu} (x - \mu)^2, \qquad (32, 33)$$
since
$$\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2} \right).$$

Brightness Constancy: The Problem is Still Under-Constrained
$$\varepsilon^2(u(\mathbf{x}), v(\mathbf{x})) = \big(I_x(\mathbf{x}, t)u(\mathbf{x}) + I_y(\mathbf{x}, t)v(\mathbf{x}) + I_t(\mathbf{x}, t)\big)^2 \qquad (34)$$
We can easily obtain zero error, but this will not be that useful (e.g., there are infinitely many solutions).

Brightness Constancy: Perpendicular Vectors
Reminder: for two perpendicular vectors, $\mathbf{a}$ and $\mathbf{b}$,
$$0 = \cos(90^\circ) = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|} = \frac{\mathbf{a}^T \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|} \;\Rightarrow\; \mathbf{a}^T \mathbf{b} = 0. \qquad (35)$$

Brightness Constancy: Normal and Tangent Flows
Normal flow: the component of the flow parallel to $\nabla_{\mathbf{x}} I(\mathbf{x}, t)^T$ (thus normal to image "edges").
Tangent flow: the component of the flow perpendicular to $\nabla_{\mathbf{x}} I(\mathbf{x}, t)^T$ (thus tangent to image "edges").
$$\mathbf{u} = \begin{bmatrix} u \\ v \end{bmatrix} = \mathbf{u}_{\text{normal}} + \mathbf{u}_{\text{tangent}} \qquad (36)$$
$$0 \overset{\text{G.C.}}{=} \nabla_{\mathbf{x}} I(\mathbf{x}, t)\,\mathbf{u} + I_t = \nabla_{\mathbf{x}} I(\mathbf{x}, t)\,\mathbf{u}_{\text{normal}} + \underbrace{\nabla_{\mathbf{x}} I(\mathbf{x}, t)\,\mathbf{u}_{\text{tangent}}}_{0} + I_t \qquad (37)$$
$$\Rightarrow\; \nabla_{\mathbf{x}} I(\mathbf{x}, t)\,\mathbf{u}_{\text{normal}} + I_t = 0 \qquad (38)$$

Brightness Constancy: Normal Flow
The gradient-constraint equation gives only the normal flow, the component of the flow that is parallel to $\nabla_{\mathbf{x}} I^T = \begin{bmatrix} I_x & I_y \end{bmatrix}^T$. By parallelism,
$$\mathbf{u}_{\text{normal}} = \alpha \begin{bmatrix} I_x \\ I_y \end{bmatrix} \quad \text{for some } \alpha \in \mathbb{R}. \qquad (39)$$
$$\nabla_{\mathbf{x}} I\,\mathbf{u}_{\text{normal}} + I_t = \alpha I_x I_x + \alpha I_y I_y + I_t \overset{\text{G.C.}}{=} 0 \;\Rightarrow\; \alpha = \frac{-I_t}{I_x^2 + I_y^2} \qquad (40)$$
$$\mathbf{u}_{\text{normal}} = \frac{-I_t}{I_x^2 + I_y^2} \begin{bmatrix} I_x \\ I_y \end{bmatrix} = \frac{-I_t}{\|\nabla_{\mathbf{x}} I\|^2} \begin{bmatrix} I_x \\ I_y \end{bmatrix} \qquad (41)$$
$$\|\mathbf{u}_{\text{normal}}\|^2 = \frac{I_x^2 I_t^2 + I_y^2 I_t^2}{\|\nabla_{\mathbf{x}} I\|^4} = \frac{I_t^2\,\|\nabla_{\mathbf{x}} I\|^2}{\|\nabla_{\mathbf{x}} I\|^4} = \frac{I_t^2}{\|\nabla_{\mathbf{x}} I\|^2} \;\Rightarrow\; \|\mathbf{u}_{\text{normal}}\| = \frac{|I_t|}{\|\nabla_{\mathbf{x}} I\|} \qquad (42, 43)$$
Consistent with the 1D result: $u = -I_t / I_x$.
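Given the per-pixel derivative arrays, equation (41) can be evaluated directly. A minimal sketch (the small eps guarding the division in flat regions, where the normal flow is undefined anyway, is this sketch's choice):

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-12):
    """Per-pixel normal flow  u_normal = -I_t * grad(I) / ||grad(I)||^2  (eq. 41)."""
    scale = -It / (Ix**2 + Iy**2 + eps)
    return scale * Ix, scale * Iy
```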

Brightness Constancy: Tangent Flow
The gradient-constraint equation provides no information about the tangent flow.

Brightness Constancy: Two Main Approaches for a Solution
The global approach [Horn and Schunck, 1981], which incorporates smoothness.
The patch-based (or local) approach [Lucas and Kanade, 1981], which adds constraints.
Both approaches modify the cost function, but in different ways.

The Global Approach
A cost function that couples the values of the flow in the entire image. Favors spatial smoothness (AKA spatial coherence). Tradeoff between deviations from the gradient-constraint equation and deviations from spatial smoothness.
Figure from Michael Black's thesis, 1992.

The Global Approach: Markov Random Fields
We will later see the connection between the global approach and a class of probabilistic (graphical) models known as MRFs. This will lead to a Bayesian probabilistic interpretation, additional inference methods, and more.

The Global Approach: Spatial Smoothness
The spatial derivatives of the optical flow are
$$\left(\frac{\partial u}{\partial x},\; \frac{\partial u}{\partial y},\; \frac{\partial v}{\partial x},\; \frac{\partial v}{\partial y}\right). \qquad (44)$$
Favor spatial smoothness by encouraging (the approximations of) these derivatives to be small; e.g., if using finite differences, penalize differences between the values of the flow at nearby pixels.

The Global Approach: Notation
Let $(i, j)$ denote the discrete location of pixel $\mathbf{x}$. Let a single index, $s$ (short for "site"), denote a generic $(i, j)$ pair. For $s = (i, j)$, write $s' \sim s$ if
$$s' \in \{(i + 1, j),\, (i - 1, j),\, (i, j + 1),\, (i, j - 1)\}. \qquad (45)$$
The new cost function is (where we notationally dropped the dependency on $t$):
$$E(u, v, I_1, I_2) = \sum_s \Big( \big(I_x(s)u(s) + I_y(s)v(s) + I_t(s)\big)^2 + \lambda \sum_{s':\, s' \sim s} \big[(u(s) - u(s'))^2 + (v(s) - v(s'))^2\big] \Big) \qquad (46)$$
Want: $\arg\min_{u, v} E(u, v, I_1, I_2)$, solving for the values of $(u, v)$ at all of the pixels at once.

The Global Approach: Computing the Gradient
If there are $N$ pixels, then the gradient of $E$ is a (row) vector of length $2N$ whose entries are (for outer-boundary pixels an adjustment is needed):
$$\frac{\partial E}{\partial u(s)} = 2\big(I_x^2(s)u(s) + I_x(s)I_y(s)v(s) + I_x(s)I_t(s)\big) + 4\lambda \sum_{s':\, s' \sim s} (u(s) - u(s')) \qquad (47)$$
$$\frac{\partial E}{\partial v(s)} = 2\big(I_y(s)I_x(s)u(s) + I_y^2(s)v(s) + I_y(s)I_t(s)\big) + 4\lambda \sum_{s':\, s' \sim s} (v(s) - v(s')) \qquad (48)$$

The Global Approach: Critical Points
Set the gradient to zero and obtain the normal equations (for pixels on the outer boundary of the image these need to be adjusted a bit):
$$I_x^2(s)u(s) + I_x(s)I_y(s)v(s) + I_x(s)I_t(s) + 2\lambda \sum_{s':\, s' \sim s} (u(s) - u(s')) = 0 \qquad (49)$$
$$I_y(s)I_x(s)u(s) + I_y^2(s)v(s) + I_y(s)I_t(s) + 2\lambda \sum_{s':\, s' \sim s} (v(s) - v(s')) = 0 \qquad (50)$$
Solving these simultaneously for all $s$ minimizes the cost function $E(u, v, I_1, I_2)$ from (46).

The Global Approach: Solving the Normal Equations
A large, but also very sparse, linear system of the form $A\xi = b$. If $N$ is the number of pixels, then $A$ is $2N \times 2N$, while $\xi$ and $b$ are $2N \times 1$. The solution is the optimal flow (under this model).
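Instead of assembling the sparse 2N x 2N system explicitly, one common way to satisfy the per-pixel normal equations (49)-(50) is a Jacobi-style iteration: with the 4-neighbour structure of (45), the smoothness term becomes 8*lambda*(u(s) - u_bar(s)), where u_bar is the 4-neighbour mean, and solving the resulting per-pixel 2x2 system in closed form gives the update below (essentially the classical Horn-Schunck update, with 8*lambda in the role of their alpha^2). The function name, the lambda value, and the iteration count are assumptions of this sketch; its inputs are the derivative arrays (e.g., from the earlier sketch).

```python
import numpy as np
from scipy.ndimage import convolve

def global_flow(Ix, Iy, It, lam=1.0, n_iters=200):
    """Jacobi-style iterations consistent with the normal equations (49)-(50)."""
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    avg = np.array([[0., .25, 0.], [.25, 0., .25], [0., .25, 0.]])  # 4-neighbour mean
    c = 8.0 * lam                       # 2*lambda per neighbour, 4 neighbours
    for _ in range(n_iters):
        u_bar = convolve(u, avg, mode='nearest')
        v_bar = convolve(v, avg, mode='nearest')
        r = (Ix * u_bar + Iy * v_bar + It) / (Ix**2 + Iy**2 + c)
        u = u_bar - Ix * r              # exact solution of the per-pixel 2x2 system,
        v = v_bar - Iy * r              # given the neighbours' previous values
    return u, v
```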

The Global Approach: Motion Discontinuities Violate Spatial Smoothness
Figure from Michael Black's thesis, 1992.

The Global Approach: The Seminal [Horn and Schunck, 1981] Paper
Started with a slightly different representation and model, but ended up with a similar linear system. Used a not-so-great method to solve it. One of the most cited computer-vision papers, partly because of the originality and importance, and partly because there was room left for improvement...
For years HS was considered so inaccurate that it was believed that accurate optical-flow estimation is a lost cause.
Since the early 90s [Black and Anandan]: new methods started to drastically improve optical-flow accuracy.
[Sun and Black, 2011]: it turns out that with only slight modifications, HS is comparable to the state of the art.

The Global Approach: Review of the Assumptions in the Global Approach
Constant brightness; deviations from constancy are Gaussian.
Small motion; the first-order Taylor approximation is good enough; the image is differentiable (w.r.t. x, y, and t).
Smooth flow field; deviations from smoothness are Gaussian; first-order smoothness is all that matters; flow spatial derivatives can be approximated by first differences.
All these assumptions are problematic.

The Global Approach: Modifications
We will later discuss how some of these assumptions can be improved. Some of these modifications also apply to the local approach.
Better derivatives (improves results).
Coarse-to-fine (handles larger motions; partially improves the Taylor approximation).
Higher-order Taylor approximations (not that popular).
Use a larger neighborhood to determine the amount of smoothness.
Layered models (e.g., penalize lack of smoothness only within a layer): much better results, but now one needs to solve for the layers too.

The Global Approach: Modifications (Cont.)
Replace the quadratic error function with a robust error function (effectively replaces Gaussians with heavy-tailed distributions):
$$E(u, v, I_1, I_2) = \sum_s \Big( \rho_D\big(I_x(s)u(s) + I_y(s)v(s) + I_t(s)\big) + \lambda \sum_{s':\, s' \sim s} \big[\rho_S(u(s) - u(s')) + \rho_S(v(s) - v(s'))\big] \Big) \qquad (51)$$
where $\rho_D : \mathbb{R} \to \mathbb{R}_{\geq 0}$ and $\rho_S : \mathbb{R} \to \mathbb{R}_{\geq 0}$ are the new error functions. Choosing $x \mapsto x^2$ recovers the original quadratic error. We will discuss robust statistics in more detail later. Using robust error measures yields much better results, but inference is harder; we will discuss some possible approaches. Median filtering of the flow turns out to be very important; we will discuss why this is so.
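For concreteness, two penalties that are commonly used for rho_D and rho_S are sketched below; the specific choices, names, and default parameters are standard heavy-tailed examples, not prescribed by the slides.

```python
import numpy as np

def rho_quadratic(x):
    """x -> x^2 recovers the original cost of eq. (46)."""
    return x**2

def rho_charbonnier(x, eps=1e-3):
    """A smooth L1 penalty: rho(x) = sqrt(x^2 + eps^2)."""
    return np.sqrt(x**2 + eps**2)

def rho_lorentzian(x, sigma=1.0):
    """A heavy-tailed penalty: rho(x) = log(1 + 0.5*(x/sigma)^2)."""
    return np.log1p(0.5 * (x / sigma)**2)
```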

The Patch-Based Approach: The Main Idea
Use additional measurements from nearby pixels to (over-)constrain the values of $(u, v)$ at the pixel of interest, $\mathbf{x}$. The original formulation: [Lucas & Kanade, 1981]. Their application was related to stereo.

The Patch-Based Approach: Adding Measurements
Add equations from neighboring pixels (e.g., a 5 × 5 neighborhood), but pretend the optical flow in these pixels is the same as at $\mathbf{x}$:
$$I_x(\mathbf{x}_1, t)u(\mathbf{x}) + I_y(\mathbf{x}_1, t)v(\mathbf{x}) + I_t(\mathbf{x}_1, t) = 0$$
$$\vdots$$
$$I_x(\mathbf{x}_N, t)u(\mathbf{x}) + I_y(\mathbf{x}_N, t)v(\mathbf{x}) + I_t(\mathbf{x}_N, t) = 0 \qquad (52)$$
where $N$ is the number of pixels in the neighborhood (e.g., 25). More equations (e.g., 25) than unknowns (2):
$$\underbrace{\begin{bmatrix} I_x(\mathbf{x}_1, t) & I_y(\mathbf{x}_1, t) \\ \vdots & \vdots \\ I_x(\mathbf{x}_N, t) & I_y(\mathbf{x}_N, t) \end{bmatrix}}_{N \times 2} \underbrace{\begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix}}_{2 \times 1} = \underbrace{-\begin{bmatrix} I_t(\mathbf{x}_1, t) \\ \vdots \\ I_t(\mathbf{x}_N, t) \end{bmatrix}}_{N \times 1} \qquad (53)$$

The Patch-Based Approach: A Least-Squares Criterion
$$\boldsymbol{\varepsilon}(\mathbf{x}) = \begin{bmatrix} \varepsilon_1(\mathbf{x}) \\ \vdots \\ \varepsilon_N(\mathbf{x}) \end{bmatrix} \triangleq \begin{bmatrix} I_x(\mathbf{x}_1, t) & I_y(\mathbf{x}_1, t) \\ \vdots & \vdots \\ I_x(\mathbf{x}_N, t) & I_y(\mathbf{x}_N, t) \end{bmatrix} \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + \begin{bmatrix} I_t(\mathbf{x}_1, t) \\ \vdots \\ I_t(\mathbf{x}_N, t) \end{bmatrix} \qquad (54)$$
$$\|\boldsymbol{\varepsilon}(\mathbf{x})\|^2 = \sum_{i=1}^N \varepsilon_i^2(\mathbf{x}) = \sum_{i=1}^N \left( \nabla_{\mathbf{x}} I(\mathbf{x}_i, t) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + I_t(\mathbf{x}_i, t) \right)^2 \qquad (55)$$
$$\begin{bmatrix} \hat{u}(\mathbf{x}) \\ \hat{v}(\mathbf{x}) \end{bmatrix}_{\text{LS}} \triangleq \arg\min_{u(\mathbf{x}),\, v(\mathbf{x})} \|\boldsymbol{\varepsilon}(\mathbf{x})\|^2 \qquad (56)$$
Note $\|\boldsymbol{\varepsilon}(\mathbf{x})\|^2 = \boldsymbol{\varepsilon}(\mathbf{x})^T \boldsymbol{\varepsilon}(\mathbf{x})$.

The Patch-Based Approach: More Generally, a Weighted Least-Squares Criterion
$g$: a weighting function; e.g.,
$$g(\mathbf{x}, \mathbf{x}_i) = \exp\left( -\frac{1}{2} \frac{\|\mathbf{x} - \mathbf{x}_i\|^2}{\sigma_s^2} \right), \qquad \sigma_s > 0, \qquad (57)$$
where $\sigma_s$ controls the decay rate with the spatial distance.
$$\sum_{i=1}^N g(\mathbf{x}, \mathbf{x}_i) \left( \nabla_{\mathbf{x}} I(\mathbf{x}_i, t) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} + I_t(\mathbf{x}_i, t) \right)^2 = \|W^{1/2} \boldsymbol{\varepsilon}\|^2 = \boldsymbol{\varepsilon}^T W \boldsymbol{\varepsilon} \qquad (58)$$
where $W$ and $W^{1/2}$ are $N \times N$ diagonal matrices with $W_{ii} = g(\mathbf{x}, \mathbf{x}_i)$ and $(W^{1/2})_{ii} = \sqrt{g(\mathbf{x}, \mathbf{x}_i)}$ (so $W^{1/2} W^{1/2} = W$).
$$\begin{bmatrix} \hat{u}(\mathbf{x}) \\ \hat{v}(\mathbf{x}) \end{bmatrix}_{\text{WLS}} \triangleq \arg\min_{u(\mathbf{x}),\, v(\mathbf{x})} \boldsymbol{\varepsilon}^T W \boldsymbol{\varepsilon} \qquad (59)$$
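A direct sketch of the weight in eq. (57); the default value of sigma_s is an arbitrary choice for illustration.

```python
import numpy as np

def g_weight(x, xi, sigma_s=2.0):
    """g(x, x_i) = exp(-0.5 * ||x - x_i||^2 / sigma_s^2), eq. (57)."""
    d = np.asarray(x, dtype=float) - np.asarray(xi, dtype=float)
    return np.exp(-0.5 * d.dot(d) / sigma_s**2)
```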

The Patch-Based Approach: Critical Points
$$E(u(\mathbf{x}), v(\mathbf{x})) = \sum_{i=1}^N g(\mathbf{x}, \mathbf{x}_i) \big( I_x(\mathbf{x}_i, t)u(\mathbf{x}) + I_y(\mathbf{x}_i, t)v(\mathbf{x}) + I_t(\mathbf{x}_i, t) \big)^2$$
$$\frac{\partial E}{\partial u(\mathbf{x})} = \sum_{i=1}^N 2g(\mathbf{x}, \mathbf{x}_i) \big( I_x(\mathbf{x}_i, t)u(\mathbf{x}) + I_y(\mathbf{x}_i, t)v(\mathbf{x}) + I_t(\mathbf{x}_i, t) \big)\, I_x(\mathbf{x}_i, t)$$
$$\frac{\partial E}{\partial v(\mathbf{x})} = \sum_{i=1}^N 2g(\mathbf{x}, \mathbf{x}_i) \big( I_x(\mathbf{x}_i, t)u(\mathbf{x}) + I_y(\mathbf{x}_i, t)v(\mathbf{x}) + I_t(\mathbf{x}_i, t) \big)\, I_y(\mathbf{x}_i, t)$$
Set the gradient to zero:
$$0 = \sum_{i=1}^N g(\mathbf{x}, \mathbf{x}_i) \big( I_x^2(\mathbf{x}_i, t)u(\mathbf{x}) + I_x(\mathbf{x}_i, t)I_y(\mathbf{x}_i, t)v(\mathbf{x}) + I_x(\mathbf{x}_i, t)I_t(\mathbf{x}_i, t) \big) \qquad (60)$$
$$0 = \sum_{i=1}^N g(\mathbf{x}, \mathbf{x}_i) \big( I_x(\mathbf{x}_i, t)I_y(\mathbf{x}_i, t)u(\mathbf{x}) + I_y^2(\mathbf{x}_i, t)v(\mathbf{x}) + I_y(\mathbf{x}_i, t)I_t(\mathbf{x}_i, t) \big) \qquad (61)$$

The Patch-Based Approach: Critical Points (Cont.)
In matrix form:
$$M(\mathbf{x}) \begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix} = \mathbf{b}(\mathbf{x}) \qquad (62)$$
where
$$M(\mathbf{x}) = \begin{bmatrix} \sum g I_x^2 & \sum g I_x I_y \\ \sum g I_x I_y & \sum g I_y^2 \end{bmatrix}, \qquad \mathbf{b}(\mathbf{x}) = -\begin{bmatrix} \sum g I_x I_t \\ \sum g I_y I_t \end{bmatrix} \qquad (63)$$
($M(\mathbf{x})$ is $2 \times 2$ and $\mathbf{b}(\mathbf{x})$ is $2 \times 1$). If $M(\mathbf{x})$ is rank 2, then there is a solution:
$$\hat{\mathbf{u}}_{\text{LK}}(\mathbf{x}) = \hat{\mathbf{u}}_{\text{WLS}}(\mathbf{x}) = M^{-1}(\mathbf{x})\,\mathbf{b}(\mathbf{x}). \qquad (64)$$
It is consistent with the 1D case: $u(x) = -\frac{I_t(x, t)}{I_x(x, t)}$.
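A dense per-pixel sketch of eqs. (62)-(64). Computing the weighted patch sums for every pixel at once amounts to smoothing products of derivatives, so the sketch below uses a Gaussian filter as the weighting g of eq. (57); the sigma value and the singularity threshold are assumptions of this sketch, and its inputs are the derivative arrays from the earlier sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucas_kanade(Ix, Iy, It, sigma=2.0):
    """Per-pixel WLS flow: solve M(x) [u v]^T = b(x) at every pixel (eqs. 62-64)."""
    S = lambda f: gaussian_filter(f, sigma)                 # Gaussian-weighted patch sums
    m11, m12, m22 = S(Ix * Ix), S(Ix * Iy), S(Iy * Iy)      # entries of M(x)
    b1, b2 = -S(Ix * It), -S(Iy * It)                       # entries of b(x)
    det = m11 * m22 - m12 * m12
    det = np.where(np.abs(det) > 1e-9, det, np.nan)         # NaN marks (near-)singular M(x)
    u = (m22 * b1 - m12 * b2) / det                         # apply the 2x2 inverse to b
    v = (m11 * b2 - m12 * b1) / det
    return u, v
```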

The Patch-Based Approach: Analysis of M(x)
If $M(\mathbf{x})$ is singular, we can't solve the system. This is the aperture problem.
Figure taken from Szeliski's Computer Vision textbook, 2011.

The Patch-Based Approach: Analysis of M(x)
Also want $M(\mathbf{x})$ not to be too small (due to noise); i.e., its eigenvalues, $\lambda_1$ and $\lambda_2$ (with $\lambda_1 \geq \lambda_2$), should not be too small. Moreover, $M$ should be well conditioned; i.e., $\lambda_1 / \lambda_2$ should not be too large.

The Patch-Based Approach: Reminder: Eigenvectors and Eigenvalues
Let $A \in \mathbb{R}^{n \times n}$, $\mathbf{v} \in \mathbb{R}^n$ (nonzero), and let $\lambda \in \mathbb{R}$.
If $A\mathbf{v} = \lambda\mathbf{v}$, then $\mathbf{v}$ is called a right eigenvector of $A$, while $\lambda$ is its corresponding eigenvalue.
If $\mathbf{v}^T A = \lambda\mathbf{v}^T$, then $\mathbf{v}^T$ is called a left eigenvector of $A$, while $\lambda$ is its corresponding eigenvalue.

The Patch-Based Approach: Analysis of M(x)
Recall $\nabla_{\mathbf{x}} I$ is a row vector:
$$M(\mathbf{x}) = \begin{bmatrix} \sum g I_x^2 & \sum g I_x I_y \\ \sum g I_x I_y & \sum g I_y^2 \end{bmatrix} = \sum g\, \nabla_{\mathbf{x}} I^T \nabla_{\mathbf{x}} I \qquad (65)$$

The Patch-Based Approach: Analysis of M(x): Suppose x is on an Image Edge
Gradients along the edge all point in the same direction, while gradients away from the edge are of small magnitude.
$$M(\mathbf{x}) = \sum_i g(\mathbf{x}, \mathbf{x}_i)\, \nabla_{\mathbf{x}} I(\mathbf{x}_i, t)^T \nabla_{\mathbf{x}} I(\mathbf{x}_i, t) \approx \kappa\, \nabla_{\mathbf{x}} I(\mathbf{x})^T \nabla_{\mathbf{x}} I(\mathbf{x}) \qquad (66)$$
$$M(\mathbf{x})\, \nabla_{\mathbf{x}} I(\mathbf{x})^T = \kappa\, \nabla_{\mathbf{x}} I(\mathbf{x})^T \underbrace{\nabla_{\mathbf{x}} I(\mathbf{x})\, \nabla_{\mathbf{x}} I(\mathbf{x})^T}_{\|\nabla_{\mathbf{x}} I(\mathbf{x})\|^2} = \kappa\, \|\nabla_{\mathbf{x}} I(\mathbf{x})\|^2\, \nabla_{\mathbf{x}} I(\mathbf{x})^T \qquad (67)$$
Thus, $\nabla_{\mathbf{x}} I(\mathbf{x})^T$ is an eigenvector with eigenvalue $\lambda_1 = \kappa \|\nabla_{\mathbf{x}} I(\mathbf{x})\|^2$.

The Patch-Based Approach: Analysis of M(x): Suppose x is on an Image Edge (Cont.)
The other eigenvector is perpendicular to $\nabla_{\mathbf{x}} I^T$. Let $T$ be perpendicular to $\nabla_{\mathbf{x}} I^T$:
$$M(\mathbf{x})\, T \approx \kappa\, \nabla_{\mathbf{x}} I(\mathbf{x})^T \underbrace{\nabla_{\mathbf{x}} I(\mathbf{x})\, T}_{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
So the second eigenvalue, $\lambda_2$, is 0 (in particular, $M(\mathbf{x})$ is not invertible, since $\det M(\mathbf{x}) = \lambda_1 \lambda_2$). To summarize, the eigenvalues/eigenvectors of $M(\mathbf{x})$ are related to the direction and the magnitude of the edge.

The Patch-Based Approach: Analysis of M(x)
This is also related to the Harris corner detector [Harris & Stephens, 1988]:
$$R = \lambda_1 \lambda_2 - k(\lambda_1 + \lambda_2)^2 = \det M(\mathbf{x}) - k\, \big(\operatorname{trace} M(\mathbf{x})\big)^2 \qquad (68)$$
where $k$ (which is unrelated to $\kappa$ from the previous slide) is a user-defined sensitivity parameter (usually $k = 0.04$).
Images taken from the OpenCV documentation.

The Patch-Based Approach: Analysis of M(x)
The Shi-Tomasi corner detector:
$$R = \min(\lambda_1, \lambda_2) \qquad (69)$$
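Both responses can be computed densely from the same structure tensor M(x) used for the flow. A minimal sketch; the Gaussian-weighted sums, the sigma value, and the k default follow the earlier LK sketch, and the eigenvalue formula is the standard one for a symmetric 2x2 matrix.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corner_responses(Ix, Iy, sigma=2.0, k=0.04):
    """Dense Harris (eq. 68) and Shi-Tomasi (eq. 69) responses from M(x)."""
    S = lambda f: gaussian_filter(f, sigma)
    m11, m12, m22 = S(Ix * Ix), S(Ix * Iy), S(Iy * Iy)   # entries of M(x) per pixel
    det = m11 * m22 - m12 * m12
    tr = m11 + m22
    harris = det - k * tr**2
    # smaller eigenvalue of the symmetric 2x2 matrix M(x)
    shi_tomasi = 0.5 * tr - np.sqrt((0.5 * (m11 - m22))**2 + m12**2)
    return harris, shi_tomasi
```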

The Patch-Based Approach: Analysis of M(x)
Even though optical flow involves two images, the analysis above tells us that it is enough to look at a single image in order to measure sensitivity. Putting it differently, it suggests a mechanism for deciding which pixels are easier to track. This is useful, e.g., when tracking a sparse set of features. E.g.: Good Features to Track [Shi-Tomasi, 1994].
Image taken from the OpenCV documentation.

The Patch-Based Approach: Issues with the Lucas-Kanade Approach
Suppose $M(\mathbf{x})$ is easily invertible, and suppose there is very little noise. When would we expect the method to break?
When brightness constancy is violated.
When the motion is not small (recall the Taylor approximation).
When the motion of the pixel is too different from the motion of (many of) its neighbors. This can happen when the neighborhood (or "window size") is too large.

The Patch-Based Approach: The Iterative Lucas-Kanade Method
Estimate the flow by solving the LK equations: $\mathbf{u}^{[0]} = M^{-1}\mathbf{b}$.
Warp the first image using the estimated flow: $I^{[1]}_{\text{warped}} = I_1(\mathbf{x} + \mathbf{u}^{[0]})$.
Recompute $\mathbf{b}$ and $M$, and then the flow, but with a twist: in the computation of the derivatives (needed for $\mathbf{b}$ and $M$), use the warped image instead of the original. Set:
$$\delta\mathbf{u}^{[1]} = M^{-1}\mathbf{b} \qquad (70)$$
$$\mathbf{u}^{[1]} = \mathbf{u}^{[0]} + \delta\mathbf{u}^{[1]} \qquad (71)$$
Repeat till convergence. The procedure generates a sequence of cost functions that converges to the original one.
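A minimal sketch of this refinement loop, reusing warp_image, spatiotemporal_derivatives, and lucas_kanade from the earlier sketches. One detail differs from the slide: instead of warping the first image forward, this sketch warps the second image back toward the first, a common variant; the iteration count is an assumption.

```python
import numpy as np

def iterative_lk(I1, I2, u=None, v=None, n_iters=5, sigma=2.0):
    """Iterative LK: warp, recompute derivatives, solve for an increment, accumulate."""
    if u is None:
        u, v = np.zeros_like(I1), np.zeros_like(I1)
    for _ in range(n_iters):
        I2w = warp_image(I2, u, v)                          # I2(x + u(x), y + v(x))
        Ix, Iy, It = spatiotemporal_derivatives(I1, I2w)    # derivatives after warping
        du, dv = lucas_kanade(Ix, Iy, It, sigma)            # flow increment (eq. 70)
        u = u + np.nan_to_num(du)                           # accumulate (eq. 71)
        v = v + np.nan_to_num(dv)
    return u, v
```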

The Patch-Based Approach: Iterations and Coarse-to-Fine LK
Suppose there are $L$ levels, where 1 is the finest and $L$ is the coarsest. Compute $(u_L, v_L)$, the iterative LK flow at level $L$. Then, iteratively at each level $i$:
Upsample the flow by a factor of 2 in each dimension, and multiply the result by 2.
Warp the image using that flow.
Compute the temporal derivative based on the warped image.
Compute iterative LK using the warped image.
Add that to the previous flow estimate.
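A sketch of that coarse-to-fine scheme, reusing iterative_lk from above (which already performs the warping and accumulation at each level). cv2.pyrDown and cv2.resize are one convenient way to build and resample the pyramid; the level and iteration counts, and the assumption of float32 grayscale inputs, are this sketch's choices.

```python
import numpy as np
import cv2

def coarse_to_fine_lk(I1, I2, n_levels=4, inner_iters=3):
    """Coarse-to-fine iterative LK over an n_levels image pyramid."""
    pyr1, pyr2 = [I1], [I2]
    for _ in range(n_levels - 1):                           # build the pyramids
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))
    u = np.zeros_like(pyr1[-1])
    v = np.zeros_like(pyr1[-1])
    for J1, J2 in zip(reversed(pyr1), reversed(pyr2)):      # coarsest to finest
        if u.shape != J1.shape:
            # upsample by a factor of ~2 per dimension and scale the flow values by 2
            u = 2.0 * cv2.resize(u, (J1.shape[1], J1.shape[0]))
            v = 2.0 * cv2.resize(v, (J1.shape[1], J1.shape[0]))
        u, v = iterative_lk(J1, J2, u, v, n_iters=inner_iters)
    return u, v
```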

The Patch-Based Approach: Remark on Smoothness
If the weighted combinations in the LK equations vary smoothly in space, then applying this procedure at adjacent pixels tends to produce solutions that vary smoothly as well. So, in a sense, this method too may (implicitly) favor smoothness.

The Patch-Based Approach: Parametric Models
More flexible than constant motions. Can also be used for global motions. Either way, it is a regression problem.

The Patch-Based Approach: Reminder: Affine Functions from $\mathbb{R}^2$ to $\mathbb{R}^2$
Affine is linear plus offset:
$$\begin{bmatrix} x \\ y \end{bmatrix} \mapsto \begin{bmatrix} \theta_1 & \theta_2 & \theta_3 \\ \theta_4 & \theta_5 & \theta_6 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \theta_1 & \theta_2 \\ \theta_4 & \theta_5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \theta_3 \\ \theta_6 \end{bmatrix} \qquad (72)$$

The Patch-Based Approach: Extension to Affine Motion
Let $\mathbf{x}_i = (x_i, y_i)$ be in the chosen neighborhood of $\mathbf{x} = (x, y)$. Instead of constant motion, assume
$$\begin{bmatrix} u(\mathbf{x}_i) \\ v(\mathbf{x}_i) \end{bmatrix} = \begin{bmatrix} \theta_1 & \theta_2 & \theta_3 \\ \theta_4 & \theta_5 & \theta_6 \end{bmatrix} \begin{bmatrix} x_i - x \\ y_i - y \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} x_i - x & y_i - y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_i - x & y_i - y & 1 \end{bmatrix}}_{A(\mathbf{x}, \mathbf{x}_i)} \underbrace{\begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \\ \theta_5 \\ \theta_6 \end{bmatrix}}_{\boldsymbol{\theta}} \qquad (73)$$

The Patch-Based Approach: Extension to Affine Motion (Cont.)
Together with the gradient-constraint equation, we get:
$$\nabla_{\mathbf{x}} I(\mathbf{x}_i, t)\, A(\mathbf{x}, \mathbf{x}_i)\, \boldsymbol{\theta} + I_t(\mathbf{x}_i, t) = 0 \qquad (74)$$
Again, using weights as before:
$$\hat{\boldsymbol{\theta}}_{\text{WLS}}(\mathbf{x}) = M^{-1}(\mathbf{x})\, \mathbf{b}(\mathbf{x}) \qquad (75)$$
where
$$M(\mathbf{x}) = \sum_i g(\mathbf{x}, \mathbf{x}_i)\, A(\mathbf{x}, \mathbf{x}_i)^T\, \nabla_{\mathbf{x}} I(\mathbf{x}_i, t)^T\, \nabla_{\mathbf{x}} I(\mathbf{x}_i, t)\, A(\mathbf{x}, \mathbf{x}_i) \qquad (76)$$
$$\mathbf{b}(\mathbf{x}) = -\sum_i g(\mathbf{x}, \mathbf{x}_i)\, A(\mathbf{x}, \mathbf{x}_i)^T\, \nabla_{\mathbf{x}} I(\mathbf{x}_i, t)^T\, I_t(\mathbf{x}_i, t) \qquad (77)$$
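To make eq. (74) concrete for one patch, the stacked linear system H theta = y (see the next slides) can be assembled directly. In the sketch below the inputs are 1D arrays of derivatives and pixel coordinates over the patch, (x0, y0) is the center pixel, and the function name is this sketch's.

```python
import numpy as np

def affine_system(Ix, Iy, It, xs, ys, x0, y0):
    """Row i of H is grad(I)(x_i) A(x, x_i); the target is y_i = -I_t(x_i) (eq. 74)."""
    dx, dy = xs - x0, ys - y0
    H = np.stack([Ix * dx, Ix * dy, Ix, Iy * dx, Iy * dy, Iy], axis=1)   # N x 6
    y = -It                                                              # length N
    return H, y
```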

The Patch-Based Approach: Weighted Least Squares in a Linear Model
The more general case:
$$H\boldsymbol{\theta} = \mathbf{y} \qquad (78)$$
$$\hat{\boldsymbol{\theta}}_{\text{WLS}} = \arg\min_{\boldsymbol{\theta}} \|W^{1/2}(H\boldsymbol{\theta} - \mathbf{y})\|^2 \qquad (79)$$
$$(H^T W H)\, \hat{\boldsymbol{\theta}}_{\text{WLS}} = H^T W \mathbf{y} \qquad (80)$$
$$\hat{\boldsymbol{\theta}}_{\text{WLS}} = (H^T W H)^{-1} H^T W \mathbf{y} \qquad (81)$$
$H$: $N \times K$; $\boldsymbol{\theta}$: $K \times 1$; $\mathbf{y}$: $N \times 1$; $W$: an $N \times N$ diagonal matrix such that $W_{ii} = g(\mathbf{x}, \mathbf{x}_i)$.
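A minimal numpy sketch of eqs. (80)-(81); solving the normal equations rather than forming the inverse is a standard numerical choice. Combined with affine_system and g_weight from the earlier sketches it covers the affine example below; with the N x 2 derivative matrix of eq. (53) it covers the constant-flow example.

```python
import numpy as np

def wls(H, y, w):
    """theta_hat = (H^T W H)^{-1} H^T W y with W = diag(w), via the normal equations (80)."""
    HtW = H.T * w                       # scales column j of H^T by w_j, i.e. H^T W
    return np.linalg.solve(HtW @ H, HtW @ y)
```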

The Patch-Based Approach: We Already Saw Two Examples
Example 1, constant flow:
$$\underbrace{\begin{bmatrix} I_x(\mathbf{x}_1, t) & I_y(\mathbf{x}_1, t) \\ \vdots & \vdots \\ I_x(\mathbf{x}_N, t) & I_y(\mathbf{x}_N, t) \end{bmatrix}}_{H} \underbrace{\begin{bmatrix} u(\mathbf{x}) \\ v(\mathbf{x}) \end{bmatrix}}_{\boldsymbol{\theta}} = \underbrace{-\begin{bmatrix} I_t(\mathbf{x}_1, t) \\ \vdots \\ I_t(\mathbf{x}_N, t) \end{bmatrix}}_{\mathbf{y}} \qquad (82)$$
where $H$ is $N \times 2$, $\boldsymbol{\theta}$ is $2 \times 1$, and $\mathbf{y}$ is $N \times 1$.
Example 2, affine flow:
$$\underbrace{\begin{bmatrix} \nabla_{\mathbf{x}} I(\mathbf{x}_1, t)\, A(\mathbf{x}, \mathbf{x}_1) \\ \vdots \\ \nabla_{\mathbf{x}} I(\mathbf{x}_N, t)\, A(\mathbf{x}, \mathbf{x}_N) \end{bmatrix}}_{H} \boldsymbol{\theta} = \underbrace{-\begin{bmatrix} I_t(\mathbf{x}_1, t) \\ \vdots \\ I_t(\mathbf{x}_N, t) \end{bmatrix}}_{\mathbf{y}} \qquad (83)$$
where $H$ is $N \times 6$, $\boldsymbol{\theta}$ is $6 \times 1$, and $\mathbf{y}$ is $N \times 1$.

The Patch-Based Approach: Other Parametric Flow Models
If the flow can be written as a linear combination of basis functions, we can still use WLS for linear models. Otherwise, if the flow is differentiable w.r.t. its parameters, we can use nonlinear WLS techniques.

The Patch-Based Approach: Robust Formulations
Replace $\sum_{i=1}^N \varepsilon_i^2(\mathbf{x})$ with $\sum_{i=1}^N \rho(\varepsilon_i(\mathbf{x}))$.

Additional Remarks: Learned Basis
Figure from [Black, Yacoob and Jepson, 1997].

Additional Remarks: SIFT Flow
Figure taken from Szeliski's Computer Vision textbook, 2011.

Additional Remarks: Layers
Figure from [Wang and Adelson, 1994].


More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 13: Learning in Gaussian Graphical Models, Non-Gaussian Inference, Monte Carlo Methods Some figures

More information

Numerical Analysis: Interpolation Part 1

Numerical Analysis: Interpolation Part 1 Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,

More information

TRACKING and DETECTION in COMPUTER VISION

TRACKING and DETECTION in COMPUTER VISION Technischen Universität München Winter Semester 2013/2014 TRACKING and DETECTION in COMPUTER VISION Template tracking methods Slobodan Ilić Template based-tracking Energy-based methods The Lucas-Kanade(LK)

More information

CS 231A Section 1: Linear Algebra & Probability Review. Kevin Tang

CS 231A Section 1: Linear Algebra & Probability Review. Kevin Tang CS 231A Section 1: Linear Algebra & Probability Review Kevin Tang Kevin Tang Section 1-1 9/30/2011 Topics Support Vector Machines Boosting Viola Jones face detector Linear Algebra Review Notation Operations

More information

CS 231A Section 1: Linear Algebra & Probability Review

CS 231A Section 1: Linear Algebra & Probability Review CS 231A Section 1: Linear Algebra & Probability Review 1 Topics Support Vector Machines Boosting Viola-Jones face detector Linear Algebra Review Notation Operations & Properties Matrix Calculus Probability

More information

Extreme Values and Positive/ Negative Definite Matrix Conditions

Extreme Values and Positive/ Negative Definite Matrix Conditions Extreme Values and Positive/ Negative Definite Matrix Conditions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 8, 016 Outline 1

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

CS4670: Computer Vision Kavita Bala. Lecture 7: Harris Corner Detec=on

CS4670: Computer Vision Kavita Bala. Lecture 7: Harris Corner Detec=on CS4670: Computer Vision Kavita Bala Lecture 7: Harris Corner Detec=on Announcements HW 1 will be out soon Sign up for demo slots for PA 1 Remember that both partners have to be there We will ask you to

More information

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou 1 / 41 Journal Club Presentation Xiaowei Zhou Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology 2009-12-11 2 / 41 Outline 1 Motivation Diffusion process

More information

Announcements. Tracking. Comptuer Vision I. The Motion Field. = ω. Pure Translation. Motion Field Equation. Rigid Motion: General Case

Announcements. Tracking. Comptuer Vision I. The Motion Field. = ω. Pure Translation. Motion Field Equation. Rigid Motion: General Case Announcements Tracking Computer Vision I CSE5A Lecture 17 HW 3 due toda HW 4 will be on web site tomorrow: Face recognition using 3 techniques Toda: Tracking Reading: Sections 17.1-17.3 The Motion Field

More information

Second Order ODEs. Second Order ODEs. In general second order ODEs contain terms involving y, dy But here only consider equations of the form

Second Order ODEs. Second Order ODEs. In general second order ODEs contain terms involving y, dy But here only consider equations of the form Second Order ODEs Second Order ODEs In general second order ODEs contain terms involving y, dy But here only consider equations of the form A d2 y dx 2 + B dy dx + Cy = 0 dx, d2 y dx 2 and F(x). where

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

The Derivative. Appendix B. B.1 The Derivative of f. Mappings from IR to IR

The Derivative. Appendix B. B.1 The Derivative of f. Mappings from IR to IR Appendix B The Derivative B.1 The Derivative of f In this chapter, we give a short summary of the derivative. Specifically, we want to compare/contrast how the derivative appears for functions whose domain

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

CSE 473/573 Computer Vision and Image Processing (CVIP)

CSE 473/573 Computer Vision and Image Processing (CVIP) CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu inwogu@buffalo.edu Lecture 11 Local Features 1 Schedule Last class We started local features Today More on local features Readings for

More information

Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors

Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Sean Borman and Robert L. Stevenson Department of Electrical Engineering, University of Notre Dame Notre Dame,

More information

Global parametric image alignment via high-order approximation

Global parametric image alignment via high-order approximation Global parametric image alignment via high-order approximation Y. Keller, A. Averbuch 2 Electrical & Computer Engineering Department, Ben-Gurion University of the Negev. 2 School of Computer Science, Tel

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

3.5 Quadratic Approximation and Convexity/Concavity

3.5 Quadratic Approximation and Convexity/Concavity 3.5 Quadratic Approximation and Convexity/Concavity 55 3.5 Quadratic Approximation and Convexity/Concavity Overview: Second derivatives are useful for understanding how the linear approximation varies

More information

Image Alignment and Mosaicing

Image Alignment and Mosaicing Image Alignment and Mosaicing Image Alignment Applications Local alignment: Tracking Stereo Global alignment: Camera jitter elimination Image enhancement Panoramic mosaicing Image Enhancement Original

More information

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours)

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours) SOLUTIONS TO THE 18.02 FINAL EXAM BJORN POONEN December 14, 2010, 9:00am-12:00 (3 hours) 1) For each of (a)-(e) below: If the statement is true, write TRUE. If the statement is false, write FALSE. (Please

More information

Computer Vision Motion

Computer Vision Motion Computer Vision Motion Professor Hager http://www.cs.jhu.edu/~hager 12/1/12 CS 461, Copyright G.D. Hager Outline From Stereo to Motion The motion field and optical flow (2D motion) Factorization methods

More information

FFTs in Graphics and Vision. The Laplace Operator

FFTs in Graphics and Vision. The Laplace Operator FFTs in Graphics and Vision The Laplace Operator 1 Outline Math Stuff Symmetric/Hermitian Matrices Lagrange Multipliers Diagonalizing Symmetric Matrices The Laplacian Operator 2 Linear Operators Definition:

More information

Today. Calculus. Linear Regression. Lagrange Multipliers

Today. Calculus. Linear Regression. Lagrange Multipliers Today Calculus Lagrange Multipliers Linear Regression 1 Optimization with constraints What if I want to constrain the parameters of the model. The mean is less than 10 Find the best likelihood, subject

More information

Reading. 3. Image processing. Pixel movement. Image processing Y R I G Q

Reading. 3. Image processing. Pixel movement. Image processing Y R I G Q Reading Jain, Kasturi, Schunck, Machine Vision. McGraw-Hill, 1995. Sections 4.-4.4, 4.5(intro), 4.5.5, 4.5.6, 5.1-5.4. 3. Image processing 1 Image processing An image processing operation typically defines

More information

Digital Matting. Outline. Introduction to Digital Matting. Introduction to Digital Matting. Compositing equation: C = α * F + (1- α) * B

Digital Matting. Outline. Introduction to Digital Matting. Introduction to Digital Matting. Compositing equation: C = α * F + (1- α) * B Digital Matting Outline. Introduction to Digital Matting. Bayesian Matting 3. Poisson Matting 4. A Closed Form Solution to Matting Presenting: Alon Gamliel,, Tel-Aviv University, May 006 Introduction to

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Gaussian with mean ( µ ) and standard deviation ( σ)

Gaussian with mean ( µ ) and standard deviation ( σ) Slide from Pieter Abbeel Gaussian with mean ( µ ) and standard deviation ( σ) 10/6/16 CSE-571: Robotics X ~ N( µ, σ ) Y ~ N( aµ + b, a σ ) Y = ax + b + + + + 1 1 1 1 1 1 1 1 1 1, ~ ) ( ) ( ), ( ~ ), (

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

Methods in Computer Vision: Introduction to Matrix Lie Groups

Methods in Computer Vision: Introduction to Matrix Lie Groups Methods in Computer Vision: Introduction to Matrix Lie Groups Oren Freifeld Computer Science, Ben-Gurion University June 14, 2017 June 14, 2017 1 / 46 Definition and Basic Properties Definition (Matrix

More information

1. Background: The SVD and the best basis (questions selected from Ch. 6- Can you fill in the exercises?)

1. Background: The SVD and the best basis (questions selected from Ch. 6- Can you fill in the exercises?) Math 35 Exam Review SOLUTIONS Overview In this third of the course we focused on linear learning algorithms to model data. summarize: To. Background: The SVD and the best basis (questions selected from

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

MTH4101 CALCULUS II REVISION NOTES. 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) ax 2 + bx + c = 0. x = b ± b 2 4ac 2a. i = 1.

MTH4101 CALCULUS II REVISION NOTES. 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) ax 2 + bx + c = 0. x = b ± b 2 4ac 2a. i = 1. MTH4101 CALCULUS II REVISION NOTES 1. COMPLEX NUMBERS (Thomas Appendix 7 + lecture notes) 1.1 Introduction Types of numbers (natural, integers, rationals, reals) The need to solve quadratic equations:

More information

Method 1: Geometric Error Optimization

Method 1: Geometric Error Optimization Method 1: Geometric Error Optimization we need to encode the constraints ŷ i F ˆx i = 0, rank F = 2 idea: reconstruct 3D point via equivalent projection matrices and use reprojection error equivalent projection

More information

Laplacian Filters. Sobel Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters

Laplacian Filters. Sobel Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters. Laplacian Filters Sobel Filters Note that smoothing the image before applying a Sobel filter typically gives better results. Even thresholding the Sobel filtered image cannot usually create precise, i.e., -pixel wide, edges.

More information

Camera calibration. Outline. Pinhole camera. Camera projection models. Nonlinear least square methods A camera calibration tool

Camera calibration. Outline. Pinhole camera. Camera projection models. Nonlinear least square methods A camera calibration tool Outline Camera calibration Camera projection models Camera calibration i Nonlinear least square methods A camera calibration tool Applications Digital Visual Effects Yung-Yu Chuang with slides b Richard

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

5 Linear Algebra and Inverse Problem

5 Linear Algebra and Inverse Problem 5 Linear Algebra and Inverse Problem 5.1 Introduction Direct problem ( Forward problem) is to find field quantities satisfying Governing equations, Boundary conditions, Initial conditions. The direct problem

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent

More information

NONLINEAR DIFFUSION PDES

NONLINEAR DIFFUSION PDES NONLINEAR DIFFUSION PDES Erkut Erdem Hacettepe University March 5 th, 0 CONTENTS Perona-Malik Type Nonlinear Diffusion Edge Enhancing Diffusion 5 References 7 PERONA-MALIK TYPE NONLINEAR DIFFUSION The

More information

Erkut Erdem. Hacettepe University February 24 th, Linear Diffusion 1. 2 Appendix - The Calculus of Variations 5.

Erkut Erdem. Hacettepe University February 24 th, Linear Diffusion 1. 2 Appendix - The Calculus of Variations 5. LINEAR DIFFUSION Erkut Erdem Hacettepe University February 24 th, 2012 CONTENTS 1 Linear Diffusion 1 2 Appendix - The Calculus of Variations 5 References 6 1 LINEAR DIFFUSION The linear diffusion (heat)

More information

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction ECE 521 Lecture 11 (not on midterm material) 13 February 2017 K-means clustering, Dimensionality reduction With thanks to Ruslan Salakhutdinov for an earlier version of the slides Overview K-means clustering

More information

An Adaptive Confidence Measure for Optical Flows Based on Linear Subspace Projections

An Adaptive Confidence Measure for Optical Flows Based on Linear Subspace Projections An Adaptive Confidence Measure for Optical Flows Based on Linear Subspace Projections Claudia Kondermann, Daniel Kondermann, Bernd Jähne, Christoph Garbe Interdisciplinary Center for Scientific Computing

More information

Chapter 3 Salient Feature Inference

Chapter 3 Salient Feature Inference Chapter 3 Salient Feature Inference he building block of our computational framework for inferring salient structure is the procedure that simultaneously interpolates smooth curves, or surfaces, or region

More information

Review for Exam 1. Erik G. Learned-Miller Department of Computer Science University of Massachusetts, Amherst Amherst, MA

Review for Exam 1. Erik G. Learned-Miller Department of Computer Science University of Massachusetts, Amherst Amherst, MA Review for Exam Erik G. Learned-Miller Department of Computer Science University of Massachusetts, Amherst Amherst, MA 0003 March 26, 204 Abstract Here are some things you need to know for the in-class

More information

6 EIGENVALUES AND EIGENVECTORS

6 EIGENVALUES AND EIGENVECTORS 6 EIGENVALUES AND EIGENVECTORS INTRODUCTION TO EIGENVALUES 61 Linear equations Ax = b come from steady state problems Eigenvalues have their greatest importance in dynamic problems The solution of du/dt

More information

Math and Numerical Methods Review

Math and Numerical Methods Review Math and Numerical Methods Review Michael Caracotsios, Ph.D. Clinical Associate Professor Chemical Engineering Department University of Illinois at Chicago Introduction In the study of chemical engineering

More information

Harris Corner Detector

Harris Corner Detector Multimedia Computing: Algorithms, Systems, and Applications: Feature Extraction By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

Estimation Theory. as Θ = (Θ 1,Θ 2,...,Θ m ) T. An estimator

Estimation Theory. as Θ = (Θ 1,Θ 2,...,Θ m ) T. An estimator Estimation Theory Estimation theory deals with finding numerical values of interesting parameters from given set of data. We start with formulating a family of models that could describe how the data were

More information

Lecture 7: Edge Detection

Lecture 7: Edge Detection #1 Lecture 7: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Definition of an Edge First Order Derivative Approximation as Edge Detector #2 This Lecture Examples of Edge Detection

More information

Computer Vision Group Prof. Daniel Cremers. 9. Gaussian Processes - Regression

Computer Vision Group Prof. Daniel Cremers. 9. Gaussian Processes - Regression Group Prof. Daniel Cremers 9. Gaussian Processes - Regression Repetition: Regularized Regression Before, we solved for w using the pseudoinverse. But: we can kernelize this problem as well! First step:

More information

DS-GA 1002 Lecture notes 10 November 23, Linear models

DS-GA 1002 Lecture notes 10 November 23, Linear models DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.

More information