Differential Motion Analysis


1 Differential Motion Analysis
Ying Wu
Electrical Engineering and Computer Science
Northwestern University, Evanston, IL
July 19,

2 Outline
- Optical Flow
- Beyond Basic Optical Flow
  - Considering Lighting Variations
  - Considering Appearance Variations
  - Considering Spatial-Appearance Variations
- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow

3 Brightness Constancy and Optical Flow
Optical flow: the apparent motion of the brightness pattern. Optical flow is not the same as the motion field.
Denote an image by $I(x, y, t)$; the velocity of a pixel $m = [x, y]^T$ is
$$v_m = \dot{m} = [v_x, v_y]^T = [dx/dt,\ dy/dt]^T$$
Brightness constancy: the intensity of $m$ stays the same during $dt$, i.e.,
$$I(x + v_x\,dt,\ y + v_y\,dt,\ t + dt) = I(x, y, t)$$
Optical flow constraint:
$$I_x v_x + I_y v_y + I_t = 0, \quad \text{i.e.,} \quad \nabla I^T v_m + I_t = 0$$

4 The Aperture Problem
For each pixel: one constraint equation, but two unknowns.
(Figure: the optical flow constraint line in $(v_x, v_y)$ space; only the normal flow, the component along the image gradient $\nabla I$, is determined.)
Aperture problem: the motion along the direction perpendicular to the image gradient cannot be determined.
Other constraints are needed.

5 Lucas-Kanade's Method
Assume a constant motion for a small image patch $\Omega$. Define a weight function $W(m)$, $m \in \Omega$, for the pixels.
Weighted LS formulation:
$$\min E = \sum_{m \in \Omega} W^2(m)\left(\nabla I^T v + I_t\right)^2$$
With
$$A = \begin{bmatrix} \frac{\partial I_1}{\partial x} & \frac{\partial I_1}{\partial y} \\ \vdots & \vdots \\ \frac{\partial I_N}{\partial x} & \frac{\partial I_N}{\partial y} \end{bmatrix}, \quad v = [v_x, v_y]^T, \quad b = -\left[\frac{\partial I_1}{\partial t}, \ldots, \frac{\partial I_N}{\partial t}\right]^T, \quad W = \mathrm{diag}(W(m_1), \ldots, W(m_N))$$
the WLS solution is
$$v = (A^T W^2 A)^{-1} A^T W^2 b$$
i.e., the intersection of all the flow constraint lines corresponding to the pixels in $\Omega$.
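To make the weighted LS solve concrete, here is a minimal NumPy sketch for a single patch; the function name, array layout, and uniform default weights are choices of this note, not from the slides:

```python
import numpy as np

def lucas_kanade_patch(Ix, Iy, It, weights=None):
    """Weighted least-squares flow for one patch (illustrative sketch).

    Ix, Iy, It : 1-D arrays of spatial/temporal derivatives at the N patch pixels.
    weights    : optional per-pixel weights W(m); uniform if omitted.
    Returns v = [vx, vy] solving (A^T W^2 A) v = A^T W^2 b with b = -It.
    """
    N = Ix.size
    A = np.stack([Ix, Iy], axis=1)           # N x 2 matrix of gradients
    b = -It                                  # right-hand side
    W2 = np.ones(N) if weights is None else weights**2
    AtW2 = A.T * W2                          # scales each column of A^T by W^2
    v = np.linalg.solve(AtW2 @ A, AtW2 @ b)  # 2x2 system; assumes it is non-singular
    return v
```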

6 Horn-Schunck's Method
Assume the flow varies smoothly: a global regularization.
The measure of departure from smoothness:
$$e_s = \iint \left(\|\nabla v_x\|^2 + \|\nabla v_y\|^2\right) dx\,dy = \iint \left(\frac{\partial v_x}{\partial x}\right)^2 + \left(\frac{\partial v_x}{\partial y}\right)^2 + \left(\frac{\partial v_y}{\partial x}\right)^2 + \left(\frac{\partial v_y}{\partial y}\right)^2 dx\,dy$$
The error of the optical flow constraint:
$$e_c = \iint \left(\nabla I^T v_m + I_t\right)^2 dx\,dy$$
Objective function:
$$e = e_c + \lambda e_s = \iint \left(\nabla I^T v_m + I_t\right)^2 + \lambda\left(\|\nabla v_x\|^2 + \|\nabla v_y\|^2\right) dx\,dy$$

7 Horn-Schunck's Method
Fixed-point iteration (with $\bar{v}_x^k$, $\bar{v}_y^k$ the local averages of the current flow field):
$$v_x^{k+1} = \bar{v}_x^k - \frac{I_x \bar{v}_x^k + I_y \bar{v}_y^k + I_t}{\lambda + I_x^2 + I_y^2}\, I_x, \qquad v_y^{k+1} = \bar{v}_y^k - \frac{I_x \bar{v}_x^k + I_y \bar{v}_y^k + I_t}{\lambda + I_x^2 + I_y^2}\, I_y$$
Concisely:
$$v^{k+1} = \bar{v}^k - \alpha\, (\nabla I), \qquad \alpha = \frac{\nabla I^T \bar{v}^k + I_t}{\lambda + \|\nabla I\|^2}$$
In each iteration, the new optical flow field is constrained by its local average and the optical flow constraints.
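A direct transcription of this fixed-point iteration might look as follows; the 3x3 mean filter for the local average and the iteration count are illustrative choices, not prescribed by the slides:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(Ix, Iy, It, lam=100.0, n_iter=100):
    """Fixed-point Horn-Schunck iteration (illustrative sketch).

    Ix, Iy, It : 2-D derivative images; lam is the smoothness weight lambda.
    """
    vx = np.zeros_like(Ix)
    vy = np.zeros_like(Ix)
    for _ in range(n_iter):
        # local averages of the current flow field (3x3 mean as a simple choice)
        vx_bar = uniform_filter(vx, size=3)
        vy_bar = uniform_filter(vy, size=3)
        # shared term alpha = (grad(I) . v_bar + It) / (lambda + Ix^2 + Iy^2)
        alpha = (Ix * vx_bar + Iy * vy_bar + It) / (lam + Ix**2 + Iy**2)
        vx = vx_bar - alpha * Ix
        vy = vy_bar - alpha * Iy
    return vx, vy
```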

8 Parametric Flow: Affine Flow
The affine model rests on two assumptions:
- planar surface
- orthographic projection
Write a 3D plane as $Z = AX + BY + C$. Then we have the 6-parameter affine flow model:
$$\begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} a_5 \\ a_6 \end{bmatrix}$$
In this case, the flow can be determined from at least 3 points.

9 Parametric Flow: Quadratic Flow
The quadratic model rests on two assumptions:
- planar surface
- perspective projection
Under perspective projection, a plane can be written as
$$\frac{1}{Z} = \frac{1}{C} - \frac{A}{C} X - \frac{B}{C} Y$$
So we have
$$v_x = a_1 + a_2 x + a_3 y + a_7 x^2 + a_8 xy, \qquad v_y = a_4 + a_5 x + a_6 y + a_7 xy + a_8 y^2$$
In this case, knowing at least 4 points on a planar object also suffices to recover $\{a_1, \ldots, a_8\}$.

10 Parametric Flow: Parametric Flow Fitting
LS formulation:
$$\min_\Theta \sum_\Omega \left\|I(x + v_x(\Theta)\,dt,\ y + v_y(\Theta)\,dt,\ t + dt) - I(x, y, t)\right\|^2$$
or
$$\min_\Theta \sum_\Omega \left[\nabla I^T v(\Theta) + I_t\right]^2$$
Since $v(\Theta)$ is linear in $\Theta$, denote $\nabla I^T v(\Theta) = I_\Theta^T \Theta$, so that
$$\min_\Theta \sum_\Omega \left[I_\Theta^T \Theta + I_t\right]^2$$
Easy to figure out the LS solution.
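For the affine model of the previous slides, this LS fit reduces to a single lstsq call; the parameter ordering below is an assumption made for illustration:

```python
import numpy as np

def fit_affine_flow(Ix, Iy, It, x, y):
    """LS fit of the 6-parameter affine flow (a sketch of the slide's formulation).

    All inputs are 1-D arrays over the pixels of the region Omega.
    Assumed ordering: Theta = [a1, a2, a5, a3, a4, a6] with
    vx = a1*x + a2*y + a5 and vy = a3*x + a4*y + a6.
    """
    # per-pixel row I_Theta^T such that grad(I)^T v(Theta) = I_Theta^T Theta
    J = np.stack([Ix * x, Ix * y, Ix, Iy * x, Iy * y, Iy], axis=1)  # N x 6
    theta, *_ = np.linalg.lstsq(J, -It, rcond=None)
    return theta
```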

11 Exercises
Exercise 1: Recovering rotation. Assume the motion is a pure rotation, i.e.,
$$\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = R(\theta) \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$$
$$\min_\theta \sum \left\|I\!\left(R(\theta)\begin{bmatrix} x_1 \\ y_1 \end{bmatrix},\ t + dt\right) - I(x, y, t)\right\|^2$$
Exercise 2: Recovering 2D affine motion.
$$\begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} a_5 \\ a_6 \end{bmatrix}, \qquad \min_A \sum \left\|\nabla I^T v(A) + I_t\right\|^2$$

12 Robust Flow Computation [1]
Motivation:
- Brightness constancy is violated by specular reflection.
- Spatial smoothness is violated at motion discontinuities.
- Outliers ruin LS estimation.
One solution: an influence function $\rho(x, \sigma)$.
Applying the influence function to flow estimation:
$$\min_v \sum_\Omega \rho\left(I(x, y, t) - I(x + v_x\,dt,\ y + v_y\,dt,\ t + dt),\ \sigma\right)$$
$$\min_v \sum_\Omega \rho_c\left(\nabla I^T v(\Theta) + I_t,\ \sigma_c\right) + \lambda\left[\rho_s(\nabla v_x, \sigma_s) + \rho_s(\nabla v_y, \sigma_s)\right]$$
[1] Michael Black and P. Anandan, "The Robust Estimation of Multiple Motions: Parametric and Piecewise-Smooth Flow Fields," CVIU, vol. 63, no. 1, pp. 75-104, 1996.
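A common way to minimize such robustified objectives is iteratively reweighted least squares (IRLS). The sketch below applies a Geman-McClure ρ to the data term of a single patch only, as an illustrative stand-in for the full parametric and piecewise-smooth formulation of Black and Anandan:

```python
import numpy as np

def robust_patch_flow(Ix, Iy, It, sigma=1.0, n_iter=10):
    """Robust patch flow via IRLS with a Geman-McClure rho (illustrative sketch)."""
    A = np.stack([Ix, Iy], axis=1)           # N x 2 gradient matrix
    v = np.zeros(2)
    for _ in range(n_iter):
        r = A @ v + It                       # per-pixel flow-constraint residuals
        # IRLS weight w(r) = psi(r)/r for rho(r, sigma) = r^2 / (sigma^2 + r^2)
        w = 2 * sigma**2 / (sigma**2 + r**2) ** 2
        Aw = A.T * w                         # reweighted normal equations
        v = np.linalg.solve(Aw @ A, -Aw @ It)
    return v
```

Large residuals get vanishing weight, so outlying pixels (specularities, occlusion boundaries) stop dominating the estimate.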

13 Multi-frame Optical Flow [2]
For a static scene, the flows induced by camera motion in multiple frames lie in a low-dimensional subspace.
Example: a 3D planar scene under orthographic projection,
$$\begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} a_5 \\ a_6 \end{bmatrix} + \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
We have $F$ frames, all containing the same $N$ points. Denote by $[v_x^{ij}, v_y^{ij}]^T$ the flow of the $i$-th point in the $j$-th frame, w.r.t. a reference frame. Collect all the horizontal flows:
$$U = \begin{bmatrix} v_x^{11} & v_x^{21} & \cdots & v_x^{N1} \\ \vdots & & & \vdots \\ v_x^{1F} & v_x^{2F} & \cdots & v_x^{NF} \end{bmatrix} = \begin{bmatrix} a_5^1 & a_1^1 & a_2^1 \\ \vdots & \vdots & \vdots \\ a_5^F & a_1^F & a_2^F \end{bmatrix} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \\ y_1 & y_2 & \cdots & y_N \end{bmatrix}$$
[2] Michal Irani, "Multi-Frame Optical Flow Estimation Using Subspace Constraints," ICCV '99.

14 Multi-frame Optical Flow
And
$$V = \begin{bmatrix} v_y^{11} & v_y^{21} & \cdots & v_y^{N1} \\ \vdots & & & \vdots \\ v_y^{1F} & v_y^{2F} & \cdots & v_y^{NF} \end{bmatrix} = \begin{bmatrix} a_6^1 & a_3^1 & a_4^1 \\ \vdots & \vdots & \vdots \\ a_6^F & a_3^F & a_4^F \end{bmatrix} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \\ y_1 & y_2 & \cdots & y_N \end{bmatrix}$$
It is clear that
$$\mathrm{rank}\begin{bmatrix} U \\ V \end{bmatrix} \le 3, \qquad \mathrm{rank}\begin{bmatrix} U & V \end{bmatrix} \le 6$$
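These rank bounds can be verified numerically on flow matrices; a small sketch, where the relative singular-value tolerance is an arbitrary choice:

```python
import numpy as np

def flow_subspace_ranks(U, V, tol=1e-6):
    """Numerical check of the subspace constraints (illustrative sketch).

    U, V : F x N matrices of horizontal/vertical flows for N points in F frames.
    For a planar scene under orthography: rank([U; V]) <= 3, rank([U V]) <= 6.
    """
    s_stack = np.linalg.svd(np.vstack([U, V]), compute_uv=False)   # vertical stack
    s_cat = np.linalg.svd(np.hstack([U, V]), compute_uv=False)     # side by side
    rank = lambda s: int(np.sum(s > tol * s[0]))
    return rank(s_stack), rank(s_cat)
```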

15 Outline
- Optical Flow
- Beyond Basic Optical Flow
  - Considering Lighting Variations
  - Considering Appearance Variations
  - Considering Spatial-Appearance Variations
- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow

16 Considering Lighting Models
The brightness constancy assumption is too restrictive. Are there constraints for lighting?
For a pure Lambertian surface:
- if there is no shadowing, all images under varying illumination lie in a 3-D subspace of $R^N$;
- with shadowing, the dimension will be higher, but we may learn it.
The subspace can be learned from a set of training images by PCA, giving the basis $B = [B_1, B_2, \ldots, B_m]$ (note: $B^T B = I$). The appearance of the template at $t$ is then modeled by $I(x, y, t) + B\Lambda$, where $\Lambda = [\lambda_1, \ldots, \lambda_m]^T$.

17 Considering Lighting Models [3]
Therefore, we have
$$E(\Theta, \Lambda) = \sum_\Omega \left\|I(x + v_x(\Theta)\,dt,\ y + v_y(\Theta)\,dt,\ t + dt) - I(x, y, t) - B\Lambda\right\|^2$$
or
$$E(\Theta, \Lambda) = \sum_\Omega \left\|\nabla I^T v(\Theta) + I_t - B\Lambda\right\|^2$$
or
$$E(\Theta, \Lambda) = \sum_\Omega \left\|I_\Theta^T \Theta + I_t - B\Lambda\right\|^2$$
Denoting $I_\Theta^T = M$ (stacked over the pixels) and absorbing the sign of $\Lambda$, the minimizer solves the linear system
$$\begin{bmatrix} M & B \end{bmatrix} \begin{bmatrix} \Theta \\ \Lambda \end{bmatrix} = -I_t$$
[3] Gregory Hager and Peter Belhumeur, "Real-Time Tracking of Image Regions with Changes in Geometry and Illumination," CVPR '96.

18 Considering Lighting Models
So, we have
$$\begin{bmatrix} \Theta \\ \Lambda \end{bmatrix} = \begin{bmatrix} M & B \end{bmatrix}^\dagger (-I_t) = -\begin{bmatrix} M^T M & M^T B \\ B^T M & B^T B \end{bmatrix}^{-1} \begin{bmatrix} M^T \\ B^T \end{bmatrix} I_t$$
Eliminating $\Lambda$ (using $B^T B = I$), it is easy to see
$$\Theta = -\left[M^T (I - BB^T) M\right]^{-1} M^T (I - BB^T)\, I_t$$
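In practice one can solve the joint system directly in the least-squares sense rather than forming the block inverse; a minimal sketch, following the sign convention reconstructed above:

```python
import numpy as np

def motion_and_illumination(M, B, It):
    """Jointly solve [M B][Theta; Lambda] = -It in the LS sense (sketch).

    M  : N x p motion Jacobian (rows I_Theta^T);
    B  : N x m illumination basis;  It : length-N temporal differences.
    """
    MB = np.hstack([M, B])
    sol, *_ = np.linalg.lstsq(MB, -It, rcond=None)
    theta, lam = sol[: M.shape[1]], sol[M.shape[1]:]
    return theta, lam
```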

19 Considering Appearance Variations
In-class appearance variations: from low-level matching to high-level matching. If we know the target, we may learn its appearance variations, and we may build a classifier for matching:
$$\min_v S\left(I(x + v_x\,dt,\ y + v_y\,dt,\ t + dt);\ \Lambda\right)$$
where $\Lambda$ are the parameters of the classifier. E.g., use an SVM classifier:
$$\sum_{j=1}^n y_j \alpha_j\, k(I, x_j) + b$$
Let's maximize the SVM matching score:
$$\max_{u,v} \sum_{j=1}^n y_j \alpha_j\, k(I + u I_x + v I_y,\ x_j)$$

20 Considering Appearance Variations [4]
Let us use a 2nd-order polynomial kernel $k(x, x_j) = (x^T x_j)^2$, so we have
$$E(u, v) = \sum_{j=1}^n y_j \alpha_j \left[(I + u I_x + v I_y)^T x_j\right]^2$$
$$\frac{\partial E}{\partial u} = \sum_j y_j \alpha_j\, (I_x^T x_j)(I + u I_x + v I_y)^T x_j = 0, \qquad \frac{\partial E}{\partial v} = \sum_j y_j \alpha_j\, (I_y^T x_j)(I + u I_x + v I_y)^T x_j = 0$$
The solution is:
$$\begin{bmatrix} \sum \alpha_j y_j (x_j^T I_x)^2 & \sum \alpha_j y_j (x_j^T I_x)(x_j^T I_y) \\ \sum \alpha_j y_j (x_j^T I_x)(x_j^T I_y) & \sum \alpha_j y_j (x_j^T I_y)^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} \sum \alpha_j y_j (x_j^T I_x)(x_j^T I) \\ \sum \alpha_j y_j (x_j^T I_y)(x_j^T I) \end{bmatrix}$$
[4] Shai Avidan, "Subset Selection for Efficient SVM Tracking," CVPR '03.
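The resulting 2x2 solve, in a sketch; the minus sign on the right-hand side follows the reconstruction above, and the function name is illustrative:

```python
import numpy as np

def svm_flow_step(I, Ix, Iy, X, y, alpha):
    """One differential step for the degree-2 polynomial-kernel SVM score (sketch).

    I, Ix, Iy : flattened patch and its derivatives (length d);
    X         : support vectors as a k x d matrix; y, alpha : length-k arrays.
    """
    c = alpha * y
    px, py, p0 = X @ Ix, X @ Iy, X @ I        # x_j^T Ix, x_j^T Iy, x_j^T I
    A = np.array([[np.sum(c * px * px), np.sum(c * px * py)],
                  [np.sum(c * px * py), np.sum(c * py * py)]])
    b = -np.array([np.sum(c * px * p0), np.sum(c * py * p0)])
    return np.linalg.solve(A, b)              # [u, v]
```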

21 Spatial-Appearance Model (SAM) [5]
Denote $y = [x, c(x)]$, where $x$ is the location and $c(x)$ the color. Assume each Gaussian component factorizes:
$$g(y; \mu_k, \Sigma_k) = g(x; \mu_{ks}, \Sigma_{ks})\, g(c(x); \mu_{kc}, \Sigma_{kc})$$
For a pixel, the likelihood is a mixture:
$$p(y \mid \Theta) = \sum_{k=1}^K p_k\, g(y; \mu_k, \Sigma_k)$$
Let's use an affine motion here:
$$T(x; a_t) = \begin{bmatrix} a_{1t} & a_{2t} \\ a_{3t} & a_{4t} \end{bmatrix} x + \begin{bmatrix} a_{5t} \\ a_{6t} \end{bmatrix}$$
Then we have
$$p(T(y; a_t) \mid \Theta) = p\big(T(x; a_t),\ c(T(x; a_t)) \mid \Theta\big) = \sum_{k=1}^K p_k\, g(x; \mu_{ks}, \Sigma_{ks})\, g\big(c(T(x; a_t)); \mu_{kc}, \Sigma_{kc}\big) = \sum_{k=1}^K q(k, y_i; a_t)$$
[5] Ting Yu and Ying Wu, "Differential Tracking based on Spatial-Appearance Model (SAM)," CVPR '06.

22 Spatial-Appearance Model (SAM)
For an image region,
$$E(a_t; \Theta) = \sum_{x_i \in \Omega} \log p(T(y_i; a_t) \mid \Theta) = \sum_{x_i \in \Omega} \log\left(\sum_{k=1}^K q(k, y_i; a_t)\right)$$
Our task is to
$$\max_{a_t} E(a_t; \Theta)$$
Solution: similar to the general EM algorithm.

23 Spatial-Appearance Model (SAM)
(Figure slide.)

24 Outline
- Optical Flow
- Beyond Basic Optical Flow
  - Considering Lighting Variations
  - Considering Appearance Variations
  - Considering Spatial-Appearance Variations
- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow

25 Representation
The target is represented by a feature histogram $q = [q_1, q_2, \ldots, q_m]^T \in R^m$, where
$$q_u = \frac{1}{C} \sum_{i=1}^n K(s_i - x)\, \delta(b(s_i), u)$$
In matrix form, $q(x) = U^T K(x)$, with
$$U = \begin{bmatrix} \delta(b(s_1), u_1) & \cdots & \delta(b(s_1), u_m) \\ \vdots & & \vdots \\ \delta(b(s_n), u_1) & \cdots & \delta(b(s_n), u_m) \end{bmatrix} \in R^{n \times m}, \qquad K = \frac{1}{C} \begin{bmatrix} K(s_1 - x) \\ \vdots \\ K(s_n - x) \end{bmatrix} \in R^n$$
Kernel profile: $K(x) = k(\|x\|^2)$; denote $g(x) = -k'(x)$.
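A small sketch of building such a kernel-weighted histogram; the Epanechnikov profile and the normalization detail are illustrative choices:

```python
import numpy as np

def kernel_histogram(samples, bins, x, h, m):
    """Kernel-weighted feature histogram q = U^T K(x) (illustrative sketch).

    samples : n x 2 pixel locations s_i; bins : length-n indices b(s_i) in [0, m);
    x : kernel center; h : bandwidth; m : number of feature bins.
    """
    d2 = np.sum(((samples - x) / h) ** 2, axis=1)
    K = np.maximum(1.0 - d2, 0.0)             # Epanechnikov profile k(||.||^2)
    q = np.bincount(bins, weights=K, minlength=m)
    return q / max(q.sum(), 1e-12)            # normalization plays the role of 1/C
```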

26 Formulation
The target is initially at $x$; find the optimal motion $\Delta x$ by
$$\min_{\Delta x} O\big(q,\ p(x + \Delta x)\big)$$
where $q$ is the target model and $p$ is the image observation.
Choices of $O(\cdot, \cdot)$:
- Bhattacharyya coefficient: $O_B(\Delta x) = \langle \sqrt{q}, \sqrt{p(x + \Delta x)} \rangle = \sqrt{q}^T \sqrt{p(x + \Delta x)}$ (to be maximized).
- Matusita metric: $O_M(\Delta x) = \|\sqrt{q} - \sqrt{p(x + \Delta x)}\|^2$.

27 Mean-shift Tracking [6]
$$O_B = \sum_{u=1}^m \sqrt{p_u(x + \Delta x)\, q_u}$$
First-order approximation:
$$O_B(\Delta x) \approx \frac{1}{2} \sum_{u=1}^m \sqrt{p_u(x)\, q_u} + \frac{1}{2C} \sum_{i=1}^n w_i\, K\!\left(\left\|\frac{x + \Delta x - s_i}{h}\right\|^2\right)$$
where
$$w_i = \sum_{u=1}^m \sqrt{\frac{q_u}{p_u(x)}}\, \delta(b(s_i), u)$$
is the weight for $s_i$.
[6] D. Comaniciu, V. Ramesh and P. Meer, "Real-Time Tracking of Non-Rigid Objects using Mean Shift," CVPR '00.

28 Mean-shift Tracking
So maximizing $O_B(\Delta x)$ amounts to
$$\max_{\Delta x} \sum_{i=1}^n w_i\, K\!\left(\left\|\frac{x + \Delta x - s_i}{h}\right\|^2\right)$$
The solution is an iterative mean-shift procedure:
$$x \leftarrow \frac{\sum_{i=1}^n s_i\, w_i\, g\!\left(\left\|\frac{x - s_i}{h}\right\|^2\right)}{\sum_{i=1}^n w_i\, g\!\left(\left\|\frac{x - s_i}{h}\right\|^2\right)}$$
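One update of this procedure, reusing the kernel_histogram sketch from slide 25; for the Epanechnikov profile, $g = -k'$ is constant on the kernel support, which the code exploits:

```python
import numpy as np

def mean_shift_step(samples, bins, x, h, q, m):
    """One mean-shift update of the kernel center (illustrative sketch)."""
    p = kernel_histogram(samples, bins, x, h, m)          # candidate histogram p(x)
    w = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12))     # per-pixel weights w_i
    d2 = np.sum(((samples - x) / h) ** 2, axis=1)
    g = (d2 < 1.0).astype(float)   # g(||(x - s_i)/h||^2) for the Epanechnikov profile
    wg = w * g
    return (samples * wg[:, None]).sum(axis=0) / max(wg.sum(), 1e-12)
```

Iterating this update until the center stops moving gives the mean-shift tracker's per-frame localization.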

29 SSD Kernel-based Tracking [7]
Let's use $O_M(\Delta x) = \|\sqrt{q} - \sqrt{p(x + \Delta x)}\|^2$.
Linearization:
$$\sqrt{p(x + \Delta x)} \approx \sqrt{p(x)} + \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)\, \Delta x$$
where
$$d(p(x)) = \mathrm{diag}(p_1(x), \ldots, p_m(x)), \qquad J_K(x) = \frac{\partial K}{\partial x} = \begin{bmatrix} \nabla_x K(s_1 - x)^T \\ \vdots \\ \nabla_x K(s_n - x)^T \end{bmatrix}$$
[7] G. Hager, M. Dewan and C. Stewart, "Multiple Kernel Tracking with SSD," CVPR '04.

30 SSD Kernel-based Tracking
So the objective function is
$$O_M(\Delta x) = \left\|\sqrt{q} - \sqrt{p(x)} - \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)\, \Delta x\right\|^2$$
Denote
$$M(x) = \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)$$
We have a linear system
$$M\, \Delta x = \sqrt{q} - \sqrt{p(x)}$$
and the solution is clear:
$$\Delta x = M^\dagger\left(\sqrt{q} - \sqrt{p(x)}\right)$$
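Once $M$ is assembled, the Newton-style step is a one-liner; a sketch, where the small floor guards against empty histogram bins:

```python
import numpy as np

def ssd_kernel_step(q, p, U, JK):
    """One SSD kernel-tracking step (sketch): build M = 0.5 d(p)^(-1/2) U^T J_K
    and solve M dx = sqrt(q) - sqrt(p) via the pseudo-inverse.

    U : n x m binning matrix; JK : n x 2 kernel Jacobian; q, p : length-m histograms.
    """
    M = 0.5 * (U.T @ JK) / np.sqrt(np.maximum(p, 1e-12))[:, None]
    return np.linalg.pinv(M) @ (np.sqrt(q) - np.sqrt(p))
```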

31 Singularities
It is clear that $M$ has the form
$$M = \begin{bmatrix} d_x^1 & d_y^1 \\ \vdots & \vdots \\ d_x^m & d_y^m \end{bmatrix}$$
where
$$\begin{bmatrix} d_x^j \\ d_y^j \end{bmatrix} = \frac{1}{2\sqrt{p_j}} \sum_i (s_i^j - x)\, g\!\left(\left\|\frac{s_i^j - x}{h}\right\|^2\right)$$
with $s_i^j$ the pixels falling into feature bin $j$; this is (proportional to) the center of mass for feature $j$.
If $\left\{[d_x^j,\ d_y^j],\ j = 1, \ldots, m\right\}$ are linearly dependent, then $\mathrm{rank}(M) = 1$ and the solution is not unique.

32 Optimal Kernel Placement [8]
Different image regions have different properties: some are singular, some are far from singular. How can we find those that are far from singular? Check the property of $M$.
The Schatten 1-norm: $\|A\|_S = \sum \sigma_i$. The S-norm condition number: $\kappa_S(A) = \left(\sum \sigma_i\right)^2 / \prod \sigma_i$.
We can compute it in closed form:
$$\kappa_S(M^T M) = \frac{\left(\sum_j \left[(d_x^j)^2 + (d_y^j)^2\right]\right)^2}{\sum_j (d_x^j)^2\, \sum_j (d_y^j)^2 - \left(\sum_j d_x^j d_y^j\right)^2}$$
Exhaustive search vs. gradient-based search.
[8] Zhimin Fan, Ming Yang, Ying Wu, Gang Hua and Ting Yu, "Efficient Optimal Kernel Placement for Reliable Visual Tracking," CVPR '06.
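Since $M^T M$ is 2x2, this is just trace squared over determinant; a minimal sketch:

```python
import numpy as np

def kernel_condition(dx, dy):
    """Closed-form S-norm condition number of M^T M for a 2-D kernel (sketch).

    dx, dy : length-m arrays of the per-feature centers of mass d_x^j, d_y^j.
    kappa_S = trace(M^T M)^2 / det(M^T M); smaller means farther from singular.
    """
    tr = np.sum(dx**2 + dy**2)
    det = np.sum(dx**2) * np.sum(dy**2) - np.sum(dx * dy) ** 2
    return tr**2 / det
```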

33 Optimal Kernel Placement
(Figure slide.)

34 Kernel Concatenation
Concatenate multiple kernels to increase the dimensionality of the measurement (the same as using more features). With a set of $K$ kernels, $p_i(x) = U^T K_i(x)$; stack the histograms into $p$ and $q$. The objective function is
$$\min_{\Delta x} \sum_{i=1}^K \left\|\sqrt{q_i} - \sqrt{p_i(x + \Delta x)}\right\|^2$$
It is easy to see the solution of
$$M\, \Delta x = \sqrt{q} - \sqrt{p(x)}$$
where
$$M = \frac{1}{2}\, d(p)^{-\frac{1}{2}} \begin{bmatrix} U^T & & \\ & \ddots & \\ & & U^T \end{bmatrix} \begin{bmatrix} J_{K_1} \\ \vdots \\ J_{K_K} \end{bmatrix}$$

35 Kernel Combination
Aggregate histograms to produce new features:
$$q = U^T \sum_{i=1}^K K_i, \qquad p = U^T \sum_{i=1}^K K_i(c)$$
The objective function is
$$\min_{\Delta x} \left\|\sqrt{\sum_{i=1}^K q_i} - \sqrt{\sum_{i=1}^K p_i(x + \Delta x)}\right\|^2$$
The corresponding linear system is
$$\sqrt{\sum_{i=1}^K q_i} - \sqrt{\sum_{i=1}^K p_i(x)} = M\, \Delta x, \qquad M = \frac{1}{2}\, d(p)^{-\frac{1}{2}}\, U^T \sum_{i=1}^K J_{K_i} = \sum_{i=1}^K M_i$$

36 Collaborative Multiple Kernels [9]
Relaxed motion representation: $\Delta x = [\Delta x_1^T, \ldots, \Delta x_k^T]^T$.
Consider a structural constraint $\Omega(x_1, \ldots, x_k) = 0$.
Objective function:
$$O(x_1, \ldots, x_k) = \sum_{i=1}^k \left\|\sqrt{q_i} - \sqrt{p_i(x_i)}\right\|^2 + \gamma\, \|\Omega(x_1, \ldots, x_k)\|^2$$
This is equivalent to a linear system
$$\begin{cases} l = G\, \Delta x + \omega_1 \\ y = M\, \Delta x + \omega_2 \end{cases}$$
[9] Zhimin Fan, Ying Wu and Ming Yang, "Multiple Collaborative Kernel Tracking," CVPR '05.

37 Collaborative Multiple Kernels
where
$$y = \begin{bmatrix} \sqrt{q_1} - \sqrt{p(x_1)} \\ \vdots \\ \sqrt{q_k} - \sqrt{p(x_k)} \end{bmatrix}, \qquad M = \begin{bmatrix} M_1 & & \\ & \ddots & \\ & & M_k \end{bmatrix}$$
$$G = \begin{bmatrix} \frac{\partial \Omega}{\partial x_1} & \frac{\partial \Omega}{\partial x_2} & \cdots & \frac{\partial \Omega}{\partial x_k} \end{bmatrix}, \qquad l = -\Omega(x_1, x_2, \ldots, x_k)$$
We have
$$\mathrm{rank}\left(\begin{bmatrix} M \\ \gamma G \end{bmatrix}\right) \ge \mathrm{rank}(M)$$
so it enhances the motion observability.

38 An Example
Special case: $x_1 = x_2 = \ldots = x_k$, and $\gamma$ is chosen as the optimal Lagrangian multiplier. Then
$$G = \begin{bmatrix} I & -I & & \\ & \ddots & \ddots & \\ & & I & -I \end{bmatrix}, \qquad l = 0$$
and $\mathrm{rank}(G) = (k - 1)\dim(x_1)$. E.g., for $k = 10$ and $\dim(x_1) = 2$, this implies that the motion resides in a 2-D manifold in $R^{20}$.
Thus, as long as $\mathrm{rank}(M) \ge \dim(x_1)$, all the motion parameters are observable, i.e., can be uniquely determined. This is easily satisfied if any of the $x_i$ is observable through its own kernel, or if a number of $\dim(x_1)$ motion parameters are observable through multiple kernels.

39 Solution and Collaboration
The solution:
$$\Delta x = (M^T M + \gamma G^T G)^{-1} (M^T y + \gamma G^T l)$$
A more efficient solution:
$$\Delta x = (I - D)(M^T M)^{-1}(M^T y + \gamma G^T l)$$
where
$$D = \gamma\, (M^T M)^{-1} G^T \left(\gamma\, G (M^T M)^{-1} G^T + I\right)^{-1} G$$
Notice that
$$\Delta x_u = (M^T M)^{-1} M^T y = M^\dagger y$$
is the solution for independent kernels, and
$$\Delta x = (I - D)\, \Delta x_u + z(x)$$
The collaboration works through a fixed-point recursion:
$$\Delta x^{k+1} = (I - D^k)\left[M(\hat{x}^k)\right]^\dagger y^k + z^k$$
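The regularized solve itself is a few lines; a sketch of the first (direct) form, with illustrative names:

```python
import numpy as np

def collaborative_step(M, G, y, l, gamma):
    """Regularized collaborative solve dx = (M^T M + g G^T G)^-1 (M^T y + g G^T l)."""
    A = M.T @ M + gamma * (G.T @ G)
    return np.linalg.solve(A, M.T @ y + gamma * (G.T @ l))
```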

40 MKL for Scale [10]
Determining the scale of the target is an important issue; it is related to the scale of the kernel.
Basic idea: run mean-shift in the spatial-scale space $(x, \sigma)$.
Algorithm: alternate a spatial mean-shift and a scale one.
1. Initialize the state $(x_0, \sigma_0)$.
2. Fix $\sigma_0$, perform a 2-D spatial mean-shift to obtain $x^*$.
3. Fix $x^*$, perform a 1-D scale mean-shift to obtain $\sigma^*$.
4. Repeat 2 and 3 until convergence.
[10] Robert Collins, "Mean-shift Blob Tracking through Scale Space," CVPR '03.

41 Outline
- Optical Flow
- Beyond Basic Optical Flow
  - Considering Lighting Variations
  - Considering Appearance Variations
  - Considering Spatial-Appearance Variations
- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow

42 Distraction and Matching Ambiguity
Spatial context can reduce matching ambiguity.
Questions:
- How to model context for motion analysis?
- How to make such methods resilient to local variations?

43 Spatial Context (for Object Recognition)
- Structure-stiff: e.g., templates and filters.
- Structure-flexible: random fields, deformable templates, shape context, AutoContext.
- Structure-free: bag-of-words or bag-of-features.

44 Modeling Spatial Context
A location $x$ is associated with features $f(x)$ and feature classes $\{\omega_1, \ldots, \omega_N\}$.
Individual context: $C_i = \{y \mid f(y) \in \omega_i,\ y \in \Omega(x)\}$.
Total context: $C = \bigcup_{i=1}^N C_i$.
Context representation: $p(\omega_i \mid x) \propto p(x \mid \omega_i)\, p(\omega_i)$.

45 Contextual Maps
(Figure slide.)

46 Brightness Constancy → Context Constancy [11]
Context constancy:
$$p(\omega_i \mid x + \Delta x,\ t + \Delta t,\ C) = p(\omega_i \mid x,\ t,\ C)$$
The motion $\Delta x$ shall not change the context. This is more flexible than brightness constancy:
- insensitive to lighting;
- insensitive to local deformation.
Let's impose a small motion assumption.
[11] Ying Wu and Jialue Fan, "Contextual Flow," CVPR '09.

47 A Differential Form
$$\underbrace{\nabla_x^T\, p(\omega_i \mid x, t)}_{\text{contextual gradient}} \Delta x + \underbrace{\nabla_t\, p(\omega_i \mid x, t)}_{\text{contextual frame difference}} \Delta t = 0$$
The contextual frame difference is approximated by $p(\omega_i \mid x, t + \Delta t) - p(\omega_i \mid x, t)$.
The contextual gradient (details follow):
$$\nabla_x\, p(\omega_i \mid x) = \nabla_x \left\{ p(\omega_i)\, \frac{p(x \mid \omega_i)}{p(x)} \right\} = \frac{1}{c}\, p(\omega_i \mid x)\left[\mu_i(x) - \mu_0(x)\right]$$

48 Context Gradient
Conditional shift:
$$\mu_i(x) = E\{(y - x) \mid y \in \omega_i\} = \frac{1}{Z_i(x)} \int_\Omega (y - x)\, p(y \mid \omega_i)\, dy$$
After simple manipulation:
$$\mu_i(x) = c\, \frac{\nabla_x\, p(x \mid \omega_i)}{p(x \mid \omega_i)}$$
Total shift:
$$\mu_0(x) = E\{(y - x) \mid y \in \Omega\} = c\, \frac{\nabla_x\, p(x)}{p(x)}$$
So we have
$$\nabla_x\, p(\omega_i \mid x) = \frac{1}{c}\, p(\omega_i \mid x)\left[\mu_i(x) - \mu_0(x)\right]$$
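Both shifts have simple empirical estimates over the neighborhood $\Omega(x)$; a sketch with illustrative names:

```python
import numpy as np

def contextual_shifts(Y, labels, x, i):
    """Empirical conditional and total shifts mu_i(x), mu_0(x) (sketch).

    Y : n x 2 feature locations in the neighborhood Omega(x);
    labels : length-n feature-class indices; i : the class of interest.
    """
    mu0 = (Y - x).mean(axis=0)                # total shift over Omega(x)
    mui = (Y[labels == i] - x).mean(axis=0)   # shift conditioned on class omega_i
    return mui, mu0
```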

49 Illustration: Contextual Gradient
(Figure slide.)

50 Context Flow Constraint
It is easy to see:
$$\underbrace{\left[\mu_i(x) - \mu_0(x)\right]}_{\tilde{\mu}_i(x)}{}^T \Delta x + c\left[\frac{p(\omega_i \mid x, t+1)}{p(\omega_i \mid x, t)} - 1\right] = 0$$
where $\tilde{\mu}_i(x)$ is the centered shift and $b_i = c\left[1 - \frac{p(\omega_i \mid x, t+1)}{p(\omega_i \mid x, t)}\right]$ is the change of the context ratio.
Contextual flow constraint:
$$\tilde{\mu}_i(x)^T \Delta x = b_i$$

51 Local Contextual System
Each context gives one constraint, weighted by $W_i(x) = p(\omega_i \mid x, t)$, with $W(x) = \mathrm{diag}[W_1(x), \ldots, W_N(x)]$.
Denote
$$U_r(x) = [\tilde{\mu}_1(x), \ldots, \tilde{\mu}_N(x)]^T, \quad b_r(x) = [b_1, b_2, \ldots, b_N]^T, \quad U(x) = W(x)\, U_r(x), \quad b(x) = W(x)\, b_r(x)$$
We then have a linear contextual system
$$U(x)\, \Delta x = b(x), \quad \text{or simply} \quad U\, \Delta x = b$$

52 Extended Lucas-Kanade Method
If $U$ is rank deficient, we have an aperture problem as well. Consider the nearby locations $X = \{x_1, \ldots, x_m\}$, each associated with a contextual system
$$U_i(x_i)\, \Delta x_i = b(x_i), \quad \text{or simply} \quad U_i\, \Delta x_i = b_i$$
where $\Delta x_i$ is the motion at location $x_i$. If they share the same motion, i.e., $\Delta x_i = \Delta x$, we obtain the extended Lucas-Kanade method:
$$U_c\, \Delta x = \begin{bmatrix} U_1 \\ \vdots \\ U_m \end{bmatrix} \Delta x = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$
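Stacking and solving the combined system, in a minimal sketch:

```python
import numpy as np

def extended_lucas_kanade(U_list, b_list):
    """Stack per-location contextual systems and solve for a shared motion (sketch).

    U_list : list of N_i x 2 weighted shift matrices U_i;
    b_list : list of length-N_i right-hand sides b_i.
    """
    U_c = np.vstack(U_list)
    b_c = np.concatenate(b_list)
    dx, *_ = np.linalg.lstsq(U_c, b_c, rcond=None)
    return dx
```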
