A New Shape Adaptation Scheme to Affine Invariant Detector


KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, VOL. 4, NO. 6, December 2010. Copyright (c) 2010 KSII

A New Shape Adaptation Scheme to Affine Invariant Detector

Congxin Liu, Jie Yang, Yue Zhou and Deying Feng
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
[e-mail: phillx@sjtu.edu.cn]
*Corresponding author: Congxin Liu

Received July 26, 2010; revised August 2010; revised October 8, 2010; accepted October 28, 2010; published December 23, 2010

Abstract

In this paper, we propose a new affine shape adaptation scheme for affine invariant feature detectors, whose convergence stability is still an open problem. This paper examines the relation between the integration scale matrix of the next iteration and the current second moment matrix, and finds that the convergence stability of the method can be improved by adjusting the relation between the two matrices instead of keeping them always proportional, as proposed by previous methods. By estimating and updating the shapes of the integration kernel and differentiation kernel in each iteration based on the anisotropy of the current second moment matrix, we propose a coarse-to-fine affine shape adaptation scheme which is able to adjust the pace of convergence and enable the process to converge smoothly. The feature matching experiments demonstrate that the proposed approach obtains an improvement in convergence ratio and repeatability compared with the current schemes that use a relatively fixed integration kernel.

Keywords: Local image feature, affine invariant feature, scale invariant feature, adaptive kernel shape

This research is supported by a project of international cooperation between Ministries of Science and Technology, No. 2009DFA12870, and the Aviation Science Fund, China, No. 2008CZ57. We would like to thank the anonymous reviewers for their valuable comments. DOI: /tiis

1. Introduction

Local image features have shown excellent performance in many applications such as image matching and registration, image retrieval and classification, object and texture recognition, 3D reconstruction and stereo matching. The fundamental difficulty of using local image features resides in extracting features that are invariant to differing views. There are many impressive works in the literature addressing this problem; they are reviewed in Section 2. The Harris interest point [1], which is a very popular local feature, is robust to geometric and photometric deformations, but is not invariant to scaling [2]. Mikolajczyk and Schmid [3] extended the Harris detector to be invariant to scale and affine transformations based on previous works [4][5]. This detector is called the Harris-Affine interest point in this paper. However, the convergence stability of this algorithm needs to be further investigated [3].

Lindeberg and Garding [4] introduced the second moment matrix defined in affine Gaussian scale space as the metric matrix of a local image pattern to find blob-like affine features (the matrix is determined by an integration Gaussian kernel and a differentiation Gaussian kernel, which are in turn determined by an integration scale matrix and a differentiation scale matrix, respectively; see equation (3) in Section 3 for details). During the iterative process of finding affine invariant regions, to reduce the search space, Lindeberg et al. assumed that the integration scale matrix and differentiation scale matrix are coupled and derived the fixed-point property based on this assumption, which constitutes the theoretical basis of later iteration algorithms [3][4][5]. According to the fixed-point property, these iteration algorithms all require that the expected integration and differentiation scale matrices (next iteration) are proportional to the current second moment matrix (current iteration) at the given point. However, such a linear relation between them is likely to cause an overshoot of the anisotropy measured by the second moment matrix at the next iteration, and may hence reduce the stability of convergence.

This paper regards the expected integration and differentiation scale matrices as functions of the current second moment matrix and its anisotropy. Based on the anisotropy measured in each iteration, we dynamically adjust the relation between the two scale matrices and the second moment matrix. To sum up, our approach has the following properties: (1) The relation between the expected integration and differentiation scale matrices and the current second moment matrix is always non-linear. (2) The relation between the expected integration and differentiation scale matrices and the current second moment matrix is dynamically updated. (3) As the number of iterations increases, the shapes of the expected integration and differentiation scale matrices gradually approach the shape determined by the current second moment matrix.

The remainder of this paper is organized as follows. In Section 2, we review previous work on scale- and affine-invariant feature detection. In Section 3, the details of the proposed method are presented. In Section 4, we provide detailed experimental results on feature matching. Section 5 concludes the paper.

2. Related Work

Many scale- and affine-invariant feature detection algorithms have been explored in the literature [6].

2.1 Scale Invariant Features

The automatic scale selection for various keypoint detectors has been thoroughly explored by Lindeberg [7]. For the detection of blob-like feature points, Lindeberg proposed to search for local absolute extrema of $\sigma_D^2\,\mathrm{Trace}(H(x;\sigma_D))$ (also called LoG, where $\sigma_D$ is the differentiation scale) and of $\sigma_D^4\,\det(H(x;\sigma_D))$ in the 3D scale space, where

$$H(x;\sigma_D)=\begin{pmatrix} L_{xx}(x;\sigma_D) & L_{xy}(x;\sigma_D)\\ L_{xy}(x;\sigma_D) & L_{yy}(x;\sigma_D)\end{pmatrix} \qquad (1)$$

LoG can be well approximated by DoG (difference of Gaussians) [8]. By convolving the image with difference-of-Gaussian kernels at a variety of scales and selecting local maxima in the 3D scale space, DoG achieves better computational efficiency than LoG. To further accelerate the detection, Bay et al. [9] provided an efficient implementation of $\sigma_D^4\,\det(H(x;\sigma_D))$ by applying integral images to the computation of image derivatives. Mikolajczyk and Schmid [3] proposed to use the multi-scale Harris detector (or structure tensor) to determine spatial interest points; the characteristic scales at these points were determined by searching for local peaks of $\sigma_D^2\,\mathrm{Trace}(H(x;\sigma_D))$ over scales, leading to the Harris-Laplace detector. This detector provides feature points complementary to those of blob-like detectors. To optimize keypoint detection toward stable local descriptors, Dorkó and Schmid [10] constructed a scale space based on the stability of the region descriptor; the characteristic scale was selected at the stability extremum of the local descriptor. Lee and Chen [11] followed this idea and extracted feature points directly on histogram-based representations. Förstner et al. [12][13] proposed a new scale space framework based on the structure tensor and a general spiral feature model to detect scale-invariant features (SFOP), which show good repeatability under scale and rotation transformations.

2.2 Affine Invariant Features

Many methods such as MSER [14], EBR [15], IBR [15], affine salient regions [16], Harris-Affine [3], and Hessian-Affine [3][17] are motivated by the fact that scale invariance alone cannot provide reliable image matching under significant view changes. MSER (Maximally Stable Extremal Regions), EBR and IBR are region and boundary based detectors. MSER extracts a nest of regions called extremal regions, where each pixel intensity in a region is less (or greater) than a certain threshold, and all intensities along the boundary are greater (or less) than the same threshold. An extremal region is a Maximally Stable Extremal Region when it remains stable over a range of thresholds. IBR [15] first detects local maxima of the image intensity over multiple scales, and then a region boundary is determined by seeking intensity extrema along rays radially emanating from these points. The regions extracted by MSER and IBR are very similar. EBR starts from a corner, then finds two anchor points along the two edges using a relatively invariant geometric constraint [15], and finally obtains parallelogram regions. Kadir et al. [16] extract circular or elliptic regions in the image as maxima of the entropy scale space of the region. On one hand this detector can extract many features, but on the other hand it is known to have higher computational complexity and lower repeatability than most other detectors [17].
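To make the scale-selection step concrete, here is a minimal sketch (not the authors' code; the toy image, the scale range and the use of SciPy are our own assumptions) that evaluates the scale-normalized LoG response over a set of scales and picks the scale at which its magnitude peaks:

```python
# Illustrative sketch only (not the authors' implementation). Assumes SciPy; the toy
# image and the scale range are arbitrary choices made for this example.
import numpy as np
from scipy import ndimage

def normalized_log_stack(image, sigmas):
    """sigma_D^2 * Trace(H(x; sigma_D)) for every scale, i.e. the normalized LoG."""
    return np.stack([s ** 2 * ndimage.gaussian_laplace(image, sigma=s) for s in sigmas])

def characteristic_scale(image, x, y, sigmas):
    """Return the scale at which the normalized LoG magnitude peaks at pixel (y, x)."""
    responses = np.abs(normalized_log_stack(image, sigmas)[:, y, x])
    return sigmas[int(np.argmax(responses))]

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # a toy blob
    scales = np.linspace(1.0, 10.0, 19)
    print(characteristic_scale(img, 32, 32, scales))     # peaks near the blob size
```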
Harris-Affine and Hessian-Affine first detect feature points with the multi-scale second moment matrix and the multi-scale Hessian matrix, respectively. Subsequently, affine covariant regions around these points are obtained by affine shape adaptation with the second moment matrix.

A detailed performance comparison between them is presented in [17], where it is shown that Hessian-Affine and Harris-Affine not only provide more features than other methods, but also give good repeatability and matching scores. Therefore, we focus on them and try to enhance their performance.

Scale space based affine invariant features. Harris-Affine and Hessian-Affine went through the following development. Lindeberg and Garding [4] developed a method for finding blob-like affine features using an affine adaptation scheme based on the second moment matrix, deriving the affine invariance property of the second moment matrix and the shape-adapted fixed-point property. Later, Baumberg [5] found that the scheme can be used to determine the affine deformation of an isotropic structure. He first extracted multi-scale Harris points and then adapted the shapes of their neighborhoods to the local image structures using the iteration scheme proposed by Lindeberg. During the adaptation, the location and scale of the feature point are kept fixed, thereby introducing some localization and shape bias. Schaffalitzky and Zisserman [18] employed the Harris-Laplace detector [3] to extract feature points and computed an affine invariant neighborhood using the method proposed by Baumberg. Likewise, this approach is also likely to result in localization and shape bias when matching image pairs under significant viewpoint change. Mikolajczyk and Schmid [3] improved the previous methods and implemented the full scale adaptation approach of Lindeberg, in which the scale, location and affine shape are all updated in each iterative step, thereby gaining better localization. Our method tries to further enhance the performance of the detector by dynamically changing the shapes of the integration and differentiation kernels during the iteration process.

3. The Proposed Affine Invariant Detector

3.1 Background

3.1.1 Scale-Normalized Second Moment Matrix

Lindeberg and Garding [4] proposed to use the second moment matrix to estimate the anisotropic shape of a local image structure. The scale-normalized second moment matrix, also known as the local shape matrix, is defined as follows:

$$\mu(x;\sigma_D,\sigma_I)=\sigma_D^2\, G(x;\sigma_I) * \big(\nabla L(x;\sigma_D)\,\nabla L(x;\sigma_D)^T\big) \qquad (2)$$

where $\sigma_D$ is the differentiation scale, $\sigma_I$ is the integration scale, and $\nabla L(x;\sigma_D)$ is the gradient vector computed at x with Gaussian derivatives at scale $\sigma_D$. This matrix can also be used to extract multi-scale Harris points [3][5].

3.1.2 Affine Second Moment Matrix

Many textured patches show distinct anisotropy, with varying scales along different directions. To obtain a more accurate description of textured patches, Lindeberg et al. [4] used second moment matrices defined in affine Gaussian scale space to describe the anisotropic shape of the local image structure. The normalized second moment matrices are defined as follows:

$$\mu(x;\Sigma_I,\Sigma_D)=\det(\Sigma_D)\, G(x;\Sigma_I) * \big(\nabla L(x;\Sigma_D)\,\nabla L(x;\Sigma_D)^T\big) \qquad (3)$$

where:

$$G(x;\Sigma_I)=\frac{1}{2\pi\sqrt{\det\Sigma_I}}\exp\Big(-\frac{x^T\Sigma_I^{-1}x}{2}\Big) \qquad (4)$$
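As a concrete reference for equation (2), the following sketch (an assumed implementation with isotropic SciPy kernels, not the authors' code) computes the entries of the scale-normalized second moment matrix at every pixel; multi-scale Harris points can then be taken, for example, as local maxima of det(mu) - 0.04*trace(mu)^2, a common choice of the Harris measure:

```python
# Illustrative sketch of equation (2) with isotropic Gaussian kernels (SciPy assumed).
import numpy as np
from scipy import ndimage

def second_moment_matrix(image, sigma_d, sigma_i):
    """Per-pixel entries of mu(x; sigma_D, sigma_I) = sigma_D^2 G(sigma_I) * (grad L grad L^T)."""
    Lx = ndimage.gaussian_filter(image, sigma_d, order=(0, 1))   # derivative along x
    Ly = ndimage.gaussian_filter(image, sigma_d, order=(1, 0))   # derivative along y
    smooth = lambda a: ndimage.gaussian_filter(a, sigma_i)       # integration kernel G(sigma_I)
    mxx = sigma_d ** 2 * smooth(Lx * Lx)
    mxy = sigma_d ** 2 * smooth(Lx * Ly)
    myy = sigma_d ** 2 * smooth(Ly * Ly)
    return mxx, mxy, myy

def harris_response(mxx, mxy, myy, k=0.04):
    """Multi-scale Harris measure det(mu) - k * trace(mu)^2 (k = 0.04 is a common default)."""
    det = mxx * myy - mxy ** 2
    trace = mxx + myy
    return det - k * trace ** 2
```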

$$\nabla L(x;\Sigma_D)=\nabla G(x;\Sigma_D) * I(x) \qquad (5)$$

In (4) and (5), $\Sigma_I$ and $\Sigma_D$, which are 2x2 covariance matrices, refer to the integration scale matrix and the differentiation scale matrix. These two matrices determine the shapes of the integration and differentiation Gaussian kernels. Suppose two image patches are related by an affine transformation $x_R = A x_L$, and let $\mu_L=\mu(x_L;\Sigma_{I,L},\Sigma_{D,L})$ and $\mu_R=\mu(x_R;\Sigma_{I,R},\Sigma_{D,R})$ be the affine second moment matrices computed at $x_L$ and $x_R$. Lindeberg et al. [4] verified the following properties:

$$\mu_L = A^T\mu_R A,\qquad \Sigma_{I,R}=A\,\Sigma_{I,L}A^T,\qquad \Sigma_{D,R}=A\,\Sigma_{D,L}A^T \qquad (6)$$

Based on equation (6), it is easy to verify the affine invariance property of the matrix μ.

3.1.3 Invariance Property of the Fixed Point

In a realistic situation, the affine transformation A is always unknown. Therefore, to find corresponding $\mu_L$ and $\mu_R$, we would have to compute all possible combinations of kernel parameters by brute force, which is impractical. But if the second moment matrix $\mu_L$ is calculated in such a way that

$$\Sigma_{I,L}=t\,\mu_L^{-1},\qquad \Sigma_{D,L}=s\,\mu_L^{-1},\qquad t,s\in\mathbb{R}^+ \qquad (7)$$

then Lindeberg et al. [4] showed that this fixed-point property is preserved under affine transformations. In other words, the following equations also hold if $x_R = A x_L$:

$$\Sigma_{I,R}=t\,\mu_R^{-1},\qquad \Sigma_{D,R}=s\,\mu_R^{-1} \qquad (8)$$

During the detection process, if the estimated $\Sigma_I$ and $\Sigma_D$ verify (7) or (8), we assume that there is an unknown affine transformation A between the two patches. Based on the fixed-point property, the previous iterative schemes in [3][4] and [5] all let $\Sigma_I$ and $\Sigma_D$ be determined by the current second moment matrix in each iteration according to (7).

In addition, all operations in affine shape adaptation are performed in the transformed image domain [3][5]. The points in the neighborhood of a feature point are normalized by the transformation $x' = \mu^{1/2}x$, and then uniform Gaussian kernels can be applied in the computation. If the computed $\mu(x;\Sigma_I,\Sigma_D)$ verifies condition (7) or (8), μ will be isotropic in the transformed image domain.
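The effect of the normalization x' = mu^{1/2}x can be checked with a few lines of linear algebra (a sketch; the example matrix is hypothetical, and matrix powers are computed via the eigendecomposition of the symmetric positive definite mu):

```python
# Sketch: whitening with mu^{1/2} maps the structure ellipse to a circle, so the
# second moment matrix becomes the identity in the transformed image domain.
import numpy as np

def matrix_power_sym(m, p):
    """m^p for a symmetric positive-definite matrix, via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(w ** p) @ v.T

mu = np.array([[4.0, 1.0],
               [1.0, 2.0]])                      # hypothetical second moment matrix
inv_sqrt = matrix_power_sym(mu, -0.5)
mu_normalized = inv_sqrt.T @ mu @ inv_sqrt       # property (6) with A = mu^{1/2}
print(np.allclose(mu_normalized, np.eye(2)))     # True
```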

3.2 Adaptive Kernel-Shape Based Affine Invariant Detector

In the following, we discuss the adaptive kernel-shape based affine invariant detector. Our approach does not restrict the type of interest point; without loss of generality, we choose multi-scale Harris points as the detector. For each feature point, the second moment matrix is calculated and the corresponding integration and differentiation scales are selected automatically.

3.2.1 Motivation

First, in order to make the following description clearer, we define the shape of a matrix A as the elliptical region given by $x^TAx=1$, and its aspect as the direction of the eigenvector corresponding to the minimum eigenvalue of A. Fig. 1 shows the shape relationships of the kernels between the two image domains in the previous methods, where the relatively fixed integration kernel is adopted. From this figure, we can observe that if the expected integration scale matrix $\Sigma_{I,new}$ for the next iteration is determined by the current second moment matrix $\mu_{cur}$ as in (7), then there is always an apparent shape difference between the expected integration kernel (blue) and the current integration kernel (green) in the normalized image domain. Such a shape difference makes the second moment matrix $\mu_{new}$ obtained in the next iteration anisotropic with a great probability.

Fig. 1. The shape relations between the kernels when the relatively fixed integration kernel is adopted. In (a), the current image domain, the blue filled elliptical region is determined by the current second moment matrix; the green circle represents the shape of the current integration kernel; the blue ellipse refers to the expected integration kernel. (b) illustrates the shapes of the matrices in the normalized image domain; the color labels correspond to (a). Under the smoothing of the Gaussian kernel labeled in green, the local image structure is transformed into an isotropic region, shown as the blue filled circle.

The above conclusion can be proved as follows.

Proof: Suppose $x' = \mu_{cur}^{1/2}x$. Based on (6), we can get:

$$\mu_{cur}' = (\mu_{cur}^{-1/2})^T\,\mu_{cur}\,\mu_{cur}^{-1/2} = I \qquad (9)$$

$$\Sigma_{I,cur}' = \mu_{cur}^{1/2}\,\Sigma_{I,cur}\,\mu_{cur}^{1/2} = s\,\mu_{cur}^{1/2}\,\mu_{cur}^{1/2} = s\,\mu_{cur},\qquad \Sigma_{I,cur}=s\,I,\ s\in\mathbb{R}^+ \qquad (10)$$

$$\Sigma_{I,new}' = \mu_{cur}^{1/2}\,\Sigma_{I,new}\,\mu_{cur}^{1/2} = t\,\mu_{cur}^{1/2}\,\mu_{cur}^{-1}\,\mu_{cur}^{1/2} = t\,I,\qquad \Sigma_{I,new}=t\,\mu_{cur}^{-1},\ t\in\mathbb{R}^+ \qquad (11)$$

The subscript cur denotes the current matrices, a prime denotes the form of a matrix in the normalized image domain, and the subscript new denotes the expected matrices for the next iteration. As can be observed from equation (9), μ is always isotropic when measured with the Gaussian kernel $\Sigma_{I,cur}'$. Based on this conclusion, it is reasonable to predict that the aspect of $\mu_{new}'$, which is calculated using the Gaussian kernel $\Sigma_{I,new}'$, will be in accordance with the aspect of $\mu_{cur}^{-1}$. This is confirmed by Fig. 2-(a), where the affine iteration procedure for an ideal blob structure is illustrated with the Hessian matrix as the measurement matrix (the affine Hessian matrix is also affine covariant, which is proved in Appendix A). In Fig. 2-(a), the aspects of the ellipses in consecutive iterations are always perpendicular. We call this phenomenon overshoot.

Fig. 2. The iteration process of an ideal blob structure with two different methods. (a) The affine iteration process of an ideal blob structure with the relatively fixed integration kernel (1st to 4th iteration). (b) The affine iteration process of an ideal blob structure with the adaptive integration kernel (1st to 4th iteration). The red ellipses represent the anisotropy of the local image structure measured by the Hessian matrix.

Intuitively, the greater the value of $\lambda_{max}(\mu_{cur})/\lambda_{min}(\mu_{cur})$, the larger the anisotropy of $\mu_{new}$ may be. This phenomenon can also be observed in Fig. 2-(a). Therefore, when the anisotropy of $\mu_{cur}$ is large, the iteration process may fall into a state of oscillation, thereby increasing the possibility of divergence.

Fig. 3. The shape relations between the kernels when the adaptive integration kernel is used. In (a), the current image domain, the blue filled elliptical region is determined by the current second moment matrix; the green circle represents the shape of the current integration kernel; the blue dotted ellipse refers to the expected integration kernel of the previous methods; the red ellipse denotes the candidate integration kernel of the proposed method. (b) illustrates the shapes of the matrices in the normalized image domain (γ < 0.5); the color labels correspond to (a). The local structure is still anisotropic under the smoothing of the green Gaussian kernel, as shown by the blue filled elliptical region. However, if the local structure is smoothed by the red integration kernel, it is likely to be close to an isotropic structure, shown as the dotted red circle.

Based on the above analysis and comparison, we propose a method which is able to estimate the shape of the expected integration kernel and adjust the pace of convergence.

As shown in Fig. 3-(a), if $\Sigma_{I,new}$ shares a shape similar to the red ellipse, then in the normalized image domain the probability $p\big(\lambda_{max}(\mu_{new})/\lambda_{min}(\mu_{new})\approx 1\big)$ is greater than in the previous methods. In this case, μ is no longer isotropic in the normalized domain and its aspect becomes consistent with that of $\mu_{cur}$; this is illustrated by the blue filled elliptical region in Fig. 3-(b). Under the smoothing of the Gaussian kernel $\Sigma_{I,new}'$, the shape of $\mu_{new}$ is likely to approach a circle, as shown by the red dotted circle in Fig. 3-(b). The second row of Fig. 2 shows the iteration procedure when the proposed method is used. From the figure, we can observe that the overshoot phenomenon is effectively suppressed and the convergence process becomes smoother. Compared with the previous methods, the anisotropy of the blob structure decreases in each step when our method is adopted. In the following, we discuss our method in more detail.

3.2.2 Approximate Estimation of the Shape of the Integration Kernel

First, we use the eigenvalue ratio ξ [3] to measure the anisotropy of μ:

$$\xi = \frac{\lambda_{max}(\mu)}{\lambda_{min}(\mu)},\qquad \xi\in[1,\infty) \qquad (12)$$

When ξ is equal to 1, the local image structure is perfectly isotropic. As can be observed from Fig. 1-(b), if there is a linear relationship between the expected integration kernel and the shape of $\mu_{cur}$ when the latter has a large anisotropy, then $\mu_{new}$ probably also has a large anisotropy. It is therefore reasonable to assign such an integration kernel a low confidence level. Furthermore, we also expect the estimated integration kernel to continue to reduce the anisotropy of the local image structure. Based on these criteria, the shapes given by $\Sigma_{I,new}=t\,\mu_{cur}^{-\eta}$ with $\eta\in[0.5,1]$ (shown as blue ellipses in Fig. 4) meet our requirements very well.

Fig. 4. Comparison of the integration kernels used in the iteration process by our method and by the previous methods.

As in [3], our iterative shape adaptation scheme also works in the normalized image domain. Assume that the estimated integration kernel with $\Sigma_{I,new}=t\,\mu_{cur}^{-\eta}$ can be transformed into an isotropic kernel using the linear transformation $\mu_{cur}^{\gamma}$ (γ is called the normalization exponent here). Since $\mu_{cur}$ is a positive definite symmetric matrix, the following equation holds:

$$\eta = 2\gamma,\qquad \gamma\in[0.25,0.5] \qquad (13)$$

According to equation (13), we propose two functions to estimate the normalization exponent γ directly.

$$\gamma = F_1(\xi) = \begin{cases} 0.5 & \xi\in[1,1.5] \\ \dfrac{(\vartheta-0.5)(\xi-1.5)}{\tau-1.5}+0.5 & \xi\in[1.5,\tau] \\ \vartheta & \text{otherwise} \end{cases} \qquad (14)$$

$$\gamma = F_2(\xi) = \begin{cases} \dfrac{(\vartheta-0.5)(\xi-1)^2}{(\tau-1)^2}+0.5 & \xi\in[1,\tau] \\ \vartheta & \text{otherwise} \end{cases} \qquad (15)$$

Their graphs are shown in Fig. 5. For the function $F_1$, we assume that γ is piecewise linear in ξ, while the relationship between them becomes nonlinear in $F_2$. In both functions, when ξ approaches 1, there is a sufficient margin in which γ approaches 0.5. One reason for these choices is that such forms enable the extracted local image regions to better satisfy the fixed-point condition (see Section 3.2.5).

Fig. 5. The graphs of the two estimation functions (γ decreases from 0.5 to ϑ as ξ grows towards τ).

Since γ is less than 0.5, the shape of μ is transformed into an anisotropic structure, illustrated in Fig. 3-(b) by the blue elliptical region. The shape evolution of μ in the successive normalized image domains can be observed in Fig. 6.

Fig. 6. The shape evolution of the second moment matrix in the normalized image domain ($\mu_1$ is the current second moment matrix).

3.2.3 Modified Estimation of the Shape of the Integration Kernel

In our approach, γ is estimated in each iteration. However, due to noise, γ may exhibit small variations. To make γ robust against noise, we combine the current γ and the γ of the last iteration to obtain a modified estimate:

$$\gamma = (\gamma_{cur},\ \gamma_{last})\cdot w^T,\qquad \|w\|_1 = 1 \qquad (16)$$
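A compact sketch of the exponent estimation in equations (14)-(16) follows (not the authors' code; the default ϑ and τ mirror the values chosen in Section 4, and assigning the larger weight to the current estimate is an assumption consistent with the condition w1 > w2 stated there):

```python
# Sketch of equations (14)-(16); parameter defaults follow Section 4 (theta = 0.25, tau = 6).
def estimate_gamma_linear(xi, theta=0.25, tau=6.0):
    """F1 of equation (14): piecewise-linear mapping from anisotropy xi to exponent gamma."""
    if xi <= 1.5:
        return 0.5
    if xi <= tau:
        return (theta - 0.5) * (xi - 1.5) / (tau - 1.5) + 0.5
    return theta

def estimate_gamma_quadratic(xi, theta=0.25, tau=6.0):
    """F2 of equation (15): quadratic mapping from anisotropy xi to exponent gamma."""
    if xi <= tau:
        return (theta - 0.5) * (xi - 1.0) ** 2 / (tau - 1.0) ** 2 + 0.5
    return theta

def smooth_gamma(gamma_cur, gamma_last, w=(0.9, 0.1)):
    """Equation (16): convex combination of the current and previous estimates."""
    return w[0] * gamma_cur + w[1] * gamma_last
```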

3.2.4 Shape Adaptation Matrix

The proposed scheme is carried out in the transformed image domain, and it usually requires several iterations before convergence. The second moment matrix μ is computed in each iteration and, based on its anisotropy, γ is assigned a reasonable value by the method proposed above. Combining them, we obtain a staged transformation matrix $\mu^{\gamma}$. The final shape adaptation matrix M, which is applied to transform the neighborhoods of feature points in the original image, is obtained by concatenating these staged transformation matrices. The matrix M obtained in the k-th step of the affine iteration is:

$$M^{(k)} = \prod_{i=1}^{k}\mu_{(i)}^{\gamma_i}\,M^{(0)},\qquad \gamma_i\in[0.25,0.5] \qquad (17)$$

If the affine shape adaptation converges, then:

$$M^{(k)}\rightarrow M \qquad (18)$$
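The accumulation of equation (17) amounts to repeated multiplication by the staged matrix mu^gamma; a short sketch (our own, with the matrix power taken through the eigendecomposition) is given below:

```python
# Sketch of equation (17): concatenating the staged transformations mu_(i)^{gamma_i}.
import numpy as np

def matrix_power_sym(m, p):
    """m^p for a symmetric positive-definite matrix (mu is always such a matrix)."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(w ** p) @ v.T

def update_adaptation_matrix(M_prev, mu_cur, gamma):
    """One affine iteration: M^{(k)} = mu_(k)^{gamma_k} @ M^{(k-1)}, with M^{(0)} = I."""
    return matrix_power_sym(mu_cur, gamma) @ M_prev
```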

3.2.5 Integration Scale Matrix

Although each staged transformation matrix is positive definite and symmetric, the finally obtained transformation matrix M is in general not symmetric. The expected second moment matrix and integration scale matrix for the feature point assume the following forms:

$$\mu = M^T M \qquad (19)$$

$$\Sigma_I = t\,M^{-1}M^{-T},\qquad t\in\mathbb{R}^+ \qquad (20)$$

In order to fulfill the fixed-point property (7), the obtained $\mu = M^TM$ must be relatively affine invariant.

Proposition 1: If the normalization exponent γ in the last staged transformation matrix is equal to 0.5, then $(\mu,\Sigma_I)$ is a fixed point. Here, the obtained feature point is assumed to undergo all the staged affine deformations $\mu_{(i)}^{\gamma_i}$, $i\in\{1,\dots,k\}$, where k is the number of iterations needed for convergence.

Proof: Insert $M^{(k)}=\prod_{i=1}^{k}\mu_{(i)}^{\gamma_i}\,M^{(0)}$ with $M^{(0)}=I$ into (19). Writing $B=\prod_{i=1}^{k-1}\mu_{(i)}^{\gamma_i}\,M^{(0)}$, so that $M^{(k)}=\mu_{(k)}^{\gamma_k}B$, it follows that

$$\mu = M^TM = B^T\,\mu_{(k)}^{2\gamma_k}\,B \qquad (21)$$

If $\gamma_k = 0.5$, then

$$\mu = B^T\,\mu_{(k)}\,B \qquad (22)$$

Inserting (22) into (20), we get

$$\Sigma_I = t\,M^{-1}M^{-T} = t\,\mu^{-1} = B^{-1}\,\big(t\,\mu_{(k)}^{-1}\big)\,B^{-T} \qquad (23)$$

Combining (22), (23), (19) and (20): the pair $(\mu_{(k)},\ t\,\mu_{(k)}^{-1})$ satisfies condition (7) in the normalized image domain, and by the transformation properties (6)-(8) the pair $(\mu = M^TM,\ \Sigma_I = t\,M^{-1}M^{-T})$ is therefore a fixed point.

In practice, if $\gamma_i\rightarrow 0.5$ as $i\rightarrow k$, then the fixed point can be obtained with sufficient accuracy. This has been taken into account in the proposed algorithm by letting $\gamma_k$ be close to 0.5 in the later iterations (see Fig. 5).

Proposition 2: If the local regions around the points $x_L$ and $x_R$ are linked by an affine transformation A and are normalized by the shape adaptation matrices $x_L' = M_Lx_L$ and $x_R' = M_Rx_R$, then the two normalized regions are related by an orthogonal transformation B and $A = M_R^{-1}BM_L$.

Proof: Based on equation (6), we can obtain:

$$\mu_L' = M_L^{-T}\mu_LM_L^{-1} = M_L^{-T}M_L^TM_LM_L^{-1} = I$$

In the same way, we get $\mu_R' = I$. If $x_R' = Bx_L'$, then $\mu_L' = B^T\mu_R'B$, hence $B^TB = I$ and B is an orthogonal transformation. Furthermore, $x_R' = Bx_L'$ gives $M_Rx_R = BM_Lx_L$, i.e. $x_R = M_R^{-1}BM_Lx_L$, so $A = M_R^{-1}BM_L$. Thus, Proposition 2 is proved.

The affine normalization process is illustrated in Fig. 7. The ambiguity of rotation can be removed by other geometric constraints, such as the dominant gradient direction [8].

Fig. 7. Diagram illustrating the affine normalization based on the second moment matrices. Image coordinates are transformed with the shape adaptation matrices M; the transformed images are related by an orthogonal transformation.

3.2.6 Convergence Criteria

We stop iterating when the matrix $\mu_{new}$ is sufficiently close to a pure rotation and favors the fixed-point condition. Referring to [3], we propose the following convergence criteria:

$$\frac{\lambda_{max}(\mu_{new})}{\lambda_{min}(\mu_{new})} < 1.05 \quad\text{and}\quad |\gamma_k - 0.5| < 0.1 \qquad (24)$$

where k is the number of iterations.
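Putting the pieces together, the following skeleton (a hedged sketch, not the released implementation) shows how the convergence test of equation (24) fits into the adaptive iteration; it reuses the helper functions sketched above (estimate_gamma_quadratic, smooth_gamma, update_adaptation_matrix), and compute_mu_normalized is a placeholder for measuring the second moment matrix in the patch warped by M, including scale selection and point relocation:

```python
# Skeleton of the adaptive-kernel shape adaptation loop (illustrative only).
import numpy as np

def converged(mu_new, gamma_k):
    """Stopping rule of equation (24)."""
    lam = np.linalg.eigvalsh(mu_new)                 # eigenvalues in ascending order
    return lam[1] / lam[0] < 1.05 and abs(gamma_k - 0.5) < 0.1

def adapt_shape(compute_mu_normalized, max_iter=20):
    M = np.eye(2)                                    # M^{(0)} = identity
    gamma_last = 0.5
    for _ in range(max_iter):
        mu = compute_mu_normalized(M)                # structure measured in the warped patch
        if converged(mu, gamma_last):
            break
        lam = np.linalg.eigvalsh(mu)
        xi = lam[1] / lam[0]                         # anisotropy, equation (12)
        gamma = smooth_gamma(estimate_gamma_quadratic(xi), gamma_last)   # eqs (14)-(16)
        M = update_adaptation_matrix(M, mu, gamma)   # equation (17)
        gamma_last = gamma
    return M
```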

4. Evaluation for Image Matching

4.1 Data Set

Fig. 8. Data set. Viewpoint change for a structured scene (a) and for a textured scene (b); scale change and rotation for a structured scene (c); blur for a textured scene (d); light change (e); JPEG compression (f). In the experimental comparisons, the left-most image of each set is used as the reference image.

In this section, we evaluate the adaptive kernel-shape based affine invariant detector and compare it with the relatively fixed kernel detectors on a standard dataset, which comes from [19], a popular dataset for evaluating the properties of local features. The dataset consists of eight image sequences including textured and structured images. There are different deformations among these images: viewpoint change, scale and rotation change, light change, blur, and JPEG compression. The test images used in the experiments are shown in Fig. 8. These image pairs are either of planar scenes or captured by a fixed-position camera during acquisition; thus, the relation between them can be modeled by a 2D homography. The image set also provides the homographies between the image pairs.

4.2 Evaluation Metric

We adopt the metrics in [17] to evaluate the performance of affine detectors. More specifically, the following three metrics are adopted:

(1) Correct match: two regions $\mu_a$ and $\mu_b$ are deemed a correct match if the overlap error is sufficiently small:

$$1-\frac{\mu_a\cap A^T\mu_bA}{\mu_a\cup A^T\mu_bA}<\varepsilon$$

where μ represents the elliptic region defined by $x^T\mu x = 1$, A is the locally linearized affine transformation of the homography between the two images, $\mu_a\cap A^T\mu_bA$ and $\mu_a\cup A^T\mu_bA$ refer to the intersection and union areas of the regions respectively, and ε is the overlap error threshold.

(2) Repeatability:

repeatability = (# of correspondences) / min(# of regions detected in image 1, # of regions detected in image 2)

(3) Convergence ratio:

convergence ratio = (# of feature points that converge) / (# of feature points in total)

4.3 Experimental Results

We compare the proposed method with the previous method in two schemes: multi-scale Harris points using the second moment matrix for affine shape adaptation, and multi-scale Hessian points employing the Hessian matrix for affine shape adaptation. Both schemes employ the same matrices for feature point extraction and affine shape adaptation, so the shape adaptation matrices definitely contain sufficient information about the local image structures; the proposed approach is based on predicting the local image structure, so it naturally fits in such a framework. We have corrected some errors and bugs in the code of Laptev and employ it for the following experiments. Candidate feature points are first extracted at three scales: 2, 6 and 10. These points then proceed to scale selection, location refinement and affine shape adaptation, and finally converge to fixed points. To reduce the complexity of the algorithm, we choose $\Sigma_I = 2\Sigma_D$ across iterations. We aim at comparing the performance of the relatively fixed kernel and the adaptive kernel; simplifying the relevant steps is also helpful for observing the performance variation between them. The test programs run on an Intel(R) Xeon(R) processor.
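The metrics of Section 4.2 can be prototyped as follows (a rough, rasterization-based approximation of the overlap error, not the reference evaluation code of [17]; the second region is assumed to have already been mapped into the reference image with the local affine transformation A):

```python
# Approximate sketch of the Section 4.2 metrics (illustrative only).
import numpy as np

def ellipse_mask(mu, grid):
    """Points of the grid lying inside the region x^T mu x <= 1."""
    return np.einsum('ni,ij,nj->n', grid, mu, grid) <= 1.0

def overlap_error(mu_a, mu_b_mapped, extent=5.0, steps=400):
    """1 - area(intersection) / area(union), estimated on a regular grid."""
    xs = np.linspace(-extent, extent, steps)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    a, b = ellipse_mask(mu_a, grid), ellipse_mask(mu_b_mapped, grid)
    return 1.0 - np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def repeatability(num_correspondences, num_regions_img1, num_regions_img2):
    return num_correspondences / min(num_regions_img1, num_regions_img2)

def convergence_ratio(num_converged, num_total):
    return num_converged / num_total
```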

4.3.1 Scheme Selection

Three unknowns have to be determined in our scheme: the estimation function, the two thresholds ϑ and τ, and the weighting vector w. In general, if ϑ and τ are larger, the estimation functions lose the ability to adjust the iteration process; in contrast, if ϑ and τ are smaller, the iterative algorithm is likely to converge prematurely, and the obtained image regions cannot satisfy the fixed-point property (7). Hence it is important for our scheme to select proper parameters. We determine these unknowns by referring to [3] and by experiments. For a given estimation function, we fix one threshold and vary the other; the repeatability and matching score on the standard image set [19] are used as evaluation metrics. In these experiments, we found that the performance of our method is not very sensitive to these parameters. We propose to use the linear function with ϑ = 0.25, τ ∈ (6, 13), or the quadratic function with ϑ = 0.25, τ ∈ (6, 9). As for w, we propose w1 > w2. In the following experiments, the quadratic function with ϑ = 0.25, τ = 6, and w = (0.9, 0.1) is selected as the candidate scheme.

4.3.2 Experimental Results

Fig. 9 to Fig. 14 show the experimental results of our method on the different image pairs of the standard dataset. In Fig. 9 and Fig. 10, we show the comparative results of the two types of methods under affine transformations: the methods based on a relatively fixed kernel shape (FK-based) and the methods based on an adaptive kernel shape (AK-based). The label Ha-Affine represents Harris points with the second moment matrix for affine adaptation, and HH-Affine refers to Hessian points with the Hessian matrix for affine adaptation. Fig. 11 shows the results under scale change and rotation. Fig. 12 shows the performance comparison under a significant amount of image blur. Finally, the comparative results under illumination change and JPEG compression are shown in Fig. 13 and Fig. 14. In all of these figures, plot (a) shows the repeatability and plot (b) the corresponding convergence ratio. As can be seen from these plots, in most cases the AK-based methods achieve a better convergence ratio and repeatability than the FK-based methods. This demonstrates the effectiveness of the proposed method in improving the convergence stability of Harris-Affine, although the performance improvement is not very large. Moreover, Fig. 15 shows the descriptor matching scores, which are used to evaluate the distinctiveness of the local image regions obtained by the various detectors [17]. Our methods and Harris-Affine extract the same local image regions; their difference resides only in the affine shape adaptation. Thus, the descriptor matching scores and the repeatability should be similar between them. The plots in Fig. 15 confirm this inference, and the experimental results are in accordance with the conclusions in [10][17]. Finally, in terms of the number of iterations, our methods need a comparable number of iterations to the FK-based methods, as shown in Fig. 16. Each data mark in the plots represents the total number of iterations over all feature points in one image sequence (6 images).
In addition, from the viewpoint of computational complexity, our approach mainly adds a linear discrimination step to each iteration, which incurs a very low computational cost and can be neglected compared with the Gaussian smoothing of the scale space performed in each iteration.

Fig. 9. Viewpoint change for a structured scene (Graffiti): (a) repeatability, (b) convergence ratio.

Fig. 10. Viewpoint change for a textured scene (Wall): (a) repeatability, (b) convergence ratio.

Fig. 11. Scale change and rotation for a structured scene (Boat): (a) repeatability, (b) convergence ratio.

Fig. 12. Blur for a textured scene (Trees): (a) repeatability, (b) convergence ratio.

Fig. 13. Light change (Leuven): (a) repeatability, (b) convergence ratio.

Fig. 14. JPEG compression (Ubc): (a) repeatability, (b) convergence ratio.

Fig. 15. Matching score for (a) viewpoint change (Graffiti), (b) scale change (Boat), (c) light change (Leuven), (d) JPEG compression (Ubc).

Fig. 16. Comparison of iteration times: (a) Ha-Affine, (b) HH-Affine.

5. Conclusions

In this paper, we have described how the overshoot phenomenon may arise when the shape of the integration kernel is kept linearly related to the shape determined by the second moment matrix. Especially when the anisotropy of the second moment matrix is large, the overshoot phenomenon may lead to oscillation of the iterative process, thereby increasing the possibility of divergence of the iterative algorithm. A method for reducing this problem is presented by introducing a new iterative scheme with an adaptive integration kernel. In the proposed scheme, the shape of the integration kernel in each iteration is determined by the anisotropy of the second moment matrix. To this end, we define two criteria for estimating the integration kernel: first, if an integration kernel is in accordance with a second moment matrix that has a large anisotropy, then the integration kernel is given a low confidence level; second, the estimated integration kernel should help reduce the anisotropy of the second moment matrix. Based on these criteria, two functions (one linear and one non-linear) have been proposed to estimate and modify the shape of the integration kernel in each iteration. Finally, the condition under which the fixed-point property holds for the local image regions obtained by the proposed scheme is also presented. To satisfy this condition, we have taken specific measures embodied in the design of the estimation functions and the final convergence criteria. On synthetic image data, as can be observed, the overshoot phenomenon of the second moment matrix is effectively suppressed when our approach is adopted. Whereas the discussion in this article is concerned with the second moment matrix, the underlying ideas can be extended to other structure metric matrices as long as they are affine covariant, e.g. the affine Hessian matrix. Our method naturally fits in the framework where the feature detection matrix is the same as the metric matrix; when the matrices are different, our method still retains its performance advantage. The experimental results on the standard dataset demonstrate that we obtain a small improvement in convergence ratio and repeatability compared with Harris-Affine and Hessian-Affine. In the future, we will further explore how to improve the convergence stability by building a reasonable switching system based on the location changes of feature points and the aspect alterations of the integration scale matrices.

Acknowledgment

We would like to thank Laptev for his valuable code.

References

[1] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. of Alvey Vision Conf. Article (CrossRef Link)
[2] C. Schmid, R. Mohr and C. Bauckhage, "Evaluation of interest point detectors," International Journal of Computer Vision, vol. 37, no. 2. Article (CrossRef Link)
[3] K. Mikolajczyk and C. Schmid, "Scale & affine invariant interest point detectors," International Journal of Computer Vision, vol. 60, no. 1. Article (CrossRef Link)
[4] T. Lindeberg and J. Garding, "Shape-adapted smoothing in estimation of 3-D shape cues from affine deformations of local 2-D brightness structure," Image and Vision Computing, vol. 15, no. 6. Article (CrossRef Link)

[5] A. Baumberg, "Reliable feature matching across widely separated views," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Hilton Head Island, South Carolina, USA. Article (CrossRef Link)
[6] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: a survey," Foundations and Trends in Computer Graphics and Vision, vol. 3, no. 3. Article (CrossRef Link)
[7] T. Lindeberg, "Feature detection with automatic scale selection," International Journal of Computer Vision, no. 2. Article (CrossRef Link)
[8] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2. Article (CrossRef Link)
[9] H. Bay, T. Tuytelaars and L. Van Gool, "SURF: speeded up robust features," in Proc. of European Conf. on Computer Vision. Article (CrossRef Link)
[10] G. Dorkó and C. Schmid, "Maximally stable local description for scale selection," in Proc. of European Conference on Computer Vision. Article (CrossRef Link)
[11] Wei-Ting Lee and Hwann-Tzong Chen, "Histogram-based interest point detectors," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition. Article (CrossRef Link)
[12] Wolfgang Förstner, Timo Dickscheid and Falko Schindler, "Detecting interpretable and accurate scale-invariant keypoints," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Kyoto, Japan. Article (CrossRef Link)
[13] Wolfgang Förstner, "A framework for low level feature extraction," in Proc. of European Conf. on Computer Vision, Stockholm, Sweden, vol. 3. Article (CrossRef Link)
[14] J. Matas, O. Chum, M. Urban and T. Pajdla, "Robust wide baseline stereo from maximally stable extremal regions," Image and Vision Computing, vol. 22, no. 10. Article (CrossRef Link)
[15] T. Tuytelaars and L. Van Gool, "Matching widely separated views based on affine invariant regions," International Journal of Computer Vision, vol. 59, no. 1. Article (CrossRef Link)
[16] Timor Kadir, Andrew Zisserman and Michael Brady, "An affine invariant salient region detector," in Proc. of European Conference on Computer Vision. Article (CrossRef Link)
[17] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. Van Gool, "A comparison of affine region detectors," International Journal of Computer Vision, vol. 65, no. 1/2. Article (CrossRef Link)
[18] F. Schaffalitzky and A. Zisserman, "Multi-view matching for unordered image sets," in Proc. of European Conference on Computer Vision. Article (CrossRef Link)
[19]

Appendix A. The affine invariance of the affine Hessian matrix

The affine Hessian matrix defined in affine Gaussian scale space is

$$H(x;\Sigma_D) = \nabla\big(\nabla L(x;\Sigma_D)\big) \qquad (25)$$

which can be used to describe the anisotropic shape of blob-like structures and is affine invariant.

Proof: Assume that two patches are related by an affine transformation $x_R = Ax_L$; then the relation between their affine Gaussian scale-space representations is

$$L_L(x_L;\Sigma_L) = L_R(x_R;\Sigma_R),\qquad \Sigma_R = A\Sigma_LA^T \qquad (26)$$

and therefore

$$\nabla L_L(x_L;\Sigma_L) = A^T\,\nabla L_R(Ax_L;A\Sigma_LA^T)$$

Hence

$$H_L = \nabla\big(\nabla L_L(x_L;\Sigma_L)\big) = A^T\,\nabla\big(\nabla L_R(Ax_L;A\Sigma_LA^T)\big)\,A = A^T H_R A$$

This proves that the affine Hessian matrix is affine invariant.

Congxin Liu received a B.S. degree from Wuhan University of Hydraulic and Electrical Engineering (Yichang), China, in 1997, and an M.S. degree from Three Gorges University, China. He is currently a Ph.D. student in the Department of Electronic and Electrical Engineering at Shanghai Jiao Tong University. His research interests include local invariant features and image matching.

Jie Yang received a Ph.D. degree in computer science from the University of Hamburg, Germany. Dr. Yang is now a professor at the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University. He has taken charge of many research projects (e.g., National Science Foundation, 863 National High Tech. Plan) and has published one book in Germany and more than 200 journal papers. His major research interests are image retrieval, object detection and recognition, data mining, and medical image processing.

Yue Zhou is an associate professor at the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China. His research interests include visual surveillance, object detection, and pattern analysis.

Deying Feng received a B.S. degree from Shandong University of Technology, China, in 2005, and an M.S. degree from Shanghai Maritime University, China. He is now a Ph.D. student at Shanghai Jiao Tong University, China. His research interests include image retrieval and similarity search.


On the Completeness of Coding with Image Features

On the Completeness of Coding with Image Features FÖRSTNER ET AL.: COMPLETENESS OF CODING WITH FEATURES 1 On the Completeness of Coding with Image Features Wolfgang Förstner http://www.ipb.uni-bonn.de/foerstner/ Timo Dickscheid http://www.ipb.uni-bonn.de/timodickscheid/

More information

CSE 473/573 Computer Vision and Image Processing (CVIP)

CSE 473/573 Computer Vision and Image Processing (CVIP) CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu inwogu@buffalo.edu Lecture 11 Local Features 1 Schedule Last class We started local features Today More on local features Readings for

More information

Scale-space image processing

Scale-space image processing Scale-space image processing Corresponding image features can appear at different scales Like shift-invariance, scale-invariance of image processing algorithms is often desirable. Scale-space representation

More information

Given a feature in I 1, how to find the best match in I 2?

Given a feature in I 1, how to find the best match in I 2? Feature Matching 1 Feature matching Given a feature in I 1, how to find the best match in I 2? 1. Define distance function that compares two descriptors 2. Test all the features in I 2, find the one with

More information

CS 3710: Visual Recognition Describing Images with Features. Adriana Kovashka Department of Computer Science January 8, 2015

CS 3710: Visual Recognition Describing Images with Features. Adriana Kovashka Department of Computer Science January 8, 2015 CS 3710: Visual Recognition Describing Images with Features Adriana Kovashka Department of Computer Science January 8, 2015 Plan for Today Presentation assignments + schedule changes Image filtering Feature

More information

Rapid Object Recognition from Discriminative Regions of Interest

Rapid Object Recognition from Discriminative Regions of Interest Rapid Object Recognition from Discriminative Regions of Interest Gerald Fritz, Christin Seifert, Lucas Paletta JOANNEUM RESEARCH Institute of Digital Image Processing Wastiangasse 6, A-81 Graz, Austria

More information

Object Recognition Using Local Characterisation and Zernike Moments

Object Recognition Using Local Characterisation and Zernike Moments Object Recognition Using Local Characterisation and Zernike Moments A. Choksuriwong, H. Laurent, C. Rosenberger, and C. Maaoui Laboratoire Vision et Robotique - UPRES EA 2078, ENSI de Bourges - Université

More information

Discriminative Direction for Kernel Classifiers

Discriminative Direction for Kernel Classifiers Discriminative Direction for Kernel Classifiers Polina Golland Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 polina@ai.mit.edu Abstract In many scientific and engineering

More information

Corner detection: the basic idea

Corner detection: the basic idea Corner detection: the basic idea At a corner, shifting a window in any direction should give a large change in intensity flat region: no change in all directions edge : no change along the edge direction

More information

Feature Vector Similarity Based on Local Structure

Feature Vector Similarity Based on Local Structure Feature Vector Similarity Based on Local Structure Evgeniya Balmachnova, Luc Florack, and Bart ter Haar Romeny Eindhoven University of Technology, P.O. Box 53, 5600 MB Eindhoven, The Netherlands {E.Balmachnova,L.M.J.Florack,B.M.terHaarRomeny}@tue.nl

More information

SIFT: Scale Invariant Feature Transform

SIFT: Scale Invariant Feature Transform 1 SIFT: Scale Invariant Feature Transform With slides from Sebastian Thrun Stanford CS223B Computer Vision, Winter 2006 3 Pattern Recognition Want to find in here SIFT Invariances: Scaling Rotation Illumination

More information

Deformation and Viewpoint Invariant Color Histograms

Deformation and Viewpoint Invariant Color Histograms 1 Deformation and Viewpoint Invariant Histograms Justin Domke and Yiannis Aloimonos Computer Vision Laboratory, Department of Computer Science University of Maryland College Park, MD 274, USA domke@cs.umd.edu,

More information

Chapter 3 Salient Feature Inference

Chapter 3 Salient Feature Inference Chapter 3 Salient Feature Inference he building block of our computational framework for inferring salient structure is the procedure that simultaneously interpolates smooth curves, or surfaces, or region

More information

The Kadir Operator Saliency, Scale and Image Description. Timor Kadir and Michael Brady University of Oxford

The Kadir Operator Saliency, Scale and Image Description. Timor Kadir and Michael Brady University of Oxford The Kadir Operator Saliency, Scale and Image Description Timor Kadir and Michael Brady University of Oxford 1 The issues salient standing out from the rest, noticeable, conspicous, prominent scale find

More information

Visual Object Recognition

Visual Object Recognition Visual Object Recognition Lecture 2: Image Formation Per-Erik Forssén, docent Computer Vision Laboratory Department of Electrical Engineering Linköping University Lecture 2: Image Formation Pin-hole, and

More information

Harris Corner Detector

Harris Corner Detector Multimedia Computing: Algorithms, Systems, and Applications: Feature Extraction By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides

More information

Keypoint extraction: Corners Harris Corners Pkwy, Charlotte, NC

Keypoint extraction: Corners Harris Corners Pkwy, Charlotte, NC Kepoint etraction: Corners 9300 Harris Corners Pkw Charlotte NC Wh etract kepoints? Motivation: panorama stitching We have two images how do we combine them? Wh etract kepoints? Motivation: panorama stitching

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM BY DAVID LOWE

SIFT: SCALE INVARIANT FEATURE TRANSFORM BY DAVID LOWE SIFT: SCALE INVARIANT FEATURE TRANSFORM BY DAVID LOWE Overview Motivation of Work Overview of Algorithm Scale Space and Difference of Gaussian Keypoint Localization Orientation Assignment Descriptor Building

More information

Multiscale Autoconvolution Histograms for Affine Invariant Pattern Recognition

Multiscale Autoconvolution Histograms for Affine Invariant Pattern Recognition Multiscale Autoconvolution Histograms for Affine Invariant Pattern Recognition Esa Rahtu Mikko Salo Janne Heikkilä Department of Electrical and Information Engineering P.O. Box 4500, 90014 University of

More information

Estimators for Orientation and Anisotropy in Digitized Images

Estimators for Orientation and Anisotropy in Digitized Images Estimators for Orientation and Anisotropy in Digitized Images Lucas J. van Vliet and Piet W. Verbeek Pattern Recognition Group of the Faculty of Applied Physics Delft University of Technolo Lorentzweg,

More information

An Improved Harris-Affine Invariant Interest Point Detector

An Improved Harris-Affine Invariant Interest Point Detector An Improved Harris-Affine Invariant Interest Point Detector KARINA PEREZ-DANIE, ENRIQUE ESCAMIA, MARIKO NAKANO, HECTOR PEREZ-MEANA Mechanical and Electrical Engineering School Instituto Politecnico Nacional

More information

Linear Classifiers as Pattern Detectors

Linear Classifiers as Pattern Detectors Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2014/2015 Lesson 16 8 April 2015 Contents Linear Classifiers as Pattern Detectors Notation...2 Linear

More information

Algorithms for Computing a Planar Homography from Conics in Correspondence

Algorithms for Computing a Planar Homography from Conics in Correspondence Algorithms for Computing a Planar Homography from Conics in Correspondence Juho Kannala, Mikko Salo and Janne Heikkilä Machine Vision Group University of Oulu, Finland {jkannala, msa, jth@ee.oulu.fi} Abstract

More information

Instance-level l recognition. Cordelia Schmid & Josef Sivic INRIA

Instance-level l recognition. Cordelia Schmid & Josef Sivic INRIA nstance-level l recognition Cordelia Schmid & Josef Sivic NRA nstance-level recognition Particular objects and scenes large databases Application Search photos on the web for particular places Find these

More information

EE 6882 Visual Search Engine

EE 6882 Visual Search Engine EE 6882 Visual Search Engine Prof. Shih Fu Chang, Feb. 13 th 2012 Lecture #4 Local Feature Matching Bag of Word image representation: coding and pooling (Many slides from A. Efors, W. Freeman, C. Kambhamettu,

More information

KAZE Features. 1 Introduction. Pablo Fernández Alcantarilla 1, Adrien Bartoli 1, and Andrew J. Davison 2

KAZE Features. 1 Introduction. Pablo Fernández Alcantarilla 1, Adrien Bartoli 1, and Andrew J. Davison 2 KAZE Features Pablo Fernández Alcantarilla 1, Adrien Bartoli 1, and Andrew J. Davison 2 1 ISIT-UMR 6284 CNRS, Université d Auvergne, Clermont Ferrand, France {pablo.alcantarilla,adrien.bartoli}@gmail.com

More information

Motion Estimation (I)

Motion Estimation (I) Motion Estimation (I) Ce Liu celiu@microsoft.com Microsoft Research New England We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Region Moments: Fast invariant descriptors for detecting small image structures

Region Moments: Fast invariant descriptors for detecting small image structures Region Moments: Fast invariant descriptors for detecting small image structures Gianfranco Doretto Yi Yao Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 39 doretto@research.ge.com

More information

Edge Detection. CS 650: Computer Vision

Edge Detection. CS 650: Computer Vision CS 650: Computer Vision Edges and Gradients Edge: local indication of an object transition Edge detection: local operators that find edges (usually involves convolution) Local intensity transitions are

More information

Motion Estimation (I) Ce Liu Microsoft Research New England

Motion Estimation (I) Ce Liu Microsoft Research New England Motion Estimation (I) Ce Liu celiu@microsoft.com Microsoft Research New England We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Advanced Features. Advanced Features: Topics. Jana Kosecka. Slides from: S. Thurn, D. Lowe, Forsyth and Ponce. Advanced features and feature matching

Advanced Features. Advanced Features: Topics. Jana Kosecka. Slides from: S. Thurn, D. Lowe, Forsyth and Ponce. Advanced features and feature matching Advanced Features Jana Kosecka Slides from: S. Thurn, D. Lowe, Forsyth and Ponce Advanced Features: Topics Advanced features and feature matching Template matching SIFT features Haar features 2 1 Features

More information

Linear Classifiers as Pattern Detectors

Linear Classifiers as Pattern Detectors Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2013/2014 Lesson 18 23 April 2014 Contents Linear Classifiers as Pattern Detectors Notation...2 Linear

More information

Shape of Gaussians as Feature Descriptors

Shape of Gaussians as Feature Descriptors Shape of Gaussians as Feature Descriptors Liyu Gong, Tianjiang Wang and Fang Liu Intelligent and Distributed Computing Lab, School of Computer Science and Technology Huazhong University of Science and

More information

Interest Point Detection with Wavelet Maxima Lines

Interest Point Detection with Wavelet Maxima Lines Interest Point Detection with Wavelet Maxima Lines Christophe Damerval, Sylvain Meignen To cite this version: Christophe Damerval, Sylvain Meignen. Interest Point Detection with Wavelet Maxima Lines. 7.

More information

Learning Discriminative and Transformation Covariant Local Feature Detectors

Learning Discriminative and Transformation Covariant Local Feature Detectors Learning Discriminative and Transformation Covariant Local Feature Detectors Xu Zhang 1, Felix X. Yu 2, Svebor Karaman 1, Shih-Fu Chang 1 1 Columbia University, 2 Google Research {xu.zhang, svebor.karaman,

More information

Low-level Image Processing

Low-level Image Processing Low-level Image Processing In-Place Covariance Operators for Computer Vision Terry Caelli and Mark Ollila School of Computing, Curtin University of Technology, Perth, Western Australia, Box U 1987, Emaihtmc@cs.mu.oz.au

More information

A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE SIMILARITY

A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE SIMILARITY IJAMML 3:1 (015) 69-78 September 015 ISSN: 394-58 Available at http://scientificadvances.co.in DOI: http://dx.doi.org/10.1864/ijamml_710011547 A METHOD OF FINDING IMAGE SIMILAR PATCHES BASED ON GRADIENT-COVARIANCE

More information

Coding Images with Local Features

Coding Images with Local Features Int J Comput Vis (2011) 94:154 174 DOI 10.1007/s11263-010-0340-z Coding Images with Local Features Timo Dickscheid Falko Schindler Wolfgang Förstner Received: 21 September 2009 / Accepted: 8 April 2010

More information

Lecture 7: Finding Features (part 2/2)

Lecture 7: Finding Features (part 2/2) Lecture 7: Finding Features (part 2/2) Dr. Juan Carlos Niebles Stanford AI Lab Professor Fei- Fei Li Stanford Vision Lab 1 What we will learn today? Local invariant features MoPvaPon Requirements, invariances

More information

ECE 661: Homework 10 Fall 2014

ECE 661: Homework 10 Fall 2014 ECE 661: Homework 10 Fall 2014 This homework consists of the following two parts: (1) Face recognition with PCA and LDA for dimensionality reduction and the nearest-neighborhood rule for classification;

More information

6.869 Advances in Computer Vision. Prof. Bill Freeman March 1, 2005

6.869 Advances in Computer Vision. Prof. Bill Freeman March 1, 2005 6.869 Advances in Computer Vision Prof. Bill Freeman March 1 2005 1 2 Local Features Matching points across images important for: object identification instance recognition object class recognition pose

More information

Lecture 7: Finding Features (part 2/2)

Lecture 7: Finding Features (part 2/2) Lecture 7: Finding Features (part 2/2) Professor Fei- Fei Li Stanford Vision Lab Lecture 7 -! 1 What we will learn today? Local invariant features MoHvaHon Requirements, invariances Keypoint localizahon

More information

Region Covariance: A Fast Descriptor for Detection and Classification

Region Covariance: A Fast Descriptor for Detection and Classification MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Region Covariance: A Fast Descriptor for Detection and Classification Oncel Tuzel, Fatih Porikli, Peter Meer TR2005-111 May 2006 Abstract We

More information

A NO-REFERENCE SHARPNESS METRIC SENSITIVE TO BLUR AND NOISE. Xiang Zhu and Peyman Milanfar

A NO-REFERENCE SHARPNESS METRIC SENSITIVE TO BLUR AND NOISE. Xiang Zhu and Peyman Milanfar A NO-REFERENCE SARPNESS METRIC SENSITIVE TO BLUR AND NOISE Xiang Zhu and Peyman Milanfar Electrical Engineering Department University of California at Santa Cruz, CA, 9564 xzhu@soeucscedu ABSTRACT A no-reference

More information

NONLINEAR DIFFUSION PDES

NONLINEAR DIFFUSION PDES NONLINEAR DIFFUSION PDES Erkut Erdem Hacettepe University March 5 th, 0 CONTENTS Perona-Malik Type Nonlinear Diffusion Edge Enhancing Diffusion 5 References 7 PERONA-MALIK TYPE NONLINEAR DIFFUSION The

More information

Confidence and curvature estimation of curvilinear structures in 3-D

Confidence and curvature estimation of curvilinear structures in 3-D Confidence and curvature estimation of curvilinear structures in 3-D P. Bakker, L.J. van Vliet, P.W. Verbeek Pattern Recognition Group, Department of Applied Physics, Delft University of Technology, Lorentzweg

More information

CHAPTER 4 PRINCIPAL COMPONENT ANALYSIS-BASED FUSION

CHAPTER 4 PRINCIPAL COMPONENT ANALYSIS-BASED FUSION 59 CHAPTER 4 PRINCIPAL COMPONENT ANALYSIS-BASED FUSION 4. INTRODUCTION Weighted average-based fusion algorithms are one of the widely used fusion methods for multi-sensor data integration. These methods

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

SYMMETRY is a highly salient visual phenomenon and

SYMMETRY is a highly salient visual phenomenon and JOURNAL OF L A T E X CLASS FILES, VOL. 6, NO. 1, JANUARY 2011 1 Symmetry-Growing for Skewed Rotational Symmetry Detection Hyo Jin Kim, Student Member, IEEE, Minsu Cho, Student Member, IEEE, and Kyoung

More information

Affine Differential Invariants for Invariant Feature Point Detection

Affine Differential Invariants for Invariant Feature Point Detection Affine Differential Invariants for Invariant Feature Point Detection Stanley L. Tuznik Department of Applied Mathematics and Statistics Stony Brook University Stony Brook, NY 11794 stanley.tuznik@stonybrook.edu

More information

Uncertainty Models in Quasiconvex Optimization for Geometric Reconstruction

Uncertainty Models in Quasiconvex Optimization for Geometric Reconstruction Uncertainty Models in Quasiconvex Optimization for Geometric Reconstruction Qifa Ke and Takeo Kanade Department of Computer Science, Carnegie Mellon University Email: ke@cmu.edu, tk@cs.cmu.edu Abstract

More information

SIFT, GLOH, SURF descriptors. Dipartimento di Sistemi e Informatica

SIFT, GLOH, SURF descriptors. Dipartimento di Sistemi e Informatica SIFT, GLOH, SURF descriptors Dipartimento di Sistemi e Informatica Invariant local descriptor: Useful for Object RecogniAon and Tracking. Robot LocalizaAon and Mapping. Image RegistraAon and SAtching.

More information

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou 1 / 41 Journal Club Presentation Xiaowei Zhou Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology 2009-12-11 2 / 41 Outline 1 Motivation Diffusion process

More information