Available online at http://www.urpjournals.com
Science Insights: An International Journal
Universal Research Publications. All rights reserved. ISSN 2277 3835
Original Article

Object Recognition using Zernike Moment Invariants

G. Ramesh Babu*, Sri M. Venkata Dasu
Annamacharya Institute of Technology and Sciences, Rajampet
*Email: rameshthetrunz@gmail.com
Received 26 August 2012; accepted 07 September 2012

Abstract
Moment invariants have been widely used over the past decades. In this paper, we propose a new set of moment invariants with respect to rotation, translation, and scaling that is suitable for the recognition of objects. The set of invariants, derived from Zernike moments, is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Moment invariants described earlier cannot be used for this purpose because most moments of symmetric objects vanish. The invariants proposed here are based on Zernike moments, Legendre moments, and complex moments. The contributions provided are: a theoretical framework for deriving the Zernike moments of a blurred image, and a way of constructing the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated using the mean relative error and the mean classification rate. Experimental results show that the proposed descriptors perform better overall. 2011 Universal Research Publications. All rights reserved.

Index Terms: Zernike moments, complex moments, Legendre moments, moment invariants, mean relative error, object recognition rate.

I. INTRODUCTION
Recognition of objects depends on their position, size, and orientation. Deriving a set of invariants that does not depend on position, size, and orientation is therefore an important concern. Many techniques, including moment invariants [1]-[5], Fourier descriptors [6], and point set invariants [7]-[10], have been reported in the literature.
Of all these, moment invariants are the most widely used for image description in object recognition [11], [12], image classification [13], and scene matching [14]. However, much less attention has been paid to invariants with respect to changes of the image intensity function (known as radiometric invariants) and to joint radiometric-geometric invariants. Moment invariants have become a classical tool for object recognition during the last 40 years. They are without doubt among the most important and most frequently used shape descriptors. Even though they suffer from some intrinsic limitations (the most important of which is their globality, which prevents them from being used for recognition of occluded objects), they frequently serve as a reference method for evaluating the performance of other shape descriptors. Despite the large amount of effort and the huge number of published papers, there are still open problems to be resolved. Since real sensing systems are usually imperfect and environmental conditions change during acquisition, the observed images often provide a degraded version of the true scene. Image blurring is an important class of degradation we face in practice, caused by camera defocus, atmospheric turbulence, vibrations, and sensor or scene motion [15]. Blurring can usually be described as a convolution of an unknown original image with a space-invariant point spread function (PSF). A conventional way to carry out blurred-object recognition is first to deblur the image and then to apply the recognition methods [16]-[21]. Unfortunately, blind image deconvolution is an ill-posed problem. Moreover, the deconvolution process may introduce new artifacts into the image. To avoid these disadvantages, it is of great importance to find a set of invariants that is not affected by blurring and by TRS (translation, rotation, and scaling). This is the objective of this paper.
The first paper on this subject was reported by Flusser and Suk [15], who derived invariants to convolution with an arbitrary centrosymmetric PSF. These invariants have been successfully used in pattern recognition [22]-[25], in blurred digit and character recognition [26], [27], in medical image registration [28], and in focus/defocus quantitative measurement [29]. Other sets of blur invariants have been derived for some particular kinds of PSF, such as the axisymmetric blur invariants [30] and the motion blur invariants [31], [32]. More recently, Flusser and Zitova introduced the combined blur-rotation invariants [33] and applied them to satellite image registration [34] and camera motion estimation [35]. Zhang et al. [36] proposed a method to obtain a set of affine-blur invariants. However, in
their approach, the affine invariance is achieved through a normalization process. Suk and Flusser derived explicitly a set of combined invariants with respect to affine transform and to blur [37]. These blur invariants have been further extended to higher dimensions, in the continuous case [38] as well as in discrete form [39]. The previously mentioned methods are mainly based upon geometric or complex moments; the resulting invariants have information redundancy and are more sensitive to noise. We [40] have recently proposed an approach based upon the orthogonal Legendre moments to derive a set of blur invariants, and we have shown that they are more robust to noise and have better discriminative power than the existing methods. However, as pointed out in [40], one weak point of the Legendre moment descriptors is that they are invariant only to translation, not to image rotation and scaling. Zhu et al. [41] and Ji and Zhu [42] proposed the use of the Zernike moments to construct a set of combined blur-rotation invariants. Unfortunately, there are two limitations to their methods: 1) only Gaussian blur has been taken into account, which is a special case of a circularly symmetric PSF; 2) only a subset of the Zernike moments (those of a single order with a fixed repetition) has been used in the derivation of the invariants. Since such a moment corresponds to the radial moment, or to the complex moment if the normalization factor is neglected, the set of invariants constructed by Zhu et al. is a subset of that proposed by Flusser [43]. In this paper, we propose a new method to derive a set of combined geometric-blur invariants, based upon orthogonal Zernike moments, for translation, rotation, and scaling (TRS). We further assume that the applied PSF is circularly symmetric.
The reasons for such a choice of PSF are as follows [43]: 1) the majority of the PSFs occurring in real situations exhibit circular symmetry; 2) since the PSFs having circular symmetry are a subset of the centrosymmetric functions, it can be expected that some new invariants may be derived. In fact, the previously reported convolution invariants with a centrosymmetric PSF include only the odd-order moments; Flusser and Zitova [43] have shown that there exist even-order moment invariants with a circularly symmetric PSF.

The organization of this paper is as follows. In Section II, some preliminaries about the radial moments, Zernike moments, and image blurring are given. In Section III, the combined Zernike moment invariants are proposed. Experimental results on the performance of the proposed descriptors are provided in Section IV. Section V concludes the paper.

II. PRELIMINARIES
This section presents the definition of moments, centralised moments, invariants, radial and Zernike moments, Legendre moments, and complex Zernike moments, and reviews the concept of image blurring. Moments are the statistical expectation of certain power functions of a random variable. The most common moment is the mean, which is just the expected value of a random variable. More generally, the moment of order p = 0, 1, 2, ... can be calculated as
m_p = E[X^p].   (1)
These are sometimes referred to as the raw moments. There are other kinds of moments that are often useful. One of these is the central moment
mu_p = E[(X - mu)^p].   (2)
The best-known central moment is the second, which is the variance. Two less common statistical measures, skewness and kurtosis, are based on the third and fourth central moments. One property of moments of probability distributions is that distributions have unique sets of moments, much like the terms of a Fourier series. But, also like the Fourier series, it is possible for two different distributions to share some of the same moments; not all of their moments, however, will be the same.
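As a concrete illustration of Eqs. (1) and (2), the raw and central moments can be estimated from samples with a few lines of NumPy. This is a minimal sketch added for illustration, not part of the original method; the sample data is made up.

```python
import numpy as np

def raw_moment(x, p):
    """Estimate the raw moment m_p = E[X^p] from samples, as in Eq. (1)."""
    return np.mean(x ** p)

def central_moment(x, p):
    """Estimate the central moment mu_p = E[(X - mu)^p] from samples, as in Eq. (2)."""
    return np.mean((x - np.mean(x)) ** p)

x = np.array([1.0, 2.0, 2.0, 3.0, 4.0])  # arbitrary sample data
m1 = raw_moment(x, 1)        # first raw moment: the mean -> 2.4
mu2 = central_moment(x, 2)   # second central moment: the (biased) variance -> 1.04
assert np.isclose(m1, np.mean(x))
assert np.isclose(mu2, np.var(x))
```

Note that the second central moment computed this way coincides with NumPy's (biased) sample variance, as the assertions confirm.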
Still, moments of lower orders are often useful to distinguish between different distributions, especially since they can be so easily estimated from samples of the distributions. One just needs to make sure that appropriate moments are computed.

2.1 Moments
The moment of a 2-D image is
M_{p,q} = sum_{x=1}^{M} sum_{y=1}^{N} x^p y^q p_{xy}.   (3)

2.2 Centralised moments
The centralised moment is defined by
mu_{p,q} = sum_{x=1}^{M} sum_{y=1}^{N} (x - x_bar)^p (y - y_bar)^q p_{xy},   (4)
where (x_bar, y_bar) is the image centroid. In many applications such as shape recognition, it is useful to generate shape features which are independent of parameters that cannot be controlled in an image. Such features are called invariant features. There are several types of invariance, such as scale invariance, rotational invariance, and translational invariance; the combination of the three is TRS invariance.

2.3 Zernike moments
Zernike polynomials are orthogonal over the unit disc and are specified in polar coordinates in terms of a real-valued radial component R_{nl}(r), which is a polynomial of order n, and a complex exponential component:
V_{nl}(r, theta) = R_{nl}(r) e^{j l theta}, where 0 <= r <= 1, 0 <= theta < 2 pi,   (5)
n - |l| is even, and |l| <= n. Here l is called the repetition and represents the angular dependence. The radial moment of order p with repetition q of an image intensity function f(r, theta) is
D_{p,q} = int_0^{2 pi} int_0^1 r^p e^{-j q theta} f(r, theta) r dr d theta.   (6)
The Zernike moment of order p with repetition q of f(r, theta) is
Z_{p,q} = ((p + 1) / pi) int_0^{2 pi} int_0^1 R_{p,q}(r) e^{-j q theta} f(r, theta) r dr d theta,   (7)
with p >= 0, |q| <= p, and p - |q| even, where R_{p,q}(r) is the real-valued radial polynomial (defined below). Let f' be a rotated version of f, f'(r, theta) = f(r, theta - beta), where beta is the angle of rotation, and let Z'_{p,q} be the Zernike moments of f'. It can easily be seen that
Z'_{p,q} = e^{-j q beta} Z_{p,q}.
Let g(x, y) be a blurred version of the original image f(x, y). Under the condition that the imaging system is
linear shift-invariant, the blurring can usually be described by the convolution
g(x, y) = (f * h)(x, y),   (10)
where h(x, y) is the PSF of the imaging system and * denotes linear convolution. In this paper, we assume that the PSF h(x, y) is a circularly symmetric image function and that the imaging system is energy-preserving, which leads to
h(x, y) = h(r, theta) = h(r),  int int h(x, y) dx dy = 1.

2.4 Legendre moments
The Legendre moments of order (m + n) are defined as
L_{mn} = ((2m + 1)(2n + 1) / 4) int_{-1}^{1} int_{-1}^{1} P_m(x) P_n(y) f(x, y) dx dy,
where m, n = 0, 1, 2, 3, ..., P_m and P_n are the Legendre polynomials, and f(x, y) is the continuous image function. The Legendre polynomials are a complete orthogonal basis set defined over the interval [-1, 1]. For orthogonality to exist in the moments, the image function f(x, y) is defined over the same interval as the basis set. The nth-order Legendre polynomial can be written as
P_n(x) = sum_{j=0}^{n} a_{nj} x^j,
where the a_{nj} are the Legendre coefficients. So, for a discrete image with current pixel P_{xy}, the integrals are replaced by the corresponding double summation, with x and y defined over the interval [-1, 1].

The Zernike polynomial V_{mn}(x, y) expressed in polar coordinates is
V_{mn}(r, theta) = R_{mn}(r) e^{j n theta},
where r and theta are defined over the unit disc, j = sqrt(-1), and R_{mn}(r) is the orthogonal radial polynomial, defined as
R_{mn}(r) = sum_{s=0}^{(m-|n|)/2} (-1)^s [(m - s)! / (s! ((m + |n|)/2 - s)! ((m - |n|)/2 - s)!)] r^{m - 2s},
where R_{m,n}(r) = R_{m,-n}(r); it must be noted that if the conditions (m - |n| even, |n| <= m) are not met, then R_{m,n}(r) = 0. The first six orthogonal radial polynomials are:
R_{00}(r) = 1, R_{11}(r) = r, R_{20}(r) = 2r^2 - 1, R_{22}(r) = r^2, R_{31}(r) = 3r^3 - 2r, R_{33}(r) = r^3.

2.5 Complex Zernike moments
The Zernike polynomials were first proposed in 1934 by Zernike. Their moment formulation appears to be one of the most popular, outperforming the alternatives in terms of noise resilience, information redundancy, and reconstruction capability. The pseudo-Zernike formulation proposed by Bhatia and Wolf further improved these characteristics. However, here we study the original formulation of these orthogonal invariant moments. Complex Zernike moments are constructed using a set of complex polynomials which form a complete orthogonal basis set defined on the unit disc (x^2 + y^2 <= 1).
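The radial polynomial R_{m,n}(r) is straightforward to implement directly from its factorial form. The following sketch is our own illustration (not part of the paper's method); it evaluates R_{m,n}(r) on an array of radii and checks it against two of the closed forms listed in Section 2.4. We return to the moment definition itself next.

```python
from math import factorial

import numpy as np

def zernike_radial(m, n, r):
    """Evaluate the Zernike radial polynomial R_{m,n}(r) on an array of radii.

    Returns zeros when the conditions (m - |n| even, |n| <= m) are not met,
    matching the convention R_{m,n}(r) = 0 in that case.
    """
    n = abs(n)  # R_{m,n} = R_{m,-n}
    r = np.asarray(r, dtype=float)
    if (m - n) % 2 != 0 or n > m:
        return np.zeros_like(r)
    out = np.zeros_like(r)
    for s in range((m - n) // 2 + 1):
        coeff = ((-1) ** s * factorial(m - s) /
                 (factorial(s)
                  * factorial((m + n) // 2 - s)
                  * factorial((m - n) // 2 - s)))
        out += coeff * r ** (m - 2 * s)
    return out

r = np.linspace(0.0, 1.0, 101)
assert np.allclose(zernike_radial(2, 0, r), 2 * r**2 - 1)      # R_20
assert np.allclose(zernike_radial(3, 1, r), 3 * r**3 - 2 * r)  # R_31
assert np.allclose(zernike_radial(2, -2, r), r**2)             # R_{2,-2} = R_22
```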
They are expressed as
A_{mn} = ((m + 1) / pi) int int_{x^2 + y^2 <= 1} f(x, y) V*_{mn}(x, y) dx dy,
where m = 0, 1, 2, 3, ... defines the order, f(x, y) is the function being described, and * denotes the complex conjugate. The repetition n is an integer (positive or negative) depicting the angular dependence, or rotation, subject to the conditions: m - |n| even and |n| <= m; and A_{m,-n} = A*_{m,n} holds.

III. METHOD
A. Combined Invariants of Zernike Moments
In this subsection, we construct a set of combined geometric-blur invariants. As we have already stated, translation invariance can be achieved by using the central Zernike moments. The rotation property of Section 2.3 shows that the magnitude of a Zernike moment is invariant to rotation. However, as indicated by Flusser, the magnitudes alone do not yield a complete set of invariants. Herein, we provide a way to build up such a set. Let f and f' be two images having the same content but distinct orientation beta and scale lambda, that is, f'(r, theta) = f(r / lambda, theta - beta). The Zernike moments of the transformed image can then be expressed in terms of those of f, yielding quantities invariant to both image rotation and scaling for any nonnegative integer orders. It is preferable to use the lower order moments because they are less sensitive to noise than the higher order ones. For any q >= 0 and l >= 0, an invariant I_{q+2l,q}(f) is defined as a linear combination of the lower order blur invariants, and it is also invariant to blur. Then SI_{q+2l,q}(f) is invariant both to convolution and to image scaling and rotation. In this way, a set of blur invariants of Zernike moments of arbitrary order is constructed and expressed in explicit form. For simplicity, the invariants of Zernike moments are denoted ZMIs.
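The well-known rotation property of Zernike moments, Z'_{p,q} = e^{-j q beta} Z_{p,q} (so that |Z_{p,q}| is rotation invariant), can be verified numerically. The sketch below is our own illustration under simple assumptions (a hypothetical test function defined directly in polar coordinates, and midpoint quadrature on the unit disc); it is not the authors' implementation.

```python
from math import factorial, pi

import numpy as np

def zernike_radial(m, n, r):
    """Radial polynomial R_{m,n}(r); assumes m - |n| even and |n| <= m."""
    n = abs(n)
    out = np.zeros_like(r)
    for s in range((m - n) // 2 + 1):
        coeff = ((-1) ** s * factorial(m - s) /
                 (factorial(s)
                  * factorial((m + n) // 2 - s)
                  * factorial((m - n) // 2 - s)))
        out += coeff * r ** (m - 2 * s)
    return out

def zernike_moment(f, m, n, K=400):
    """Z_{m,n} = ((m+1)/pi) * integral of f(r,t) R_{m,n}(r) e^{-j n t} r dr dt,
    approximated by midpoint quadrature on a K x K polar grid."""
    r = (np.arange(K) + 0.5) / K
    t = (np.arange(K) + 0.5) * 2.0 * pi / K
    R, T = np.meshgrid(r, t, indexing="ij")
    w = (1.0 / K) * (2.0 * pi / K)  # area of one (r, t) quadrature cell
    total = np.sum(f(R, T) * zernike_radial(m, n, R) * np.exp(-1j * n * T) * R) * w
    return (m + 1) / pi * total

f = lambda r, t: r**2 * np.cos(t) ** 2   # hypothetical test image on the disc
beta = 0.7                               # rotation angle
f_rot = lambda r, t: f(r, t - beta)      # rotated copy f'(r, t) = f(r, t - beta)

Z = zernike_moment(f, 2, 2)
Z_rot = zernike_moment(f_rot, 2, 2)
assert np.isclose(abs(Z), abs(Z_rot))                 # |Z| unchanged by rotation
assert np.isclose(Z_rot, Z * np.exp(-1j * 2 * beta))  # phase shifts by e^{-j q beta}
```

Midpoint quadrature is adequate here because the angular integrand is a trigonometric polynomial, for which equispaced sampling over a full period is essentially exact; the common radial quadrature error cancels when the two moments are compared.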
IV. EXPERIMENTAL RESULTS
Fig. 1: Butterfly image (standard).
Fig. 2: ZMIs of the TRS butterfly image and the mean relative error.
Fig. 3: Mean relative errors of CMIs, LMIs, and ZMIs of a TRS butterfly.
Fig. 4: Mean relative errors of CMIs, LMIs, and ZMIs for different blurred versions of the images shown in Fig. 1.
Fig. 5: Mean classification rates (%) of the LMIs, CMIs, and the proposed ZMIs in object recognition for different moment orders.

V. CONCLUSION
In this paper, we constructed a set of combined geometric-blur invariants using the orthogonal Zernike moments. Combined blur invariants of Zernike moments have been established. The advantages of the proposed method over the existing ones are the following: 1) the proposed descriptors are simultaneously invariant to similarity transformation and to convolution, so the image deblurring and geometric normalization processes can be avoided by using these invariants; 2) like the method reported in [43], our method can also derive the even-order invariants. The experiments conducted so far, in very distinct situations, demonstrated that the proposed descriptors perform better overall.

REFERENCES
[1] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory, vol. IT-8, no. 2, pp. 179-187, 1962.
[2] T. H. Reiss, "The revised fundamental theorem of moment invariants," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 8, pp. 830-834, Aug. 1991.
[3] J. Flusser and T. Suk, "Pattern recognition by affine moment invariants," Pattern Recognit., vol. 26, no. 1, pp. 167-174, 1993.
[4] S. S. Reddi, "Radial and angular moment invariants for image identification," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-3, no. 2, pp. 240-242, Feb. 1981.
[5] Y. S. Abu-Mostafa and D. Psaltis, "Recognitive aspects of moment invariants," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-6, no. 6, pp. 698-706, Jun. 1984.
[6] K. Arbter, W. E. Snyder, H. Burkhardt, and G. Hirzinger, "Application of affine-invariant Fourier descriptors to recognition of 3-D objects," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 640-647, Jul. 1990.
[7] T. Suk and J. Flusser, "Vertex-based features for recognition of projectively deformed polygons," Pattern Recognit., vol. 29, no. 3, pp. 361-367, 1996.
[8] R. Lenz and P. Meer, "Point configuration invariants under simultaneous projective and permutation transformations," Pattern Recognit., vol. 27, no. 11, pp. 1523-1532, 1994.
[9] N. S. V. Rao, W. Wu, and C. W. Glover, "Algorithms for recognizing planar polygonal configurations using perspective images," IEEE Trans. Robot. Autom., vol. 8, no. 4, pp. 480-486, Aug. 1992.
[10] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91-110, 2004.

First Author: G. Ramesh Babu, Mobile: +919502624356, Email: rameshthetrunz@gmail.com
Second Author: M. Venkata Dasu, Associate Professor, Annamacharya Institute of Technology and Sciences, Rajampet, Email: dasu_marri@gmail.com
Source of support: Nil; Conflict of interest: None declared