Similarity Measures for Categorical Data: A Comparative Study. Technical Report

Similarity Measures for Categorical Data: A Comparative Study. Technical Report. Department of Computer Science and Engineering, University of Minnesota, 4-192 EECS Building, 200 Union Street SE, Minneapolis, MN 55455-0159 USA. TR 07-022. Similarity Measures for Categorical Data: A Comparative Study. Varun Chandola, Shyam Boriah, and Vipin Kumar. October 15, 2007

Similarity Measures for Categorical Data: A Comparative Study
Varun Chandola, Shyam Boriah, and Vipin Kumar
Department of Computer Science & Engineering, University of Minnesota

Abstract. Measuring similarity or distance between two entities is a key step for several data mining and knowledge discovery tasks. The notion of similarity for continuous data is relatively well-understood, but for categorical data, the similarity computation is not straightforward. Several data-driven similarity measures have been proposed in the literature to compute the similarity between two categorical data instances, but their relative performance has not been evaluated. In this paper we study the performance of a variety of similarity measures in the context of a specific data mining task: outlier detection. Results on a variety of data sets show that while no one measure dominates the others for all types of problems, some measures exhibit consistently high performance.

1 Introduction

Measuring similarity or distance between two data points is a core requirement for several data mining and knowledge discovery tasks that involve distance computation. Examples include clustering (k-means), distance-based outlier detection, classification (kNN, SVM), and several other data mining tasks. These algorithms typically treat the similarity computation as an orthogonal step and can make use of any measure. For continuous data sets, the Minkowski distance is a general method to compute the distance between two multivariate points. In particular, the Minkowski distances of order 1 (Manhattan) and order 2 (Euclidean) are the two most widely used distance measures for continuous data. The key observation about these measures is that they are independent of the underlying data set to which the two points belong. Several data-driven measures, such as the Mahalanobis distance, have also been explored for continuous data.

The notion of similarity or distance for categorical data is not as straightforward as for continuous data. The key characteristic of categorical data is that the different values that a categorical attribute takes are not inherently ordered. Thus, it is not possible to directly compare two different categorical values. The simplest way to find the similarity between two categorical attributes is to assign a similarity of 1 if the values are identical and a similarity of 0 if the values are not identical. For two multivariate categorical data points, the similarity between them will be directly proportional to the number of attributes in which they match. This simple measure is also known as the overlap measure in the literature [29].

One obvious drawback of the overlap measure is that it does not distinguish between the different values taken by an attribute. All matches, as well as all mismatches, are treated as equal. For example, consider a categorical data set D defined over two attributes: color and shape. Let color take 3 possible values in D, {red, blue, green}, and shape take 3 possible values in D, {square, circle, triangle}. Table 1 summarizes the frequency of occurrence of each possible combination in D.

color    square   circle   triangle   Total
red        30        2         3        35
blue       25       25         0        50
green       2        1         2         5
Total      57       28         5        90

Table 1: Frequency Distribution of a Simple 2-D Categorical Data Set

The overlap similarity between the two instances (green, square) and (green, circle) is 1/2. The overlap similarity between (blue, square) and (blue, circle) is also 1/2. But the frequency distribution in Table 1 shows that while (blue, square) and (blue, circle) are frequent combinations, (green, square) and (green, circle) are very rare combinations in the data set.
Thus, it would appear that the overlap measure is too simplistic in giving equal importance to all matches and mismatches. Although there is no inherent ordering in categorical data, the previous example shows that there is other information in categorical data sets that can be used to define what should be considered more similar and what should be considered less similar.
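To make the preceding discussion concrete, the overlap measure takes only a few lines of code. The following is a minimal illustrative sketch (ours, not from the report), assuming each record is a tuple of categorical values:

```python
def overlap_similarity(x, y):
    """Overlap measure: fraction of attributes on which two
    equal-length categorical records take identical values."""
    assert len(x) == len(y)
    matches = sum(1 for xk, yk in zip(x, y) if xk == yk)
    return matches / len(x)

# Both pairs from the example receive the same score, even though
# (green, square) and (green, circle) are rare combinations:
print(overlap_similarity(("green", "square"), ("green", "circle")))  # 0.5
print(overlap_similarity(("blue", "square"), ("blue", "circle")))    # 0.5
```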

This observation has motivated researchers to come up with data-driven similarity measures for categorical attributes. Such measures take into account the frequency distribution of different attribute values in a given data set to define the similarity between two categorical attribute values. In this paper, we study a variety of similarity measures proposed in diverse research fields ranging from statistics to ecology, as well as many of their variations. Each measure uses the information present in the data in a unique way to define similarity. Since we are evaluating data-driven similarity measures, it is obvious that their performance is highly related to the data set being analyzed. To understand this relationship, we first identify the key characteristics of a categorical data set. For each of the different similarity measures that we study, we analyze how it relates to the different characteristics of the data set.

1.1 Key Contributions

The key contributions of this paper are as follows:

- We have brought together several categorical measures from different fields and studied them together in a single context.
- We evaluate 14 different data-driven similarity measures for categorical data on a wide variety of benchmark data sets. In particular, we show the utility of data-driven measures for the problem of determining similarity with categorical data.
- We have also proposed a number of new measures that are either variants of other previously proposed measures or derived from previously proposed similarity frameworks. The performance of some of the measures we propose is among the best performance of all the measures we studied.
- We identify the key characteristics of a categorical data set and analyze each similarity measure in relation to the characteristics of categorical data.

1.2 Organization of the Paper

The rest of the paper is organized as follows. We first mention all related efforts in the study of similarity measures in Section 2. In Section 3, we identify various characteristics of categorical data that are relevant to this study. We then introduce the 14 different similarity measures that are studied in this paper in Section 4. We describe our experimental setup, evaluation methodology and the results on public data sets in Section 6.

2 Related Work

Sneath and Sokal discuss categorical similarity measures in some detail in their book [28] on numerical taxonomy. They were among the first to collect and discuss many of these measures. At the time, two major concerns were (1) biological relevance, since numerical taxonomy was mainly concerned with taxonomies from biology, ecology, etc., and (2) computational efficiency, since computational resources were limited and scarce. Nevertheless, many of the observations made by Sneath and Sokal are quite relevant today and offer key insights into many of the measures. There are several books [2, 9, 6, 20] on cluster analysis that discuss the problem of determining similarity between categorical attributes. However, most of these books do not offer solutions to the problem or discuss the measures in this paper, and the usual recommendation is to binarize the data and then use binary similarity measures. Wilson and Martinez [31] performed a detailed study of heterogeneous distance functions (for data with categorical and continuous attributes) for instance-based learning. The measures in their study are based upon a supervised approach where each data instance has class information in addition to a set of categorical/continuous attributes.
Measures discussed in this paper are orthogonal to those of [31], since supervised measures determine similarity based on class information, while data-driven measures determine similarity based on the data distribution. In principle, both ideas can be combined. A number of new data mining techniques for categorical data have been proposed recently. Some of them use notions of similarity which are neighborhood-based [5, 4, 8, 24, 2], or incorporate the similarity computation into the learning algorithm [3, 8, 2]. Neighborhood-based approaches use some notion of similarity (usually the overlap measure) to define the neighborhood of a data instance, while the measures we study in this paper are directly used to determine the similarity between a pair of data instances; hence, we see the measures discussed in this paper as being useful to compute the neighborhood of a point, and neighborhood-based measures as meta-similarity measures. Since techniques which embed similarity measures into the learning algorithm do not explicitly define general categorical similarity measures, we do not discuss them in this paper.

3 Categorical Data

Categorical data (also known as nominal or qualitative multi-state data) has been studied for a long time in various contexts.

As mentioned earlier, computing similarity between categorical data instances is not straightforward, owing to the fact that there is no explicit notion of ordering between categorical values. To overcome this problem, several data-driven similarity measures have been proposed for categorical data. The behavior of such measures directly depends on the data. In this section we identify the key characteristics of a categorical data set that can potentially affect the behavior of a data-driven similarity measure.

For the sake of notation, consider a categorical data set D containing N objects, defined over a set of d categorical attributes, where A_k denotes the k-th attribute. Let the attribute A_k take n_k values in the given data set, denoted by the set A_k. We also use the following notation:

f_k(x): The number of times attribute A_k takes the value x in the data set D. Note that if x ∉ A_k, then f_k(x) = 0.

p̂_k(x): The sample probability of attribute A_k taking the value x in the data set D, given by p̂_k(x) = f_k(x) / N.

p²_k(x): Another probability estimate of attribute A_k taking the value x in the data set, given by p²_k(x) = f_k(x)(f_k(x) − 1) / (N(N − 1)).

3.1 Characteristics of a Categorical Data Set

Since this paper discusses data-driven similarity measures for categorical data, a key task is to identify the characteristics of a categorical data set that affect the behavior of such a similarity measure. We enumerate these characteristics below:

Size of the data, N. As we will see later, most measures are typically invariant to the size of the data, but some (e.g., Smirnov) do incorporate it.

Number of attributes, d. Most measures are invariant to this characteristic, since they typically normalize the similarity over the number of attributes. But in our experimental results we observe that the number of attributes does affect the performance of the outlier detection algorithms.

Number of values taken by each attribute, n_k. A data set might contain attributes that take several values and attributes that take very few values. For example, one attribute might take several hundred possible values, while another might take very few. A similarity measure might give more importance to the second attribute while ignoring the first one; in fact, one of the measures discussed in this paper (Eskin) behaves exactly like this.

Distribution of f_k(x). This refers to the distribution of the frequencies of the values taken by an attribute in the given data set. In certain data sets an attribute might be distributed uniformly over the set A_k, while in others the distribution might be skewed. A similarity measure might give more importance to attribute values that occur rarely, while another similarity measure might give more importance to frequently occurring attribute values.

4 Similarity Measures for Categorical Data

The study of similarity between data objects with categorical variables has had a long history. Pearson proposed a chi-square statistic in the late 1800s, which is often used to test independence between categorical variables in a contingency table. Pearson's chi-square statistic was later modified and extended, leading to several other measures [25, 23, 7]. More recently, however, the overlap measure has become the most commonly used similarity measure for categorical data. Its popularity is perhaps related to its simplicity and ease of use. In this section, we discuss the overlap measure and several data-driven similarity measures for categorical data.
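The quantities f_k, p̂_k and p²_k are all simple frequency statistics. A short sketch (ours) of how they can be computed from a data set stored as a list of equal-length tuples:

```python
from collections import Counter

def frequency_estimates(D):
    """Compute, for each attribute k, the counts f_k and the two
    probability estimates p_hat_k(x) = f_k(x)/N and
    p2_k(x) = f_k(x)(f_k(x) - 1) / (N(N - 1))."""
    N = len(D)
    d = len(D[0])
    f, p_hat, p2 = [], [], []
    for k in range(d):
        counts = Counter(record[k] for record in D)
        f.append(counts)
        p_hat.append({v: c / N for v, c in counts.items()})
        p2.append({v: c * (c - 1) / (N * (N - 1)) for v, c in counts.items()})
    return f, p_hat, p2
```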
Note that we have converted measures that were originally proposed as distance measures into similarity measures, in order to make the measures comparable in this study. The measures discussed henceforth will all be in the context of similarity, with distance measures converted using the formula sim = 1 / (1 + dist).

Any similarity measure assigns a similarity between two data instances X and Y belonging to the data set D (introduced in Section 3) as follows:

(4.1)   S(X, Y) = Σ_{k=1}^{d} w_k · S_k(X_k, Y_k)

where S_k(X_k, Y_k) is the per-attribute similarity between two values for the categorical attribute A_k. Note that X_k, Y_k ∈ A_k. The quantity w_k denotes the weight assigned to the attribute A_k.

To understand how different measures calculate the per-attribute similarity S_k(X_k, Y_k), consider a categorical attribute A which takes one of the values {a, b, c, d}. (We have dropped the subscript k for simplicity.) The per-attribute similarity computation is equivalent to constructing the (symmetric) matrix shown in Figure 1.
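Equation (4.1) suggests a common skeleton into which every measure plugs its own S_k and w_k. A sketch (ours), with the frequently used default w_k = 1/d:

```python
def similarity(X, Y, per_attribute_sim, weights=None):
    """Equation (4.1): S(X, Y) = sum_k w_k * S_k(X_k, Y_k).
    per_attribute_sim(k, x, y) returns S_k(x, y); weights default to 1/d."""
    d = len(X)
    if weights is None:
        weights = [1.0 / d] * d
    return sum(w * per_attribute_sim(k, xk, yk)
               for k, (w, xk, yk) in enumerate(zip(weights, X, Y)))
```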

       a        b        c        d
a   S(a,a)  S(a,b)  S(a,c)  S(a,d)
b           S(b,b)  S(b,c)  S(b,d)
c                   S(c,c)  S(c,d)
d                           S(d,d)

Figure 1: Similarity Matrix for a Single Categorical Attribute

Essentially, in determining the similarity between two values, any categorical measure is filling in the entries of this matrix. For example, the overlap measure sets the diagonal entries to 1 and the off-diagonal entries to 0, i.e., the similarity is 1 if the values match and 0 if the values mismatch. Additionally, measures may use the following information in computing a similarity value (all the measures in this paper use only this information):

- f(a), f(b), f(c), f(d): the frequencies of the values in the data set;
- N: the size of the data set;
- n: the number of values taken by the attribute (4 in the case above).

We can classify measures in several ways, based on: (i) the manner in which they fill the entries of the similarity matrix; (ii) whether the weight given to matches and mismatches is a function of the frequency of the attribute values; and (iii) the arguments used to propose the measure (probabilistic, information-theoretic, etc.). In this paper, we describe the measures by classifying them as follows:

- those that fill the diagonal entries only. These measures set the off-diagonal entries to 0 (mismatches are uniformly given the minimum value) and give possibly different weights to matches;
- those that fill the off-diagonal entries only. These measures set the diagonal entries to 1 (matches are uniformly given the maximum value) and give possibly different weights to mismatches;
- those that fill both diagonal and off-diagonal entries. These measures give different weights to both matches and mismatches.

Table 2 gives the mathematical formulas for the measures described in this paper. Each technique in Table 2 computes the per-attribute similarity S_k(X_k, Y_k) as shown in column 2 and the attribute weight w_k as shown in column 3.

4.1 Measures that Fill Diagonal Entries Only

1. Overlap. The overlap measure simply counts the number of attributes that match in the two data instances. The range of the per-attribute similarity for the overlap measure is [0, 1], with a value of 0 occurring when there is no match and a value of 1 occurring when the attribute values match.

2. Goodall. Goodall [14] proposed a measure that attempts to normalize the similarity between two objects by the probability that the observed similarity value could be observed in a random sample of two points. This measure assigns higher similarity to a match if the value is infrequent than if the value is frequent. Goodall's original measure details a procedure for combining similarities in the multivariate setting which takes into account dependencies between attributes. Since this procedure is computationally expensive, we use a simpler version of the measure (described next as Goodall1). Goodall's original measure is not empirically evaluated in this paper. We also propose three variants of Goodall's measure in this paper: Goodall2, Goodall3 and Goodall4.

3. Goodall1. The Goodall1 measure is the same as Goodall's measure on a per-attribute basis. However, instead of combining the similarities by taking into account dependencies between attributes, the Goodall1 measure takes the average of the per-attribute similarities. The range of S_k(X_k, Y_k) for matches in the Goodall1 measure is [2/N, 1], with the minimum attained when X_k is the most frequent value for attribute k, and the maximum attained when the attribute k takes N values (every value occurs only once).

4. Goodall2. The Goodall2 measure is a variant of Goodall's measure proposed by us. This measure assigns higher similarity if the matching values are infrequent and, at the same time, there are other values that are even less frequent; i.e., the similarity is higher if there are many values with approximately equal frequencies, and lower if the frequency distribution is skewed. The range of S_k(X_k, Y_k) for matches in the Goodall2 measure is [0, 1 − 2/(N(N−1))], with the minimum attained if attribute k takes only one value, and the maximum attained when X_k is the least frequent value for attribute k.

5. Goodall3. We also propose another variant of Goodall's measure, called Goodall3.

Table 2: Similarity Measures for Categorical Attributes. Each measure specifies the per-attribute similarity S_k(X_k, Y_k) and the attribute weight w_k (k = 1, ..., d), combined as S(X, Y) = Σ_{k=1}^{d} w_k S_k(X_k, Y_k).

1. Overlap: S_k = 1 if X_k = Y_k; 0 otherwise.    w_k = 1/d.

2. Eskin: S_k = 1 if X_k = Y_k; n_k² / (n_k² + 2) otherwise.    w_k = 1/d.

3. IOF: S_k = 1 if X_k = Y_k; 1 / (1 + log f_k(X_k) · log f_k(Y_k)) otherwise.    w_k = 1/d.

4. OF: S_k = 1 if X_k = Y_k; 1 / (1 + log(N / f_k(X_k)) · log(N / f_k(Y_k))) otherwise.    w_k = 1/d.

5. Lin: S_k = 2 log p̂_k(X_k) if X_k = Y_k; 2 log(p̂_k(X_k) + p̂_k(Y_k)) otherwise.
   w_k = 1 / Σ_{i=1}^{d} (log p̂_i(X_i) + log p̂_i(Y_i)).

6. Lin1: S_k = Σ_{q ∈ Q} log p̂_k(q) if X_k = Y_k; 2 log Σ_{q ∈ Q} p̂_k(q) otherwise.
   w_k = 1 / Σ_{i=1}^{d} Σ_{q ∈ Q} log p̂_i(q).

7. Goodall1: S_k = 1 − Σ_{q ∈ Q} p²_k(q) if X_k = Y_k; 0 otherwise.    w_k = 1/d.

8. Goodall2: S_k = 1 − Σ_{q ∈ Q} p²_k(q) if X_k = Y_k; 0 otherwise.    w_k = 1/d.

9. Goodall3: S_k = 1 − p²_k(X_k) if X_k = Y_k; 0 otherwise.    w_k = 1/d.

10. Goodall4: S_k = p²_k(X_k) if X_k = Y_k; 0 otherwise.    w_k = 1/d.

11. Smirnov: S_k = (N − f_k(X_k)) / f_k(X_k) + Σ_{q ∈ A_k \ {X_k}} f_k(q) / (N − f_k(q)) if X_k = Y_k;
    −2 + Σ_{q ∈ A_k \ {X_k, Y_k}} f_k(q) / (N − f_k(q)) otherwise.    w_k = 1 / Σ_{k=1}^{d} n_k.

12. Gambaryan: S_k = −[p̂_k(X_k) log₂ p̂_k(X_k) + (1 − p̂_k(X_k)) log₂(1 − p̂_k(X_k))] if X_k = Y_k; 0 otherwise.    w_k = 1/d.

13. Burnaby: S_k = 1 if X_k = Y_k;
    [Σ_{q ∈ A_k} 2 log(1 − p̂_k(q))] / [log( p̂_k(X_k) p̂_k(Y_k) / ((1 − p̂_k(X_k))(1 − p̂_k(Y_k))) ) + Σ_{q ∈ A_k} 2 log(1 − p̂_k(q))] otherwise.    w_k = 1/d.

14. Anderberg: the Anderberg measure cannot be written as a per-attribute weighted sum; it is given directly by
    S(X, Y) = [Σ_{k: X_k = Y_k} (1/p̂_k(X_k))² · 2/(n_k(n_k+1))] / [Σ_{k: X_k = Y_k} (1/p̂_k(X_k))² · 2/(n_k(n_k+1)) + Σ_{k: X_k ≠ Y_k} (1/(2 p̂_k(X_k) p̂_k(Y_k))) · 2/(n_k(n_k+1))].

For measure Lin1, Q ⊆ A_k is the set of values q such that p̂_k(X_k) ≤ p̂_k(q) ≤ p̂_k(Y_k), assuming p̂_k(X_k) ≤ p̂_k(Y_k). For measures Goodall and Goodall1, Q ⊆ A_k is the set of values q such that p²_k(q) ≤ p²_k(X_k). For measure Goodall2, Q ⊆ A_k is the set of values q such that p²_k(q) ≥ p²_k(X_k).
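As an illustration of how the Table 2 formulas translate into code, here is a sketch (ours) of three of the per-attribute similarities: Overlap, IOF and Goodall3. Here f_k is a mapping from each value of attribute k to its frequency, N is the data set size, and log denotes the natural logarithm:

```python
import math

def overlap_k(x, y, f_k, N):
    # Diagonal entries 1, off-diagonal entries 0.
    return 1.0 if x == y else 0.0

def iof_k(x, y, f_k, N):
    # Matches get 1; mismatches on frequent values are penalized.
    if x == y:
        return 1.0
    return 1.0 / (1.0 + math.log(f_k[x]) * math.log(f_k[y]))

def goodall3_k(x, y, f_k, N):
    # Matches on infrequent values get high similarity; mismatches get 0.
    if x != y:
        return 0.0
    p2 = f_k[x] * (f_k[x] - 1) / (N * (N - 1))
    return 1.0 - p2
```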

The Goodall3 measure assigns a high similarity if the matching values are infrequent, regardless of the frequencies of the other values. The range of S_k(X_k, Y_k) for matches in the Goodall3 measure is [0, 1], with the minimum attained if X_k is the only value for attribute k, and the maximum attained if X_k occurs only once.

6. Goodall4. The Goodall4 measure assigns the similarity 1 − Goodall3 for matches. The range of S_k(X_k, Y_k) for matches in the Goodall4 measure is [0, 1], with the minimum attained if X_k occurs only once, and the maximum attained if X_k is the only value for attribute k.

7. Gambaryan. Gambaryan proposed a measure [11] that gives more weight to matches where the matching value occurs in about half the data set, i.e., in between being frequent and rare. The Gambaryan measure for a single attribute match is closely related to the Shannon entropy from information theory, as can be seen from its formula in Table 2. The range of S_k(X_k, Y_k) for matches in the Gambaryan measure is [0, 1], with the minimum attained if X_k is the only value for attribute k, and the maximum attained when X_k has frequency N/2.

4.2 Measures that Fill Off-diagonal Entries Only

1. Eskin. Eskin et al. [9] proposed a normalization kernel for record-based network intrusion detection data. The original measure is distance-based and assigns a weight of 2/n_k² to mismatches; when adapted to similarity, this becomes a weight of n_k² / (n_k² + 2). This measure gives more weight to mismatches that occur on attributes that take many values. The range of S_k(X_k, Y_k) for mismatches in the Eskin measure is [2/3, N²/(N² + 2)], with the minimum attained when the attribute k takes only two values, and the maximum attained when the attribute has all unique values.

2. Inverse Occurrence Frequency (IOF). The inverse occurrence frequency measure assigns lower similarity to mismatches on more frequent values. The IOF measure is related to the concept of inverse document frequency from information retrieval, where it is used to signify the relative number of documents that contain a specific word. A key difference is that inverse document frequency is computed on a term-document matrix, which is usually binary, while the IOF measure is defined for categorical data. The range of S_k(X_k, Y_k) for mismatches in the IOF measure is [1/(1 + (log(N/2))²), 1], with the minimum attained when X_k and Y_k each occur N/2 times (i.e., these are the only two values), and the maximum attained when X_k and Y_k occur only once in the data set.

3. Occurrence Frequency (OF). The occurrence frequency measure gives the opposite weighting of the IOF measure for mismatches, i.e., mismatches on less frequent values are assigned lower similarity and mismatches on more frequent values are assigned higher similarity. The range of S_k(X_k, Y_k) for mismatches in the OF measure is [1/(1 + (log N)²), 1/(1 + (log 2)²)], with the minimum attained when X_k and Y_k occur only once in the data set, and the maximum attained when X_k and Y_k occur N/2 times.

4. Burnaby. Burnaby [6] proposed a similarity measure using arguments from information theory. He argues that the set of observed values is like a group of signals conveying information and, as in information theory, attribute values that are rarely observed should be considered more informative. In [6], Burnaby proposed information-weighted measures for binary, ordinal, categorical and continuous data. The measure we present in Table 2 is adapted from Burnaby's categorical measure. This measure assigns low similarity to mismatches on rare values and high similarity to mismatches on frequent values. The range of S_k(X_k, Y_k) for mismatches in the Burnaby measure is [N log((N−1)/N) / (N log((N−1)/N) − log(N−1)), 1], with the minimum attained when all values for attribute k occur only once, and the maximum attained when X_k and Y_k each occur N/2 times.

4.3 Measures that Fill Both Diagonal and Off-diagonal Entries

1. Lin. In [22], Lin describes an information-theoretic framework for similarity, where he argues that when similarity is thought of in terms of assumptions about the space, the similarity measure naturally follows from the assumptions. Lin [22] discusses the ordinal, string, word and semantic similarity settings; we applied his framework to the categorical setting to derive the Lin measure in Table 2. The Lin measure gives higher weight to matches on frequent values, and lower weight to mismatches on infrequent values. The range of S_k(X_k, Y_k) for a match in the Lin measure is [−2 log N, 0], with the minimum attained when X_k occurs only once, and the maximum attained when X_k occurs N times.

The range of S_k(X_k, Y_k) for a mismatch in the Lin measure is [−2 log(N/2), 0], with the minimum attained when X_k and Y_k each occur only once, and the maximum attained when X_k and Y_k each occur N/2 times.

2. Lin1. The Lin1 measure is another measure we have derived using Lin's similarity framework. This measure gives lower weight to mismatches if either of the mismatching values is very frequent, or if there are several values whose frequencies lie in between those of the mismatching values; higher weight is given when there are mismatches on infrequent values and there are few other infrequent values. For matches, lower weight is given for matches on frequent values or matches on values that have many other values of the same frequency; higher weight is given to matches on rare values. The range of S_k(X_k, Y_k) for matches in the Lin1 measure is [−N log N, 0], with the minimum attained when attribute k takes N possible values, and the maximum attained when X_k occurs N times. The range of S_k(X_k, Y_k) for mismatches in the Lin1 measure is [−2 log(N/2), 0], with the minimum attained when X_k and Y_k both occur only once, and the maximum attained when X_k is the most frequent value and Y_k is the least frequent value, or vice versa.

3. Smirnov. Smirnov [27] proposed a measure rooted in probability theory that not only considers a given value's frequency, but also takes into account the distribution of the other values taken by the same attribute. The Smirnov measure is probabilistic for both matches and mismatches. For a match, the similarity is high when the frequency of the matching value is low and the other values occur frequently. The range of S_k(X_k, Y_k) for a match in the Smirnov measure is [0, 2(N−1)], with the minimum attained when X_k occurs N times, and the maximum attained when X_k occurs only once and there is only one other possible value for attribute k, which occurs N−1 times. The range of S_k(X_k, Y_k) for a mismatch in the Smirnov measure is [−2, N/2 − 3], with the minimum attained when the attribute k takes only two values, X_k and Y_k; the maximum is attained when k takes only one more value apart from X_k and Y_k, and it occurs N−2 times (X_k and Y_k occur once each).

4. Anderberg. In his book [2], Anderberg presents an approach to handle similarity between categorical attributes. He argues that rare matches indicate a strong association and should be given a very high weight, and that mismatches on rare values should be treated as distinctive and should also be given special importance. In accordance with these arguments, the Anderberg measure assigns higher similarity to rare matches, and lower similarity to rare mismatches. The Anderberg measure is unique in the sense that it cannot be written in the form of Equation 4.1. The range of the Anderberg measure is [0, 1]; the minimum is attained when there are no matches, and the maximum is attained when all attributes match.

4.4 Further Classification of Similarity Measures

We can further classify categorical similarity measures based on the arguments used to propose them:

1. Probabilistic approaches take into account the probability of a given match taking place. The following measures are probabilistic: Goodall1, Smirnov, Anderberg.

2. Information-theoretic approaches incorporate the information content of a particular value/variable with respect to the data set. The following measures are information-theoretic: Lin, Lin1, Burnaby.
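To make the match/mismatch treatment of Section 4.3 concrete, the following sketch (ours) implements the Lin measure, including its attribute weight from Table 2; p_hat is a per-attribute mapping of values to sample probabilities, and the normalizing sum is assumed to be nonzero:

```python
import math

def lin_similarity(X, Y, p_hat):
    """Lin measure: S_k = 2 log p_hat_k(X_k) on a match,
    2 log(p_hat_k(X_k) + p_hat_k(Y_k)) on a mismatch, with
    w_k = 1 / sum_i (log p_hat_i(X_i) + log p_hat_i(Y_i))."""
    norm = sum(math.log(p_hat[i][xi]) + math.log(p_hat[i][yi])
               for i, (xi, yi) in enumerate(zip(X, Y)))
    total = 0.0
    for k, (xk, yk) in enumerate(zip(X, Y)):
        if xk == yk:
            s_k = 2.0 * math.log(p_hat[k][xk])
        else:
            s_k = 2.0 * math.log(p_hat[k][xk] + p_hat[k][yk])
        total += s_k / norm
    return total
```

Both the per-attribute similarities and the normalizing weight are sums of negative logarithms, so their ratio yields a positive overall similarity.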
Table 3 provides a characterization of each of the 14 similarity measures in terms of how they handle the various characteristics of a categorical data set. The table shows that the measures Eskin and Anderberg assign weight to every attribute using the quantity n_k, though in opposite ways. Another interesting observation, from column 3, is that several measures (Lin, Lin1, Goodall1, Goodall3, Smirnov, Anderberg) assign higher similarity to a match when the attribute value is rare (f_k is low), while Goodall2 and Goodall4 assign higher similarity to a match when the attribute value is frequent (f_k is high). Only Gambaryan assigns the maximum similarity when the attribute value has a frequency close to N/2. Column 4 shows that IOF, Lin, Lin1, Smirnov and Burnaby assign greater similarity when the mismatch occurs between rare values, while OF and Anderberg assign greater similarity to mismatches between frequent values.

5 Outlier Detection in Categorical Data

Outlier detection refers to detecting instances that do not conform to a specific definition of normal behavior. For nearest neighbor techniques, a normal instance is one that has a very tight neighborhood. In the categorical domain, this corresponds to the frequency of occurrence of a combination of attribute values.
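The idea that normal points are frequent value combinations can be sketched directly (ours, for illustration only); a threshold on combination counts separates candidate outliers from normal points:

```python
from collections import Counter

def rare_combinations(D, threshold):
    """Count full attribute-value combinations in the data set and
    return those occurring fewer than `threshold` times."""
    counts = Counter(tuple(record) for record in D)
    return {combo: c for combo, c in counts.items() if c < threshold}
```

Applied to the Table 1 example, (red, square) with count 30 would be normal, while (green, triangle) with count 2 would be flagged as rare.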

Measure     n_k     X_k = Y_k                     X_k ≠ Y_k
Overlap     -       1                             0
Eskin       n_k²    1                             0
IOF         -       1                             1/(log f_k(X_k) · log f_k(Y_k))
OF          -       1                             log f_k(X_k) · log f_k(Y_k)
Lin         -       1/log f_k(X_k)                1/log(f_k(X_k) + f_k(Y_k))
Lin1        -       1/log f_k(X_k)                1/log(f_k(X_k) · f_k(Y_k))
Goodall1    -       1 − f_k²(X_k)                 0
Goodall2    -       f_k²(X_k)                     0
Goodall3    -       1 − f_k²(X_k)                 0
Goodall4    -       f_k²(X_k)                     0
Smirnov     -       1/f_k(X_k)                    1/(f_k(X_k) + f_k(Y_k))
Gambaryan   -       maximum at f_k(X_k) = N/2     0
Burnaby     -       1                             1/log f_k(X_k), 1/log f_k(Y_k)
Anderberg   1/n_k   1/f_k²(X_k)                   1/(f_k(X_k) · f_k(Y_k))

Table 3: Relation between the per-attribute similarity S_k(X_k, Y_k) and the quantities n_k, f_k(X_k), f_k(Y_k)

Normal points are frequent combinations of categorical values, while outliers are rarely occurring combinations. We will first provide an understanding of normal and outlier instances in categorical data from this perspective. Consider the example shown earlier in Table 1, and assume that a count of 20 or more is considered frequent while a lower count is considered rare. Now let us consider the following 4 instances belonging to D:

1. (red, square): The combination occurs 30 times (frequent).

2. (green, circle): The combination occurs 1 time (rare); the value green for color occurs 5 times (rare) and the value circle for shape occurs 28 times (frequent).

3. (red, circle): The combination occurs 2 times (rare); the value red for color occurs 35 times (frequent) and the value circle for shape occurs 28 times (frequent).

4. (green, triangle): The combination occurs 2 times (rare); the value green for color occurs 5 times (rare) and the value triangle for shape occurs 5 times (rare).

Instance 1 seems to be an obvious normal instance, while instance 4 seems to be an obvious outlier. Instances 2 and 3 occur rarely, but one or both of their individual attribute values occur frequently. These might be considered outliers or normal depending on the data domain. Thus we observe that normal and outlier instances in a categorical data set might differ in their composition.

5.1 Outlier Detection Using Nearest Neighbors

Nearest neighbor based techniques for outlier detection assume that outliers are far away from normal points under a given similarity measure. The general methodology of such techniques is to estimate the density around each point. The density is measured either by counting the number of points within a certain radius of the point, or by estimating the sparsity of the neighborhood of a point.

kNN Outlier Detection. The nearest neighbor technique used in this paper [26] has a single parameter k. The outlier score of a point is equal to the distance of the point to its k-th nearest neighbor.

lof Outlier Detection. This technique [5] uses the notion of the k-distance of a given point p, defined as the distance of p to its k-th nearest neighbor. The k-distance neighborhood of a point p contains all points that are at distance less than or equal to its k-distance. Note that the size of the k-distance neighborhood of p need not be exactly k. The k-distance neighborhood of p is denoted by N_k(p). The reachability distance of a point p with respect to another point o is defined as

(5.2)   r_k(p, o) = max{ k-distance(o), d(p, o) }

where d(p, o) is the actual distance between p and o. For points that are far apart, the reachability distance and the actual distance are the same. For points that are close to p, the reachability distance is given by the k-distance of the other point.
The local reachability density (lrd) of a point p is defined as

(5.3)   lrd_k(p) = |N_k(p)| / Σ_{o ∈ N_k(p)} r_k(p, o)

i.e., the inverse of the average reachability distance from p to the points in its k-distance neighborhood. If there are duplicates in the data, such that the k-neighborhood of a point consists only of duplicates, the lrd computation will run into the problem of division by 0.
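A sketch (ours) of Equations (5.2) and (5.3); dist is any distance function, and the duplicate-handling problem discussed next is ignored here:

```python
def k_distance(p, points, dist, k):
    """Distance from p to its k-th nearest neighbor among `points`."""
    return sorted(dist(p, o) for o in points if o is not p)[k - 1]

def reachability_distance(p, o, points, dist, k):
    """Equation (5.2): r_k(p, o) = max(k-distance(o), d(p, o))."""
    return max(k_distance(o, points, dist, k), dist(p, o))

def lrd(p, neighborhood, points, dist, k):
    """Equation (5.3): inverse of the average reachability distance
    from p to the points in its k-distance neighborhood."""
    total = sum(reachability_distance(p, o, points, dist, k)
                for o in neighborhood)
    return len(neighborhood) / total  # fails if total == 0 (duplicates)
```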

For continuous data sets such a scenario is highly unlikely, but it might occur in categorical data sets. In such cases there are two possible solutions: (1) assign a small distance (ε) between two identical points, or (2) require the k-neighborhood of any point to consist of k distinct points. The local outlier factor, or outlier score (lof), of a point p is defined as

(5.4)   lof_k(p) = ( Σ_{o ∈ N_k(p)} lrd_k(o) / lrd_k(p) ) / |N_k(p)|

6 Experimental Evaluation

In this section we present an experimental evaluation of the 14 measures given in Table 2 on 23 different data sets in the context of outlier detection. Of these data sets, 21 are based on data sets available at the UCI Machine Learning Repository [3], and two are based on network data generated by SKAION Corp. for the ARDA information assurance program [7]. The details of the 23 data sets are summarized in Table 4. Eleven of these data sets were purely categorical, five (KD1, KD2, Sk1, Sk2, Cen) had a mix of continuous and categorical attributes, and two data sets, Irs and Sgm, were purely continuous. Continuous variables were discretized using the MDL method [10]. The KD1 and KD2 data sets were obtained from the KDDCup data set by discretizing the continuous attributes into 10 and 100 bins, respectively. Another possible way to handle a mixture of attributes is to compute the similarity for continuous and categorical attributes separately, and then do a weighted aggregation. In this study we converted the continuous attributes to categorical to simplify the comparative evaluation.

Each data set contains labeled instances belonging to multiple classes. We identified one class as the outlier class, and the rest of the classes were grouped together and called normal. The last two rows in Table 4 give the cross-validation classification recall and precision reported by the C4.5 classifier on the outlier class. This quantity indicates the separability between instances belonging to the normal class(es) and instances belonging to the outlier class using the given set of attributes. A low accuracy implies that distinguishing between outliers and normal instances is difficult in that particular data set using a decision tree based classifier.

6.1 Evaluation Methodology

The performance of the different similarity measures was evaluated in the context of outlier detection using nearest neighbors [26, 30]. We construct a test data set by taking an equal number of instances as random samples from the outlier class (n) and the normal class(es). In addition, a random sample (comparable in size to the outlier class) is taken from the normal class to serve as the training set. For each test instance we find its nearest neighbors in the training set using the given similarity measure (we chose the parameter k = 10). The outlier score is computed for the kNN algorithm and the lof algorithm as discussed earlier. The test instances are then sorted in decreasing order of outlier score. To evaluate a measure, we count the number of true outliers in the top p portion of the sorted test instances, where p = δn, 0 ≤ δ ≤ 1. Let o be the number of actual outliers in the top p predicted outliers. The accuracy of the algorithm is measured as o/p. In this paper we present results for δ = 1. We have also experimented with other, lower values of δ, and the trends in relative performance are similar.
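The kNN outlier score and the accuracy computation just described are straightforward; a sketch under our notation (ours, not the authors' code; a distance is obtained from a similarity via dist = 1/sim − 1, inverting sim = 1/(1 + dist)):

```python
def knn_outlier_score(x, train, dist, k=10):
    """Outlier score of x: distance to its k-th nearest training point."""
    return sorted(dist(x, t) for t in train)[k - 1]

def top_delta_accuracy(scores, labels, delta=1.0):
    """Sort test instances by decreasing outlier score; with n actual
    outliers, return o/p where p = delta*n and o is the number of
    actual outliers among the top p."""
    n = sum(labels)                 # labels: 1 for outlier, 0 for normal
    p = max(1, int(delta * n))
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    o = sum(label for _, label in ranked[:p])
    return o / p
```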
6.2 Experimental Results on Public Data Sets

Our experimental results verified our initial hypotheses about categorical similarity measures. As can be seen from Table 5, there are many situations where the overlap measure does not give good performance. This is consistent with our intuition that the use of additional information would lead to better performance. In particular, we expected that, since categorical data does not have an inherent ordering, data-driven measures would be able to take advantage of information present in the data set to make more accurate determinations of similarity between a pair of data instances. We make some key observations about the results in Table 5:

1. No single measure is always superior or inferior. This is to be expected, since each data set has different characteristics.

2. Some measures give consistently better performance on a large variety of data. The Lin, OF and Goodall3 measures are among the best overall in terms of outlier detection performance. This is noteworthy since Lin and Goodall3 have been introduced for the first time in this paper.

3. There are some pairs of measures that exhibit complementary performance, i.e., one performs well where the other performs poorly, and vice versa. Example complementary pairs are (OF, IOF), (Lin, Lin1) and (Goodall3, Goodall4). This observation means that it may be possible to construct measures that draw on the strengths of two measures in order to obtain superior performance.

            Cr1  Cr2  Irs  Cn1  Cn2  KD1  KD2  KD3  KD4  Sk1  Sk2  Ms1  Ms2  Sgm  Cen  Bal  Can  Hys  Lym  Nur  Tmr  TTT  Au
Size        663 659 50 455 455 200 200 200 200 3480 2606 4308 406 20 504 625 277 32 48 6480 336 727 90
% Outls.    4 4 33 2 2 5 5 5 5 4 4 3 2 4 6 8 29 23 4 3 33 4 25
d           6 6 4 42 42 29 29 29 29 0 0 2 2 8 0 4 9 4 8 8 5 9 7
avg(n_k)    3.23 3.28 6.80 2.9 2.9 4.80 4.80 37 05 337 286 4.8 4.09 6.37 7.09 4.40 3.90 2.80 3.0 3.33 2.3 2.80 2.65
med(n_k)    3 3 8 3 3 4 4 26 42 9 9 3 4 6 7 5 3 3 3 3 2 3 2
f_k Uni.    6 6 0 3 2 0 0 0 0 2 2 2 0 4 2 8 0 0
f_k Gauss.  0 0 4 7 8 7 8 8 7 3 3 5 7 8 7 0 5 3 0 2 9 5
f_k Skw.    0 0 0 32 32 22 2 2 22 5 5 4 3 9 3 0 3 0 5 0 2 0 5
Recall      0.75 0.9 0.90 0.9 0.00 0.88 0.89 0.89 0.88 0.00 0.83 1.00 1.00 1.00 0.96 0.00 0.24 0.63 0.74 0.62 0. 0.63 0.23
Precision   0.82 0.85 0.98 0.9 0.00 0.99 0.95 0.89 0.90 0.00 0.83 1.00 1.00 1.00 0.94 0.00 0.76 1.00 0.73 0.66 0.0 0.76 0.26

Table 4: Description of Public Data Sets

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  0.9 0.9 0.78 0.03 0.04 0.77 0.94 0.77 0.77 0.4 0.2 1.00 0.89 0.93 0.07 1.00 0.30 0.60 0.59 0.00 0.29 1.00 0.40 0.59
eskn   0.00 0.00 0.66 0.00 0.00 0.5 0.89 0.00 0.00 0.00 0.06 1.00 0.84 0.07 0.00 1.00 0.47 0.60 0.56 0.46 0.29 1.00 0.38 0.38
iof    1.00 0.92 0.20 0.40 0.0 0.44 0.78 0.0 0.54 0.0 0.0 1.00 0.87 0.00 0.5 0.57 0.52 0.60 0.56 0.49 0.36 1.00 0.36 0.49
of     0.93 0.92 0.88 0.48 0.04 0.74 0.86 0.90 0.83 0.57 0.36 1.00 0.80 1.00 0.27 1.00 0.36 1.00 0.69 0.47 0.22 0.28 0.62 0.66
lin    0.9 0.9 0.86 0.6 0.0 0.78 0.94 0.85 0.87 0.66 0.2 1.00 0.87 0.93 0.2 1.00 0.49 1.00 0.70 0.00 0.40 1.00 0.55 0.67
lin1   0.3 0.49 0.94 0. 0.08 0.88 0.96 0.66 0.00 0.75 0.40 1.00 0.89 1.00 0.25 0.8 0.49 0.60 0.69 0.3 0.37 0.25 0.49 0.52
goo1   0.70 0.68 0.80 0. 0.06 0.77 0.76 0.0 0.00 0.59 0.40 1.00 0.82 1.00 0.36 0.45 0.30 0.87 0.72 0.00 0.47 0.75 0.40 0.52
goo2   0.9 0.9 0.78 0.44 0.06 0.6 0.82 0.0 0.0 0.64 0.64 1.00 0.92 1.00 0.29 0.27 0.49 0.60 0.64 0.32 0.26 0.90 0.55 0.57
goo3   0.9 0.9 0.80 0.6 0.08 0.75 0.76 0.0 0.00 0.59 0.4 1.00 0.88 1.00 0.37 1.00 0.46 0.87 0.67 0.04 0.50 1.00 0.43 0.59
goo4   0.70 0.98 0.56 0.02 0.00 0.52 0.9 0.79 0.79 0.05 0.08 0.63 0.74 0.07 0.8 0.27 0.52 0.60 0.56 0.52 0.29 0.60 0.49 0.47
smrnv  0.9 0.9 0.78 0.02 0.02 0.00 0.00 0.00 0.00 0.32 0.04 0.00 0.00 0.90 0.5 0.2 0.23 0.70 0.36 0.08 0.33 0.55 0.04 0.28
gmbrn  0.9 0.9 0.76 0.47 0.08 0.43 0.80 0.0 0.0 0.4 0.35 1.00 0.83 0.97 0.25 0.55 0.56 0.60 0.70 0.46 0.35 1.00 0.60 0.55
brnby  0.9 0.9 0.78 0.03 0.04 0.66 0.96 0.90 0.90 0.55 0.7 1.00 0.89 0.93 0.3 1.00 0.5 0.90 0.59 0.46 0.29 1.00 0.45 0.65
anbrg  0.93 0.92 0.90 0.07 0.02 0.52 0.52 0.46 0.44 0.46 0.07 1.00 0.88 0.93 0.26 1.00 0.33 1.00 0.64 0.05 0.30 0.70 0.26 0.55
Avg    0.77 0.8 0.75 0.8 0.05 0.60 0.78 0.39 0.37 0.42 0.24 0.90 0.79 0.77 0.2 0.67 0.43 0.75 0.62 0.26 0.34 0.79 0.43

Table 5: Experimental Results for the kNN Algorithm for 100%

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  1.00 0.97 0.72 0.33 0.08 0.3 0.7 0.00 0.00 0.56 0.4 1.00 1.00 0.87 0.33 1.00 0.5 0.87 0.77 0.03 0.42 1.00 0.70 0.57
eskn   1.00 1.00 0.72 0.33 0.06 0.32 0.00 0.0 0.00 0.47 0.4 1.00 0.9 0.80 0.35 1.00 0.46 0.90 0.75 0.53 0.34 1.00 0.49 0.55
iof    1.00 1.00 0.2 0.70 0.30 0.04 0.6 0.00 0.00 0.24 0.34 1.00 1.00 0.03 0.36 1.00 0.63 0.53 0.85 0.53 0.34 1.00 0.68 0.53
of     1.00 1.00 0.94 0.72 0.30 0.4 0.82 0.00 0.00 0.43 0.36 1.00 0.9 1.00 0.3 1.00 0.43 1.00 0.79 0.55 0.34 0.80 0.5 0.62
lin    1.00 1.00 0.84 0.49 0.22 0.52 0.37 0.00 0.00 0.6 0.64 1.00 1.00 0.93 0.37 1.00 0.56 1.00 0.85 0.5 0.44 1.00 0.79 0.66
lin1   1.00 0.98 0.62 0.33 0.4 0.60 0.50 0.00 0.05 0.37 0.06 1.00 0.97 1.00 0.23 0.8 0.60 0.3 0.84 0.45 0.44 0.97 0.74 0.53
goo1   0.67 0.85 0.80 0.54 0.6 0.77 0.80 0.00 0.0 0.73 0.2 1.00 1.00 1.00 0.48 0.6 0.47 0.90 0.87 0.28 0.47 0.77 0.70 0.6
goo2   1.00 0.97 0.76 0.69 0.24 0.75 0.85 0.04 0.09 0.77 0.2 1.00 0.97 0.97 0.43 0.55 0.57 0.90 0.80 0.60 0.26 0.99 0.72 0.66
goo3   1.00 0.97 0.80 0.55 0.26 0.78 0.80 0.00 0.0 0.75 0.2 1.00 1.00 1.00 0.50 1.00 0.53 0.90 0.87 0.52 0.47 0.96 0.74 0.68
goo4   0.97 0.94 0.80 0.0 0.02 0.70 0.9 0.90 0.90 0.42 0.6 0.97 0.86 0.0 0.3 1.00 0.54 0.80 0.70 0.58 0.34 0.88 0.55 0.62
smrnv  0.58 0.75 0.62 0.00 0.04 0.07 0.00 0.00 0.00 0.67 0.90 0.84 0.87 0.87 0.25 1.00 0.4 0.93 0.5 0.43 0.36 0.62 0.34 0.48
gmbrn  1.00 1.00 0.78 0.69 0.6 0.79 0.83 0.0 0.0 0.6 0.57 1.00 0.95 0.97 0.29 1.00 0.58 0.90 0.82 0.50 0.35 1.00 0.77 0.68
brnby  1.00 1.00 0.76 0.57 0.24 0.52 0.5 0.00 0.00 0.5 0.7 1.00 1.00 0.97 0.33 1.00 0.54 0.93 0.84 0.49 0.34 1.00 0.66 0.6
anbrg  1.00 1.00 0.86 0.4 0.00 0.35 0.4 0.00 0.00 0.42 0.07 1.00 0.86 0.90 0.34 1.00 0.49 1.00 0.74 0.53 0.34 0.97 0.5 0.56
Avg    0.94 0.96 0.72 0.44 0.6 0.46 0.55 0.07 0.08 0.54 0.30 0.99 0.95 0.8 0.34 0.88 0.52 0.84 0.79 0.47 0.37 0.93 0.64

Table 6: Experimental Results for the LOF Algorithm for 100%

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  1.00 1.00 0.56 0.06 0.08 0.80 1.00 0.94 0.92 0.78 0.24 1.00 0.98 0.87 0.3 1.00 0.59 0.20 0.55 0.00 0.24 1.00 0.62 0.63
eskn   0.00 0.00 0.92 0.00 0.00 0.58 0.86 0.00 0.00 0.00 0.07 1.00 1.00 0.00 0.00 1.00 0.63 0.20 0.55 0.88 0.8 1.00 0.33 0.40
iof    1.00 1.00 0.04 0.50 0.6 0.32 0.92 0.02 0.26 0.20 0.02 1.00 1.00 0.00 0.6 1.00 0.7 0.20 0.52 0.73 0.24 1.00 0.46 0.50
of     1.00 1.00 1.00 0.30 0.00 0.80 0.88 1.00 0.66 0.64 0.28 1.00 0.96 1.00 0.33 1.00 0.29 1.00 0.7 0.49 0.29 0.30 0.83 0.69
lin    1.00 1.00 1.00 0.2 0.08 0.78 1.00 0.84 0.82 0.64 0.37 1.00 0.94 0.87 0.9 1.00 0.56 1.00 0.74 0.00 0.49 1.00 0.54 0.70
lin1   0.26 0.97 1.00 0.4 0.08 0.94 1.00 0.72 0.00 0.98 0.59 1.00 1.00 1.00 0.30 0.36 0.46 0.80 0.84 0.44 0.5 0.44 0.58 0.63
goo1   0.74 0.76 1.00 0.20 0.08 0.92 0.92 0.02 0.00 0.86 0.48 1.00 1.00 1.00 0.20 0.60 0.32 1.00 0.7 0.00 0.56 0.82 0.54 0.60
goo2   1.00 1.00 0.64 0.58 0.08 0.68 0.74 0.02 0.00 0.49 0.48 1.00 1.00 1.00 0.23 0.52 0.5 0.20 0.7 0.40 0.24 1.00 0.75 0.58
goo3   1.00 1.00 1.00 0.24 0.6 0.92 0.94 0.02 0.00 0.86 0.43 1.00 1.00 1.00 0.8 1.00 0.39 1.00 0.8 0.09 0.53 1.00 0.54 0.66
goo4   0.77 1.00 0.88 0.02 0.00 0.60 1.00 0.96 0.96 0.07 0.5 0.76 0.92 0.00 0.6 0.52 0.63 0.60 0.58 0.83 0.29 0.86 0.42 0.56
smrnv  1.00 1.00 0.84 0.02 0.00 0.00 0.00 0.00 0.00 0.5 0.07 0.00 0.00 1.00 0.4 0.24 0.27 1.00 0.39 0. 0.25 0.66 0.00 0.33
gmbrn  1.00 1.00 0.88 0.60 0.2 0.58 0.74 0.00 0.00 0.27 0.26 1.00 0.94 0.93 0.23 0.52 0.66 0.60 0.77 0.87 0.27 1.00 0.7 0.6
brnby  1.00 1.00 1.00 0.06 0.00 0.74 1.00 0.94 0.94 0.46 0.26 1.00 0.98 0.93 0.2 1.00 0.66 1.00 0.68 0.00 0.8 1.00 0.58 0.68
anbrg  1.00 1.00 1.00 0.08 0.00 0.40 0.60 0.42 0.40 0.34 0.07 1.00 1.00 1.00 0.28 1.00 0.27 1.00 0.8 0. 0.33 0.92 0.38 0.58
Avg    0.84 0.9 0.84 0.2 0.06 0.65 0.83 0.42 0.35 0.5 0.27 0.9 0.9 0.76 0.9 0.77 0.50 0.70 0.67 0.35 0.33 0.86 0.52

Table 7: Experimental Results for the kNN Algorithm for 50%

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  1.00 1.00 0.5 0.2 0.00 0.80 1.00 0.88 0.84 1.00 0.4 1.00 1.00 0.75 0.3 1.00 0.35 0.00 0.53 0.00 0.8 1.00 0.25 0.58
eskn   0.00 0.00 0.85 0.00 0.00 0.64 1.00 0.00 0.00 0.00 0. 1.00 1.00 0.00 0.0 1.00 0.70 0.00 0.40 0.77 0.8 1.00 0.50 0.40
iof    1.00 1.00 0.00 0.44 0.23 0.28 0.96 0.00 0.20 0.3 0.04 1.00 1.00 0.00 0.2 1.00 0.85 0.00 0.40 0.74 0.2 1.00 0.33 0.48
of     1.00 1.00 1.00 0.60 0.00 0.76 0.76 1.00 0.32 0.50 0.26 1.00 0.96 1.00 0.40 1.00 0.25 1.00 0.67 0.45 0.4 0.60 0.92 0.68
lin    1.00 1.00 1.00 0.08 0.5 0.88 1.00 0.84 0.68 0.40 0.37 1.00 1.00 0.75 0.5 1.00 0.70 1.00 0.73 0.00 0.6 1.00 0.50 0.69
lin1   0.53 1.00 1.00 0.20 0.5 0.96 1.00 0.76 0.00 0.97 0.37 1.00 1.00 1.00 0.3 0.33 0.60 0.75 0.93 0.64 0.46 0.36 0.92 0.66
goo1   0.65 0.62 1.00 0.28 0.08 0.84 0.84 0.00 0.00 1.00 0.70 1.00 1.00 1.00 0.5 0.50 0.30 1.00 0.87 0.00 0.54 0.92 0.58 0.60
goo2   1.00 1.00 0.77 0.84 0.5 0.72 0.76 0.04 0.00 0.60 0.48 1.00 1.00 1.00 0.27 1.00 0.50 0.00 0.67 0.45 0.8 1.00 0.75 0.62
goo3   1.00 1.00 1.00 0.36 0.23 0.84 0.92 0.00 0.00 1.00 0.63 1.00 1.00 1.00 0.9 1.00 0.30 1.00 0.87 0.7 0.50 1.00 0.75 0.68
goo4   0.76 1.00 0.85 0.00 0.00 0.84 1.00 0.92 0.92 0.3 0.07 1.00 1.00 0.00 0.9 1.00 0.70 0.25 0.53 0.85 0.2 1.00 0.58 0.60
smrnv  1.00 1.00 1.00 0.04 0.00 0.00 0.00 0.00 0.00 0.73 0.5 0.00 0.00 1.00 0.05 0.50 0.5 1.00 0.47 0.3 0.29 0.64 0.00 0.35
gmbrn  1.00 1.00 0.92 0.72 0.5 0.40 0.80 0.00 0.00 0.53 0.5 1.00 1.00 0.88 0.22 1.00 0.80 0.25 0.67 0.74 0.4 1.00 0.83 0.62
brnby  1.00 1.00 1.00 0.00 0.00 0.76 1.00 0.96 0.88 0.53 0.22 1.00 1.00 0.88 0.08 1.00 0.85 1.00 0.60 0.00 0.8 1.00 0.58 0.68
anbrg  1.00 1.00 1.00 0.6 0.00 0.36 0.60 0.00 0.48 0.43 0.07 1.00 1.00 1.00 0.27 1.00 0.0 1.00 0.80 0.2 0.29 1.00 0.50 0.58
Avg    0.85 0.90 0.82 0.27 0.08 0.65 0.83 0.39 0.3 0.57 0.29 0.93 0.93 0.73 0.9 0.88 0.5 0.59 0.65 0.37 0.29 0.89 0.57

Table 9: Experimental Results for the kNN Algorithm for 25%

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  1.00 1.00 0.80 0.36 0.08 0.26 1.00 0.00 0.00 0.69 0.28 1.00 1.00 0.93 0.36 1.00 0.66 1.00 0.87 0.0 0.53 1.00 0.79 0.64
eskn   1.00 1.00 0.88 0.30 0.08 0.24 0.00 0.00 0.00 0.56 0.06 1.00 0.90 0.80 0.45 1.00 0.44 1.00 0.90 0.80 0.24 1.00 0.33 0.56
iof    1.00 1.00 0.2 0.82 0.40 0.08 1.00 0.00 0.00 0.03 0.20 1.00 1.00 0.00 0.36 1.00 0.80 0.20 0.97 0.62 0.36 1.00 0.67 0.55
of     1.00 1.00 1.00 0.90 0.40 0.28 1.00 0.00 0.00 0.76 0.24 1.00 1.00 1.00 0.27 1.00 0.49 1.00 0.87 0.7 0.33 0.96 0.54 0.68
lin    1.00 1.00 1.00 0.56 0.6 0.66 0.34 0.00 0.00 0.68 0.44 1.00 1.00 0.87 0.42 1.00 0.59 1.00 0.94 0.65 0.42 1.00 0.88 0.68
lin1   1.00 1.00 0.96 0.50 0.2 0.54 0.50 0.00 0.00 0.27 0.09 1.00 1.00 1.00 0.26 0.36 0.68 0.27 0.87 0.58 0.45 0.96 0.75 0.57
goo1   1.00 1.00 1.00 0.80 0.24 0.76 0.96 0.00 0.00 0.97 0.43 1.00 1.00 1.00 0.55 0.60 0.5 1.00 0.97 0.35 0.47 1.00 0.92 0.72
goo2   1.00 1.00 0.84 0.86 0.20 0.80 0.96 0.06 0.02 0.97 0.4 1.00 1.00 1.00 0.48 1.00 0.6 1.00 0.87 0.75 0.29 1.00 0.88 0.74
goo3   1.00 1.00 1.00 0.80 0.28 0.78 0.98 0.00 0.00 0.97 0.43 1.00 1.00 1.00 0.58 1.00 0.63 1.00 0.97 0.69 0.49 1.00 0.88 0.76
goo4   1.00 1.00 0.72 0.0 0.04 0.76 1.00 1.00 1.00 0.44 0.5 1.00 1.00 0.00 0.3 1.00 0.63 0.93 0.7 0.89 0.3 1.00 0.58 0.67
smrnv  1.00 1.00 1.00 0.00 0.08 0.00 0.00 0.00 0.00 0.93 1.00 1.00 0.94 1.00 0.24 1.00 0.49 1.00 0.6 0.86 0.5 0.82 0.42 0.60
gmbrn  1.00 1.00 0.84 0.88 0.32 0.76 0.94 0.02 0.02 0.90 0.30 1.00 1.00 1.00 0.30 1.00 0.73 1.00 0.84 0.68 0.3 1.00 0.92 0.73
brnby  1.00 1.00 0.80 0.74 0.36 0.64 0.2 0.00 0.00 0.75 0.07 1.00 1.00 1.00 0.30 1.00 0.66 1.00 0.94 0.66 0.38 1.00 0.75 0.66
anbrg  1.00 1.00 0.96 0.06 0.00 0.42 0.56 0.00 0.00 0.68 0.3 1.00 1.00 0.93 0.3 1.00 0.37 1.00 0.77 0.65 0.27 1.00 0.54 0.59
Avg    1.00 1.00 0.85 0.55 0.20 0.50 0.67 0.08 0.07 0.69 0.30 1.00 0.99 0.82 0.36 0.93 0.59 0.89 0.86 0.64 0.38 0.98 0.70

Table 8: Experimental Results for the LOF Algorithm for 50%

Msr.   Cr1 Cr2 Irs Cn1 Cn2 KD1 KD2 KD3 KD4 Sk1 Sk2 Ms1 Ms2 Sgm Cen Bal Can Hys Lym Nur Tmr TTT Au Avg
ovrlp  1.00 1.00 1.00 0.36 0.5 0.52 1.00 0.00 0.00 0.47 0.44 1.00 1.00 1.00 0.29 1.00 0.80 1.00 1.00 0.02 0.6 1.00 0.92 0.68
eskn   1.00 1.00 1.00 0.24 0.5 0.44 0.00 0.00 0.00 0.33 0. 1.00 0.96 0.75 0.62 1.00 0.30 1.00 1.00 0.9 0.29 1.00 0.33 0.58
iof    1.00 1.00 0.08 0.96 0.54 0.6 1.00 0.00 0.00 0.07 0.04 1.00 1.00 0.00 0.27 1.00 1.00 0.00 1.00 0.66 0.46 1.00 0.50 0.55
of     1.00 1.00 1.00 0.96 0.54 0.28 1.00 0.00 0.00 1.00 0.48 1.00 1.00 1.00 0.23 1.00 0.35 1.00 0.93 0.79 0.46 1.00 0.58 0.72
lin    1.00 1.00 1.00 0.52 0.23 0.60 0.28 0.00 0.00 0.87 0.56 1.00 1.00 0.88 0.37 1.00 0.45 1.00 1.00 0.79 0.32 1.00 1.00 0.69
lin1   1.00 1.00 1.00 0.60 0.00 0.72 0.52 0.00 0.00 0.33 0.9 1.00 1.00 1.00 0.25 0.67 0.70 0.38 0.93 0.68 0.32 1.00 0.83 0.6
goo1   1.00 1.00 1.00 0.92 0.23 0.68 1.00 0.00 0.00 1.00 0.8 1.00 1.00 1.00 0.69 0.50 0.55 1.00 1.00 0.38 0.6 1.00 1.00 0.76
goo2   1.00 1.00 1.00 0.96 0.23 0.72 0.92 0.04 0.00 1.00 0.78 1.00 1.00 1.00 0.48 1.00 0.65 1.00 1.00 0.85 0.29 1.00 1.00 0.78
goo3   1.00 1.00 1.00 0.96 0.3 0.68 1.00 0.00 0.00 1.00 0.8 1.00 1.00 1.00 0.67 1.00 0.45 1.00 1.00 0.74 0.6 1.00 1.00 0.79
goo4   1.00 1.00 0.69 0.04 0.08 0.92 1.00 1.00 1.00 0.40 0.07 1.00 1.00 0.00 0.5 1.00 0.75 1.00 0.67 1.00 0.29 1.00 0.58 0.68
smrnv  1.00 1.00 1.00 0.00 0.5 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 0.9 1.00 0.45 1.00 0.67 0.89 0.75 0.64 0.58 0.62
gmbrn  1.00 1.00 0.77 0.88 0.38 0.52 0.92 0.04 0.04 0.87 0.52 1.00 1.00 1.00 0.23 1.00 0.90 1.00 1.00 0.66 0.32 1.00 1.00 0.74
brnby  1.00 1.00 1.00 0.88 0.38 1.00 0.08 0.00 0.00 0.87 0.07 1.00 1.00 1.00 0.33 1.00 0.75 1.00 1.00 0.70 0.29 1.00 0.67 0.70
anbrg  1.00 1.00 1.00 0.08 0.00 0.36 0.76 0.00 0.00 0.67 0.26 1.00 1.00 1.00 0.27 1.00 0.35 1.00 0.87 0.66 0.25 1.00 0.67 0.62
Avg    1.00 1.00 0.90 0.60 0.24 0.54 0.68 0.08 0.07 0.70 0.44 1.00 1.00 0.83 0.36 0.94 0.60 0.88 0.93 0.70 0.42 0.97 0.76

Table 10: Experimental Results for the LOF Algorithm for 25%

This is an aspect of this work that needs to be pursued in future work.

4. The performance of an outlier detection algorithm is significantly affected by the similarity measure used. For example, for the Cn1 data set, which has a very low classification accuracy for the outlier class, using OF still achieves close to 50% accuracy.

5. The Eskin similarity measure weights attributes proportionally to the number of values taken by the attribute (n_k). For data sets in which the attributes take a large number of values (e.g., KD2, Sk1, Sk2), Eskin performs very poorly.

6. The Smirnov measure assigns similarity to both diagonal and off-diagonal entries in the per-attribute similarity matrix (Figure 1), but it still performs very poorly on most of the data sets. The other measures that operate similarly (Lin, Lin1 and Anderberg) perform better than Smirnov on almost every data set.

7. The performance of kNN does not vary significantly with δ, as seen from Tables 5, 7, and 9.

8. Using lof as the outlier detection algorithm (refer to Tables 6, 8, and 10) improves the overall performance for almost every similarity measure. The drop in performance for 4 measures for δ = 1.00 is marginal. This indicates that lof is a better outlier detection algorithm than kNN for categorical data sets. The relation between the algorithm and the similarity measure is also of significance and will be a part of our future research.

7 Concluding Remarks and Future Work

Computing similarity between categorical attributes has been discussed in a variety of contexts. In this paper we have brought together several such measures and evaluated them in the context of outlier detection. We have also proposed several variants (Lin1, Goodall2, Goodall3, Goodall4) of existing similarity measures, some of which perform very well, as shown in our evaluation.

Given this set of similarity measures, the first question that comes to mind is: which similarity measure is best suited for my data mining task? Our experimental results suggest that there is no single best-performing similarity measure. Hence, one needs to understand how a similarity measure handles the different characteristics of a categorical data set, and this needs to be explored in future research.