Data mining, 4 cu Lecture 5:

Similar documents
Lecture Notes for Chapter 6. Introduction to Data Mining

Association Analysis Part 2. FP Growth (Pei et al 2000)

CS 584 Data Mining. Association Rule Mining 2

DATA MINING - 1DL360

DATA MINING - 1DL360

Introduction to Data Mining, 2nd Edition by Tan, Steinbach, Karpatne, Kumar

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

COMP 5331: Knowledge Discovery and Data Mining

Association Rule Mining

Descriptive data analysis. Example. Example. Example. Example. Data Mining MTAT (6EAP)

Descriptive data analysis. Example. E.g. set of items. Example. Example. Data Mining MTAT (6EAP)

Data Mining Concepts & Techniques

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example

Mining Infrequent Patterns

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

Chapter 4: Frequent Itemsets and Association Rules

Selecting a Right Interestingness Measure for Rare Association Rules

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example item id s

Meelis Kull - Autumn - MTAT Data Mining - Lecture 05

D B M G Data Base and Data Mining Group of Politecnico di Torino

Association Rules. Fundamentals

CS5112: Algorithms and Data Structures for Applications

D B M G. Association Rules. Fundamentals. Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino 1. Definitions.

D B M G. Association Rules. Fundamentals. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

COMP 5331: Knowledge Discovery and Data Mining

Frequent Itemset Mining

CS 484 Data Mining. Association Rule Mining 2

CSE 5243 INTRO. TO DATA MINING

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

Data Mining: Concepts and Techniques. (3rd ed.) Chapter 6

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

Data mining, 4 cu Lecture 7:

Association Rule Mining on Web

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

732A61/TDDD41 Data Mining - Clustering and Association Analysis

Frequent Itemsets and Association Rule Mining. Vinay Setty Slides credit:

.. Cal Poly CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

NetBox: A Probabilistic Method for Analyzing Market Basket Data

Handling a Concept Hierarchy

Association Analysis. Part 2

Selecting the Right Interestingness Measure for Association Patterns

Introduction to Data Mining

Chapters 6 & 7, Frequent Pattern Mining

Apriori algorithm. Seminar of Popular Algorithms in Data Mining and Machine Learning, TKK. Presentation Lauri Lahti

CSE 5243 INTRO. TO DATA MINING

Discrete Multivariate Statistics

Frequent Itemset Mining

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Guaranteeing the Accuracy of Association Rules by Statistical Significance

Association Analysis 1

Machine Learning: Pattern Mining

Exercise 1. min_sup = 0.3. Items support Type a 0.5 C b 0.7 C c 0.5 C d 0.9 C e 0.6 F

Algorithmic Methods of Data Mining, Fall 2005, Course overview 1. Course overview

Data Mining and Analysis: Fundamental Concepts and Algorithms

CS-C Data Science Chapter 8: Discrete methods for analyzing large binary datasets

Mining Positive and Negative Fuzzy Association Rules

Unit II Association Rules

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

Mining Rank Data. Sascha Henzgen and Eyke Hüllermeier. Department of Computer Science University of Paderborn, Germany

Pattern Structures 1

Sequential Pattern Mining

Effective and efficient correlation analysis with application to market basket analysis and network community detection

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

1 Frequent Pattern Mining

Association Analysis. Part 1

Data Mining and Analysis: Fundamental Concepts and Algorithms

DATA MINING - 1DL105, 1DL111

Mining Molecular Fragments: Finding Relevant Substructures of Molecules

Lecture 9. Selected material from: Ch. 12 The analysis of categorical data and goodness of fit tests

Encyclopedia of Machine Learning (2010), chapter: Frequent Pattern. Hannu Toivonen

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Lecture 8: Summary Measures

FUZZY ASSOCIATION RULES: A TWO-SIDED APPROACH

Frequent Pattern Mining. Toon Calders University of Antwerp

CS4445 B10 Homework 4 Part I Solution

Communities Via Laplacian Matrices. Degree, Adjacency, and Laplacian Matrices Eigenvectors of Laplacian Matrices

From statistics to data science. BAE 815 (Fall 2017) Dr. Zifei Liu

Lecture 28 Chi-Square Analysis

Frequent Itemset Mining

CS 412 Intro. to Data Mining

Part III: Unstructured Data

arxiv: v1 [cs.db] 31 Dec 2011

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

Approximate counting: count-min data structure. Problem definition

proximity similarity dissimilarity distance Proximity Measures:

CSE-4412(M) Midterm. There are five major questions, each worth 10 points, for a total of 50 points. Points for each sub-question are as indicated.

Mining Approximate Top-K Subspace Anomalies in Multi-Dimensional Time-Series Data

Lecture 2. Judging the Performance of Classifiers. Nitin R. Patel

Data Mining: Data. Lecture Notes for Chapter 2. Introduction to Data Mining

Two Correlated Proportions Non- Inferiority, Superiority, and Equivalence Tests

Frequent Itemset Mining

Association Rule Mining. Pekka Malo, Assist. Prof. (statistics) Aalto BIZ / Department of Information and Service Management

Discrete Distributions

Removing trivial associations in association rule discovery

Transcription:

582364 Data mining, 4 cu. Lecture 5: Evaluation of Association Patterns. Spring 2010. Lecturer: Juho Rousu. Teaching assistant: Taru Itäpelto.

Evaluation of Association Patterns. Association rule algorithms potentially generate large quantities of rules, easily thousands to millions depending on the database and the support and confidence levels used. All of the patterns cannot be examined manually, which makes it difficult to use the knowledge contained in the rules. Not all of the rules are interesting (e.g. Plastic bag -> Bread), and some rules may be redundant (e.g. if {A,B,C} -> {D} and {A,B} -> {D} have the same support and confidence).

Effect of Skewed Support Distribution. Many real data sets have a skewed support distribution: most items have low to moderate support, while a small number of items have very high support. For example:

Group     G1     G2      G3
Support   <1%    1-90%   >90%
#items    1735   358     20

Effect of Skewed Support Distribution. How should the minsup threshold be set? Too high a minsup threshold (e.g. 20%) misses interesting items with low support, e.g. customers buying expensive jewelry or other high-profit items. Too low a minsup threshold generates too many rules and easily produces spurious cross-support patterns that relate a low-frequency item to a high-frequency item, e.g. Caviar -> Bread (see the group table on the previous slide).

Multiple Minimum Support. One solution is to apply multiple minimum support levels. MS(i) denotes the minimum support for item i, e.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%. For an itemset, the threshold is the minimum over its items: MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%. Challenge: support is no longer anti-monotone. Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%; then {Milk, Coke} is infrequent but {Milk, Coke, Broccoli} is frequent. Apriori can be modified to accommodate this change (Liu, 1999): the pruning of candidate itemsets needs to be relaxed.
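A minimal Python sketch of the idea above; the MS values and the joint supports are the hypothetical numbers from the slide, not real data:

# Hypothetical item-specific minimum supports (from the slide example).
MS = {"Milk": 0.05, "Coke": 0.03, "Broccoli": 0.001, "Salmon": 0.005}

def itemset_minsup(itemset, MS):
    # The MS of an itemset is the smallest MS among its items.
    return min(MS[item] for item in itemset)

def is_frequent(itemset, support, MS):
    # An itemset is frequent if its observed support reaches its own MS threshold.
    return support >= itemset_minsup(itemset, MS)

print(itemset_minsup({"Milk", "Broccoli"}, MS))              # 0.001, i.e. 0.1%
print(is_frequent({"Milk", "Coke"}, 0.015, MS))              # False: 1.5% < MS = 3%
print(is_frequent({"Milk", "Coke", "Broccoli"}, 0.005, MS))  # True: 0.5% >= MS = 0.1%

The last two calls reproduce the anti-monotonicity problem: the subset {Milk, Coke} is infrequent while its superset is frequent.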

Cross-support patterns. Consider transaction data in which item p is very frequent while items q and r are rare. The rules q -> p, r -> p and {q,r} -> p are all high-confidence patterns that look spurious (caused by p being very frequent). Eliminating them by tightening the minsup requirement would also drop the rules r -> q and q -> r, which look ok. A cross-support pattern is an itemset X = {X1,...,Xk} with a low support ratio r(X) = min[s(X1),...,s(Xk)] / max[s(X1),...,s(Xk)].
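A minimal sketch of the support ratio, assuming the individual item supports are available in a dictionary (the item names and numbers are illustrative only):

def support_ratio(itemset, supports):
    # r(X) = min item support / max item support; low values indicate cross-support patterns.
    vals = [supports[item] for item in itemset]
    return min(vals) / max(vals)

# Illustrative supports: p is very frequent, q and r are rare.
supports = {"p": 0.9, "q": 0.05, "r": 0.04}
print(support_ratio({"p", "q", "r"}, supports))  # ~0.04, a strong cross-support signal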

Eliminating cross-support patterns. Recall the definition of confidence, c(X -> Y) = σ(X,Y) / σ(X), and its anti-monotone property: confidence can only decrease when items are moved from the left-hand side to the right-hand side of the rule. Given an itemset X = {X1,...,Xk}, the lowest-confidence rule that can be generated from X is the one with the highest-support item on the left-hand side. This lowest confidence (or all-confidence) measures the potential of the itemset to generate cross-support patterns, and itemsets with low all-confidence can be filtered out before rule generation: all-confidence(X) = s(X1,...,Xk) / max[s(X1),...,s(Xk)].
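The all-confidence filter can be sketched in the same way; here s_joint is the support of the whole itemset and supports holds the individual item supports (both are made-up illustration values):

def all_confidence(s_joint, itemset, supports):
    # all-confidence(X) = s(X1,...,Xk) / max[s(X1),...,s(Xk)]
    return s_joint / max(supports[item] for item in itemset)

supports = {"p": 0.9, "q": 0.05}
# Itemset {p, q} with joint support 0.05 (q always occurs together with p):
print(all_confidence(0.05, {"p", "q"}, supports))  # ~0.056, low -> filter out before rule generation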

Evaluation of Association Patterns. We will look at methods that let us rank or prune the discovered set of rules, called interestingness measures in the data mining community. Objective interestingness measures: statistical methods that measure how exceptional the pattern is with respect to background assumptions. Subjective interestingness measures: use domain knowledge, e.g. to filter out obvious patterns or patterns that cannot be acted on; in general this requires a human in the loop. To some extent it is an art rather than a science: one man's trash is another man's treasure.

Properties of Interestingness Measures. Interestingness measures can be divided into two main categories based on their use. Symmetric measures M satisfy M(A -> B) = M(B -> A); they are used to evaluate itemsets, e.g. support. Asymmetric measures generally give different values for M(A -> B) and M(B -> A); they are used to evaluate association rules, e.g. confidence.

Contingency table. The contingency table for a rule X -> Y gives the support counts of the four different combinations of observing X, Y, both, or neither. The contingency table contains sufficient information to compute the different interestingness measures. Intuitively, if f11 has high support compared to the other cells, the rule is more likely to be interesting than not.

Contingency table for X -> Y:

          Y          not Y
X         f11        f10        f11+f10
not X     f01        f00        f01+f00
          f11+f01    f10+f00    N

f11: support count of X and Y
f10: support count of X and not Y
f01: support count of not X and Y
f00: support count of not X and not Y
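The four counts are straightforward to collect from transaction data. A minimal sketch, assuming transactions are given as sets of items and X and Y are single items (the toy transactions below are made up for illustration):

def contingency_table(transactions, x, y):
    # Returns (f11, f10, f01, f00) for the rule x -> y.
    f11 = f10 = f01 = f00 = 0
    for t in transactions:
        has_x, has_y = x in t, y in t
        if has_x and has_y:
            f11 += 1
        elif has_x:
            f10 += 1
        elif has_y:
            f01 += 1
        else:
            f00 += 1
    return f11, f10, f01, f00

transactions = [{"tea", "coffee"}, {"coffee"}, {"coffee", "bread"}, {"tea"}, {"bread"}]
print(contingency_table(transactions, "tea", "coffee"))  # (1, 1, 2, 1)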

Properties of Objective Measures: Inversion property. An evaluation measure is invariant under inversion if its value remains the same when the attribute values are flipped from 0 to 1 and from 1 to 0, or equivalently, when the contingency table cells are swapped, f00 <-> f11 and f01 <-> f10. This property is not desirable for evaluating asymmetric attributes: e.g. items that are not bought by the customer would provide as strong associations as items the customer bought.

          Y     not Y                    Y     not Y
X         60    10      70     X         20    10      30
not X     10    20      30     not X     10    60      70
          70    30      100              30    70      100

Properties of Objective Measures: Null Addition Property. An evaluation measure is invariant under null addition if its value does not change when the value f00 is increased in the contingency table. This is a useful property in applications such as market-basket analysis, where the co-absence of items is not the focus of the analysis.

          Y     not Y                    Y     not Y
X         60    10      70     X         60    10      70
not X     10    20      30     not X     10    920     930
          70    30      100              70    930     1000

Properties of Objective Measures: Scaling property. An evaluation measure is invariant under row/column scaling if each column and row can be multiplied by a constant without the measure changing its value. Most evaluation measures do not satisfy this property (the odds ratio, f11*f00 / (f10*f01), is an exception). Below, column 'not Y' has been multiplied by 2, row X by 3 and row 'not X' by 4:

          Y     not Y                    Y     not Y
X         6     1       7      X         18    6       24
not X     1     2       3      not X     4     16      20
          7     3       10               22    22      44
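The scaling invariance of the odds ratio can be verified numerically with the two tables above (a quick sanity check, not part of the original slides):

def odds_ratio(f11, f10, f01, f00):
    # Odds ratio = (f11 * f00) / (f10 * f01); invariant under row/column scaling.
    return (f11 * f00) / (f10 * f01)

print(odds_ratio(6, 1, 1, 2))    # 12.0 for the original table
print(odds_ratio(18, 6, 4, 16))  # 12.0 for the scaled table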

Property under Row/Column Scaling. Grade-Gender example (Mosteller, 1968); the Male column has been multiplied by 2 and the Female column by 10:

          Male  Female                   Male  Female
High      2     3       5      High      4     30      34
Low       1     4       5      Low       2     40      42
          3     7       10               6     70      76

Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.

Drawback of Confidence. Using contingency tables one can illustrate a drawback of the confidence measure. Consider the rule Tea -> Coffee: support 15/100 = 15%, confidence 15/20 = 75%. Looks ok?

          Coffee   not Coffee
Tea       15       5            20
not Tea   75       5            80
          90       10           100

Drawback of Confidence. Consider the rule Tea -> Coffee again: support 15/100 = 15%, confidence 15/20 = 75%. But the fraction of people drinking coffee regardless of whether they drink tea is 90%. Thus knowing that a person drinks tea actually lowers our expectation that the person drinks coffee: the rule is misleading! (See the contingency table above.)

Lift and Interest factor. Confidence c(A -> B) = σ(A,B)/σ(A) ignores the support of the itemset on the right-hand side of the rule. Lift is a measure that aims to fix this problem: Lift = c(A -> B) / s(B). For binary variables lift is equal to the interest factor I(A,B) = s(A,B) / (s(A)·s(B)). Lift/interest factor is symmetric, not invariant under inversion, not invariant under null addition, and not invariant under scaling.

Lift and interest factor. Interpretation: compare the support of the itemset {A,B} to the expected support under the assumption that A and B are statistically independent. Here s(A,B) estimates P(A and B), s(A) estimates P(A) and s(B) estimates P(B), and statistical independence means P(A and B) = P(A) × P(B). Use of the interest factor I(A,B) = s(A,B) / (s(A)·s(B)): I(A,B) > 1 means A and B occur together more frequently than expected by chance; I(A,B) < 1 means A and B occur together less frequently than expected by chance.

Example: Lift/interest factor. Let us compute the interest factor for our Tea -> Coffee rule: I(Tea, Coffee) = c(Tea -> Coffee) / s(Coffee) = 0.75/0.9 = 0.83. I < 1 indicates that the pattern occurs less often than expected from independent events, which conforms to our everyday intuition (see the Tea/Coffee contingency table above).
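The same number can be obtained directly from the supports read off the Tea/Coffee table (20 tea drinkers, 90 coffee drinkers, 15 who drink both, out of 100); a small sketch:

def interest_factor(s_ab, s_a, s_b):
    # I(A,B) = s(A,B) / (s(A) * s(B)); for binary variables this equals lift c(A -> B) / s(B).
    return s_ab / (s_a * s_b)

s_tea, s_coffee, s_tea_coffee = 0.20, 0.90, 0.15
print(interest_factor(s_tea_coffee, s_tea, s_coffee))  # ~0.83, below 1: a negative association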

Drawback of Lift & Interest. Lift/interest loses its sensitivity when the support of the itemset is very high. In the left-hand contingency table below, X and Y look almost statistically independent (I(X,Y) = 1 for independent items) even though they always occur together, while the same perfect co-occurrence between rare items (right) gives a much higher value:

          Y     not Y                    Y     not Y
X         90    0       90     X         10    0       10
not X     0     10      10     not X     0     90      90
          90    10      100              10    90      100

I(X,Y) = s(X,Y) / (s(X)·s(Y)) = 0.9 / (0.9·0.9) = 1.11        I(X,Y) = 0.1 / (0.1·0.1) = 10

Correlation analysis: φ-coefficient. For binary variables, correlation can be measured using the φ-coefficient:

φ = [σ(X,Y)·σ(¬X,¬Y) − σ(X,¬Y)·σ(¬X,Y)] / sqrt(σ(X)·σ(Y)·σ(¬X)·σ(¬Y))

In our Tea -> Coffee example (table above) the φ-coefficient amounts to φ = (15·5 − 5·75) / sqrt(20·80·90·10) = −0.25. The φ-coefficient is symmetric, invariant under inversion, not invariant under null addition, and not invariant under scaling.
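The φ-coefficient is equally easy to compute from the contingency counts; a minimal sketch using the Tea/Coffee counts (f11=15, f10=5, f01=75, f00=5):

import math

def phi_coefficient(f11, f10, f01, f00):
    # phi = (f11*f00 - f10*f01) / sqrt((f11+f10)*(f01+f00)*(f11+f01)*(f10+f00))
    num = f11 * f00 - f10 * f01
    den = math.sqrt((f11 + f10) * (f01 + f00) * (f11 + f01) * (f10 + f00))
    return num / den

print(phi_coefficient(15, 5, 75, 5))  # -0.25, a weak negative correlation between Tea and Coffee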

Property of φ-coefficient. The φ-coefficient considers co-occurrence and co-absence equally important: the two contingency tables below evaluate to the same value. This makes the measure more suitable for symmetric variables.

          Y     not Y                    Y     not Y
X         60    10      70     X         20    10      30
not X     10    20      30     not X     10    60      70
          70    30      100              30    70      100

φ = (60·20 − 10·10) / sqrt(70·30·70·30) = 0.5238        φ = (20·60 − 10·10) / sqrt(30·70·30·70) = 0.5238

IS Measure. The IS measure is an alternative measure proposed for asymmetric binary variables; it is equivalent to the cosine measure used in information retrieval:

IS(A,B) = s(A,B) / sqrt(s(A)·s(B)) = sqrt(I(A,B)·s(A,B)),        cosine(x,y) = xᵀy / (‖x‖·‖y‖)

The IS measure is symmetric, not invariant under inversion, invariant under null addition, and not invariant under scaling.
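A corresponding sketch of the IS measure on relative supports, again using the Tea/Coffee numbers for illustration:

import math

def is_measure(s_ab, s_a, s_b):
    # IS(A,B) = s(A,B) / sqrt(s(A) * s(B)), i.e. the cosine of the two binary item columns.
    return s_ab / math.sqrt(s_a * s_b)

print(is_measure(0.15, 0.20, 0.90))  # ~0.35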

Testing statistical significance: p-values. The interestingness measures discussed above are related to the concept of statistical hypothesis testing. In hypothesis testing we have two competing hypotheses. H0, the null hypothesis, assumes that the pattern seen in the data is created by random variation, e.g. that the value c(X -> Y) is a result of random fluctuation. H1 is the hypothesis that the pattern seen in the data represents a true phenomenon. The probability of observing the pattern if the null hypothesis is true is the p-value; smaller p-values are more significant.

Randomization. Randomization is a general family of methods for assessing the statistical validity of data mining results. It is used as an alternative to statistical tests when the test statistic is too difficult to determine. Basic idea: given a dataset D, generate a large collection of datasets D1,...,DN in which the statistical association of interest has been broken. Run the data mining algorithm on all of the generated datasets and record the distribution of the property of interest. If the property we observe in the original data falls into the top p% of observations, we consider our data mining results significant.

Simple randomization example. Assume we want to assess the statistical significance of the support s(X -> Y) and confidence c(X -> Y) of the association rule X -> Y (Y can contain more than one item). From the dataset D, generate new datasets D1,...,D1000 by drawing a random permutation Rj of the rows and setting Dj(i, Y) = D(Rj(i), Y). Compute the support and confidence of the rule X -> Y in each version of the data: sj(X -> Y), cj(X -> Y). Sort the obtained support and confidence values and record the position from the top at which the original values s(X -> Y) and c(X -> Y) fall. Take the relative positions (fraction from the top) as estimates of statistical significance (p-values).
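A minimal sketch of this permutation scheme, assuming the data is a 0/1 NumPy matrix split into the columns of the X items and the Y items; the helper names and the >= convention for the empirical p-value are my own choices, not from the lecture:

import numpy as np

def rule_stats(X_cols, Y_cols):
    # Support and confidence of the rule "all X items -> all Y items" on 0/1 column matrices.
    x_all = X_cols.all(axis=1)
    xy_all = x_all & Y_cols.all(axis=1)
    return xy_all.mean(), xy_all.sum() / max(x_all.sum(), 1)

def permutation_pvalues(X_cols, Y_cols, n_perm=1000, seed=None):
    # Break the X-Y association by permuting the rows of the Y columns only.
    rng = np.random.default_rng(seed)
    s_obs, c_obs = rule_stats(X_cols, Y_cols)
    s_ge = c_ge = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(Y_cols))
        s_j, c_j = rule_stats(X_cols, Y_cols[perm])
        s_ge += (s_j >= s_obs)
        c_ge += (c_j >= c_obs)
    # Fraction of randomized datasets doing at least as well = empirical p-value.
    return s_ge / n_perm, c_ge / n_perm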

Swap randomization. In the previous example we could just as well have computed analytically the probability of observing such support and confidence values. The power of randomization becomes more evident when the baseline hypothesis is more complicated. Consider a situation where we want to keep both the width of each transaction (how many items per transaction) and the support counts of individual items intact, i.e. keep the size of each shopping basket as well as the overall demand for each item unchanged. We look briefly at the approach described in Hanhijärvi et al.: Tell Me Something I Don't Know: Randomization Strategies for Iterative Data Mining. Proc. KDD '09.

Swap randomization. The row margins (widths of transactions) and column margins (support counts of items) can be preserved by swap randomization. A randomized version of the dataset is generated via a series of swaps. In each swap, take two rows s, t and two columns x, y such that D(s,x) = D(t,y) = 1 and D(s,y) = D(t,x) = 0, and swap the contents: Dj(s,x) = Dj(t,y) = 0 and Dj(s,y) = Dj(t,x) = 1.

Before:           After:
      x   y             x   y
s     1   0       s     0   1
t     0   1       t     1   0

Algorithm for creating a swap randomized dataset

Algorithm Swap
Input: dataset D, number of swap attempts K
Output: randomized dataset D̂
1: D̂ = D
2: for i = 1 to K do
3:   pick s, t and x, y such that D̂(s,x) = 1 and D̂(t,y) = 1
4:   if D̂(s,y) = 0 and D̂(t,x) = 0 then
5:     D̂ = swapped version of D̂
6:   end if
7: end for
8: return D̂
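A runnable Python version of the swap procedure, assuming the dataset is a dense 0/1 NumPy matrix with rows as transactions and columns as items. This is only a sketch of the idea (it samples random cell pairs and tests all four conditions, whereas the pseudocode samples two 1-entries); it is not the implementation used in Hanhijärvi et al.:

import numpy as np

def swap_randomize(D, n_attempts, seed=None):
    # Returns a randomized copy of D with the same row sums and column sums.
    rng = np.random.default_rng(seed)
    D_hat = D.copy()
    n_rows, n_cols = D_hat.shape
    for _ in range(n_attempts):
        s, t = rng.integers(n_rows, size=2)
        x, y = rng.integers(n_cols, size=2)
        # A swap is possible only on a "checkerboard": D(s,x) = D(t,y) = 1 and D(s,y) = D(t,x) = 0.
        if D_hat[s, x] == 1 and D_hat[t, y] == 1 and D_hat[s, y] == 0 and D_hat[t, x] == 0:
            D_hat[s, x] = D_hat[t, y] = 0
            D_hat[s, y] = D_hat[t, x] = 1
    return D_hat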

Swap randomization. After generating the collection of randomized datasets D1,...,DN, the statistical significance of the quantity of interest (e.g. support, confidence) is extracted. Collect the distribution of the quantity of interest from the randomized datasets (e.g. the confidence cj(X -> Y) in each of the datasets). Sort the distribution and check how large a fraction of it lies above the quantity computed from the original dataset; this fraction is taken as the statistical significance of the quantity. Below, the confidence value c(X -> Y) = 0.80 falls at place r, so the p-value is p = r/N.

Rank:   1     2     3    ...  r     r+1  ...  N-1   N
Value:  0.87  0.85  0.85 ...  0.80  0.79 ...  0.17  0.15
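The final p-value extraction is just a rank computation; a small sketch, with the randomized confidences taken from the slide's illustration and counting ties as "at least as extreme":

def empirical_p_value(observed, randomized_values):
    # Fraction of randomized values at least as large as the observed one (p = r/N).
    r = sum(1 for v in randomized_values if v >= observed)
    return r / len(randomized_values)

randomized = [0.87, 0.85, 0.85, 0.79, 0.17, 0.15]  # confidences c_j(X -> Y) from randomized data
print(empirical_p_value(0.80, randomized))         # 0.5: three of the six values are >= 0.80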

Randomization: summary. Randomization is a powerful and general technique for assessing statistical significance. It is particularly useful in situations where traditional statistical testing is too difficult, e.g. when it is not evident what the correct test and the distribution of the test statistic are in the given setting. Its drawback in data mining is the high time complexity: we need to create a large number of randomized versions of our data, which may not be possible with very large datasets.