CS 584 Data Mining. Association Rule Mining 2

Recall from last time: Frequent Itemset Generation Strategies

- Reduce the number of candidates (M)
  - Complete search: M = 2^d
  - Use pruning techniques to reduce M
- Reduce the number of transactions (N)
  - Reduce the size of N as the size of the itemset increases
  - Used by vertical-based mining algorithms
- Reduce the number of comparisons (NM)
  - Use efficient data structures to store the candidates or transactions
  - No need to match every candidate against every transaction

Reducing Number of Candidates

- Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
- The Apriori principle holds due to the following property of the support measure: $\forall X, Y : (X \subseteq Y) \Rightarrow s(X) \geq s(Y)$. The support of an itemset never exceeds the support of its subsets.
- This is known as the anti-monotone property of support.
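To make the pruning step concrete, here is a minimal Python sketch (illustrative only; the function name and example data are not from the slides) that discards a candidate k-itemset as soon as one of its (k-1)-subsets is known to be infrequent:

```python
from itertools import combinations

def prune_candidates(candidates, frequent_prev):
    """Keep only candidates all of whose (k-1)-subsets are frequent."""
    pruned = []
    for cand in candidates:
        subsets = (frozenset(s) for s in combinations(cand, len(cand) - 1))
        if all(s in frequent_prev for s in subsets):
            pruned.append(cand)
    return pruned

# {1,2,3} survives because {1,2}, {1,3}, and {2,3} are all frequent;
# {1,2,4} is pruned because {1,4} is not.
frequent_2 = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (2, 4)]}
print(prune_candidates([frozenset({1, 2, 3}), frozenset({1, 2, 4})], frequent_2))
```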

Reducing Number of Comparisons

- Candidate counting: scan the database of transactions to determine the support of each candidate itemset.
- To reduce the number of comparisons, store the candidates in a hash structure: instead of matching each transaction against every candidate, match it only against the candidates contained in the hashed buckets.

Subset Operation (Enumeration)

Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3?

[Figure: a level-wise enumeration tree of all 3-item subsets of t; level 1 fixes the first item (1, 2, or 3), level 2 the second, and level 3 the third.]

Generate Hash Tree

Suppose you have 15 candidate itemsets of length 3: {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:
- A hash function (here items 1, 4, 7 hash to the left branch; 2, 5, 8 to the middle; 3, 6, 9 to the right)
- Max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node)

[Figure: the resulting candidate hash tree with the 15 candidates distributed across its leaves.]
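The following sketch shows the bucketing idea in Python (a deliberate simplification: a flat dictionary keyed by the hash values of the three items stands in for the actual tree with its leaf-splitting logic):

```python
def hash_path(itemset):
    """Hash each item of a sorted 3-itemset with h(item) = item mod 3."""
    return tuple(item % 3 for item in sorted(itemset))

candidates = [(1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8),
              (1, 5, 9), (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5),
              (3, 5, 6), (3, 5, 7), (6, 8, 9), (3, 6, 7), (3, 6, 8)]

buckets = {}
for cand in candidates:
    buckets.setdefault(hash_path(cand), []).append(cand)

# A subset of a transaction only needs to be compared against the
# candidates in the bucket it hashes to, not against all 15.
print(buckets[hash_path((1, 4, 5))])  # -> [(1, 4, 5)]
```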

Association Rule Discovery: Hash Tree

[Figures over three slides: the candidate hash tree built above; at the root, an item of 1, 4, or 7 hashes to the left subtree; 2, 5, or 8 to the middle subtree; and 3, 6, or 9 to the right subtree.]

Subset Operation Using Hash Tree

[Figures over three slides: the transaction t = {1, 2, 3, 5, 6} is split recursively (1 + {2 3 5 6}, 2 + {3 5 6}, 3 + {5 6}, ...) and hashed down the candidate tree; only the leaves actually reached have to be inspected. The transaction is matched against 9 out of the 15 candidates.]
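For contrast, here is the brute-force matching that the hash tree avoids, sketched in Python with the 15-candidate example from above: every one of the C(5,3) = 10 subsets of the transaction would otherwise be tested against every candidate.

```python
from itertools import combinations

transaction = (1, 2, 3, 5, 6)
candidates = {(1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8),
              (1, 5, 9), (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5),
              (3, 5, 6), (3, 5, 7), (6, 8, 9), (3, 6, 7), (3, 6, 8)}

# Enumerate all 3-subsets of the transaction and test each against
# the full candidate set: 10 x 15 comparisons in the worst case.
matched = [s for s in combinations(transaction, 3) if s in candidates]
print(matched)  # -> [(1, 2, 5), (1, 3, 6), (3, 5, 6)]
```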

Factors Affecting Complexity

- Choice of minimum support threshold: lowering the support threshold results in more frequent itemsets; this may increase the number of candidates and the max length of frequent itemsets.
- Dimensionality (number of items) of the data set: more space is needed to store the support count of each item; if the number of frequent items also increases, both computation and I/O costs may increase.
- Size of database: since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
- Average transaction width: transaction width increases with denser data sets; this may increase the max length of frequent itemsets and the traversals of the hash tree (the number of subsets in a transaction increases with its width).

Compact Representation of Frequent Itemsets

- Some itemsets are redundant because they have identical support as their supersets. Example: a database of 15 transactions in which TIDs 1-5 contain all of A1, ..., A10, TIDs 6-10 all of B1, ..., B10, and TIDs 11-15 all of C1, ..., C10.
- Number of frequent itemsets: $3 \times \sum_{k=1}^{10} \binom{10}{k}$
- Need a compact representation.
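A quick sanity check of that count in Python (assuming the example above with three blocks of 10 perfectly correlated items):

```python
from math import comb

# Each block A1..A10, B1..B10, C1..C10 contributes every non-empty
# subset of its 10 items as a frequent itemset.
n_frequent = 3 * sum(comb(10, k) for k in range(1, 11))
print(n_frequent)  # -> 3 * (2**10 - 1) = 3069
```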

Maximal Frequent Itemset

An itemset is maximal frequent if none of its immediate supersets is frequent.

[Figure: the itemset lattice over items A-E with a border separating the frequent itemsets from the infrequent ones; the maximal frequent itemsets lie just inside the border.]

Closed Itemset

An itemset is closed if none of its immediate supersets has the same support as the itemset. Using the supports of the closed itemsets, we can find the support of the non-closed itemsets.

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset  Support        Itemset    Support
{A}      4              {A,B,C}    2
{B}      5              {A,B,D}    3
{C}      3              {A,C,D}    2
{D}      4              {B,C,D}    3
{A,B}    4              {A,B,C,D}  2
{A,C}    2
{A,D}    3
{B,C}    3
{B,D}    4
{C,D}    3
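A brute-force sketch (illustrative; variable names are made up) that recovers the closed itemsets of this five-transaction database directly from the definition:

```python
from itertools import combinations

db = [{'A', 'B'}, {'B', 'C', 'D'}, {'A', 'B', 'C', 'D'},
      {'A', 'B', 'D'}, {'A', 'B', 'C', 'D'}]
items = sorted(set().union(*db))

def support(itemset):
    return sum(itemset <= t for t in db)

# Support of every itemset that occurs at least once.
sup = {frozenset(c): support(frozenset(c))
       for k in range(1, len(items) + 1)
       for c in combinations(items, k)
       if support(frozenset(c)) > 0}

# Closed: no immediate superset has the same support.
closed = [s for s in sup
          if all(sup.get(s | {i}, 0) < sup[s] for i in items if i not in s)]
print(sorted(closed, key=len))  # {B}, {A,B}, {B,D}, ..., {A,B,C,D}
```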

Maximal vs Closed Itemsets

TID  Items
1    ABC
2    ABCD
3    BCE
4    ACDE
5    DE

[Figure: the itemset lattice annotated with the IDs of the transactions supporting each itemset; itemsets not supported by any transaction are marked.]

Maximal vs Closed Frequent Itemsets

[Figure: the same annotated lattice with minimum support = 2; itemsets that are closed but not maximal and those that are both closed and maximal are highlighted. # Closed = 9, # Maximal = 4.]

Determining Support for Non-closed Itemsets

[Figure: the same lattice again, illustrating that the support of a non-closed itemset equals the support of its smallest closed superset.]

Closed Frequent Itemset

An itemset is a closed frequent itemset if it is closed and its support is greater than or equal to minsup.

Useful for removing redundant rules: a rule X → Y is redundant if there exists another rule X' → Y', where X is a subset of X' and Y is a subset of Y', such that the support and confidence of both rules are identical.

Maximal vs Closed Itemsets

[Figure: Venn diagram showing Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets.]

Apriori Problems

- High I/O cost: the database is scanned on every pass.
- Poor performance on dense datasets because of the increasing transaction width.

Alternative Methods for Frequent Itemset Generation

Traversal of the itemset lattice: general-to-specific vs. specific-to-general.

[Figure: three lattices from the null itemset down to {a1, a2, ..., an}, showing the frequent itemset border approached (a) general-to-specific, (b) specific-to-general, and (c) bidirectionally.]

Alternative Methods for Frequent Itemset Generation

Traversal of the itemset lattice: equivalence classes based on prefix or suffix; consider the frequent itemsets class by class.

[Figure: the lattice over items A-D organized as (a) a prefix tree and (b) a suffix tree.]

Alternative Methods for Frequent Itemset Generation

Traversal of the itemset lattice: breadth-first vs. depth-first.

[Figure: (a) breadth-first and (b) depth-first traversal of the lattice.]

Alternative Methods for Frequent Itemset Generation

Representation of the database: horizontal vs. vertical data layout.

Horizontal Data Layout        Vertical Data Layout
TID  Items                    A   B   C   D   E
1    A,B,E                    1   1   2   2   1
2    B,C,D                    4   2   3   4   3
3    C,E                      5   5   4   5   6
4    A,C,D                    6   7   8   9
5    A,B,C,D                  7   8   9
6    A,E                      8   10
7    A,B                      9
8    A,B,C
9    A,C,D
10   B
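A sketch of why the vertical layout is convenient: the support of any itemset is just the size of the intersection of its items' TID sets (data copied from the table above):

```python
from functools import reduce

horizontal = {1: {'A', 'B', 'E'}, 2: {'B', 'C', 'D'}, 3: {'C', 'E'},
              4: {'A', 'C', 'D'}, 5: {'A', 'B', 'C', 'D'}, 6: {'A', 'E'},
              7: {'A', 'B'}, 8: {'A', 'B', 'C'}, 9: {'A', 'C', 'D'},
              10: {'B'}}

# Convert to the vertical layout: item -> set of TIDs containing it.
vertical = {}
for tid, items in horizontal.items():
    for item in items:
        vertical.setdefault(item, set()).add(tid)

def support(itemset):
    """Support = size of the intersection of the items' TID sets."""
    return len(reduce(set.intersection, (vertical[i] for i in itemset)))

print(sorted(vertical['A']))  # -> [1, 4, 5, 6, 7, 8, 9]
print(support({'A', 'B'}))    # TID sets intersect in {1, 5, 7, 8} -> 4
```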

FP-growth Algorithm

- Uses a compressed representation of the database: the FP-tree.
- Once the FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets.

FP-tree Construction

[Figure: the FP-tree being built by reading the transactions one at a time.]

FP-Tree Construction

[Figure: the completed FP-tree with a header table; node-link pointers (e.g., chaining all nodes labeled e) are used to assist frequent itemset generation.]
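A compact construction sketch in Python (an illustrative reading of the algorithm, not the lecture's code; class and field names are assumptions, and the header table keeps plain lists of node links):

```python
class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, minsup):
    # Pass 1: count item supports and keep only the frequent items.
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {i: c for i, c in counts.items() if c >= minsup}

    root = FPNode(None, None)
    header = {i: [] for i in frequent}  # item -> node-link pointers
    # Pass 2: insert each transaction with its items in decreasing support
    # order, sharing prefixes with previously inserted transactions.
    for t in transactions:
        ordered = sorted((i for i in t if i in frequent),
                         key=lambda i: (-frequent[i], i))
        node = root
        for item in ordered:
            if item not in node.children:
                node.children[item] = FPNode(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header
```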

FP-Growth

Divide-and-conquer: decompose the frequent itemset generation problem into multiple subproblems. The algorithm works in a bottom-up fashion: it looks for frequent itemsets ending in e first, followed by d, c, b, and then a.

minsup = 2, sup(e) = 3

- Update the support counts along the prefix paths ending in e (e.g., {b:2, c:2, e:1} becomes {b:1, c:1, e:1}).
- b is removed because it has a support of 1. Also remove e.
- Frequent itemsets: those ending in de, ce, ae. sup(de) = 2; c is removed because it has a support of 1; ade is frequent.

Pattern Evaluation

- Association rule algorithms tend to produce too many rules; many of them are uninteresting or redundant. For example, {A,B,C} → {D} is redundant if it has the same support and confidence as {A,B} → {D}.
- Interestingness measures can be used to prune/rank the derived patterns.
- In the original formulation of association rules, support and confidence are the only measures used.

Subjective Interestingness Measure

- Objective measure: rank patterns based on statistics computed from data, e.g., the 21 measures of association (support, confidence, Laplace, Gini, mutual information, Jaccard, etc.).
- Subjective measure: rank patterns according to the user's interpretation. A pattern is subjectively interesting if it contradicts the expectations of a user, or if it is actionable.

Computing Interestingness Measure

Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table.

Contingency table for X → Y:

        Y     ¬Y
X      f11   f10   f1+
¬X     f01   f00   f0+
       f+1   f+0   |T|

f11: support of X and Y; f10: support of X and ¬Y; f01: support of ¬X and Y; f00: support of ¬X and ¬Y.

These counts are used to define various measures: support, confidence, lift, Gini, J-measure, etc.

Drawback of Confidence

        Coffee   ¬Coffee
Tea       15        5       20
¬Tea      75        5       80
          90       10      100

Association rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 15/20 = 0.75, but P(Coffee) = 0.9.

Although confidence is high, the rule is misleading: P(Coffee | ¬Tea) = 75/80 = 0.9375.

Statistical Independence

Population of 1000 students:
- 600 students know how to swim (S)
- 700 students know how to bike (B)
- 420 students know how to swim and bike (S, B)

P(S ∧ B) = 420/1000 = 0.42
P(S) × P(B) = 0.6 × 0.7 = 0.42

- P(S ∧ B) = P(S) × P(B) => statistical independence
- P(S ∧ B) > P(S) × P(B) => positively correlated
- P(S ∧ B) < P(S) × P(B) => negatively correlated

Statistical-based Measures

Measures that take into account statistical dependence:

$\text{Lift}(X \to Y) = \frac{\text{conf}(X \to Y)}{P(Y)} = \frac{P(Y \mid X)}{P(Y)}$

$\text{Interest Factor} = \frac{P(X,Y)}{P(X)\,P(Y)}$

$\text{Leverage} = P(X,Y) - P(X)\,P(Y)$

$\phi\text{-coefficient} = \frac{P(X,Y) - P(X)\,P(Y)}{\sqrt{P(X)\,[1-P(X)]\,P(Y)\,[1-P(Y)]}}$

Lift is equivalent to Interest Factor for binary variables. The φ-coefficient is the correlation coefficient for binary variables.
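A hedged Python sketch that computes these measures from the four contingency counts (the function name and argument order are mine, not the lecture's):

```python
from math import sqrt

def measures(f11, f10, f01, f00):
    """Lift/interest factor, leverage, and phi from a 2x2 contingency table."""
    n = f11 + f10 + f01 + f00
    p_xy = f11 / n
    p_x = (f11 + f10) / n
    p_y = (f11 + f01) / n
    lift = p_xy / (p_x * p_y)              # = interest factor
    leverage = p_xy - p_x * p_y
    phi = leverage / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
    return lift, leverage, phi
```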

Interestingness Measure: Lift

- play basketball → eat cereal [40%, 66.7%] is misleading: the overall percentage of students eating cereal is 75%, which is higher than 66.7%.
- play basketball → not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence.
- Measure of dependent/correlated events: lift (= Interest Factor)

$\text{lift} = \frac{P(A, B)}{P(A)\,P(B)}$

             Basketball   Not basketball   Sum (row)
Cereal          2000           1750           3750
Not cereal      1000            250           1250
Sum (col.)      3000           2000           5000

lift(B, C) = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33

Example: Lift/Interest Factor

        Coffee   ¬Coffee
Tea       15        5       20
¬Tea      75        5       80
          90       10      100

Association rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 0.75, but P(Coffee) = 0.9.

Lift = 0.75/0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated).
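Plugging the tea/coffee table into the measures sketch defined earlier reproduces this value:

```python
# X = Tea, Y = Coffee: f11 = 15, f10 = 5, f01 = 75, f00 = 5
lift, leverage, phi = measures(15, 5, 75, 5)
print(round(lift, 4))  # -> 0.8333  (< 1: negatively associated)
```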

Drawback of Lift & Interest Factor

        Y    ¬Y                 Y    ¬Y
X      10     0    10    X     90     0    90
¬X      0    90    90    ¬X     0    10    10
       10    90   100          90    10   100

Lift = 0.1 / ((0.1)(0.1)) = 10        Lift = 0.9 / ((0.9)(0.9)) = 1.11

Statistical independence: if P(X,Y) = P(X)P(Y), then Lift = 1.

There are lots of measures proposed in the literature. Some measures are good for certain applications, but not for others. What criteria should we use to determine whether a measure is good or bad? What about Apriori-style support-based pruning: how does it affect these measures?

Properties of Objective Measures

- Symmetric/asymmetric (variable permutation)
- Scaling property (row/column scaling)
- Inversion property
- Null addition property

Property under Variable Permutation

       B    ¬B             A    ¬A
A      p     q        B    p     r
¬A     r     s        ¬B   q     s

Does M(A,B) = M(B,A)?

- Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
- Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.

Property under Row/Column Scaling

Grade-gender example (Mosteller, 1968):

       Male   Female              Male   Female
High    2       3       5   High    4      30      34
Low     1       4       5   Low     2      40      42
        3       7      10           6      70      76

(male counts scaled by 2x, female counts by 10x)

Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.

Property under Inversion Operation

[Figure: item bit vectors A-F over transactions 1..N, grouped in panels (a)-(c); inverting a vector (flipping its 0s and 1s) illustrates the inversion operation.]

Example: φ-Coefficient

The φ-coefficient is analogous to the correlation coefficient for continuous variables.

       Y    ¬Y              Y    ¬Y
X     60    10    70   X   20    10    30
¬X    10    20    30   ¬X  10    60    70
      70    30   100       30    70   100

$\phi = \frac{0.6 - 0.7 \times 0.7}{\sqrt{0.7 \times 0.3 \times 0.7 \times 0.3}} = 0.5238 \qquad \phi = \frac{0.2 - 0.3 \times 0.3}{\sqrt{0.3 \times 0.7 \times 0.3 \times 0.7}} = 0.5238$

The φ-coefficient is the same for both tables.
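A quick numeric check of this invariance, reusing the measures sketch from above (the right table is the inversion of the left one: f11 and f00 are swapped, and f10 and f01 happen to be equal here):

```python
_, _, phi1 = measures(60, 10, 10, 20)  # left table
_, _, phi2 = measures(20, 10, 10, 60)  # right table (inverted)
print(round(phi1, 4), round(phi2, 4))  # -> 0.5238 0.5238
```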

Property under Null Addition

       B    ¬B              B    ¬B
A      p     q        A     p     q
¬A     r     s        ¬A    r    s + k

(k transactions containing neither A nor B are added)

- Invariant measures: support, cosine, Jaccard, etc.
- Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.

Resources

A good summary of interestingness measures: http://michael.hahsler.net/research/association_rules/measures.html