
Data Mining MTAT.03.183 (6EAP): Frequent Itemsets and Association Rules
Jaak Vilo, Spring 2016 (slides by Jaak Vilo and other authors, UT)

Descriptive data analysis
Descriptive analysis aims to summarise the main qualitative traits of data. It is used mainly for discovering underlying processes and relations in data, and facts are presented together with understandable explanations. It is used, for example, in the medical, physical and social sciences.

Example: items in baskets
Each person (Anne, Britt, Margus, Gert, Toomas, Laura, ...) has a basket containing a set of items. [The slide shows the baskets both as sets of item letters (A-H) and as sets of numeric item ids.]

Example: sort or renumber in any way
The same baskets can be rewritten as transactions over item ids, one transaction per row, with the item ids listed in sorted order. [The slide shows several such transactions over item ids 1-8.]

Example
For the example transactions the item frequencies are: 1: 3, 2: 4, 3: 4, 4: 3, 5: 4, 6: 5, 8: 4. The same data can also be laid out as a table with a transaction id (T-ID) column and an ITEMS column.

Data Mining - Association Analysis: Basic Concepts and Algorithms
Lecture Notes for Chapter 6, Introduction to Data Mining, by Tan, Steinbach, Kumar.
Chapter 6: http://www-users.cs.umn.edu/~kumar/dmbook/ch6.pdf

Association Rule Mining
Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-basket transactions:
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Examples of association rules:
{Diaper} -> {Beer}, {Milk, Bread} -> {Eggs, Coke}, {Beer, Bread} -> {Milk}
Implication here means co-occurrence, not causality!

Definition: Frequent Itemset
- Itemset: a collection of one or more items, e.g. {Milk, Bread, Diaper}. A k-itemset is an itemset that contains k items.
- Support count (sigma): frequency of occurrence of an itemset, e.g. sigma({Milk, Bread, Diaper}) = 2.
- Support (s): fraction of transactions that contain an itemset, e.g. s({Milk, Bread, Diaper}) = 2/5.
- Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold.

Definition: Association Rule
- Association rule: an implication expression of the form X -> Y, where X and Y are itemsets, e.g. {Milk, Diaper} -> {Beer}.
- Rule evaluation metrics:
  - Support (s): fraction of transactions that contain both X and Y.
  - Confidence (c): how often items in Y appear in transactions that contain X.
- Example for {Milk, Diaper} -> {Beer} on the market-basket data above:
  s = sigma(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = sigma(Milk, Diaper, Beer) / sigma(Milk, Diaper) = 2/3 = 0.67

Association Rule Mining Task
Given a set of transactions T, find all rules having support >= minsup and confidence >= minconf. The brute-force approach — list all possible association rules, compute the support and confidence for each rule, and prune the rules that fail the minsup and minconf thresholds — is computationally prohibitive.

Two goals:
- Computational efficiency: efficient counting.
- Ranking the results by relevance: avoiding spurious results (patterns that are there by pure chance) and reporting the most interesting/surprising/important facts.

Mining Association Rules
Example rules from the market-basket data:
{Milk, Diaper} -> {Beer}  (s=0.4, c=0.67)
{Milk, Beer} -> {Diaper}  (s=0.4, c=1.0)
{Diaper, Beer} -> {Milk}  (s=0.4, c=0.67)
{Beer} -> {Milk, Diaper}  (s=0.4, c=0.67)
{Diaper} -> {Milk, Beer}  (s=0.4, c=0.5)
{Milk} -> {Diaper, Beer}  (s=0.4, c=0.5)
Observations: all the above rules are binary partitions of the same itemset {Milk, Diaper, Beer}. Rules originating from the same itemset have identical support but can have different confidence, so we may decouple the support and confidence requirements.

Mining Association Rules: two-step approach
1. Frequent itemset generation: generate all itemsets whose support >= minsup.
2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset.
Frequent itemset generation is still computationally expensive. A small sketch of the basic support/confidence computation follows.
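The following is a minimal sketch (not from the slides) of how support and confidence of a single rule X -> Y can be computed directly from a list of transactions; the transaction data is the market-basket example above.

```python
# Minimal sketch: support and confidence of a rule X -> Y
# computed directly from the market-basket transactions above.

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(X, Y, transactions):
    """Support and confidence of the rule X -> Y."""
    s = support_count(X | Y, transactions) / len(transactions)
    c = support_count(X | Y, transactions) / support_count(X, transactions)
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"}, transactions)
print(f"s = {s:.2f}, c = {c:.2f}")   # s = 0.40, c = 0.67, as on the slide
```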

Example: how many itemsets?
For the small example data with 8 items, how many possible itemsets are there? The itemset lattice over items {A, B, C, D, E} contains the null set, the 1-itemsets A..E, the 2-itemsets AB..DE, and so on up to ABCDE. In general, given d items, there are 2^d possible candidate itemsets.

Frequent Itemset Generation: brute-force approach
Each itemset in the lattice is a candidate frequent itemset. Count the support of each candidate by scanning the database: match each of the N transactions (of average width w) against every one of the M candidates. Complexity ~ O(N M w) — expensive, since M = 2^d.

Computational Complexity
Given d unique items:
- Total number of itemsets = 2^d.
- Total number of possible association rules:
  R = sum_{k=1}^{d-1} C(d,k) * sum_{j=1}^{d-k} C(d-k,j) = 3^d - 2^(d+1) + 1
  If d = 6, R = 602 rules. (A numerical check of this formula is sketched below.)

Example: counting frequent substrings
How many words of length k are there over an alphabet Sigma? How would you count the frequency of every possible substring in protein sequences, or count all words in a book? Substrings vs regular expressions.

Frequent Itemset Generation Strategies
- Reduce the number of candidates (M): complete search has M = 2^d; use pruning techniques to reduce M.
- Reduce the number of transactions (N): reduce the size of N as the size of the itemset increases (used by DHP and vertical-based mining algorithms).
- Reduce the number of comparisons (N M): use efficient data structures to store the candidates or transactions, so there is no need to match every candidate against every transaction.
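A tiny check (not from the slides) that the summation and the closed form agree for d = 6:

```python
# Check of the rule-count formula
# R = sum_{k=1}^{d-1} C(d,k) * sum_{j=1}^{d-k} C(d-k,j) = 3^d - 2^(d+1) + 1.
from math import comb

def rule_count(d):
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

d = 6
print(rule_count(d), 3**d - 2**(d + 1) + 1)   # both print 602
```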

Reducing the Number of Candidates
Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent. The principle holds due to the following property of the support measure:
  for all X, Y: (X subset of Y) implies s(X) >= s(Y)
i.e. the support of an itemset never exceeds the support of its subsets. This is known as the anti-monotone property of support.

Illustrating the Apriori Principle
In the itemset lattice over {A, B, C, D, E}: if, say, {A, B} is found to be infrequent, then all of its supersets ({A,B,C}, {A,B,D}, ..., {A,B,C,D,E}) can be pruned without counting them.

Illustrating the Apriori Principle (market-basket data, minimum support count = 3)
Items (1-itemsets): Bread 4, Coke 2, Milk 4, Beer 3, Diaper 4, Eggs 1.
Pairs (2-itemsets; no need to generate candidates involving Coke or Eggs):
{Bread, Milk} 3, {Bread, Beer} 2, {Bread, Diaper} 3, {Milk, Beer} 2, {Milk, Diaper} 3, {Beer, Diaper} 3.
Triplets (3-itemsets): {Bread, Milk, Diaper} 3.
If every subset were considered, C(6,1) + C(6,2) + C(6,3) = 41 candidates; with support-based pruning, 6 + 6 + 1 = 13.

Apriori Algorithm
Method (see the sketch after this section):
- Let k = 1. Generate frequent itemsets of length 1.
- Repeat until no new frequent itemsets are identified:
  - Generate length-(k+1) candidate itemsets from the length-k frequent itemsets.
  - Prune candidate itemsets containing subsets of length k that are infrequent.
  - Count the support of each candidate by scanning the DB.
  - Eliminate candidates that are infrequent, leaving only those that are frequent.

Reducing the Number of Comparisons
Candidate counting: scan the database of transactions to determine the support of each candidate itemset. To reduce the number of comparisons, store the candidates in a hash structure: instead of matching each transaction against every candidate, match it only against the candidates contained in the hashed buckets.
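Below is a minimal, self-contained sketch of the Apriori loop (an assumed simplified implementation, not the exact pseudocode from the slides; candidate generation enumerates all (k+1)-subsets of the surviving items instead of the usual F_{k-1} x F_{k-1} merge, which gives the same candidates less efficiently).

```python
# Minimal Apriori sketch: candidate generation, subset pruning,
# support counting by scanning the transactions.
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return {frozenset(itemset): support_count} for all frequent itemsets."""
    # Frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        # Generate (k+1)-candidates over the items still appearing in frequent k-itemsets
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k + 1)]
        # Prune candidates that have an infrequent k-subset (Apriori principle)
        candidates = [c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k))]
        # Count support with one scan of the database
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {c: n for c, n in counts.items() if n >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
for itemset, count in sorted(apriori(transactions, 3).items(),
                             key=lambda x: (len(x[0]), sorted(x[0]))):
    print(set(itemset), count)
```

On the market-basket data with minimum support count 3 this reproduces the itemsets from the slide: the four frequent items, the four frequent pairs, and {Bread, Milk, Diaper}.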

Generate Hash Tree
Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
You need:
- a hash function, and
- a max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node).
In the example the hash function sends items 1, 4, 7 to the first branch, items 2, 5, 8 to the second, and items 3, 6, 9 to the third. The candidate hash tree is built by hashing on the first item at the root level, then on the second item, and so on, until the candidates are distributed over the leaves.

Subset Operation
Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3? They can be enumerated level by level: first fix the smallest item (1, 2 or 3), then the second item, then the third, giving {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, ..., {3 5 6}.

Subset Operation Using Hash Tree
The same enumeration is used to walk the hash tree with the transaction: at the root, hash on each item of the transaction; at the next level, hash on the remaining items (1 + {2 3 5 6}, 2 + {3 5 6}, 3 + {5 6}, ...); at the leaves, match the transaction against the stored candidates.
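A simplified sketch of the idea (an assumption: flat buckets keyed by the hash values of all three items stand in for the slide's multi-level hash tree, so the counts differ from the figure): hash the 15 candidate 3-itemsets into buckets, then match only the 3-subsets of a transaction against the buckets they can reach.

```python
# Simplified sketch: hash candidate 3-itemsets into flat buckets and look up
# only the buckets reachable from the 3-subsets of a transaction.
# A real Apriori hash tree splits leaves recursively and, per the slide,
# compares the transaction against 11 of the 15 candidates.
from collections import defaultdict
from itertools import combinations

candidates = [
    (1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8),
    (1, 5, 9), (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5),
    (3, 5, 6), (3, 5, 7), (6, 8, 9), (3, 6, 7), (3, 6, 8),
]

def h(item):
    # Hash function from the slide: 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2
    return (item - 1) % 3

buckets = defaultdict(list)
for cand in candidates:
    buckets[tuple(h(i) for i in cand)].append(cand)

transaction = (1, 2, 3, 5, 6)
checked, matched = set(), []
for subset in combinations(sorted(transaction), 3):
    key = tuple(h(i) for i in subset)
    for cand in buckets.get(key, []):     # only candidates in the reachable bucket
        checked.add(cand)
        if cand == subset:
            matched.append(cand)

print(f"compared {len(checked)} of {len(candidates)} candidates")  # 6 of 15 here
print("contained in the transaction:", matched)
```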

Subset Operation Using Hash Tree (continued)
Hashing the transaction {1, 2, 3, 5, 6} down the tree visits only a few leaves; the transaction is matched against 11 of the 15 candidates instead of all of them.

Factors Affecting Complexity
- Choice of minimum support threshold: lowering the support threshold results in more frequent itemsets; this may increase the number of candidates and the maximum length of frequent itemsets.
- Dimensionality (number of items) of the data set: more space is needed to store the support count of each item; if the number of frequent items also increases, both computation and I/O costs may increase.
- Size of the database: since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
- Average transaction width: transaction width increases with denser data sets; this may increase the maximum length of frequent itemsets and the number of hash-tree traversals (the number of subsets in a transaction increases with its width).

Compact Representation of Frequent Itemsets
Some itemsets are redundant because they have identical support to their supersets. For example, in a data set where every transaction contains either all of A1..A10, all of B1..B10, or all of C1..C10, the number of frequent itemsets is
  3 * sum_{k=1}^{10} C(10, k)
so a compact representation is needed.

Maximal Frequent Itemset
An itemset is maximal frequent if none of its immediate supersets is frequent. In the itemset lattice the maximal frequent itemsets lie just inside the border separating the frequent from the infrequent itemsets.

Closed Itemset
An itemset is closed if none of its immediate supersets has the same support as the itemset.
Example transactions:
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}
Supports: {A} 4, {B} 5, {C} 3, {D} 4, {A,B} 4, {A,C} 2, {A,D} 3, {B,C} 3, {B,D} 4, {C,D} 3, {A,B,C} 2, {A,B,D} 3, {A,C,D} 2, {B,C,D} 3, {A,B,C,D} 2.

Maximal vs Closed Itemsets
With transactions 1: ABC, 2: ABCD, 3: BCE, 4: ACDE, 5: DE and minimum support = 2, each itemset in the lattice can be annotated with the ids of the transactions that contain it. Some itemsets are closed but not maximal; some are both closed and maximal. In this example there are 9 closed frequent itemsets and 4 maximal frequent itemsets (itemsets not supported by any transaction are excluded).

Relationship: maximal frequent itemsets are a subset of the closed frequent itemsets, which are a subset of the frequent itemsets.
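A small brute-force sketch (an assumption of one way to do it: enumerate the whole lattice) that derives the frequent, closed, and maximal itemsets for the second example above and reproduces the counts 9 and 4.

```python
# Brute-force sketch: frequent, closed and maximal frequent itemsets for the
# example with transactions ABC, ABCD, BCE, ACDE, DE and minsup count = 2.
from itertools import chain, combinations

transactions = [{"A","B","C"}, {"A","B","C","D"}, {"B","C","E"},
                {"A","C","D","E"}, {"D","E"}]
minsup = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# All non-empty itemsets over the 5 items (2^5 - 1 = 31 of them)
lattice = [frozenset(c) for c in chain.from_iterable(
    combinations(items, k) for k in range(1, len(items) + 1))]
supports = {s: support(s) for s in lattice}
frequent = {s: c for s, c in supports.items() if c >= minsup}

# Closed: no superset has the same support; maximal: no frequent superset
closed = [s for s in frequent
          if not any(s < t and frequent[t] == frequent[s] for t in frequent)]
maximal = [s for s in frequent if not any(s < t for t in frequent)]

print(len(frequent), len(closed), len(maximal))  # 14 9 4: 9 closed, 4 maximal as on the slide
```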

Alternative Methods for Frequent Itemset Generation
- Traversal of the itemset lattice: general-to-specific vs specific-to-general vs bidirectional search relative to the frequent itemset border.
- Equivalence classes: partition the lattice by prefix (prefix tree) or by suffix (suffix tree).
- Breadth-first vs depth-first traversal of the lattice.
- Representation of the database: horizontal data layout (each transaction lists its items) vs vertical data layout (each item lists the ids of the transactions that contain it).

FP-growth Algorithm
Uses a compressed representation of the database, the FP-tree. Once an FP-tree has been constructed, a recursive divide-and-conquer approach mines the frequent itemsets ("frequent pattern growth").
Reference: Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. Mining frequent patterns without candidate generation. Data Mining and Knowledge Discovery 8:53-87, 2004.
Slides: http://www.cis.hut.fi/opinnot/t-6.62/28/fptree.pdf

Why FP-growth, not Apriori?
Apriori works well except when there are lots of frequent patterns, a big set of items, a low minimum support threshold, or long patterns. The reason: candidate sets become huge — 10^4 frequent patterns of length 1 yield about 10^8 length-2 candidates, and discovering a pattern of length 100 requires at least 2^100 candidates (the number of its subsets) — and the repeated database scans are costly for long patterns.

FP-tree Construction
Transaction database:
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}
After reading TID=1 the tree contains the path A:1 - B:1. After reading TID=2 a second branch B:1 - C:1 - D:1 is added from the root. The remaining transactions are inserted one by one, incrementing the counts along shared prefixes. The final FP-tree for this database has A:7 and B:3 as children of the root (with B:5 and C:3 below A, and further C, D and E nodes with smaller counts), plus a header table with one entry per item (A, B, C, D, E); the node-link pointers from the header table chain together all nodes carrying the same item and are used to assist frequent itemset generation.
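A minimal FP-tree construction sketch follows (an assumed simplified implementation: no mining step, and items within a transaction are kept in the slide's fixed A..E order so that the root counts match the figure; the original FP-growth algorithm orders items by decreasing support count instead).

```python
# Minimal FP-tree construction sketch: insert each transaction into a prefix
# tree, sharing common prefixes and incrementing counts along the way.
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fptree(transactions, minsup_count):
    # 1. Count item supports and keep only the frequent items
    freq = defaultdict(int)
    for t in transactions:
        for item in t:
            freq[item] += 1
    freq = {i: c for i, c in freq.items() if c >= minsup_count}
    # Fixed lexicographic order (matches the slide's figure); FP-growth
    # normally sorts by decreasing support instead.
    order = lambda t: sorted(i for i in t if i in freq)
    # 2. Insert each transaction, sharing prefixes
    root = Node(None, None)
    header = defaultdict(list)          # item -> node links (header table)
    for t in transactions:
        node = root
        for item in order(t):
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header

transactions = [
    {"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
    {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"},
]
root, header = build_fptree(transactions, minsup_count=1)
print({c.item: c.count for c in root.children.values()})   # {'A': 7, 'B': 3}
```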

FP-growth
To mine the itemsets ending in D, collect the conditional pattern base for D, i.e. the prefix paths leading to D nodes together with their counts:
P = {(A:1, B:1, C:1), (A:1, B:1), (A:1, C:1), (A:1), (B:1, C:1)}
and recursively apply FP-growth on P. Frequent itemsets found (with support > 1): AD, BD, CD, ACD, BCD.

All frequent itemsets of this example, grouped by their last item:
E: {e}, {d,e}, {a,d,e}, {c,e}, {a,e}
D: {d}, {c,d}, {b,c,d}, {a,c,d}, {b,d}, {a,b,d}, {a,d}
C: {c}, {b,c}, {a,b,c}, {a,c}
B: {b}, {a,b}
A: {a}

Tree Projection
An alternative view is the set enumeration tree, where items are listed in lexicographic order. For a node P the possible extensions E(P) are the items that come after P's last item, e.g. E(A) = {B,C,D,E} and E(ABC) = {D,E}. Each node P stores:
- the itemset for node P,
- the list of possible lexicographic extensions E(P),
- a pointer to the projected database of its ancestor node, and
- a bitvector recording which transactions in the projected database contain the itemset.

Projected Database
For each transaction T that contains A, the projected transaction at node A is the intersection of T with E(A); transactions that do not contain A project to the empty set. For the original database above, the projected database for node A is:
TID  Items
1    {B}
2    {}
3    {C,D,E}
4    {D,E}
5    {B,C}
6    {B,C,D}
7    {}
8    {B,C}
9    {B,D}
10   {}
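A short sketch of the projection step (an assumption: it reuses the 10-transaction database from the FP-tree example): each transaction containing A is intersected with E(A) = {B, C, D, E}.

```python
# Sketch: projecting the transaction database at node A.
transactions = [
    {"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
    {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"},
]
E_A = {"B", "C", "D", "E"}   # lexicographic extensions of node A

projected = [(t & E_A) if "A" in t else set() for t in transactions]
for tid, t in enumerate(projected, start=1):
    print(tid, sorted(t))    # reproduces the projected database listed above
```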

ECLAT
For each item, store a list of transaction ids (a tid-list); this is the vertical data layout.
Horizontal layout: 1 {A,B,E}, 2 {B,C,D}, 3 {C,E}, 4 {A,C,D}, 5 {A,B,C,D}, 6 {A,E}, 7 {A,B}, 8 {A,B,C}, 9 {A,C,D}, 10 {B}.
Vertical layout (tid-lists): A: 1,4,5,6,7,8,9; B: 1,2,5,7,8,10; C: 2,3,4,5,8,9; D: 2,4,5,9; E: 1,3,6.
The support of any k-itemset is determined by intersecting the tid-lists of two of its (k-1)-subsets, e.g. tid-list(AB) = tid-list(A) intersect tid-list(B) = {1,5,7,8}.
There are three traversal approaches: top-down, bottom-up and hybrid. Advantage: very fast support counting. Disadvantage: intermediate tid-lists may become too large for memory.

Rule Generation
Given a frequent itemset L, find all non-empty subsets f of L such that f -> L - f satisfies the minimum confidence requirement. If {A,B,C,D} is a frequent itemset, the candidate rules are:
ABC->D, ABD->C, ACD->B, BCD->A, A->BCD, B->ACD, C->ABD, D->ABC, AB->CD, AC->BD, AD->BC, BC->AD, BD->AC, CD->AB
If |L| = k, then there are 2^k - 2 candidate association rules (ignoring L -> {} and {} -> L).

Rule Generation: pruning by confidence
In general, confidence does not have an anti-monotone property: c(ABC->D) can be larger or smaller than c(AB->D). But the confidence of rules generated from the same itemset does have an anti-monotone property with respect to the number of items on the right-hand side of the rule, e.g. for L = {A,B,C,D}:
c(ABC->D) >= c(AB->CD) >= c(A->BCD)

Rule Generation for the Apriori Algorithm
Working in the lattice of rules, a candidate rule is generated by merging two rules that share the same prefix in the rule consequent: join(CD->AB, BD->AC) produces the candidate rule D->ABC. The candidate D->ABC is pruned if its subset rule AD->BC does not have high confidence; a low-confidence rule and everything below it in the lattice can be pruned.
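A minimal sketch of rule generation from one frequent itemset (an assumption: brute force over all 2^k - 2 partitions rather than the lattice-based pruning described above), using the market-basket data and the frequent itemset {Milk, Diaper, Beer}; the confidences match the values listed earlier.

```python
# Sketch: generate candidate rules f -> (L - f) from a frequent itemset L and
# keep those meeting a minimum confidence (2^k - 2 candidates for |L| = k).
from itertools import combinations

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    return sum(1 for t in transactions if itemset <= t)

L = frozenset({"Milk", "Diaper", "Beer"})
minconf = 0.6
for r in range(1, len(L)):                          # non-empty proper subsets f of L
    for f in map(frozenset, combinations(sorted(L), r)):
        conf = sigma(L) / sigma(f)
        status = "kept" if conf >= minconf else "pruned"
        print(f"{set(f)} -> {set(L - f)}  c={conf:.2f}  {status}")
```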

Effect of Support Distribution
Many real data sets have a skewed support distribution (e.g. the support distribution of a retail data set). How should the minsup threshold be set?
- If minsup is set too high, we could miss itemsets involving interesting rare items (e.g. expensive products).
- If minsup is set too low, mining becomes computationally expensive and the number of itemsets is very large.
Using a single minimum support threshold may therefore not be effective.

Multiple Minimum Support
How to apply multiple minimum supports? Use a minimum support MS(i) for each item i, e.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%, and for an itemset take the minimum over its items:
MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%
Challenge: support is no longer anti-monotone. Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%; then {Milk, Coke} is infrequent (its threshold is 3%) but {Milk, Coke, Broccoli} is frequent (its threshold is 0.1%).
Example item table:
Item  MS(I)   Sup(I)
A     0.10%   0.25%
B     0.20%   0.26%
C     0.30%   0.29%
D     0.50%   0.05%
E     3%      4.20%

Multiple Minimum Support (continued)
[The slide shows the item table above next to the itemset lattice over {A, ..., E}, illustrating which parts of the lattice survive under per-item thresholds.]

Multiple Minimum Support (Liu 1999)
Order the items according to their minimum support, in ascending order; for MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5% the ordering is Broccoli, Salmon, Coke, Milk.
Apriori needs to be modified such that:
- L1: the set of frequent items;
- F: the set of items whose support is >= MS(1), where MS(1) = min_i MS(i);
- C2: the candidate itemsets of size 2 are generated from F instead of L1.

.3.6 What is interesting and not random? { diaper, bread } -> { beer } Support count (3), confidence (.75) lhs+rhs: 3 / lhs: 4 What would make it surprising (statistically significant) and useful? Evaluation of Associated Patterns Objective measure of interestingness (statistics) 2x2 contingency table approach Diaper, Milk support 4 / 34 Visualization Beer Not Beer Template based approaches Report those matching user templates Subjective interestingness Domain information, concept hierarchy, profit margins, etc Diaper, Milk 3 4 Not Diaper, Milk 2 3 3 2 34 2x2 contingency table approach 2x2 contingency table approach Diaper, Milk -> Beer support 3/34, confidence 3/4 Diaper, Milk -> Beer support 3/34, confidence 3/4 Beer Not Beer Beer Not Beer Diaper, Milk 3 4 Diaper, Milk 3 4 Not Diaper, Milk 2 3 Not Diaper, Milk 2 3 3 2 34 3 2 34 5

.3.6 Computing Interestingness Measure Drawback of Confidence Given a rule X Y, information needed to compute rule interestingness can be obtained from a contingency table Contingency table for X Y f : support of X and Y Y Y f: support of X and Y X f f f+ X f f fo+ f: support of X and Y f+ f+ T f: support of X and Y Used to define various measures support, confidence, lift, Gini, J-measure, etc. Coffee Coffee Tea 5 5 2 Tea 75 5 8 9 Association Rule: Tea Coffee Confidence= P(Coffee Tea) =.75 but P(Coffee) =.9 Although confidence is high, rule is misleading P(Coffee Tea) =.9375 Drawback of Confidence Statistical Independence Coffee Coffee Tea 5 5 2 Tea 75 5 8 9 Association Rule: Tea Coffee Confidence= P(Coffee Tea) =.75 but P(Coffee) =.9 Although confidence is high, rule is misleading P(Coffee Tea) =.9375 Population of students 6 students know how to swim (S) 7 students know how to bike (B) 42 students know how to swim and bike (S,B) P(S and B) = 42/ =.42 P(S) P(B) =.6.7 =.42 P(S and B) = P(S) P(B) => Statistical independence P(S and B) > P(S) P(B) => Positively correlated P(S and B) < P(S) P(B) => Negatively correlated Statistical-based Measures Measures that take into account statistical dependence Lift = P(Y X) P(Y ) Interest = P(X,Y ) P(X)P(Y ) PS = P(X,Y ) P(X)P(Y ) ϕ coefficient = Lift == Interest (old name) P(X,Y ) P(X)P(Y ) P(X)[ P(X)]P(Y )[ P(Y )] P(X Y ) = P(X,Y ) P(Y ) P(Y X) = P(X,Y ) P(X) P(X Y ) = P(Y X)P(X) P(Y ) 6

Example: lift for {Diaper, Milk} -> {Beer}
From the contingency table above, support = 3/34 and confidence = 3/4, so
Lift = (3/4) / (13/34) = 1.96.

Example
Six observations of a pair (X, Y) with values in {1, 2}, e.g. (1,1), (1,1), (2,1), (2,2), (2,2), (2,2):
P(X=1) = 2/6, P(X=2) = 4/6, P(Y=1) = 3/6, P(Y=2) = 3/6, P(Y=1 | X=2) = 1/4, P(Y=2 | X=2) = 3/4.
Lift = P(Y=2 | X=2) / P(Y=2) = 6/4 = 1.5.

Example: Lift (Interest)
For the Tea -> Coffee table: confidence = P(Coffee | Tea) = 0.75 but P(Coffee) = 0.9, so
Lift = 0.75 / 0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated).

Drawback of Lift and Interest
            Y    not Y  Total    |            Y    not Y  Total
   X       10      0      10     |   X       90      0      90
   not X    0     90      90     |   not X    0     10      10
   Total   10     90     100     |   Total   90     10     100

Lift = 0.1 / (0.1 x 0.1) = 10        Lift = 0.9 / (0.9 x 0.9) = 1.11
The rare, perfectly associated pattern gets a much higher lift than the common one. Under statistical independence P(X,Y) = P(X) P(Y), so Interest = 1.

Simpson's Paradox (Pearl 2000)
Overall:
                 E (recovered)   E-bar (not recovered)   Total   Recovery rate
Drug (C)              20                  20               40        50%
No drug (not C)       16                  24               40        40%
Total                 36                  44               80

Males:
Drug (C)              18                  12               30        60%
No drug (not C)        7                   3               10        70%
Total                 25                  15               40

Females:
Drug (C)               2                   8               10        20%
No drug (not C)        9                  21               30        30%
Total                 11                  29               40

Overall the drug appears helpful (50% vs 40%), yet within each subgroup it appears harmful (60% vs 70% for males, 20% vs 30% for females).
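A tiny sketch (not from the slides) that recomputes the recovery rates from the counts above and makes the reversal visible.

```python
# Sketch: Simpson's paradox — recovery rates from (recovered, not recovered) counts.
groups = {
    "overall": {"drug": (20, 20), "no drug": (16, 24)},
    "males":   {"drug": (18, 12), "no drug": (7, 3)},
    "females": {"drug": (2, 8),   "no drug": (9, 21)},
}
for group, arms in groups.items():
    rates = {arm: rec / (rec + not_rec) for arm, (rec, not_rec) in arms.items()}
    print(group, {arm: f"{r:.0%}" for arm, r in rates.items()})
# overall: drug 50% > no drug 40%, but in both subgroups the drug arm is lower.
```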

Confounding Factors
There may be other, hidden factors, and stratified data may show different results: statistics matters.

2x2 tables and the confusion matrix
For a rule A -> B, the 2x2 table of predicted vs actual outcomes contains the true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN):
                  B true    B not true
Predicted B         TP          FP
Not predicted B     FN          TN
See http://en.wikipedia.org/wiki/contingency_table and http://en.wikipedia.org/wiki/confusion_matrix

Computing Interestingness Measures (recap)
Given a rule X -> Y, the information needed to compute its interestingness is again the contingency table with counts f11, f10, f01, f00 and their marginals. There are lots of measures proposed in the literature; some measures are good for certain applications, but not for others. What criteria should we use to determine whether a measure is good or bad? And what about Apriori-style support-based pruning — how does it affect these measures?

Properties of a Good Measure
Piatetsky-Shapiro: three properties a good measure M must satisfy:
1. M(A,B) = 0 if A and B are statistically independent.
2. M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged.
3. M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged.

Comparing Different Measures
[The slide shows ten example contingency tables (E1-E10) with their f11, f10, f01, f00 counts, and the rankings of these tables under various interestingness measures; different measures rank the same tables quite differently.]

Property under Variable Permutation
Does M(A,B) = M(B,A)?
- Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
- Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.

Property under Row/Column Scaling
Grade-gender example (Mosteller, 1968):
          Male  Female  Total              Male  Female  Total
High        2      3      5      High        4      30     34
Low         1      4      5      Low         2      40     42
Total       3      7     10      Total       6      70     76
The second table is obtained from the first by scaling the male column by 2 and the female column by 10. Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.

Property under Inversion Operation
Inverting a pair of binary vectors (swapping 0s and 1s in both, i.e. exchanging f11 with f00 and f10 with f01) changes some measures but not others. The phi-coefficient, which is analogous to the correlation coefficient for continuous variables, is invariant under inversion. Example:

        Y   not Y  Total              Y   not Y  Total
X      60     10     70      X       20     10     30
not X  10     20     30      not X   10     60     70
Total  70     30    100      Total   30     70    100

phi = (0.6 - 0.7 x 0.7) / sqrt(0.7 x 0.3 x 0.7 x 0.3) = 0.5238
phi = (0.2 - 0.3 x 0.3) / sqrt(0.7 x 0.3 x 0.7 x 0.3) = 0.5238
The phi coefficient is the same for both tables.
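A quick check (not from the slides) of the phi-coefficient values for the two tables above.

```python
# Sketch: phi-coefficient from a 2x2 table (f11, f10, f01, f00).
from math import sqrt

def phi(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    return (p_xy - p_x * p_y) / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))

print(round(phi(60, 10, 10, 20), 4), round(phi(20, 10, 10, 60), 4))  # 0.5238 0.5238
```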

Property under Null Addition
Adding transactions that contain neither A nor B (increasing the f00 cell by some k) leaves some measures unchanged.
- Invariant measures: support, cosine, Jaccard, etc.
- Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.

Different Measures have Different Properties
[The slide shows a table of about twenty measures (correlation, lambda, odds ratio, Yule's Q and Y, Cohen's kappa, mutual information, J-measure, Gini index, support, confidence, Laplace, conviction, interest, IS/cosine, Piatetsky-Shapiro's, certainty factor, added value, collective strength, Jaccard, Klosgen's) with their ranges and which of the properties above each one satisfies.]

Support-based Pruning
Most association rule mining algorithms use the support measure to prune rules and itemsets. To study the effect of support pruning on the correlation of itemsets: generate random contingency tables, compute the support and pairwise correlation for each table, then apply support-based pruning and examine the tables that are removed.

Effect of Support-based Pruning
[Histograms of the correlation values for all item pairs and for the pairs with support below 0.01, 0.03 and 0.05.] Support-based pruning eliminates mostly negatively correlated itemsets.

The next question is how support-based pruning affects other measures. Steps: generate contingency tables, rank each table according to the different measures, and compute the pairwise correlation between the measures.

Effect of Support-based Pruning (continued)
[The slides show correlation matrices between the rankings produced by about twenty measures (conviction, odds ratio, collective strength, correlation, interest, PS, certainty factor, Yule's Y and Q, kappa, Klosgen, confidence, Laplace, IS, support, Jaccard, lambda, Gini, J-measure, mutual information), together with scatter plots of correlation vs the Jaccard measure; red cells indicate measure pairs whose rank correlation exceeds 0.85.]
- Without support pruning (all pairs): 40.14% of the measure pairs have correlation > 0.85.
- 0.5% <= support <= 50%: 61.45% of the pairs have correlation > 0.85.
- 0.5% <= support <= 30%: 76.42% of the pairs have correlation > 0.85.

Subjective Interestingness Measures
- Objective measure: rank patterns based on statistics computed from data, e.g. the 21 measures of association (support, confidence, Laplace, Gini, mutual information, Jaccard, etc.).
- Subjective measure: rank patterns according to the user's interpretation. A pattern is subjectively interesting if it contradicts the expectation of a user, or if it is actionable (Silberschatz & Tuzhilin).

Interestingness via Unexpectedness
We need to model the expectation of users (domain knowledge) and combine it with the evidence from the data (i.e. the extracted patterns): patterns expected to be frequent but found infrequent, or expected to be infrequent but found frequent, are the unexpected ones; patterns whose observed frequency matches the expectation are merely expected.

Interestingness via Unexpectedness: Web Data (Cooley et al 2001)
Domain knowledge comes in the form of the site structure. Given an itemset F = {X1, X2, ..., Xk}, where the Xi are Web pages:
- L: number of links connecting the pages;
- lfactor = L / (k x (k-1));
- cfactor = 1 if the graph is connected, 0 if it is disconnected;
- Structure evidence = cfactor x lfactor;
- Usage evidence = P(X1 and X2 and ... and Xk) / P(X1 or X2 or ... or Xk).
Dempster-Shafer theory is used to combine the domain knowledge and the evidence from the data.

Additional links
http://en.wikipedia.org/wiki/data_mining
http://www.dataminingarticles.com/association-rules-algorithms.html
http://www.kdnuggets.com/