
Data Mining MTAT.03.183 (6 EAP)
Frequent itemsets and association rules
Jaak Vilo, 2017 Spring

Descriptive data analysis
- Aims to summarise the main qualitative traits of data.
- Used mainly for discovering underlying processes and relations in data.
- Facts are presented together with understandable explanations.
- Used for example in medical, physical and social sciences.

Example: items in baskets
- Each basket (belonging to a customer such as Anne, Britt, Margus, Gert, Toomas or Laura) is a set of items drawn from {A, B, C, D, E, F, G, H}.
- Example item ids: the items can be sorted or renumbered in any way, e.g. mapped to integer ids 1..8; the renumbered transactions are used in the examples below.
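As a small illustration of this kind of descriptive counting, here is a sketch that counts item frequencies over baskets and renumbers items by decreasing frequency. The basket contents and names are made-up placeholders (the slide's own data does not transcribe cleanly), so treat it as an assumed example, not the lecture's data.

    from collections import Counter

    baskets = {
        "Anne":  {"B", "C", "A", "F", "H"},
        "Britt": {"F", "E", "C", "H", "D"},
        "Gert":  {"B", "A", "C", "H", "F", "E"},
    }

    # frequency of every item across all baskets
    item_counts = Counter(item for basket in baskets.values() for item in basket)
    print(item_counts.most_common())

    # renumber items in any convenient order, e.g. by decreasing frequency
    item_id = {item: i + 1 for i, (item, _) in enumerate(item_counts.most_common())}
    numeric_baskets = {name: sorted(item_id[i] for i in b) for name, b in baskets.items()}
    print(numeric_baskets)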

Example
- Transactions over the renumbered item ids: 2 3 6 8 / 3 5 6 8 / 2 4 5 / 3 6 8 / 5 6 / 2 4 8 / 2 3 4 5 6.
- Item frequencies: 1: 3, 2: 4, 3: 4, 4: 3, 5: 4, 6: 5, 8: 4.
- The same data can be shown as a table of transaction ids (T-ID 1..7) and their item sets.

Questions:
- Which items occur frequently together?
- Are there relationships between items, e.g. A & B -> C?

Data Mining. Association Analysis: Basic Concepts and Algorithms. Lecture Notes for Chapter 6, Introduction to Data Mining, by Tan, Steinbach, Kumar. Chapter 6: http://wwwusers.cs.umn.edu/~kumar/dmbook/ch6.pdf

Association Rule Mining
- Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-basket transactions:
  TID  Items
  1    Bread, Milk
  2    Bread, Diaper, Beer, Eggs
  3    Milk, Diaper, Beer, Coke
  4    Bread, Milk, Diaper, Beer
  5    Bread, Milk, Diaper, Coke

Example association rules: {Diaper} -> {Beer}, {Milk, Bread} -> {Eggs, Coke}, {Beer, Bread} -> {Milk}.
Implication means co-occurrence, not causality!

Definition: Frequent Itemset
- Itemset: a collection of one or more items, e.g. {Milk, Bread, Diaper}.
- k-itemset: an itemset that contains k items.
- Support count (sigma): frequency of occurrence of an itemset, e.g. sigma({Milk, Bread, Diaper}) = 2.
- Support: fraction of transactions that contain an itemset, e.g. s({Milk, Bread, Diaper}) = 2/5.
- Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold.

Definition: Association Rule
- An association rule is an implication expression of the form X -> Y, where X and Y are itemsets, e.g. {Milk, Diaper} -> {Beer}.
- Rule evaluation metrics:
  - Support (s): fraction of transactions that contain both X and Y.
  - Confidence (c): how often items in Y appear in transactions that contain X.
- Example, {Milk, Diaper} -> {Beer}:
  s = sigma(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = sigma(Milk, Diaper, Beer) / sigma(Milk, Diaper) = 2/3 = 0.67

Association Rule Mining Task
- Given a set of transactions T, the goal of association rule mining is to find all rules having support >= minsup and confidence >= minconf.
- Brute-force approach: list all possible association rules, compute the support and confidence for each rule, and prune rules that fail the minsup and minconf thresholds. This is computationally prohibitive!
- Two goals:
  - Computational efficiency: efficient counting.
  - Ranking the results by relevance: avoiding spurious results (that are there by pure chance) and reporting the most interesting/surprising/important facts.

Mining Association Rules
- Example rules from the transactions above:
  {Milk, Diaper} -> {Beer}   (s=0.4, c=0.67)
  {Milk, Beer} -> {Diaper}   (s=0.4, c=1.0)
  {Diaper, Beer} -> {Milk}   (s=0.4, c=0.67)
  {Beer} -> {Milk, Diaper}   (s=0.4, c=0.67)
  {Diaper} -> {Milk, Beer}   (s=0.4, c=0.5)
  {Milk} -> {Diaper, Beer}   (s=0.4, c=0.5)
- Observations: all the above rules are binary partitions of the same itemset {Milk, Diaper, Beer}. Rules originating from the same itemset have identical support but can have different confidence. Thus we may decouple the support and confidence requirements.
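A minimal sketch of these definitions, computing support and confidence of {Milk, Diaper} -> {Beer} on the five transactions above (the function names are illustrative, not from the lecture):

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diaper", "Beer", "Eggs"},
        {"Milk", "Diaper", "Beer", "Coke"},
        {"Bread", "Milk", "Diaper", "Beer"},
        {"Bread", "Milk", "Diaper", "Coke"},
    ]

    def support_count(itemset, transactions):
        """sigma(itemset): number of transactions containing the itemset."""
        return sum(itemset <= t for t in transactions)

    X, Y = {"Milk", "Diaper"}, {"Beer"}
    n = len(transactions)
    s = support_count(X | Y, transactions) / n                                # 2/5 = 0.4
    c = support_count(X | Y, transactions) / support_count(X, transactions)   # 2/3 = 0.67
    print(f"support = {s:.2f}, confidence = {c:.2f}")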

Mining Association Rules: Two-step Approach
1. Frequent itemset generation: generate all itemsets whose support >= minsup.
2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset.
Frequent itemset generation is still computationally expensive.

Example: how many itemsets?
- For the earlier example data there are 8 items; how many possible itemsets are there?

Frequent Itemset Generation
- The itemsets over items A..E form a lattice: A, B, ..., AB, AC, ..., ABC, ..., up to ABCDE.
- Given d items, there are 2^d possible candidate itemsets.
- Brute-force approach: each itemset in the lattice is a candidate frequent itemset; count the support of each candidate by scanning the database and matching each transaction against every candidate.
- Complexity ~ O(N M w), where N is the number of transactions, M the number of candidates and w the average transaction width => expensive, since M = 2^d!

Computational Complexity
Given d unique items:
- Total number of itemsets = 2^d.
- Total number of possible association rules:
  R = \sum_{k=1}^{d-1} \binom{d}{k} \sum_{j=1}^{d-k} \binom{d-k}{j} = 3^d - 2^{d+1} + 1
- If d = 6, R = 602 rules.

Example: counting frequent substrings
- How many words of length k are there over an alphabet Sigma?
- How to count the frequency of every possible substring in protein sequences?
- How to count all words in a book?
- Substrings vs regular expressions.
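A quick way to convince yourself of the closed form above: count the rules itemset by itemset (each k-itemset yields 2^k - 2 rules) and compare with 3^d - 2^(d+1) + 1. This is just a sanity-check sketch, not part of the lecture material.

    from itertools import combinations

    def count_rules(d):
        """Number of rules X -> Y with X, Y non-empty, disjoint, drawn from d items."""
        count = 0
        for k in range(1, d + 1):                    # size of X u Y
            for itemset in combinations(range(d), k):
                count += 2 ** k - 2                  # non-trivial binary partitions
        return count

    for d in (3, 4, 5, 6):
        assert count_rules(d) == 3 ** d - 2 ** (d + 1) + 1
    print(count_rules(6))   # 602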

Frequent Itemset Generation Strategies
- Reduce the number of candidates (M): complete search has M = 2^d; use pruning techniques to reduce M.
- Reduce the number of transactions (N): reduce the size of N as the size of the itemset increases; used by DHP and vertical-based mining algorithms.
- Reduce the number of comparisons (N M): use efficient data structures to store the candidates or transactions; no need to match every candidate against every transaction.

Reducing the Number of Candidates
- Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
- The Apriori principle holds due to the following property of the support measure:
  for all X, Y: (X is a subset of Y) => s(X) >= s(Y)
- The support of an itemset never exceeds the support of its subsets. This is known as the anti-monotone property of support.

Illustrating the Apriori Principle
- In the lattice over items A..E, if AB is found to be infrequent, all of its supersets (ABC, ABD, ..., ABCDE) are pruned.
- With the market-basket data and minimum support = 3: the item counts are Bread 4, Coke 2, Milk 4, Beer 3, Diaper 4, Eggs 1, so Coke and Eggs are infrequent 1-itemsets.
- Candidate 2-itemsets and their counts (no need to generate candidates involving Coke or Eggs): {Bread,Milk} 3, {Bread,Beer} 2, {Bread,Diaper} 3, {Milk,Beer} 2, {Milk,Diaper} 3, {Beer,Diaper} 3.
- Candidate 3-itemsets: {Bread,Milk,Diaper} 3.
- If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 41 candidates; with support-based pruning: 6 + 6 + 1 = 13.

Apriori Algorithm
Method:
- Let k = 1. Generate frequent itemsets of length 1.
- Repeat until no new frequent itemsets are identified:
  - Generate length-(k+1) candidate itemsets from length-k frequent itemsets.
  - Prune candidate itemsets containing subsets of length k that are infrequent.
  - Count the support of each candidate by scanning the DB.
  - Eliminate candidates that are infrequent, leaving only those that are frequent.
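A compact, illustrative Apriori sketch following the method above (generate, prune by the Apriori principle, count, eliminate); it is written for clarity rather than speed and is not the lecture's reference implementation. On the market-basket data with an absolute minsup of 3 it returns the 13 frequent itemsets listed above.

    from itertools import combinations

    def apriori(transactions, minsup_count):
        transactions = [frozenset(t) for t in transactions]
        items = {i for t in transactions for i in t}
        current = {frozenset([i]) for i in items}          # candidate 1-itemsets
        frequent = {}
        k = 1
        while current:
            # one DB scan: count supports of the current candidates
            counts = {c: sum(c <= t for t in transactions) for c in current}
            current = {c for c, n in counts.items() if n >= minsup_count}
            frequent.update({c: counts[c] for c in current})
            # generate (k+1)-candidates from frequent k-itemsets, prune by subsets
            k += 1
            candidates = {a | b for a in current for b in current if len(a | b) == k}
            current = {c for c in candidates
                       if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        return frequent

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diaper", "Beer", "Eggs"},
        {"Milk", "Diaper", "Beer", "Coke"},
        {"Bread", "Milk", "Diaper", "Beer"},
        {"Bread", "Milk", "Diaper", "Coke"},
    ]
    for itemset, count in sorted(apriori(transactions, 3).items(),
                                 key=lambda x: (len(x[0]), sorted(x[0]))):
        print(set(itemset), count)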

Reducing the Number of Comparisons
- Candidate counting: scan the database of transactions to determine the support of each candidate itemset.
- To reduce the number of comparisons, store the candidates in a hash structure: instead of matching each transaction against every candidate, match it against the candidates contained in the hashed buckets.

Generate Hash Tree
- Suppose you have 15 candidate itemsets of length 3: {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}.
- You need a hash function and a max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node).
- Example hash function on item values: 1, 4, 7 go to the first branch; 2, 5, 8 to the second; 3, 6, 9 to the third. The candidate hash tree is built by hashing on the 1st, 2nd and 3rd item of each candidate at levels 1, 2 and 3.

Subset Operation
- Given a transaction t, what are the possible subsets of size 3?
- For t = {1, 2, 3, 5, 6} the subsets are enumerated level by level (fix the first item, then the second, then the third): 1 2 3, 1 2 5, 1 2 6, 1 3 5, 1 3 6, 1 5 6, 2 3 5, 2 3 6, 2 5 6, 3 5 6.
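The sketch below flattens the hash-tree idea into a dictionary of buckets keyed by the hash path of a candidate's three items; a transaction's 3-subsets are then compared only against candidates in the matching bucket. It is a simplified stand-in for the real hash tree (no leaf-size splitting, no interior nodes), intended only to show why hashing reduces comparisons.

    from collections import defaultdict
    from itertools import combinations

    candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6),
                  (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]

    def h(item):
        return (item - 1) % 3        # 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2

    buckets = defaultdict(list)      # bucket key = hash path of the three items
    for cand in candidates:
        buckets[tuple(h(i) for i in cand)].append(cand)

    transaction = (1, 2, 3, 5, 6)
    support = defaultdict(int)
    comparisons = 0
    for subset in combinations(transaction, 3):          # the C(5,3) = 10 subsets
        for cand in buckets.get(tuple(h(i) for i in subset), []):
            comparisons += 1
            if cand == subset:
                support[cand] += 1
    print(dict(support), "comparisons:", comparisons)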

Subset Operation Using Hash Tree
- To count supports, the transaction {1, 2, 3, 5, 6} is split recursively along the hash function: 1 + {2 3 5 6}, 2 + {3 5 6}, 3 + {5 6}, then 1 2 + {3 5 6}, 1 3 + {5 6}, 1 5 + {6}, and so on, and each partial itemset is hashed down the candidate hash tree.
- Only the leaves reached this way are inspected: in the example, the transaction is matched against 11 out of the 15 candidates.

Factors Affecting Complexity
- Choice of minimum support threshold: lowering the support threshold results in more frequent itemsets; this may increase the number of candidates and the maximum length of frequent itemsets.
- Dimensionality (number of items) of the data set: more space is needed to store the support count of each item; if the number of frequent items also increases, both computation and I/O costs may increase.
- Size of the database: since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
- Average transaction width: transaction width increases with denser data sets; this may increase the maximum length of frequent itemsets and the number of hash-tree traversals (the number of subsets in a transaction increases with its width).

Compact Representation of Frequent Itemsets
- Some itemsets are redundant because they have identical support as their supersets.
- Example: 15 transactions over items A1..A10, B1..B10, C1..C10, where transactions 1-5 contain all of A1..A10, transactions 6-10 all of B1..B10, and transactions 11-15 all of C1..C10. The number of frequent itemsets is
  3 \times \sum_{k=1}^{10} \binom{10}{k}
- We therefore need a compact representation.

Maximal Frequent Itemset
- An itemset is maximal frequent if none of its immediate supersets is frequent.
- In the lattice, the maximal itemsets lie on the border between the frequent and the infrequent itemsets.

Closed Itemset
- An itemset is closed if none of its immediate supersets has the same support as the itemset.
- Example transactions: 1 {A,B}, 2 {B,C,D}, 3 {A,B,C,D}, 4 {A,B,D}, 5 {A,B,C,D}.
  Supports: {A} 4, {B} 5, {C} 3, {D} 4, {A,B} 4, {A,C} 2, {A,D} 3, {B,C} 3, {B,D} 4, {C,D} 3, {A,B,C} 2, {A,B,D} 3, {A,C,D} 2, {B,C,D} 3, {A,B,C,D} 2.

Maximal vs Closed Itemsets
- Example transactions: 1 {A,B,C}, 2 {A,B,C,D}, 3 {B,C,E}, 4 {A,C,D,E}, 5 {D,E}, with minimum support = 2.
- Annotating each itemset in the lattice with the transaction ids that support it shows which itemsets are closed but not maximal and which are both closed and maximal; some itemsets are not supported by any transaction.
- In this example, # closed = 9 and # maximal = 4.
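A brute-force sketch that derives the closed and maximal frequent itemsets for the five-transaction example above (minimum support 2). Enumerating all itemsets is fine here because there are only five items; the helper names are ad hoc.

    from itertools import combinations

    transactions = [{"A","B","C"}, {"A","B","C","D"}, {"B","C","E"},
                    {"A","C","D","E"}, {"D","E"}]
    items = sorted({i for t in transactions for i in t})
    minsup = 2

    def sup(itemset):
        return sum(set(itemset) <= t for t in transactions)

    frequent = {frozenset(c): sup(c)
                for k in range(1, len(items) + 1)
                for c in combinations(items, k)
                if sup(c) >= minsup}

    # closed: no (frequent) superset has the same support; maximal: no frequent superset
    closed  = [s for s in frequent
               if not any(s < t and frequent[t] == frequent[s] for t in frequent)]
    maximal = [s for s in frequent if not any(s < t for t in frequent)]
    print("frequent:", len(frequent), "closed:", len(closed), "maximal:", len(maximal))
    # the slide reports 9 closed and 4 maximal frequent itemsets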

Maximal vs Closed Itemsets
- Relationship: maximal frequent itemsets are a subset of the closed frequent itemsets, which are a subset of all frequent itemsets.

Alternative Methods for Frequent Itemset Generation
- Traversal of the itemset lattice:
  - General-to-specific vs specific-to-general vs bidirectional search of the frequent itemset border.
  - Equivalence classes, e.g. prefix trees vs suffix trees over the items {a1, a2, ..., an}.
  - Breadth-first vs depth-first traversal.
- Representation of the database: horizontal vs vertical data layout.
  - Horizontal layout: each transaction lists its items, e.g. 1 {A,B,E}, 2 {B,C,D}, 3 {C,E}, 4 {A,C,D}, 5 {A,B,C,D}, 6 {A,E}, 7 {A,B}, 8 {A,B,C}, 9 {A,C,D}, 10 {B}.
  - Vertical layout: each item lists the transactions that contain it, e.g. A: 1,4,5,6,7,8,9; B: 1,2,5,7,8,10; and so on for C, D, E.
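Converting between the two layouts is a one-pass operation; a short sketch over the horizontal database above:

    from collections import defaultdict

    horizontal = {1: {"A","B","E"}, 2: {"B","C","D"}, 3: {"C","E"}, 4: {"A","C","D"},
                  5: {"A","B","C","D"}, 6: {"A","E"}, 7: {"A","B"}, 8: {"A","B","C"},
                  9: {"A","C","D"}, 10: {"B"}}

    vertical = defaultdict(set)          # item -> set of transaction ids (tid-list)
    for tid, items in horizontal.items():
        for item in items:
            vertical[item].add(tid)

    for item in sorted(vertical):
        print(item, sorted(vertical[item]))
    # e.g. A [1, 4, 5, 6, 7, 8, 9], B [1, 2, 5, 7, 8, 10], ...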

FP-growth Algorithm
- Uses a compressed representation of the database, the FP-tree.
- Once an FP-tree has been constructed, a recursive divide-and-conquer approach mines the frequent itemsets.
- FP = frequent pattern growth. Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. Mining frequent patterns without candidate generation. Data Mining and Knowledge Discovery 8:53-87, 2004.
- Slides: http://www.cis.hut.fi/opinnot/t-6.62/28/fptree.pdf

Why FP-growth, not Apriori?
- Apriori works well except when there are lots of frequent patterns, a big set of items, a low minimum support threshold, or long patterns.
- Why: the candidate sets become huge. 10^4 frequent patterns of length 1 yield about 10^7 length-2 candidates, and discovering a pattern of length 100 requires at least 2^100 candidates (the number of its subsets). Repeated database scans are costly (long patterns).

FP-tree Construction
- Transaction database: 1 {A,B}, 2 {B,C,D}, 3 {A,C,D,E}, 4 {A,D,E}, 5 {A,B,C}, 6 {A,B,C,D}, 7 {B,C}, 8 {A,B,C}, 9 {A,B,D}, 10 {B,C,E}.
- After reading TID=1 the tree contains the single path A:1 - B:1; after reading TID=2 a second branch B:1 - C:1 - D:1 is added under the root.
- After all transactions the tree has two branches under the root, rooted at A:7 and B:3 (e.g. A:7 - B:5 - C:3, and B:3 - C:3 under the root), with the remaining C, D and E nodes carrying count 1.
- A header table with one entry per item (A, B, C, D, E) keeps pointers that link all nodes carrying the same item; these pointers are used to assist frequent itemset generation.
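A minimal FP-tree construction sketch for the database above. Items are inserted in a fixed global order (alphabetical here, matching the slide's figure); real implementations usually order items by decreasing support first. The class and function names are illustrative only, and mining the tree (the recursive FP-growth step) is not shown.

    from collections import defaultdict

    class Node:
        def __init__(self, item, parent):
            self.item, self.parent, self.count, self.children = item, parent, 1, {}

    def build_fp_tree(transactions, order=sorted):
        root = Node(None, None)
        header = defaultdict(list)                  # item -> node links
        for t in transactions:
            node = root
            for item in order(t):
                if item in node.children:
                    node.children[item].count += 1
                else:
                    node.children[item] = Node(item, node)
                    header[item].append(node.children[item])
                node = node.children[item]
        return root, header

    transactions = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"},
                    {"A","B","C"}, {"A","B","C","D"}, {"B","C"}, {"A","B","C"},
                    {"A","B","D"}, {"B","C","E"}]
    root, header = build_fp_tree(transactions)
    print({item: child.count for item, child in root.children.items()})   # {'A': 7, 'B': 3}
    print({item: sum(n.count for n in nodes) for item, nodes in header.items()})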

FP-growth
- To mine the itemsets ending in D, collect the conditional pattern base for D by following the node links: P = {(A:1,B:1,C:1), (A:1,B:1), (A:1,C:1), (A:1), (B:1,C:1)}.
- Recursively apply FP-growth on P.
- Frequent itemsets found (with support > 1): AD, BD, CD, ACD, BCD.
- All frequent itemsets of this example, grouped by their last item:
  E: {e}, {d,e}, {a,d,e}, {c,e}, {a,e}
  D: {d}, {c,d}, {b,c,d}, {a,c,d}, {b,d}, {a,b,d}, {a,d}
  C: {c}, {b,c}, {a,b,c}, {a,c}
  B: {b}, {a,b}
  A: {a}

Tree Projection
- Set enumeration tree over the items A..E: the possible extensions of node A are E(A) = {B,C,D,E}; the possible extensions of node ABC are E(ABC) = {D,E}.

Tree Projection
- Items are listed in lexicographic order.
- Each node P stores: the itemset for node P, the list of its possible lexicographic extensions E(P), a pointer to the projected database of its ancestor node, and a bitvector recording which transactions in the projected database contain the itemset.

Projected Database
- Original database: 1 {A,B}, 2 {B,C,D}, 3 {A,C,D,E}, 4 {A,D,E}, 5 {A,B,C}, 6 {A,B,C,D}, 7 {B,C}, 8 {A,B,C}, 9 {A,B,D}, 10 {B,C,E}.
- Projected database for node A: 1 {B}, 2 {}, 3 {C,D,E}, 4 {D,E}, 5 {B,C}, 6 {B,C,D}, 7 {}, 8 {B,C}, 9 {B,D}, 10 {}.
- For each transaction T, the projected transaction at node A is the intersection of T with E(A).

ECLAT
- For each item, store a list of transaction ids (tids), i.e. use the vertical data layout of the database shown earlier (A: 1,4,5,6,7,8,9; B: 1,2,5,7,8,10; ...).
- Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets, e.g. intersecting the tid-lists of A and B gives AB: 1,5,7,8.
- Three traversal approaches: top-down, bottom-up and hybrid.
- Advantage: very fast support counting. Disadvantage: intermediate tid-lists may become too large for memory.

Rule Generation
- Given a frequent itemset L, find all non-empty subsets f of L such that f -> L - f satisfies the minimum confidence requirement.
- If {A,B,C,D} is a frequent itemset, the candidate rules are: ABC -> D, ABD -> C, ACD -> B, BCD -> A, A -> BCD, B -> ACD, C -> ABD, D -> ABC, AB -> CD, AC -> BD, AD -> BC, BC -> AD, BD -> AC, CD -> AB.
- If |L| = k, then there are 2^k - 2 candidate association rules (ignoring L -> {} and {} -> L).

Rule Generation: how to efficiently generate rules from frequent itemsets?
- In general, confidence does not have an anti-monotone property: c(ABC -> D) can be larger or smaller than c(AB -> D).
- But the confidence of rules generated from the same itemset does have an anti-monotone property, e.g. for L = {A,B,C,D}: c(ABC -> D) >= c(AB -> CD) >= c(A -> BCD).
- Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule.
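A sketch of ECLAT-style depth-first mining over tid-lists. The vertical layout is derived from the horizontal database above (so C's tid-list includes transaction 5), minsup is an absolute count, and the function name is illustrative.

    def eclat(prefix, items_with_tids, minsup, out):
        """Extend prefix with each item in turn, intersecting tid-lists depth-first."""
        while items_with_tids:
            item, tids = items_with_tids.pop(0)
            if len(tids) >= minsup:
                out[tuple(prefix + [item])] = len(tids)
                suffix = [(other, tids & other_tids) for other, other_tids in items_with_tids]
                eclat(prefix + [item], suffix, minsup, out)

    vertical = {"A": {1,4,5,6,7,8,9}, "B": {1,2,5,7,8,10}, "C": {2,3,4,5,8,9},
                "D": {2,4,5,9}, "E": {1,3,6}}
    frequent = {}
    eclat([], sorted(vertical.items()), minsup=3, out=frequent)
    print(frequent[("A", "B")])   # 4: the tid-list of AB is {1, 5, 7, 8}
    print(frequent)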

Rule Generation for the Apriori Algorithm
- The rules from one itemset form a lattice, e.g. for {A,B,C,D}: ABCD => {}, then BCD => A, ACD => B, ABD => C, ABC => D, then CD => AB, BD => AC, BC => AD, AD => BC, AC => BD, AB => CD, down to D => ABC, C => ABD, B => ACD, A => BCD.
- If a rule has low confidence, all rules obtained by moving further items from its antecedent to its consequent can be pruned (they cannot have higher confidence).
- A candidate rule is generated by merging two rules that share the same prefix in the rule consequent: join(CD => AB, BD => AC) produces the candidate rule D => ABC.
- Prune rule D => ABC if its subset rule AD => BC does not have high confidence.

Effect of Support Distribution
- Many real data sets have a skewed support distribution (e.g. the support distribution of a retail data set).
- How to set the appropriate minsup threshold?
  - If minsup is set too high, we could miss itemsets involving interesting rare items (e.g. expensive products).
  - If minsup is set too low, mining becomes computationally expensive and the number of itemsets is very large.
- Using a single minimum support threshold may not be effective.

Multiple Minimum Support
- How to apply multiple minimum supports? MS(i) is the minimum support for item i.
- E.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%, and MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%.
- Challenge: support is no longer anti-monotone. Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%: then {Milk, Coke} is infrequent but {Milk, Coke, Broccoli} is frequent.
- Example item table:
    Item   MS(I)    Sup(I)
    A      0.10%    0.25%
    B      0.20%    0.26%
    C      0.30%    0.29%
    D      0.50%    0.05%
    E      3%       4.20%
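A small sketch of rule generation from a single frequent itemset. For clarity it simply enumerates all 2^k - 2 rules and filters by minconf; the Apriori rule-generation step avoids full enumeration by growing consequents only from rules that are already confident, using the anti-monotonicity described above. The support values are those of {Milk, Diaper, Beer} and its subsets in the five-transaction example.

    from itertools import combinations

    support = {
        frozenset(["Milk"]): 4, frozenset(["Diaper"]): 4, frozenset(["Beer"]): 3,
        frozenset(["Milk", "Diaper"]): 3, frozenset(["Milk", "Beer"]): 2,
        frozenset(["Diaper", "Beer"]): 3, frozenset(["Milk", "Diaper", "Beer"]): 2,
    }

    def rules_from_itemset(L, support, minconf):
        L = frozenset(L)
        rules = []
        for r in range(1, len(L)):                          # size of the consequent
            for consequent in combinations(sorted(L), r):
                antecedent = L - frozenset(consequent)
                conf = support[L] / support[antecedent]
                if conf >= minconf:
                    rules.append((set(antecedent), set(consequent), conf))
        return rules

    for lhs, rhs, conf in rules_from_itemset(["Milk", "Diaper", "Beer"], support, 0.6):
        print(lhs, "->", rhs, f"conf={conf:.2f}")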

Multiple Minimum Support (Liu 1999)
- Order the items according to their minimum support, in ascending order: e.g. MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5% gives the ordering Broccoli, Salmon, Coke, Milk.
- Apriori needs to be modified such that:
  - L1: the set of frequent items.
  - F1: the set of items whose support is >= MS(1), where MS(1) = min_i MS(i).
  - C2: the candidate itemsets of size 2 are generated from F1 instead of L1.
- Modifications to Apriori: in traditional Apriori, a candidate (k+1)-itemset is generated by merging two frequent itemsets of size k, and the candidate is pruned if it contains any infrequent subset of size k. The pruning step has to be modified: prune only if the subset contains the first item.
  E.g. candidate = {Broccoli, Coke, Milk} (ordered according to minimum support); {Broccoli, Coke} and {Broccoli, Milk} are frequent but {Coke, Milk} is infrequent. The candidate is not pruned, because {Coke, Milk} does not contain the first item, i.e. Broccoli.

Pattern Evaluation
- Association rule algorithms tend to produce too many rules; many of them are uninteresting or redundant. A rule is redundant if, for example, {A,B,C} -> {D} and {A,B} -> {D} have the same support and confidence.
- Interestingness measures can be used to prune or rank the derived patterns.
- In the original formulation of association rules, support and confidence are the only measures used.

Application of Interestingness Measures: ley lines and patterns in data.

- Were the stores built according to a mystical plan? Ley lines illustrate that even random points produce many alignments (37 random points, 8 four-point alignments): what is interesting, and what is just random?
- Example: {Diaper, Bread} -> {Beer} with support count 3 and confidence 0.75 (lhs+rhs: 3 / lhs: 4). What would make it surprising (statistically significant) and useful?

Evaluation of Associated Patterns
- Objective measures of interestingness (statistics), e.g. the 2x2 contingency table approach.
- Visualization.
- Template-based approaches: report the patterns matching user templates.
- Subjective interestingness: domain information, concept hierarchies, profit margins, etc.

2x2 Contingency Table Approach
- {Diaper, Milk} has support 4/34 in the following data:
                        Beer   Not Beer   Total
    Diaper, Milk          3        1         4
    Not (Diaper, Milk)   10       20        30
    Total                13       21        34

2x2 Contingency Table Approach
- Diaper, Milk -> Beer: support 3/34, confidence 3/4 (from the table above).

Computing Interestingness Measures
- Given a rule X -> Y, the information needed to compute rule interestingness can be obtained from a contingency table for X and Y:
              Y      not Y
    X        f11     f10     f1+
    not X    f01     f00     f0+
             f+1     f+0     |T|
  f11: support of X and Y; f10: support of X and not Y; f01: support of not X and Y; f00: support of not X and not Y.
- The table is used to define various measures: support, confidence, lift, Gini, J-measure, etc.

Drawback of Confidence
- Example:
               Coffee   Not Coffee
    Tea          15          5        20
    Not Tea      75          5        80
                 90         10       100
- Association rule Tea -> Coffee: confidence = P(Coffee | Tea) = 0.75, but P(Coffee) = 0.9 and P(Coffee | not Tea) = 0.9375.
- Although the confidence is high, the rule is misleading.

Statistical Independence
- Population of 1000 students: 600 know how to swim (S), 700 know how to bike (B), 420 know how to swim and bike (S,B).
- P(S and B) = 420/1000 = 0.42 and P(S) x P(B) = 0.6 x 0.7 = 0.42.
- P(S and B) = P(S) x P(B) => statistical independence.
- P(S and B) > P(S) x P(B) => positively correlated.
- P(S and B) < P(S) x P(B) => negatively correlated.
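A sketch of how the contingency table entries (f11, f10, f01, f00) for a rule X -> Y can be tallied directly from a transaction list; the function name is illustrative. Applied to {Milk, Diaper} -> {Beer} on the five market-basket transactions it yields (2, 1, 1, 1).

    def contingency_table(X, Y, transactions):
        X, Y = set(X), set(Y)
        f11 = sum(X <= t and Y <= t for t in transactions)        # X and Y
        f10 = sum(X <= t and not Y <= t for t in transactions)    # X and not Y
        f01 = sum(not X <= t and Y <= t for t in transactions)    # not X and Y
        f00 = len(transactions) - f11 - f10 - f01                 # neither
        return f11, f10, f01, f00

    transactions = [
        {"Bread", "Milk"},
        {"Bread", "Diaper", "Beer", "Eggs"},
        {"Milk", "Diaper", "Beer", "Coke"},
        {"Bread", "Milk", "Diaper", "Beer"},
        {"Bread", "Milk", "Diaper", "Coke"},
    ]
    print(contingency_table({"Milk", "Diaper"}, {"Beer"}, transactions))   # (2, 1, 1, 1)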

Statistical-based Measures
- Measures that take statistical dependence into account:
    Lift = P(Y|X) / P(Y)
    Interest = P(X,Y) / (P(X) P(Y))
    PS = P(X,Y) - P(X) P(Y)
    phi-coefficient = (P(X,Y) - P(X)P(Y)) / sqrt( P(X)(1-P(X)) P(Y)(1-P(Y)) )
- Lift equals Interest (its older name), since P(Y|X) = P(X,Y)/P(X); recall also P(X|Y) = P(X,Y)/P(Y) and Bayes' rule P(X|Y) = P(Y|X)P(X)/P(Y).
- Example: Diaper, Milk -> Beer with support 3/34 and confidence 3/4 (table above): Lift = (3/4) / (13/34) = 1.96.

Example
- Six (X, Y) observations with P(X=1) = 2/6, P(X=2) = 4/6, P(Y=1) = 3/6, P(Y=2) = 3/6, P(Y=1 | X=2) = 1/4, P(Y=2 | X=2) = 3/4.
- Lift = P(Y=2 | X=2) / P(Y=2) = 6/4 = 1.5.

Example: Lift (Interest)
- Tea -> Coffee (table above): confidence = P(Coffee | Tea) = 0.75, but P(Coffee) = 0.9, so Lift = 0.75/0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated).

Drawback of Lift and Interest
- Compare two tables: in the first, X and Y co-occur in 10 of 100 transactions and never occur apart (f11=10, f10=f01=0, f00=90); in the second they co-occur in 90 of 100 (f11=90, f10=f01=0, f00=10).
- Lift = 0.1/(0.1 x 0.1) = 10 for the first table but 0.9/(0.9 x 0.9) = 1.11 for the second: the rarer pair gets the much higher lift although both are perfectly associated.
- Statistical independence: if P(X,Y) = P(X)P(Y), then Interest = 1.
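The measures above only need the four cell counts of the contingency table; a sketch applying them to the Tea -> Coffee table (f11=15, f10=5, f01=75, f00=5). The function name is illustrative.

    from math import sqrt

    def measures(f11, f10, f01, f00):
        n = f11 + f10 + f01 + f00
        p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
        return {
            "confidence": p_xy / p_x,
            "lift": (p_xy / p_x) / p_y,                 # same value as interest
            "interest": p_xy / (p_x * p_y),
            "PS": p_xy - p_x * p_y,
            "phi": (p_xy - p_x * p_y) / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y)),
        }

    for name, value in measures(15, 5, 75, 5).items():
        print(f"{name}: {value:.4f}")
    # confidence 0.75; lift = interest = 0.8333 (< 1: negative association)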

Simpson's Paradox (Pearl 2000)
Overall:
                    E (recovered)   E (not recovered)   Total   Recovery rate
    Drug (C)             20               20              40        50%
    No drug (not C)      16               24              40        40%
    Total                36               44              80

Males:
    Drug (C)             18               12              30        60%
    No drug (not C)       7                3              10        70%
    Total                25               15              40

Females:
    Drug (C)              2                8              10        20%
    No drug (not C)       9               21              30        30%
    Total                11               29              40

Overall the drug appears to help (50% vs 40% recovery), yet within each stratum it appears to hurt both males (60% vs 70%) and females (20% vs 30%).

Confounding factors
- There may be some other hidden factors; stratified data may show different results. Statistics!

2x2 confusion matrix for a rule A -> B
                       B TRUE      B NOT TRUE
    Predicted B          TP            FP
    Not predicted B      FN            TN
                      Positives     Negatives      (marginal frequencies)
  TP: true positives, FP: false positives, FN: false negatives, TN: true negatives.
  http://en.wikipedia.org/wiki/contingency_table
  http://en.wikipedia.org/wiki/confusion_matrix

Computing Interestingness Measures
- As before, for a rule X -> Y the contingency table entries f11, f10, f01, f00 give the supports of (X, Y), (X, not Y), (not X, Y) and (not X, not Y).
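A sketch that recomputes the recovery rates from the stratified tables above and then aggregates them, showing the reversal:

    groups = {
        "males":   {"drug": (18, 12), "no drug": (7, 3)},
        "females": {"drug": (2, 8),   "no drug": (9, 21)},
    }

    overall = {"drug": [0, 0], "no drug": [0, 0]}
    for name, table in groups.items():
        for treatment, (recovered, not_recovered) in table.items():
            rate = recovered / (recovered + not_recovered)
            print(f"{name:8s} {treatment:8s} recovery rate = {rate:.0%}")
            overall[treatment][0] += recovered
            overall[treatment][1] += not_recovered

    for treatment, (recovered, not_recovered) in overall.items():
        print(f"overall  {treatment:8s} recovery rate = {recovered / (recovered + not_recovered):.0%}")
    # The drug looks worse within each stratum (60% vs 70%, 20% vs 30%) but better
    # overall (50% vs 40%): the aggregated comparison is confounded by gender.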

There are lots of interestingness measures proposed in the literature. Some measures are good for certain applications but not for others. What criteria should we use to determine whether a measure is good or bad? And what about Apriori-style support-based pruning: how does it affect these measures?

Properties of a Good Measure
- Piatetsky-Shapiro: three properties a good measure M must satisfy:
  1. M(A,B) = 0 if A and B are statistically independent.
  2. M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged.
  3. M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged.

Comparing Different Measures
- Example contingency tables (f11, f10, f01, f00):
    E1    8123     83    424   1370
    E2    8330      2    622   1046
    E3    9481     94    127    298
    E4    3954   3080      5   2961
    E5    2886   1363   1320   4431
    E6    1500   2000    500   6000
    E7    4000   2000   1000   3000
    E8    4000   2000   2000   2000
    E9    1720   7121      5   1154
    E10     61   2483      4   7452
- Ranking these tables using the various measures shows that different measures can produce very different rankings.

Property under Variable Permutation
- Does M(A,B) = M(B,A)?
- Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
- Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.

Property under Row/Column Scaling
- Grade-gender example (Mosteller, 1968):
            Male   Female                     Male   Female
    High      2       3      5        High      4      30      34
    Low       1       4      5        Low       2      40      42
              3       7     10                  6      70      76
                                      (male column x2, female column x10)
- Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.
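To see the ranking disagreement concretely, the sketch below ranks the tables above by two measures (lift and the phi-coefficient). The table values are taken from the comparison list as reconstructed above, so treat the exact orderings as illustrative.

    from math import sqrt

    tables = {                      # (f11, f10, f01, f00)
        "E1": (8123, 83, 424, 1370),    "E2": (8330, 2, 622, 1046),
        "E3": (9481, 94, 127, 298),     "E4": (3954, 3080, 5, 2961),
        "E5": (2886, 1363, 1320, 4431), "E6": (1500, 2000, 500, 6000),
        "E7": (4000, 2000, 1000, 3000), "E8": (4000, 2000, 2000, 2000),
        "E9": (1720, 7121, 5, 1154),    "E10": (61, 2483, 4, 7452),
    }

    def lift(f11, f10, f01, f00):
        n = f11 + f10 + f01 + f00
        return (f11 / n) / (((f11 + f10) / n) * ((f11 + f01) / n))

    def phi(f11, f10, f01, f00):
        n = f11 + f10 + f01 + f00
        p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
        return (p_xy - p_x * p_y) / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))

    print("by lift:", sorted(tables, key=lambda e: lift(*tables[e]), reverse=True))
    print("by phi: ", sorted(tables, key=lambda e: phi(*tables[e]), reverse=True))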

Property under Inversion Operation
- Inverting the transaction vectors (swapping presence and absence of every item) swaps f11 with f00 and f10 with f01; some measures are invariant under this operation, others are not.
- Example: the phi-coefficient is analogous to the correlation coefficient for continuous variables. For the table
              Y    not Y
    X        60      10      70
    not X    10      20      30
             70      30     100
  phi = (0.6 - 0.7 x 0.7) / sqrt(0.7 x 0.3 x 0.7 x 0.3) = 0.5238,
  and for the inverted table
    X        20      10      30
    not X    10      60      70
             30      70     100
  phi = (0.2 - 0.3 x 0.3) / sqrt(0.7 x 0.3 x 0.7 x 0.3) = 0.5238.
  The phi coefficient is the same for both tables.

Property under Null Addition
- Adding records that contain neither A nor B (increasing f00 by k) leaves some measures unchanged.
- Invariant measures: support, cosine, Jaccard, etc. Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.

Different Measures Have Different Properties
- A table of measures (correlation, lambda, odds ratio, Yule's Q, Yule's Y, Cohen's kappa, mutual information, J-measure, Gini index, support, confidence, Laplace, conviction, interest, IS (cosine), Piatetsky-Shapiro's, certainty factor, added value, collective strength, Jaccard, Klosgen's) against the properties P1-P3 and O1-O4 shows that each measure satisfies a different subset of the properties; no single measure is best in all respects.

Support-based Pruning
- Most association rule mining algorithms use the support measure to prune rules and itemsets.
- Study of the effect of support pruning on the correlation of itemsets: generate random contingency tables, compute the support and pairwise correlation for each table, apply support-based pruning and examine the tables that are removed.
- (Figures: histograms of the correlation values for all item pairs and for the pairs surviving support thresholds of 0.01, 0.03 and 0.05.) Support-based pruning eliminates mostly negatively correlated itemsets.

Effect of Support-based Pruning on Other Measures
- Steps: generate contingency tables, rank each table according to the different measures, and compute the pair-wise correlation between the measures.
- Without support pruning (all pairs), 40.14% of the measure pairs have correlation > 0.85; with 0.5% <= support <= 50%, 61.45% of the pairs have correlation > 0.85. (Figures: correlation matrices over the measures, with red cells indicating correlation > 0.85 between a pair of measures, and scatter plots between correlation and the Jaccard measure.)

Effect of Support-based Pruning (continued)
- With 0.5% <= support <= 30%, 76.42% of the measure pairs have correlation > 0.85. (Figure: correlation matrix over Support, Interest, Reliability, Conviction, Yule's Q, odds ratio, Confidence, CF, Yule's Y, Kappa, Correlation, Collective Strength, IS, Jaccard, Laplace, PS, Klosgen, Lambda, Mutual Information, Gini, J-measure, plus a scatter plot between correlation and the Jaccard measure.)
- Conclusion: support-based pruning makes many of the measures agree much more closely.

Subjective Interestingness Measures
- Objective measure: rank patterns based on statistics computed from data, e.g. the 21 measures of association (support, confidence, Laplace, Gini, mutual information, Jaccard, etc.).
- Subjective measure: rank patterns according to the user's interpretation. A pattern is subjectively interesting if it contradicts the expectation of a user, or if it is actionable (Silberschatz & Tuzhilin).

Interestingness via Unexpectedness
- Need to model the expectation of users (domain knowledge): patterns expected to be frequent (+) or infrequent (-) are compared against patterns found to be frequent or infrequent; the unexpected combinations are the interesting ones.
- Need to combine the expectation of users with evidence from the data, i.e. the extracted patterns.

Problems with mining itemsets (Markus Lippus)
- Explosion of itemsets: closed and maximal itemsets help a little, but not enough for large datasets.
- How to find the structure of the dataset? Ask for optimal itemsets.

Compression of itemsets: KRIMP
- KRIMP: compression using the minimum description length (MDL) principle. Key idea: minimize L(H) + L(D | H), where L(H) is the length of the model and L(D | H) is the length of the database when described using the model H. The resulting model is the most optimal set of itemsets.
- Find itemsets that describe the database: for every transaction there are non-overlapping itemsets in the code table that cover it. The more often an itemset occurs in the database, the shorter the code used for it in the code table. Encode the dataset using the code table. (Vreeken et al.)
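A toy sketch of the two-part MDL objective only, not of KRIMP itself: the cover of each transaction is given rather than computed, code lengths follow usage frequencies (Shannon codes, -log2 p), and the model cost is a crude simplification. All names and the example data are assumptions for illustration.

    from math import log2

    def total_length(code_table, cover):
        """code_table: list of itemsets; cover: for each transaction, the list of
        code-table entries used to cover it (a given, simplified cover)."""
        usage = {frozenset(c): 0 for c in code_table}
        for covering in cover:
            for itemset in covering:
                usage[frozenset(itemset)] += 1
        total_usage = sum(usage.values())
        code_len = {c: -log2(u / total_usage) for c, u in usage.items() if u > 0}
        L_D_given_H = sum(code_len[frozenset(i)] for covering in cover for i in covering)
        L_H = sum(len(c) + code_len.get(frozenset(c), 0) for c in code_table)  # crude model cost
        return L_H + L_D_given_H

    code_table = [{"a", "b"}, {"c"}, {"a"}, {"b"}]
    cover = [[{"a", "b"}], [{"a", "b"}, {"c"}], [{"c"}], [{"a"}, {"b"}]]
    print(total_length(code_table, cover))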

KRIMP
- The number of possible models is huge, so heuristics must be used.
- Mines all candidate itemsets, then greedily selects from the candidates, optimizing compression and avoiding overlap.
- Provides optimal itemsets, but still requires a lot of memory.

SLIM - KRIMP 2.0
- In principle similar to KRIMP, but instead of mining all itemsets at once it generates and tests itemsets on the fly.
- Reduced memory requirements; converges very slowly, as it also considers bad itemsets. (Vreeken et al.)

Improvements as a method
- Reductions in the size of the resulting set of itemsets of up to 10^7 times: fewer patterns, and specific, relatively long itemsets.
- Pros: mitigates the problem of huge amounts of irrelevant itemsets; SLIM can deal with larger databases.
- Cons: SLIM takes long to converge - mitigated by SLIMMER (code unavailable).
  https://people.mmci.uni-saarland.de/~jilles/pres/ida4-slimmer-poster.pdf

Additional links
- http://en.wikipedia.org/wiki/data_mining
- http://www.dataminingarticles.com/association-rules-algorithms.html
- http://www.kdnuggets.com/