ASSOCIATION ANALYSIS: FREQUENT ITEMSETS MINING. Alexandre Termier, LIG


ASSOCIATION ANALYSIS: FREQUENT ITEMSETS MINING. Alexandre Termier, LIG. M2 SIF DMV course 2017/2018

Market basket analysis

Analyse a supermarket's transaction data. A transaction is the «market basket» of a customer. Find which items are often bought together.
Ex: {bread, chocolate, butter}
Ex: {hamburger bread, tomato} → {steak}
Applications: product placement, cross-selling (suggestion of other products), promotions.

Funny example

Most famous itemset: {beer, diapers}. Found in a chain of American supermarkets.
Further study: mostly bought on Friday evenings. Who?

Example

Transactions | Items (products bought)
1 | bread, butter, chocolate, vine, pencil
2 | bread, butter, chocolate, pencil
3 | chocolate
4 | butter, chocolate
5 | bread, butter, chocolate, vine
6 | bread, butter, chocolate

{bread, butter, chocolate} sold together in 4/6 = 66% of transactions
{butter, chocolate} → {bread} is true in 4/5 = 80% of cases
{chocolate} → {bread, butter} is true in 4/6 = 66% of cases

Definitions

A = {a_1, …, a_n}: the items; A is the item base.
Any I ⊆ A is an itemset; a k-itemset is an itemset with k items.
T = (t_1, …, t_m), with ∀i, t_i ⊆ A: the transaction database; tid(t_j) = j is the transaction index.
support(X ⊆ A) = number of transactions containing itemset X.
tidlist(X ⊆ A) = list of the tids of the transactions containing itemset X.
An itemset X is frequent if support(X) ≥ minsup.
Confidence of an association rule X → Y: c(X → Y) = support(X ∪ Y) / support(X).
An association rule with confidence c holds if c ≥ minconf.
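These definitions translate directly into code. Below is a minimal sketch (the database T and the function names are ours, chosen to match the running example):

```python
# Minimal sketch of the definitions above; the database T, the item names
# and the function names are ours, matching the running example.
T = {
    1: {"bread", "butter", "chocolate", "vine", "pencil"},
    2: {"bread", "butter", "chocolate", "pencil"},
    3: {"chocolate"},
    4: {"butter", "chocolate"},
    5: {"bread", "butter", "chocolate", "vine"},
    6: {"bread", "butter", "chocolate"},
}

def tidlist(X):
    """tids of the transactions containing itemset X."""
    return [tid for tid, t in T.items() if set(X) <= t]

def support(X):
    """Number of transactions containing itemset X (absolute support)."""
    return len(tidlist(X))

def confidence(X, Y):
    """Confidence of the rule X -> Y: support(X u Y) / support(X)."""
    return support(set(X) | set(Y)) / support(X)

print(support({"bread", "butter", "chocolate"}))        # 4
print(confidence({"butter", "chocolate"}, {"bread"}))   # 0.8
```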

Example, rewritten

(same transaction database as above)

{bread, butter, chocolate}: support = 4 (absolute) = 4/6 = 66% (relative)
{chocolate} → {bread, butter}: confidence = 4/6 = 66%
{butter, chocolate} → {bread}: confidence = 4/5 = 80%
{chocolate} → {butter}: confidence = 5/6 = 83%

Computing association rules

Two steps:
1. Compute the frequent itemsets: discover the itemsets with support ≥ minsup. Very expensive computationally!
2. Compute which association rules hold: partition each frequent itemset and discover the rules with confidence ≥ minconf. Much faster than discovering the itemsets.

How to compute frequent itemsets?

Brute force approach: the Generate and Test method. Generate all possible itemsets, then compute their support.
But this is a highly combinatorial problem: how many possible itemsets for 1000 items? 1000 items give 2^1000 possible itemsets. Infeasible in practice.
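To make the combinatorial explosion concrete, here is a generate-and-test sketch reusing T and support from the sketch above; it enumerates all 2^n − 1 non-empty itemsets, which is only viable for a handful of items:

```python
from itertools import chain, combinations

items = sorted({i for t in T.values() for i in t})   # n = 5 here; n = 1000 is hopeless

def all_itemsets(items):
    """Every non-empty subset of the item base: 2^n - 1 candidates."""
    return chain.from_iterable(combinations(items, k)
                               for k in range(1, len(items) + 1))

minsup = 4
frequent = [set(X) for X in all_itemsets(items) if support(X) >= minsup]
```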

The Apriori algorithm

Levelwise search [Agrawal et al., 1993]: discover the frequent 1-itemsets, then 2-itemsets, …
Apriori property: if an itemset is not frequent, then none of its supersets are frequent.
Ex: if {vine, pencil} is not frequent, then of course {vine, pencil, chocolate} will not be frequent.
Also called the downward closure property, or anti-monotonicity property.

[Figure: the itemset lattice over {A, B, C, D, E}, each itemset annotated with its tidlist in an example database of 6 transactions (A: 1 2 5 6, B: 1 2 4 5 6, C: 1 2 3 4 5 6, D: 1 5, E: 1 2, …, ABCD: 1 5, ABCE: 1 2); minsup = 2.]

Apriori algorithm

Input: T, minsup
F_1 = {frequent 1-itemsets};
for (k = 2; F_{k-1} ≠ ∅; k++) do begin
    C_k = apriori-gen(F_{k-1});              // candidate generation
    foreach transaction t ∈ T do begin
        C_t = subset(C_k, t);                // support counting
        foreach candidate c ∈ C_t do
            c.count++;
    end
    F_k = {c ∈ C_k | c.count ≥ minsup};
end
return ∪_k F_k;
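A compact Python rendering of this loop, as a sketch (not the original implementation: candidate generation is inlined, and support counting uses naive subset tests instead of the hash tree described on the next slides):

```python
from collections import defaultdict
from itertools import combinations

def apriori(transactions, minsup):
    """Levelwise Apriori sketch; transactions: iterable of item sets,
    minsup: absolute threshold. Returns {frozenset: support}."""
    transactions = [set(t) for t in transactions]
    counts = defaultdict(int)
    for t in transactions:                    # F1: frequent 1-itemsets
        for i in t:
            counts[frozenset([i])] += 1
    frequent = {X: c for X, c in counts.items() if c >= minsup}
    F_prev, k = set(frequent), 2
    while F_prev:
        # Join itemsets sharing a (k-2)-prefix, then prune candidates
        # having an infrequent (k-1)-subset (the Apriori property).
        prev = sorted(tuple(sorted(X)) for X in F_prev)
        C_k = set()
        for p, q in combinations(prev, 2):
            if p[:-1] == q[:-1]:
                c = p + (q[-1],)
                if all(frozenset(s) in F_prev for s in combinations(c, k - 1)):
                    C_k.add(frozenset(c))
        counts = defaultdict(int)             # support counting
        for t in transactions:
            for c in C_k:
                if c <= t:
                    counts[c] += 1
        F_prev = {X for X, n in counts.items() if n >= minsup}
        frequent.update({X: n for X, n in counts.items() if n >= minsup})
        k += 1
    return frequent

# apriori(T.values(), 2) reproduces the lattice example's frequent itemsets.
```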

Candidate generation

apriori-gen generates candidate k-itemsets from the frequent (k-1)-itemsets.
A candidate c (size k) is the merge of p, q ∈ F_{k-1} (both of size k-1), where p and q have exactly k-2 items in common.
How many pairs (p, q) can build c? At most C(k, 2) = k! / ((k-2)!·2!) = k(k-1)/2 ways to derive c from F_{k-1}.
This is redundant work: we want a unique pair (p, q) for each c.
Solution: an ordering of the items in itemsets. Usually items are represented by integers, so the classical order applies: the (k-2)-prefixes of p and q must match.

[Figure: the same annotated itemset lattice as before, used here to illustrate candidate generation; minsup = 2.]

apriori-gen

Input: F_{k-1}
// Join step
insert into C_k
select p.item_1, p.item_2, …, p.item_{k-1}, q.item_{k-1}
from p, q ∈ F_{k-1}
where p.item_1 = q.item_1, …, p.item_{k-2} = q.item_{k-2}, p.item_{k-1} < q.item_{k-1}
// Prune step
foreach itemset c ∈ C_k do
    foreach (k-1)-subset s of c do
        if (s ∉ F_{k-1}) then delete c from C_k;   // here the anti-monotonicity property is used!
return C_k
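The same join and prune steps as a standalone function, assuming itemsets are stored as frozensets over totally ordered items (a sketch of apriori-gen, refactored from the Apriori sketch above):

```python
from itertools import combinations

def apriori_gen(F_prev, k):
    """Generate candidate k-itemsets from the frequent (k-1)-itemsets.
    F_prev: set of frozensets of size k-1 over totally ordered items."""
    prev = sorted(tuple(sorted(X)) for X in F_prev)
    C_k = set()
    for p, q in combinations(prev, 2):
        # Join step: same (k-2)-prefix; p.item_{k-1} < q.item_{k-1}
        # holds because `prev` is sorted.
        if p[:-1] == q[:-1]:
            c = p + (q[-1],)
            # Prune step: every (k-1)-subset of c must be frequent.
            if all(frozenset(s) in F_prev for s in combinations(c, k - 1)):
                C_k.add(frozenset(c))
    return C_k
```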

Subset

For each transaction t, find all the itemsets of C_k that are contained in t.
Brute force: foreach t ∈ T, foreach c ∈ C_k, compute whether c ⊆ t. Too much computation!
The Apriori solution: partition the candidates into buckets of limited size; store the buckets in the leaves of a hash tree; find the candidates that are subsets of a transaction by traversing the hash tree.
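A simplified hash tree sketch (our own minimal version: leaves hold up to max_leaf candidates and split by hashing on the next item; the subset query hashes each remaining item of the transaction and does exact containment checks at the leaves):

```python
class Node:
    def __init__(self):
        self.children = {}   # bucket -> child node (interior node)
        self.itemsets = []   # candidate tuples (leaf node)
        self.is_leaf = True

class HashTree:
    def __init__(self, k, max_leaf=3, nbuckets=7):
        self.k, self.max_leaf, self.nbuckets = k, max_leaf, nbuckets
        self.root = Node()

    def _h(self, item):
        return hash(item) % self.nbuckets

    def insert(self, itemset, node=None, depth=0):
        """Insert a sorted k-tuple, splitting overfull leaves."""
        node = node or self.root
        if not node.is_leaf:
            child = node.children.setdefault(self._h(itemset[depth]), Node())
            return self.insert(itemset, child, depth + 1)
        node.itemsets.append(itemset)
        if len(node.itemsets) > self.max_leaf and depth < self.k:
            node.is_leaf = False
            for s in node.itemsets:           # redistribute on the next item
                child = node.children.setdefault(self._h(s[depth]), Node())
                self.insert(s, child, depth + 1)
            node.itemsets = []

    def subsets(self, t, node=None, start=0, out=None):
        """All stored candidates contained in the sorted transaction t."""
        node = node or self.root
        out = set() if out is None else out
        if node.is_leaf:
            out.update(s for s in node.itemsets if set(s) <= set(t))
        else:
            for j in range(start, len(t)):    # hash each remaining item
                child = node.children.get(self._h(t[j]))
                if child is not None:
                    self.subsets(t, child, j + 1, out)
        return out
```

Usage: build a HashTree(k), insert each candidate of C_k as a sorted tuple, then compute C_t = tree.subsets(tuple(sorted(t))) for each transaction t.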

Hash tree construction

Max bucket size = 3. C_3 from the previous example: abc, abd, abe, acd, ace, bcd, bce.
Hash on the first item: h(a) → {abc, abd, abe, acd, ace}, h(b) → {bcd, bce}. The h(a) bucket is too big!
So the h(a) bucket is split by hashing on the 2nd item: h(b) → {abc, abd, abe}, h(c) → {acd, ace}.

Hash tree utilisation for subset

Example: transaction {a, c, d, e}. At the root, hash each item of the transaction: h(a) leads to the inner node, where hashing the remaining items {c, d, e} on the 2nd item leads (via h(c)) to the leaf {acd, ace}. Checking the leaf's candidates against the transaction yields the matching candidates acd and ace; the h(b) branch (leaf {bcd, bce}) is never visited since the transaction contains no b.

Complexity

apriori-gen, step k: dominated by the prune step, O(k·|C_k|).
Support counting: O(m·|C_k|), with m = |T| (database size).
One iteration is therefore O(m·|C_k|).
Total complexity: O(m·Σ_k |C_k|).
Worst case: the candidates are all possible itemsets, giving O(m·2^n) with n = number of items.
Linear in the database size, exponential in the number of items.
The transaction width (database density) influences the number of traversals of the hash tree.

Association rules computation

Once we have the frequent itemsets, we want the association rules. Reminder: we are only interested in rules with a high confidence value.
Confidence of X → Y: c = support(X ∪ Y) / support(X).
Let F be a frequent itemset with |F| = k. How many possible rules? 2^k − 2 (we eliminate F → {} and {} → F).
What is a naive solution to compute them? For each partition of F into X ∪ Y, compute the confidence of X → Y, and output X → Y if the confidence is ≥ minconf. Is it efficient?
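The naive solution as a sketch, reusing the support function from the definitions above; it tries all 2^k − 2 partitions of F:

```python
from itertools import combinations

def naive_rules(F, support, minconf):
    """Try every partition of frequent itemset F into X -> F \\ X."""
    F = frozenset(F)
    rules = []
    for r in range(1, len(F)):
        for X in map(frozenset, combinations(sorted(F), r)):
            conf = support(F) / support(X)   # support(X u Y) / support(X)
            if conf >= minconf:
                rules.append((set(X), set(F - X), conf))
    return rules

# naive_rules({"bread", "butter", "chocolate"}, support, 0.7)
```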

Monotony of confidence?

Transactions | Items (products bought)
1 | bread, butter, chocolate, vine, pencil
2 | bread, butter, chocolate, pencil
3 | chocolate
4 | butter, chocolate
5 | bread, butter, chocolate, vine
6 | bread, butter, chocolate
7 | butter
8 | butter
9 | butter

{chocolate} → {bread, butter}: confidence = 4/6 = 66%
{chocolate} → {butter}: confidence = 5/6 = 83%
{butter} → {chocolate}: confidence = 5/8 = 62%

CONFIDENCE IS NOT MONOTONE / ANTI-MONOTONE

More on monotony of confidence

For rules coming from the same itemset, confidence is anti-monotone.
E.g., L = {A, B, C, D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD).
Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule, so some pruning is possible.

Association rule generation algorithm

Input: T, minsup, minconf, F_all = union of F_1, …, F_n
H_1 = ∅
foreach f_k ∈ F_all, k ≥ 2 do begin
    A = {(k-1)-itemsets a_{k-1} such that a_{k-1} ⊂ f_k};
    foreach a_{k-1} ∈ A do begin
        conf = support(f_k) / support(a_{k-1});
        if conf ≥ minconf do begin
            output rule a_{k-1} → (f_k − a_{k-1});
            add (f_k − a_{k-1}) to H_1;
        end
    end
    ap-genrules(f_k, H_1);
end

ap-genrules

Input: f_k, H_m: set of m-item consequents
if (k > m+1) then begin
    H_{m+1} = apriori-gen(H_m);   // generate all possible (m+1)-item consequents
    foreach h_{m+1} ∈ H_{m+1} do begin
        conf = support(f_k) / support(f_k − h_{m+1});
        if conf ≥ minconf then
            output rule (f_k − h_{m+1}) → h_{m+1};
        else
            delete h_{m+1} from H_{m+1};   // pruning by anti-monotonicity
    end
    ap-genrules(f_k, H_{m+1});
end
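In Python the same idea reads as follows; this is a sketch of the consequent-growing loop (names are ours), where only the consequents of confident rules are grown, which is exactly the anti-monotonicity pruning:

```python
from itertools import combinations

def gen_rules(f, support, minconf):
    """Rule generation for one frequent itemset f, growing consequents
    level by level and pruning by anti-monotonicity of confidence."""
    f = frozenset(f)
    rules, H = [], [frozenset([i]) for i in f]     # 1-item consequents
    while H and len(H[0]) < len(f):
        kept = []
        for h in H:
            conf = support(f) / support(f - h)
            if conf >= minconf:
                rules.append((set(f - h), set(h), conf))
                kept.append(h)                     # only survivors are grown
        # join surviving m-item consequents into (m+1)-item candidates
        H = list({a | b for a, b in combinations(kept, 2)
                  if len(a | b) == len(a) + 1 and len(a | b) < len(f)})
    return rules
```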

First improvements of Apriori

End of the 1990s: main memory is 64-256 MB, while databases can go over 1 GB. Apriori makes several passes over the database, so we need algorithms that can handle the database in memory.
Partition [Savasere et al. 1995]: cut the database into pieces fitting into memory, compute results for each piece and join them.
Sampling [Toivonen 1996]: compute the frequent itemsets on a sample of the database.
DIC [Brin et al. 1997]: improves the number of passes over the database.

Maximal frequent itemsets

Set of maximal frequent itemsets: MFI = { I ∈ FI | ∄ I' ∈ FI, I ⊂ I' }, with FI the set of frequent itemsets.
There are several orders of magnitude fewer MFI than FI.
They can be searched both bottom-up and top-down: Pincer-Search [Lin & Kedem 1998], Max-Miner [Bayardo et al. 1998].
BUT: loss of information (the supports of the non-maximal frequent itemsets are lost).
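A naive filter from FI down to MFI, as a definition-level sketch (quadratic in |FI|):

```python
def maximal(FI):
    """Keep only the itemsets with no frequent proper superset.
    FI: collection of frozensets."""
    FI = list(FI)
    return [I for I in FI if not any(I < J for J in FI)]

# e.g. maximal(apriori(T.values(), 2)) on the running example
```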


The Eclat algorithm

[Zaki et al., 1997]. In Apriori the DB is in horizontal format. Eclat introduces the vertical format: itemset x → tid-list(x).

Horizontal format (tid → items):
1: A B C D E
2: A B C E
3: C
4: B C
5: A B C D
6: A B C

Vertical format (item → tidlist):
A: 1 2 5 6
B: 1 2 4 5 6
C: 1 2 3 4 5 6
D: 1 5
E: 1 2

Vertical format

Support counting can be done with tid-list intersections. For itemsets I, J: tidlist(I ∪ J) = tidlist(I) ∩ tidlist(J).
No need for costly subset tests or hash tree generation.
Problem: if the database is big, the tidlists of the many candidates created will also be big, and will not hold in memory.
Solution: partition the lattice into equivalence classes. In Eclat: equivalence relation = sharing the same prefix.
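A sketch of the vertical format and intersection-based support counting (function and variable names are ours):

```python
def vertical(transactions):
    """Build the vertical format: item -> set of tids (sketch)."""
    tl = {}
    for tid, t in enumerate(transactions, 1):
        for i in t:
            tl.setdefault(i, set()).add(tid)
    return tl

tl = vertical([{"A","B","C","D","E"}, {"A","B","C","E"}, {"C"},
               {"B","C"}, {"A","B","C","D"}, {"A","B","C"}])
# tidlist(A u B) = tidlist(A) ∩ tidlist(B); the support is its size
print(sorted(tl["A"] & tl["B"]))   # [1, 2, 5, 6]
```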

Initial equivalence classes

[Figure: the itemset lattice with tidlists, partitioned into the initial equivalence classes (itemsets sharing the same first item); e.g. AB, AC: 1 2 5 6; BC: 1 2 4 5 6; AD, BD, CD: 1 5; AE, BE, CE: 1 2; DE and its supersets infrequent; minsup = 2.]

Equivalence classes inside the [A] class

[Figure: the sub-lattice of itemsets with prefix A and their tidlists (A, AB, AC, ABC: 1 2 5 6; AD, ABD, ACD, ABCD: 1 5; AE, ABE, ACE, ABCE: 1 2); minsup = 2.]


Eclat algorithm

Input: T, minsup
Compute L_1 and L_2   // like Apriori
Transform T into the vertical representation
CE_2 = decompose L_2 into equivalence classes
forall E_2 ∈ CE_2 do
    compute_frequent(E_2)
end forall
return ∪_k F_k;

compute_frequent(e k- ) forall itemsets I and I 2 in E k- do if tidlist(i ) tidlist(i 2 ) minsup then L k L k {I I 2 } end if end forall CE k = Decompose L k in equivalence classes forall E k CE k do compute_frequent(e k ) end forall 47

The FP-growth approach

FP-growth: Frequent Pattern Growth.
No candidate generation. Compresses the transaction database into an FP-tree (Frequent Pattern Tree), an extended prefix-tree.
Recursive processing of conditional databases.
Can be one order of magnitude faster than Apriori.

FP-tree

Compact structure for representing the DB and the frequent itemsets. Composed of:
1. A root, item-prefix subtrees, and a frequent-item-header array.
2. Node = (item-name, count, node-link): count = number of transactions containing the path reaching this node; node-link = next node having the same item-name.
3. Entry in the frequent-item-header array = (item-name, head of node-link): a pointer to the first node having this item-name.
Both a horizontal (prefix-tree) and a vertical (node-links) structure.

FP-tree example (1/2)

Original transaction database (minsup = 2):
A B C D E F
A B C E
C
B C
A B C D
A B C

Items ordered by frequency: C:6, B:5, A:4, D:2, E:2, F:1.

Transactions reordered by item frequency (infrequent item F pruned):
C B A D E
C B A E
C
C B
C B A D
C B A

Transactions sorted lexicographically:
C
C B
C B A
C B A D
C B A D E
C B A E

FP-tree example (2/2)

Frequent-item-header array: C:6, B:5, A:4, D:2, E:2.
Prefix-tree structure built from the sorted transactions: a single main path C:6 → B:5 → A:4 → D:2 → E:1, plus a second branch E:1 under A:4. Each header entry points (via node-links) to the nodes carrying its item.
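A minimal FP-tree construction sketch (class and function names are ours; node-links are kept as per-item lists in a header dictionary):

```python
from collections import Counter

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fptree(transactions, minsup):
    """Prune infrequent items, reorder each transaction by decreasing item
    frequency, insert into a prefix tree; header = item -> its node-links."""
    freq = Counter(i for t in transactions for i in t)
    freq = {i: c for i, c in freq.items() if c >= minsup}
    root, header = FPNode(None, None), {i: [] for i in freq}
    for t in transactions:
        path = sorted((i for i in t if i in freq),
                      key=lambda i: (-freq[i], i))
        node = root
        for item in path:
            if item not in node.children:
                child = FPNode(item, node)
                node.children[item] = child
                header[item].append(child)       # node-link
            node = node.children[item]
            node.count += 1
    return root, header

# On the slide's example, build_fptree([set("ABCDEF"), set("ABCE"), {"C"},
#     set("BC"), set("ABCD"), set("ABC")], 2) yields the path
# C:6 -> B:5 -> A:4 -> D:2 -> E:1 plus the branch A:4 -> E:1.
```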

Exercise

Draw the FP-tree for the following DB (minsup = 3):
A D F
A C D E
B D
B C D
B C
A B D
B D E
B C E G
C D F
A B D

FP-Growth

FP-growth(FP, prefix)
foreach frequent item x, in increasing order of frequency, do
    D_x = ∅ ; count_x = 0
    foreach node-link nl_x of x do
        D_x = D_x ∪ {transaction made of the path reaching x, with count of each item = nl_x.count, without x}
        count_x += nl_x.count
    end
    if count_x ≥ minsup then
        output (prefix ∪ {x})
        FP_x = FP-tree constructed from D_x   // the conditional database of x
        FP-growth(FP_x, prefix ∪ {x})
    end if
end

FP-Growth example

Original FP-tree: C:6 → B:5 → A:4 → D:2 → E:1, with the branch A:4 → E:1.
Start with item E (lowest support). Following E's node-links: count_E = 1 + 1 = 2, so E is frequent: output E.
Conditional FP-tree for E: take the paths reaching E, update the counts (only the transactions containing E count) and drop E. This gives C:2 → B:2 → A:2 → D:1.

FP-Growth example (cont.)

In the conditional FP-tree for E (C:2 → B:2 → A:2 → D:1), D is not frequent here: do not consider DE. Loop on AE, BE, CE; the rest is left as an exercise.
For AE: count_AE = 2, so AE is frequent: output AE. Conditional FP-tree for AE: C:2 → B:2.
For BAE: count_BAE = 2, so BAE is frequent: output BAE. Conditional FP-tree for BAE: C:2.
For CBAE: count_CBAE = 2, so CBAE is frequent: output CBAE.
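To cross-check such runs, a library implementation can be used. The sketch below assumes the third-party mlxtend package (pip install mlxtend), whose fpgrowth function takes a one-hot encoded DataFrame and a relative min_support:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

transactions = [list("ABCDEF"), list("ABCE"), list("C"),
                list("BC"), list("ABCD"), list("ABC")]
te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)
# minsup = 2 absolute -> 2/6 relative; E, AE, BAE, CBAE appear among the results
print(fpgrowth(df, min_support=2/6, use_colnames=True))
```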


Problems of frequent itemsets

Large computation time. For low support values, a huge number of frequent itemsets. Lots of redundant information. Simple example (minsup = 2):

Tid | Transaction
1 | A B C D
2 | A B C
3 | A B C E

FIS | Support | Tidlist
A B C | 3 | {1,2,3}
A B | 3 | {1,2,3}
A C | 3 | {1,2,3}
B C | 3 | {1,2,3}
A | 3 | {1,2,3}
B | 3 | {1,2,3}
C | 3 | {1,2,3}

All of these share the tidlist of A B C: REDUNDANT.

Closed frequent itemsets

[Pasquier et al., 1999]. We have seen that there is a loss of information with maximal frequent itemsets.
Let's consider the equivalence classes of frequent itemsets sharing the same tidlist. The closed frequent itemsets are the maximal elements of these equivalence classes.
Set of closed frequent itemsets: CFI = { I ∈ FI | ∄ I' ∈ FI such that tidlist(I') = tidlist(I) and I ⊂ I' }, with FI the set of frequent itemsets.
These sets are ordered by inclusion: MFI ⊆ CFI ⊆ FI.
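Once tidlists are known, closed frequent itemsets can be filtered out of FI with definition-level code, as in this quadratic sketch (names ours):

```python
def closed(fi_tidlists):
    """fi_tidlists: dict {frozenset itemset: frozenset of tids}.
    Keep the itemsets with no proper superset having the same tidlist."""
    return {I: tids for I, tids in fi_tidlists.items()
            if not any(I < J and tids == tJ
                       for J, tJ in fi_tidlists.items())}
```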

[Figure: the annotated itemset lattice again, illustrating the equivalence classes of itemsets sharing the same tidlist; minsup = 2.]


Computing closed frequent itemsets

Brute force (frequent pattern base): enumerate all the frequent patterns and output only the closed ones. Most of the time this is inefficient; the exception is when FI is very small.
Closure base: compute only the closed patterns, using closure operations. Can be very efficient.

Efficient computation

First algorithms (Closet, Charm, …): candidate-based methods. They either try to compute as few non-closed frequent itemsets as possible, or use closure extension: add an item to an existing closed frequent itemset and take the closure. They keep in memory all the closed frequent itemsets found so far, so they need a lot of memory during execution.
Reverse search (LCM): a depth-first search algorithm, so no global memory is needed. Fast computation time, little memory usage.

Closure extension of itemsets

Usual backtracking does not work for closed itemsets, because there are possibly big gaps between closed itemsets. On the other hand, any closed itemset can be obtained from another one by adding an item and taking the closure.
The closure of P is the closed itemset having the same denotation (tidlist) as P; it is computed by taking the intersection of the transactions in Occ(P).
This gives an adjacency structure defined on the closed itemsets, so we can perform a graph search on it, at the cost of memory for the visited itemsets.

[Figure: the lattice of itemsets over {1, 2, 3, 4}, illustrating the gaps between closed itemsets and the closure-extension jumps.]
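The closure operation itself is just an intersection of transactions, as in this sketch:

```python
from functools import reduce

def closure(P, transactions):
    """Closure of itemset P: the intersection of all transactions that
    contain P (returns P unchanged if Occ(P) is empty)."""
    P = frozenset(P)
    occ = [frozenset(t) for t in transactions if P <= t]
    return reduce(lambda a, b: a & b, occ) if occ else P

# Closure extension: from a closed itemset C and an item i not in C,
# jump to the closed itemset closure(C | {i}, transactions).
```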

Reverse search

Uno and Arimura found that the closed frequent itemsets are organized in a directed spanning tree.
Hence the closed frequent itemsets can be visited by a DFS from a node of the tree; we only need a transition function to compute the children of a node.

Tree of closed frequent itemsets

The search space of the CFIs is a lattice, i.e. a DAG.
DAG → tree: impose an order of exploration. The order needs to follow the enumeration strategy and be inexpensive to enforce.
Order of Arimura and Uno: for CFIs P, Q: Q is a child of P if all items of Q \ P are < maxitem(P).

Pseudo-code

[Figure: the pseudo-code of the reverse search, shown as an image on the slide.]

Database reductions

The conditional database reduces the database by removing unnecessary items and transactions for the deeper levels of the search. Example with θ = 3:

Original database:
1 3 5
1 3 5
1 3 5
1 2 5 6
1 4 6
1 2 6

Filtering (remove infrequent items, and items included in all transactions):
3 5
3 5
3 5
5 6
6
6

Unify identical transactions (FP-tree / prefix tree): 3 5 (×3), 5 6 (×1), 6 (×2). Infrequent items are removed and identical transactions automatically unified.
Linear time: O(|D| log |D|). Compact if the database is dense and large.
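A sketch of this reduction (names ours): drop items that are infrequent or present in every transaction, then merge identical transactions into (transaction, multiplicity) pairs:

```python
from collections import Counter

def reduce_db(transactions, theta):
    """Conditional-database reduction sketch: keep items with
    theta <= frequency < |D| (drops infrequent items and items present
    in every transaction), then unify identical reduced transactions."""
    n = len(transactions)
    freq = Counter(i for t in transactions for i in t)
    keep = {i for i, c in freq.items() if theta <= c < n}
    reduced = Counter(frozenset(t) & frozenset(keep) for t in transactions)
    return [(sorted(t), mult) for t, mult in reduced.items() if t]

db = [{1,3,5}, {1,3,5}, {1,3,5}, {1,2,5,6}, {1,4,6}, {1,2,6}]
print(reduce_db(db, 3))   # [([3, 5], 3), ([5, 6], 1), ([6], 2)]
```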

Prize for the Most Frequent Itemset Award: the prize is {beer, diapers}.