D B M G Data Base and Data Mining Group of Politecnico di Torino

Similar documents
D B M G. Association Rules. Fundamentals. Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino 1. Definitions.

D B M G. Association Rules. Fundamentals. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

Association Rules. Fundamentals

Data Mining Concepts & Techniques

Lecture Notes for Chapter 6. Introduction to Data Mining

CS 484 Data Mining. Association Rule Mining 2

CS 584 Data Mining. Association Rule Mining 2

COMP 5331: Knowledge Discovery and Data Mining

DATA MINING - 1DL360

DATA MINING - 1DL360

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

Data Mining: Concepts and Techniques. (3 rd ed.) Chapter 6

Introduction to Data Mining, 2 nd Edition by Tan, Steinbach, Karpatne, Kumar

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

Association Rules. Acknowledgements. Some parts of these slides are modified from. n C. Clifton & W. Aref, Purdue University

Descrip9ve data analysis. Example. Example. Example. Example. Data Mining MTAT (6EAP)

Association Analysis Part 2. FP Growth (Pei et al 2000)

732A61/TDDD41 Data Mining - Clustering and Association Analysis

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

Descrip<ve data analysis. Example. E.g. set of items. Example. Example. Data Mining MTAT (6EAP)

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example

Chapters 6 & 7, Frequent Pattern Mining

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

Meelis Kull Autumn Meelis Kull - Autumn MTAT Data Mining - Lecture 05

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example item id s

Association Rule Mining on Web

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

CSE 5243 INTRO. TO DATA MINING

Exercise 1. min_sup = 0.3. Items support Type a 0.5 C b 0.7 C c 0.5 C d 0.9 C e 0.6 F

Frequent Itemsets and Association Rule Mining. Vinay Setty Slides credit:

FP-growth and PrefixSpan

Frequent Itemset Mining

Machine Learning: Pattern Mining

Unit II Association Rules

Association Analysis. Part 1

Course Content. Association Rules Outline. Chapter 6 Objectives. Chapter 6: Mining Association Rules. Dr. Osmar R. Zaïane. University of Alberta 4

CSE 5243 INTRO. TO DATA MINING

Introduction to Data Mining

Knowledge Discovery and Data Mining I

Temporal Data Mining

Basic Data Structures and Algorithms for Data Profiling Felix Naumann

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

Association)Rule Mining. Pekka Malo, Assist. Prof. (statistics) Aalto BIZ / Department of Information and Service Management

Associa'on Rule Mining

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

CS5112: Algorithms and Data Structures for Applications

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

Association Analysis 1

Frequent Itemset Mining

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

1 Frequent Pattern Mining

CS 412 Intro. to Data Mining

Frequent Itemset Mining

Pattern Space Maintenance for Data Updates. and Interactive Mining

Frequent Itemset Mining

Association Analysis. Part 2

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Four Paradigms in Data Mining

A New Concise and Lossless Representation of Frequent Itemsets Using Generators and A Positive Border

Data Mining and Analysis: Fundamental Concepts and Algorithms

Handling a Concept Hierarchy

Sequential Pattern Mining

Interesting Patterns. Jilles Vreeken. 15 May 2015

FREQUENT PATTERN SPACE MAINTENANCE: THEORIES & ALGORITHMS

Positive Borders or Negative Borders: How to Make Lossless Generator Based Representations Concise

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

Apriori algorithm. Seminar of Popular Algorithms in Data Mining and Machine Learning, TKK. Presentation Lauri Lahti

COMP 5331: Knowledge Discovery and Data Mining

Encyclopedia of Machine Learning Chapter Number Book CopyRight - Year 2010 Frequent Pattern. Given Name Hannu Family Name Toivonen

Approximate counting: count-min data structure. Problem definition

Data Warehousing & Data Mining

sor exam What Is Association Rule Mining?

Data Warehousing & Data Mining

CSE4334/5334 Data Mining Association Rule Mining. Chengkai Li University of Texas at Arlington Fall 2017

OPPA European Social Fund Prague & EU: We invest in your future.

Mining Infrequent Patter ns

CPT+: A Compact Model for Accurate Sequence Prediction

Effective Elimination of Redundant Association Rules

CS738 Class Notes. Steve Revilak

Data preprocessing. DataBase and Data Mining Group 1. Data set types. Tabular Data. Document Data. Transaction Data. Ordered Data

Frequent Pattern Mining: Exercises

Mining Strong Positive and Negative Sequential Patterns

Data mining, 4 cu Lecture 5:

Density-Based Clustering

Assignment 7 (Sol.) Introduction to Data Analytics Prof. Nandan Sudarsanam & Prof. B. Ravindran

FARMER: Finding Interesting Rule Groups in Microarray Datasets

Mining Statistically Important Equivalence Classes and Delta-Discriminative Emerging Patterns

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

.. Cal Poly CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

On Condensed Representations of Constrained Frequent Patterns

On Compressing Frequent Patterns

3. Problem definition

DATA MINING - 1DL105, 1DL111

Data Warehousing. Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig

arxiv: v1 [cs.db] 31 Dec 2011

Transcription:

Association rules

Objective: extraction of frequent correlations or patterns from a transactional database.

Example: tickets at a supermarket counter

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diapers, Milk
4    Beer, Bread, Diapers, Milk
5    Coke, Diapers, Milk

Association rule: diapers ⇒ beer
- 2% of transactions contain both items
- 30% of transactions containing diapers also contain beer

Association rule mining

A collection of transactions is given
- a transaction is a set of items
- items in a transaction are not ordered

Association rule: A, B ⇒ C
- A, B = items in the rule body
- C = item in the rule head
- the ⇒ means co-occurrence, not causality

Examples
- cereals, cookies ⇒ milk
- age < 40, life-insurance = yes ⇒ children = yes
- customer, relationship ⇒ data, mining

Definitions
- Itemset: a set including one or more items. Example: {Beer, Diapers}
- k-itemset: an itemset that contains k items
- Support count (#): the frequency of occurrence of an itemset. Example: #{Beer, Diapers} = 2
- Support: the fraction of transactions that contain an itemset. Example: sup({Beer, Diapers}) = 2/5
- Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold
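As a quick sanity check on these definitions, the support count and support of an itemset can be computed directly over the supermarket example above. This is a minimal illustrative sketch (the function names are ours, not from the slides):

```python
# Supermarket example DB from the slides (TID 1..5).
db = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diapers", "Milk"},
    {"Beer", "Bread", "Diapers", "Milk"},
    {"Coke", "Diapers", "Milk"},
]

def support_count(itemset, transactions):
    """#X: number of transactions containing every item of itemset X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """sup(X): fraction of transactions that contain X."""
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Beer", "Diapers"}, db))  # 2
print(support({"Beer", "Diapers"}, db))        # 0.4, i.e. 2/5
```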

Rule quality metrics

Given the association rule A ⇒ B, where A and B are itemsets:

Support is the fraction of transactions containing both A and B
    sup(A ⇒ B) = #{A,B} / |T|
where |T| is the cardinality of the transactional database
- a priori probability of itemset AB
- rule frequency in the database

Confidence is the frequency of B in transactions containing A
    conf(A ⇒ B) = sup(A,B) / sup(A)
- conditional probability of finding B having found A
- strength of the implication

Rule quality metrics: example

From itemset {Milk, Diapers} the following rules may be derived.

Rule: Milk ⇒ Diapers
- support: sup = #{Milk, Diapers} / #transactions = 3/5 = 60%
- confidence: conf = #{Milk, Diapers} / #{Milk} = 3/4 = 75%

Rule: Diapers ⇒ Milk
- same support: sup = 60%
- confidence: conf = #{Milk, Diapers} / #{Diapers} = 3/3 = 100%
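The two metrics in the example can be reproduced with a few lines of Python over the same supermarket database (an illustrative sketch; `support_count` is our helper name):

```python
# Supermarket example DB from the slides (TID 1..5).
db = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diapers", "Milk"},
    {"Beer", "Bread", "Diapers", "Milk"},
    {"Coke", "Diapers", "Milk"},
]

def support_count(itemset):
    """Number of transactions containing all items of the itemset."""
    return sum(1 for t in db if itemset <= t)

both = support_count({"Milk", "Diapers"})       # 3
sup = both / len(db)                            # 3/5 = 0.6
conf_md = both / support_count({"Milk"})        # 3/4 = 0.75 (Milk => Diapers)
conf_dm = both / support_count({"Diapers"})     # 3/3 = 1.0  (Diapers => Milk)
```

Note how both rules share the same support but differ in confidence, exactly as in the slide example.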

Association rule extraction

Given a set of transactions T, association rule mining is the extraction of the rules satisfying the constraints
- support ≥ minsup threshold
- confidence ≥ minconf threshold

The result is
- complete (all rules satisfying both constraints)
- correct (only the rules satisfying both constraints)

Other, more complex constraints may be added.

Association rule extraction

Brute-force approach
- enumerate all possible permutations (i.e., association rules)
- compute support and confidence for each rule
- prune the rules that do not satisfy the minsup and minconf constraints
Computationally unfeasible.

The extraction process may be split into two steps
- first generate frequent itemsets
- next generate rules from each frequent itemset

Example: itemset {Milk, Diapers}, sup = 60%
Rules: Milk ⇒ Diapers (conf = 75%), Diapers ⇒ Milk (conf = 100%)

Association rule extraction

(1) Extraction of frequent itemsets
- many different techniques
  - level-wise approaches (Apriori, ...)
  - approaches without candidate generation (FP-growth, ...)
  - other approaches
- most computationally expensive step
- limit extraction time by means of the support threshold

(2) Extraction of association rules
- generation of all possible binary partitionings of each frequent itemset
- possibly enforcing a confidence threshold

Frequent Itemset Generation

Given d items, there are 2^d possible candidate itemsets.

[Figure: lattice of all itemsets over items {A, B, C, D, E}, from null up to ABCDE. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Frequent Itemset Generation

Brute-force approach
- each itemset in the lattice is a candidate frequent itemset
- scan the database to count the support of each candidate
- match each transaction against every candidate
- complexity ≈ O(|T| · 2^d · w), where |T| is the number of transactions, d is the number of items, and w is the transaction length

Improving Efficiency

Reduce the number of candidates
- prune the search space (the complete set of candidates is 2^d)

Reduce the number of transactions
- prune transactions as the size of itemsets increases (reduce |T|)

Reduce the number of comparisons
- equal to |T| · 2^d
- use efficient data structures to store the candidates or transactions

The Apriori Principle

If an itemset is frequent, then all of its subsets must also be frequent
- the support of an itemset can never exceed the support of any of its subsets
- it holds due to the antimonotone property of the support measure

Given two arbitrary itemsets A and B:
    if A ⊆ B then sup(A) ≥ sup(B)

It reduces the number of candidates.

The Apriori Principle

[Figure: itemset lattice in which one itemset is found to be infrequent and all of its supersets are pruned. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Apriori Algorithm [Agr94]

Level-based approach: each iteration extracts itemsets of a given length k.

Two main steps for each level
(1) Candidate generation
- Join step: generate candidates of length k+1 by joining frequent itemsets of length k
- Prune step: apply the Apriori principle, i.e., prune length k+1 candidate itemsets that contain at least one k-itemset that is not frequent
(2) Frequent itemset generation
- scan the DB to count support for the k+1 candidates
- prune candidates below minsup

Apriori Algorithm [Agr94]: pseudo-code

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 satisfying minsup;
end
return ∪k Lk;
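The pseudo-code above can be turned into a compact, unoptimized Python sketch. This is an assumption-laden illustration, not the original implementation: `minsup_count` is an absolute support count, and candidates are generated by unioning pairs of frequent k-itemsets followed by the Apriori prune.

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Minimal level-wise Apriori: returns {frozenset: support count}."""
    items = sorted({i for t in transactions for i in t})
    # L1: frequent 1-itemsets
    Lk = {}
    for i in items:
        c = sum(1 for t in transactions if i in t)
        if c >= minsup_count:
            Lk[frozenset([i])] = c
    frequent = dict(Lk)
    k = 1
    while Lk:
        # Join: union pairs of frequent k-itemsets into (k+1)-itemsets.
        # Prune: keep only candidates whose k-subsets are all frequent.
        candidates = set()
        keys = list(Lk)
        for x in range(len(keys)):
            for y in range(x + 1, len(keys)):
                u = keys[x] | keys[y]
                if len(u) == k + 1 and all(frozenset(s) in Lk
                                           for s in combinations(u, k)):
                    candidates.add(u)
        # One DB scan to count candidate supports, then threshold.
        Lk = {}
        for c in candidates:
            n = sum(1 for t in transactions if c <= t)
            if n >= minsup_count:
                Lk[c] = n
        frequent.update(Lk)
        k += 1
    return frequent

# The example DB used later in the slides, with minsup > 1 (count >= 2)
db = [set("AB"), set("BCD"), set("ACDE"), set("ADE"), set("ABC"),
      set("ABCD"), set("BC"), set("ABC"), set("ABD"), set("BCE")]
freq = apriori(db, 2)
```

Running it on the slides' example DB reproduces the L1, L2, and L3 tables derived step by step in the following slides.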

Generating Candidates

- Sort Lk candidates in lexicographical order
- For each candidate of length k
  - self-join with each candidate sharing the same (k-1)-prefix
  - prune candidates by applying the Apriori principle

Example: given L3 = {abc, abd, acd, ace, bcd}
- Self-join: abcd from abc and abd; acde from acd and ace
- Prune by applying the Apriori principle: acde is removed because ade is not in L3
- C4 = {abcd}

Apriori Algorithm: Example

Example DB (minsup > 1)
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}
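The join-and-prune step on L3 above can be sketched as follows (an illustrative helper of our own, representing each itemset as a sorted tuple):

```python
from itertools import combinations

def generate_candidates(Lk):
    """Join k-itemsets sharing a (k-1)-prefix, then apply the Apriori prune.
    Each itemset is a lexicographically sorted tuple of items."""
    Lk = sorted(Lk)                      # lexicographical order
    k = len(Lk[0])
    Lk_set = set(Lk)
    candidates = []
    for i, a in enumerate(Lk):
        for b in Lk[i + 1:]:
            if a[:-1] == b[:-1]:         # same (k-1)-prefix: self-join
                c = a + (b[-1],)         # e.g. abc joined with abd gives abcd
                # Prune: every k-subset of the candidate must be frequent
                if all(s in Lk_set for s in combinations(c, k)):
                    candidates.append(c)
    return candidates

L3 = [("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")]
print(generate_candidates(L3))  # [('a', 'b', 'c', 'd')]; acde pruned (ade not in L3)
```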

Generate candidate 1-itemsets

1st DB scan: count the support of each item.

C1: {A} 7, {B} 8, {C} 7, {D} 5, {E} 3

Prune infrequent candidates in C1

All itemsets in C1 are frequent according to minsup > 1, hence L1 = C1.

Generate candidates from L1

C2 itemsets: {A,B}, {A,C}, {A,D}, {A,E}, {B,C}, {B,D}, {B,E}, {C,D}, {C,E}, {D,E}

Count support for candidates in C2

2nd DB scan:
C2: {A,B} 5, {A,C} 4, {A,D} 4, {A,E} 2, {B,C} 6, {B,D} 3, {B,E} 1, {C,D} 3, {C,E} 2, {D,E} 2

Prune infrequent candidates in C2

{B,E} is infrequent and is discarded.

L2: {A,B} 5, {A,C} 4, {A,D} 4, {A,E} 2, {B,C} 6, {B,D} 3, {C,D} 3, {C,E} 2, {D,E} 2

Generate candidates from L2

C3 itemsets: {A,B,C}, {A,B,D}, {A,B,E}, {A,C,D}, {A,C,E}, {A,D,E}, {B,C,D}, {C,D,E}

Apply the Apriori principle on C3

Prune {A,B,E}: its subset {B,E} is infrequent ({B,E} is not in L2).

Count support for candidates in C3

3rd DB scan:
C3: {A,B,C} 3, {A,B,D} 2, {A,C,D} 2, {A,C,E} 1, {A,D,E} 2, {B,C,D} 2, {C,D,E} 1

Prune infrequent candidates in C3

{A,C,E} and {C,D,E} are actually infrequent; they are discarded from C3.

L3: {A,B,C} 3, {A,B,D} 2, {A,C,D} 2, {A,D,E} 2, {B,C,D} 2

Generate candidates from L3

C4 itemsets: {A,B,C,D}

Apply the Apriori principle on C4

Check whether {A,C,D} and {B,C,D} belong to L3. Since L3 contains all 3-itemset subsets of {A,B,C,D}, {A,B,C,D} is potentially frequent.

Count support for candidates in C4

4th DB scan:
C4: {A,B,C,D} 1

Prune infrequent candidates in C4

{A,B,C,D} is actually infrequent; it is discarded from C4. Hence L4 = Ø.

Final set of frequent itemsets

L1: {A} 7, {B} 8, {C} 7, {D} 5, {E} 3
L2: {A,B} 5, {A,C} 4, {A,D} 4, {A,E} 2, {B,C} 6, {B,D} 3, {C,D} 3, {C,E} 2, {D,E} 2
L3: {A,B,C} 3, {A,B,D} 2, {A,C,D} 2, {A,D,E} 2, {B,C,D} 2

Counting Support of Candidates

Scanning the transaction database to count the support of each itemset is costly:
- the total number of candidates may be large
- one transaction may contain many candidates

Approach [Agr94]
- candidate itemsets are stored in a hash-tree
- a leaf node of the hash-tree contains a list of itemsets and counts
- an interior node contains a hash table
- a subset function finds all candidates contained in a transaction, matching transaction subsets against the candidates in the hash-tree

Performance Issues in Apriori

Candidate generation
- candidate sets may be huge
- 2-itemset candidate generation is the most critical step
- extracting long frequent itemsets requires generating all their frequent subsets

Multiple database scans
- n + 1 scans are needed when the longest frequent pattern length is n

Factors Affecting Performance

Minimum support threshold
- a lower support threshold increases the number of frequent itemsets
- larger number of candidates
- larger (maximal) length of frequent itemsets

Dimensionality (number of items) of the data set
- more space is needed to store the support count of each item
- if the number of frequent items also increases, both computation and I/O costs may increase

Size of database
- since Apriori makes multiple passes, run time may increase with the number of transactions

Average transaction width
- transaction width increases in dense data sets
- may increase the maximal length of frequent itemsets and the traversals of the hash-tree (the number of subsets in a transaction increases with its width)

Improving Apriori Efficiency

Hash-based itemset counting [Yu95]
- a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent

Transaction reduction [Yu95]
- a transaction that does not contain any frequent k-itemset is useless in subsequent scans

Partitioning [Sav96]
- any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB

Improving Apriori Efficiency

Sampling [Toi96]
- mining on a subset of the given data, with a lower support threshold, plus a method to determine the completeness

Dynamic Itemset Counting [Motw98]
- add new candidate itemsets only when all of their subsets are estimated to be frequent

FP-growth Algorithm [Han00]

Exploits a main-memory compressed representation of the database, the FP-tree
- high compression for dense data distributions, less so for sparse data distributions
- complete representation for frequent pattern mining
- enforces the support constraint

Frequent pattern mining by means of FP-growth
- recursive visit of the FP-tree
- applies a divide-and-conquer approach: decomposes the mining task into smaller subtasks

Only two database scans
- count item supports + build the FP-tree

FP-tree construction

Example DB (minsup > 1)
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

(1) Count item supports and prune items below the minsup threshold
(2) Build the header table by sorting items in decreasing support order
(3) Create the FP-tree: for each transaction t in DB
- order the items of t in decreasing support order (the same order as the header table)
- insert transaction t into the FP-tree, using the existing path for the common prefix and creating a new branch when the path becomes different
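The three steps above can be sketched in Python. This is a simplified illustration (class and function names are ours): support ties are broken alphabetically, which matches the B, A, C, D, E order used in the slides' example.

```python
from collections import Counter, defaultdict

class Node:
    """FP-tree node: an item with a count, parent link, and children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 1, {}

def build_fptree(transactions, minsup_count):
    """Two database scans: (1) count item supports, (2) insert sorted transactions."""
    counts = Counter(i for t in transactions for i in t)
    freq = {i: c for i, c in counts.items() if c >= minsup_count}
    order = sorted(freq, key=lambda i: (-freq[i], i))   # decreasing support
    rank = {i: r for r, i in enumerate(order)}
    root = Node(None, None)
    header = defaultdict(list)     # header table: item -> list of node pointers
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            child = node.children.get(item)
            if child is None:                 # path diverges: new branch
                child = Node(item, node)
                node.children[item] = child
                header[item].append(child)
            else:                             # shared prefix: bump the count
                child.count += 1
            node = child
    return root, header

db = [set("AB"), set("BCD"), set("ACDE"), set("ADE"), set("ABC"),
      set("ABCD"), set("BC"), set("ABC"), set("ABD"), set("BCE")]
root, header = build_fptree(db, 2)
```

On the example DB this yields the tree built transaction by transaction in the next slides: B:8 and A:2 under the root, and the header-table pointer lists used later by FP-growth.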

FP-tree construction

Transaction 1: {A,B}, sorted as {B,A}
- inserted as a new path from the root: B:1 → A:1

Transaction 2: {B,C,D}, sorted as {B,C,D}
- shares prefix B with the existing path: B becomes B:2, with a new branch C:1 → D:1

Transaction 3: {A,C,D,E}, sorted as {A,C,D,E}
- no common prefix: new path from the root A:1 → C:1 → D:1 → E:1

Transaction 4: {A,D,E}, sorted as {A,D,E}
- shares prefix A: A becomes A:2, with a new branch D:1 → E:1

Transaction 5: {A,B,C}, sorted as {B,A,C}
- follows B → A (B:3, A:2) and adds a new node C:1

Transaction 6: {A,B,C,D}, sorted as {B,A,C,D}
- follows B → A → C (B:4, A:3, C:2) and adds a new node D:1

Transaction 7: {B,C}, sorted as {B,C}
- follows B → C (B:5, C:2)

Transaction 8: {A,B,C}, sorted as {B,A,C}
- follows B → A → C (B:6, A:4, C:3)

Transaction 9: {A,B,D}, sorted as {B,A,D}
- follows B → A (B:7, A:5) and adds a new node D:1

Transaction 10: {B,C,E}, sorted as {B,C,E}
- follows B → C (B:8, C:3) and adds a new node E:1

Final FP-tree (as root-to-leaf paths with node counts):
- B:8 → A:5 → C:3 → D:1
- B:8 → A:5 → D:1
- B:8 → C:3 → D:1
- B:8 → C:3 → E:1
- A:2 → C:1 → D:1 → E:1
- A:2 → D:1

Item pointers (the header table) are used to assist frequent itemset generation.

FP-growth Algorithm

Scan the header table from the lowest-support item up. For each item i in the header table, extract the frequent itemsets including item i and the items preceding it in the header table:
(1) build the Conditional Pattern Base for item i (i-CPB), selecting the prefix paths of item i from the FP-tree
(2) recursive invocation of FP-growth on the i-CPB

Example

Consider item D and extract the frequent itemsets including D and supported combinations of items A, B, C.

Conditional Pattern Base of D

(1) Build the D-CPB: select the prefix paths of item D from the FP-tree.
Frequent itemset: D, sup(D) = 5

Conditional Pattern Base of D

The prefix paths of D are collected one at a time, each with count 1:
{B,A,C} 1, then {B,A} 1, then {B,C} 1, then {A,C} 1

Conditional Pattern Base of D

D-CPB: {B,A,C} 1, {B,A} 1, {B,C} 1, {A,C} 1, {A} 1

(1) Build the D-CPB by selecting the prefix paths of item D from the FP-tree. Item supports within the D-CPB: {A} 4, {B} 3, {C} 3 (all frequent).

D-conditional FP-tree, as paths with counts: A:4 → B:2 → C:1; A:4 → C:1; B:1 → C:1

(2) Recursive invocation of FP-growth on the D-CPB
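Computing the item supports inside a conditional pattern base is a simple weighted count over its prefix paths. A minimal sketch (our own helper, with the D-CPB of the example hard-coded as (path, count) pairs):

```python
from collections import Counter

def conditional_supports(cpb, minsup_count):
    """Item supports inside a conditional pattern base, given as a list of
    (prefix_path, count) pairs; items below minsup_count are dropped."""
    counts = Counter()
    for path, c in cpb:
        for item in path:
            counts[item] += c
    return {i: c for i, c in counts.items() if c >= minsup_count}

# D-CPB from the example: each prefix path of D carries count 1
d_cpb = [(("B","A","C"), 1), (("B","A"), 1), (("B","C"), 1),
         (("A","C"), 1), (("A",), 1)]
print(conditional_supports(d_cpb, 2))  # A: 4, B: 3, C: 3
```

These are exactly the supports used to build the D-conditional FP-tree above.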

Conditional Pattern Base of DC

(1) Build the DC-CPB: select the prefix paths of item C from the D-conditional FP-tree.
Frequent itemset: DC, sup(DC) = 3

DC-CPB: {A,B} 1, {A} 1, {B} 1
Item supports within the DC-CPB: {A} 2, {B} 2 (both frequent).

DC-conditional FP-tree, as paths with counts: A:2 → B:1; B:1

(2) Recursive invocation of FP-growth on the DC-CPB

Conditional Pattern Base of DCB

(1) Build the DCB-CPB: select the prefix paths of item B from the DC-conditional FP-tree.
Frequent itemset: DCB, sup(DCB) = 2

DCB-CPB: {A} 1
Item A is infrequent in the DCB-CPB; A is removed and the DCB-CPB is empty.

(2) The search backtracks to the DC-CPB.

Conditional Pattern Base of DCA

(1) Build the DCA-CPB: select the prefix paths of item A from the DC-conditional FP-tree.
Frequent itemset: DCA, sup(DCA) = 2
The DCA-CPB is empty (no transactions).

(2) The search backtracks to the D-CPB.

Conditional Pattern Base of DB

(1) Build the DB-CPB: select the prefix paths of item B from the D-conditional FP-tree.
Frequent itemset: DB, sup(DB) = 3

DB-CPB: {A} 2

Conditional Pattern Base of DB

DB-conditional FP-tree: the single path A:2

(2) Recursive invocation of FP-growth on the DB-CPB

Conditional Pattern Base of DBA

(1) Build the DBA-CPB: select the prefix paths of item A from the DB-conditional FP-tree.
Frequent itemset: DBA, sup(DBA) = 2
The DBA-CPB is empty (no transactions).

(2) The search backtracks to the D-CPB.

Conditional Pattern Base of DA

(1) Build the DA-CPB: select the prefix paths of item A from the D-conditional FP-tree.
Frequent itemset: DA, sup(DA) = 4
The DA-CPB is empty (no transactions). The search for prefix D ends.

Frequent itemsets with prefix D

Frequent itemsets including D and supported combinations of items B, A, C:
{A,D} 4, {B,D} 3, {C,D} 3, {A,B,D} 2, {A,C,D} 2, {B,C,D} 2

Other approaches

Many other approaches to frequent itemset extraction exist, some covered later. They may exploit a different database representation, e.g., representing the tidset of each item [Zak00].

Horizontal data layout (TID → items) vs vertical data layout (item → tidset):

TID  Items        Item  Tidset
1    A,B,E        A     1,4,5,6,7,8,9
2    B,C,D        B     1,2,5,7,8,10
3    C,E          C     2,3,4,5,8,9
4    A,C,D        D     2,4,5,9
5    A,B,C,D      E     1,3,6
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B

From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006

Compact Representations

Some itemsets are redundant because they have the same support as their supersets.

[Table: 15 transactions over 30 items A1..A10, B1..B10, C1..C10, where each transaction contains exactly one complete block of 10 items. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Number of frequent itemsets = 3 × Σ_{k=1}^{10} (10 choose k)

A compact representation is needed.
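In the vertical layout, the support count of any itemset is simply the size of the intersection of its items' tidsets. A small sketch over the [Zak00] example DB above (variable names are ours):

```python
from collections import defaultdict

# Example DB from the slide, TID -> items (each item is one character)
db = {1: "ABE", 2: "BCD", 3: "CE", 4: "ACD", 5: "ABCD",
      6: "AE", 7: "AB", 8: "ABC", 9: "ACD", 10: "B"}

# Vertical layout: item -> tidset
tidsets = defaultdict(set)
for tid, items in db.items():
    for item in items:
        tidsets[item].add(tid)

# Support count of {A,B} = size of the intersection of the two tidsets
sup_AB = len(tidsets["A"] & tidsets["B"])
print(sorted(tidsets["A"]))  # [1, 4, 5, 6, 7, 8, 9]
print(sup_AB)                # 4
```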

Maximal Frequent Itemset

An itemset is maximal frequent if none of its immediate supersets is frequent.

[Figure: itemset lattice showing the border between frequent and infrequent itemsets; the maximal itemsets lie just inside the border. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Closed Itemset

An itemset is closed if none of its immediate supersets has the same support as the itemset.

Example DB
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

itemset  sup    itemset    sup
{A}      4      {A,B,C}    2
{B}      5      {A,B,D}    3
{C}      3      {A,C,D}    2
{D}      4      {B,C,D}    3
{A,B}    4      {A,B,C,D}  2
{A,C}    2
{A,D}    3
{B,C}    3
{B,D}    4
{C,D}    3

From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006
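The two definitions can be checked mechanically on this small example DB. The sketch below (our own, feasible only because there are just four items) enumerates all itemset supports, then flags closed itemsets and, for an assumed minsup count of 3, maximal frequent itemsets:

```python
from itertools import combinations

db = [{"A","B"}, {"B","C","D"}, {"A","B","C","D"}, {"A","B","D"}, {"A","B","C","D"}]
items = sorted({i for t in db for i in t})

# Exhaustive support table for every non-empty itemset
sup = {}
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = frozenset(combo)
        sup[s] = sum(1 for t in db if s <= t)

def immediate_supersets(s):
    return [s | {i} for i in items if i not in s]

# Closed: no immediate superset has the same support
closed = {s for s, c in sup.items()
          if all(sup[t] < c for t in immediate_supersets(s))}

# Maximal frequent (with an illustrative minsup count of 3):
# frequent, with no frequent immediate superset
minsup_count = 3
frequent = {s for s, c in sup.items() if c >= minsup_count}
maximal = {s for s in frequent
           if not any(t in frequent for t in immediate_supersets(s))}
```

For instance {A} is not closed (its superset {A,B} also has support 4), while {B} is closed; with minsup count 3, the maximal frequent itemsets are {A,B,D} and {B,C,D}.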

Maximal vs Closed Itemsets

[Figure: itemset lattice for the transactions 1: ABC, 2: ABCD, 3: BCE, 4: ACDE, 5: DE, with each itemset annotated with the ids of the transactions supporting it; ABCDE is not supported by any transaction. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Maximal vs Closed Frequent Itemsets

With minimum support = 2, the same lattice contains itemsets that are closed but not maximal, and itemsets that are both closed and maximal.
# Closed = 9, # Maximal = 4

[Figure from: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Maximal vs Closed Itemsets

[Figure: relationship among the three sets: maximal frequent itemsets ⊆ closed frequent itemsets ⊆ frequent itemsets. From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006]

Effect of Support Threshold

Selection of the appropriate minsup threshold is not obvious.
- If minsup is too high, itemsets including rare but interesting items may be lost (example: pieces of jewellery, or other expensive products)
- If minsup is too low, extraction may become computationally very expensive and the number of frequent itemsets very large

Interestingness Measures

A large number of patterns may be extracted; patterns can be ranked by their interestingness.

Objective measures
- rank patterns based on statistics computed from data
- the initial framework [Agr94] only considered support and confidence; other statistical measures are available

Subjective measures [Silb98]
- rank patterns according to user interpretation
- a pattern is interesting if it contradicts the expectation of a user
- a pattern is interesting if it is actionable

Confidence measure: always reliable?

5000 high school students are surveyed:
- 3750 eat cereals
- 3000 play basketball
- 2000 eat cereals and play basketball

The rule "play basketball ⇒ eat cereals" has sup = 40% and conf = 66.7%, but it is misleading because "eat cereals" alone has sup = 75% (> 66.7%).

The problem is caused by the high frequency of the rule head: the two items are negatively correlated.

             basketball  not basketball  total
cereals      2000        1750            3750
not cereals  1000        250             1250
total        3000        2000            5000

Correlation or lift

For a rule r: A ⇒ B,

    correlation = P(A,B) / (P(A) P(B)) = conf(r) / sup(B)

- Statistical independence: correlation = 1
- Positive correlation: correlation > 1
- Negative correlation: correlation < 1

Example

The association rule "play basketball ⇒ eat cereals" has corr = 0.89: negative correlation. The rule "play basketball ⇒ not (eat cereals)" instead has corr = 1.34.
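The two correlation values of the example follow directly from the survey numbers (a small arithmetic sketch; variable names are ours):

```python
# Survey numbers from the cereals/basketball example
n = 5000          # students
cereals = 3750    # eat cereals
basket = 3000     # play basketball
both = 2000       # do both

conf = both / basket                        # 2000/3000, about 0.667
corr = conf / (cereals / n)                 # 0.667 / 0.75, about 0.89 (negative)
conf_not = (basket - both) / basket         # basketball and NOT cereals: 1000/3000
corr_not = conf_not / ((n - cereals) / n)   # 0.333 / 0.25, about 1.33 (positive)
```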
