D B M G. Association Rules: Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino.

Similar documents
D B M G Data Base and Data Mining Group of Politecnico di Torino

D B M G. Association Rules. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

Association Rules. Fundamentals

Data Mining Concepts & Techniques

Lecture Notes for Chapter 6. Introduction to Data Mining

CS 484 Data Mining. Association Rule Mining 2

CS 584 Data Mining. Association Rule Mining 2

COMP 5331: Knowledge Discovery and Data Mining

DATA MINING - 1DL360

DATA MINING - 1DL360

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

Data Mining: Concepts and Techniques (3rd ed.), Chapter 6

Introduction to Data Mining, 2nd Edition, by Tan, Steinbach, Karpatne, Kumar

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

Association Rules. Acknowledgements. Some parts of these slides are modified from. n C. Clifton & W. Aref, Purdue University

Descriptive data analysis. Example. Data Mining MTAT (6EAP)

Association Analysis Part 2. FP Growth (Pei et al 2000)

732A61/TDDD41 Data Mining - Clustering and Association Analysis

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

Descriptive data analysis. E.g. set of items. Example. Data Mining MTAT (6EAP)

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example

Chapters 6 & 7, Frequent Pattern Mining

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

Meelis Kull - Autumn. MTAT Data Mining - Lecture 05

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example item id s

Association Rule Mining on Web

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

CSE 5243 INTRO. TO DATA MINING

Exercise 1. min_sup = 0.3. Items support Type a 0.5 C b 0.7 C c 0.5 C d 0.9 C e 0.6 F

Frequent Itemsets and Association Rule Mining. Vinay Setty Slides credit:

FP-growth and PrefixSpan

Frequent Itemset Mining

Machine Learning: Pattern Mining

Unit II Association Rules

Association Analysis. Part 1

Course Content. Association Rules Outline. Chapter 6 Objectives. Chapter 6: Mining Association Rules. Dr. Osmar R. Zaïane. University of Alberta 4

CSE 5243 INTRO. TO DATA MINING

Introduction to Data Mining

Knowledge Discovery and Data Mining I

Temporal Data Mining

Basic Data Structures and Algorithms for Data Profiling Felix Naumann

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

Association Rule Mining. Pekka Malo, Assist. Prof. (statistics). Aalto BIZ / Department of Information and Service Management

Association Rule Mining

CS5112: Algorithms and Data Structures for Applications

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

Frequent Itemset Mining

Association Analysis 1

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

Pattern Space Maintenance for Data Updates. and Interactive Mining

CS 412 Intro. to Data Mining

1 Frequent Pattern Mining

Frequent Itemset Mining

Frequent Itemset Mining

Association Analysis. Part 2

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Four Paradigms in Data Mining

A New Concise and Lossless Representation of Frequent Itemsets Using Generators and A Positive Border

Data Mining and Analysis: Fundamental Concepts and Algorithms

Handling a Concept Hierarchy

Sequential Pattern Mining

Interesting Patterns. Jilles Vreeken. 15 May 2015

FREQUENT PATTERN SPACE MAINTENANCE: THEORIES & ALGORITHMS

Positive Borders or Negative Borders: How to Make Lossless Generator Based Representations Concise

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

Apriori algorithm. Seminar of Popular Algorithms in Data Mining and Machine Learning, TKK. Presentation Lauri Lahti

COMP 5331: Knowledge Discovery and Data Mining

Encyclopedia of Machine Learning (2010): Frequent Pattern. Hannu Toivonen

Approximate counting: count-min data structure. Problem definition

What Is Association Rule Mining?

Data Warehousing & Data Mining

Data Warehousing & Data Mining

CSE4334/5334 Data Mining Association Rule Mining. Chengkai Li University of Texas at Arlington Fall 2017

OPPA European Social Fund Prague & EU: We invest in your future.

Mining Infrequent Patterns

CPT+: A Compact Model for Accurate Sequence Prediction

Effective Elimination of Redundant Association Rules

CS738 Class Notes. Steve Revilak

Frequent Pattern Mining: Exercises

Data preprocessing. DataBase and Data Mining Group 1. Data set types. Tabular Data. Document Data. Transaction Data. Ordered Data

Mining Statistically Important Equivalence Classes and Delta-Discriminative Emerging Patterns

Mining Strong Positive and Negative Sequential Patterns

Data mining, 4 cu Lecture 5:

Assignment 7 (Sol.) Introduction to Data Analytics Prof. Nandan Sudarsanam & Prof. B. Ravindran

Density-Based Clustering

FARMER: Finding Interesting Rule Groups in Microarray Datasets

.. Cal Poly CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

DATA MINING - 1DL105, 1DL111

On Condensed Representations of Constrained Frequent Patterns

On Compressing Frequent Patterns

3. Problem definition

arxiv: v1 [cs.db] 31 Dec 2011

Data Warehousing. Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig

Transcription:

Definitions

An itemset is a set including one or more items. Example: {Beer, Diapers}.
A k-itemset is an itemset that contains k items.
The support count (#) is the frequency of occurrence of an itemset. Example: #{Beer, Diapers} = 2.
The support is the fraction of transactions that contain an itemset. Example: sup({Beer, Diapers}) = 2/5.
A frequent itemset is an itemset whose support is greater than or equal to a minsup threshold.

Example transactional database:
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diapers, Milk
4    Beer, Bread, Diapers, Milk
5    Coke, Diapers, Milk

Association rules

Objective: extraction of frequent correlations or patterns from a transactional database (e.g., tickets at a supermarket counter).
Example association rule: diapers ⇒ beer. "2% of transactions contain both items; 30% of transactions containing diapers also contain beer."

A collection of transactions is given: a transaction is a set of items, and items in a transaction are not ordered.
An association rule has the form A, B ⇒ C, where A, B are the items in the rule body and C is the item in the rule head. The ⇒ means co-occurrence, not causality.
Examples:
cereals, cookies ⇒ milk
age < 40, life-insurance = yes ⇒ children = yes (customer relationship data mining)

Rule quality metrics

Given the association rule A ⇒ B, where A and B are itemsets:
Support is the fraction of transactions containing both A and B: sup(A,B) = #{A,B} / |T|, where |T| is the cardinality of the transactional database. It is the a priori probability of itemset AB, i.e., the frequency of the rule in the database.
Confidence is the frequency of B in the transactions containing A: conf(A ⇒ B) = sup(A,B) / sup(A). It is the conditional probability of finding B having found A, and measures the strength of the ⇒.

Rule quality metrics: example

From itemset {Milk, Diapers} the following rules may be derived:
Rule Milk ⇒ Diapers: support sup = #{Milk,Diapers} / #transactions = 3/5 = 60%; confidence conf = #{Milk,Diapers} / #{Milk} = 3/4 = 75%.
Rule Diapers ⇒ Milk: same support, sup = 60%; confidence conf = #{Milk,Diapers} / #{Diapers} = 3/3 = 100%.
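These metrics are easy to recompute. The following minimal sketch (not part of the slides; the function names are illustrative) evaluates support and confidence on the 5-transaction example database above:

```python
# Sketch: recompute the slides' support/confidence values on the example DB.
transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diapers", "Milk"},
    {"Beer", "Bread", "Diapers", "Milk"},
    {"Coke", "Diapers", "Milk"},
]

def support_count(itemset):
    """Support count (#): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset):
    """Support: fraction of transactions containing the itemset."""
    return support_count(itemset) / len(transactions)

def confidence(body, head):
    """Confidence: frequency of the head in transactions containing the body."""
    return support_count(body | head) / support_count(body)

print(support({"Milk", "Diapers"}))       # 0.6  -> sup = 60%
print(confidence({"Milk"}, {"Diapers"}))  # 0.75 -> conf = 75%
print(confidence({"Diapers"}, {"Milk"}))  # 1.0  -> conf = 100%
```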

Association rule extraction

Given a set of transactions T, association rule mining is the extraction of the rules satisfying the constraints: support ≥ minsup threshold and confidence ≥ minconf threshold. The result is complete (all rules satisfying both constraints are extracted) and correct (only the rules satisfying both constraints are extracted). Other, more complex constraints may be added.

Frequent itemset generation

[Figure from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006: the itemset lattice over items A..E, from the null itemset up to ABCDE.] Given d items, there are 2^d possible candidate itemsets.

Brute-force approach: enumerate all possible association rules, compute support and confidence for each rule, and prune the rules that do not satisfy the minsup and minconf constraints. This is computationally unfeasible.

Given an itemset, the extraction process may instead be split: first generate the frequent itemsets, then generate the rules from each frequent itemset.
Example: itemset {Milk, Diapers}, sup = 60%. Rules: Milk ⇒ Diapers (conf = 75%), Diapers ⇒ Milk (conf = 100%).

Frequent itemset generation, brute-force approach: each itemset in the lattice is a candidate frequent itemset; scan the database to count the support of each candidate, matching each transaction against every candidate. Complexity ~ O(|T| · 2^d · w), where |T| is the number of transactions, d is the number of items, and w is the transaction length.

Association rule extraction proceeds in two steps:
(1) Extraction of frequent itemsets. Many different techniques exist: level-wise approaches (Apriori, ...), approaches without candidate generation (FP-growth, ...), and other approaches. This is the most computationally expensive step; extraction time is limited by means of the support threshold.
(2) Extraction of association rules: generation of all possible binary partitionings of each frequent itemset, possibly enforcing a confidence threshold.

Improving efficiency:
Reduce the number of candidates: prune the search space (the complete set of candidates is 2^d).
Reduce the number of transactions: prune transactions as the size of the itemsets increases (reduce |T|).
Reduce the number of comparisons (equal to |T| · 2^d): use efficient data structures to store the candidates or transactions.
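To make the 2^d blow-up concrete, here is a hedged sketch of the brute-force baseline (names such as brute_force_frequent are illustrative, not from the slides); it enumerates every lattice level and counts each candidate against every transaction:

```python
# Sketch of the brute-force baseline: enumerate all 2^d - 1 candidate itemsets
# and count each against every transaction -- cost ~ O(|T| * 2^d * w).
from itertools import combinations

def brute_force_frequent(transactions, minsup):
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, len(items) + 1):        # one lattice level per k
        for cand in combinations(items, k):
            count = sum(1 for t in transactions if set(cand) <= t)
            if count / len(transactions) >= minsup:
                frequent[cand] = count
    return frequent

# e.g. brute_force_frequent(transactions, 0.6) on the 5-transaction example DB
```

Already for d = 20 items this enumerates over a million candidates, which motivates the pruning strategies listed above.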

The Apriori principle

If an itemset is frequent, then all of its subsets must also be frequent: the support of an itemset can never exceed the support of any of its subsets. This holds due to the antimonotone property of the support measure: given two arbitrary itemsets A and B, if A ⊆ B then sup(A) ≥ sup(B). The principle reduces the number of candidates.

Apriori algorithm [Agr94], pseudo-code (Ck: candidate itemsets of size k; Lk: frequent itemsets of size k):

    L1 = {frequent items};
    for (k = 1; Lk != ∅; k++) do begin
        Ck+1 = candidates generated from Lk;
        for each transaction t in the database do
            increment the count of all candidates in Ck+1 that are contained in t;
        Lk+1 = candidates in Ck+1 satisfying minsup;
    end
    return ∪k Lk;

[Figure from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006: the lattice over A..E, where itemset AB is found to be infrequent and all of its supersets are pruned.]

Generating candidates: sort the Lk candidates in lexicographical order; for each candidate of length k, self-join it with each candidate sharing the same length-(k-1) prefix; then prune candidates by applying the Apriori principle.
Example: given L3 = {abc, abd, acd, ace, bcd}, the self-join generates abcd (from abc and abd) and acde (from acd and ace). Pruning by the Apriori principle removes acde, because ade is not in L3. Hence C4 = {abcd}.

The Apriori algorithm is thus a level-based approach: at each iteration it extracts the itemsets of a given length k. Two main steps are performed for each level:
(1) Candidate generation. Join step: generate candidates of length k+1 by joining frequent itemsets of length k. Prune step: apply the Apriori principle, pruning the length-(k+1) candidates that contain at least one k-itemset that is not frequent.
(2) Frequent itemset generation: scan the DB to count the support of the length-(k+1) candidates, and prune the candidates below minsup.

Apriori algorithm: example. Example DB: {A,B}, {A,B,C}, ..., {B,C,E} (10 transactions over items A, B, C, D, E; minsup > 1).
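Before tracing the example, the pseudo-code above can be turned into a runnable sketch. This is an assumed implementation, not the slides' code; the names apriori and minsup_count are illustrative:

```python
# Level-wise Apriori sketch: join, prune, then one DB scan per level.
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return {frozenset: support_count} for all frequent itemsets."""
    # L1: frequent single items
    counts = {}
    for t in transactions:
        for i in t:
            counts[frozenset([i])] = counts.get(frozenset([i]), 0) + 1
    L = {s: c for s, c in counts.items() if c >= minsup_count}
    result = dict(L)
    k = 1
    while L:
        prev = sorted(sorted(s) for s in L)        # lexicographic order
        cands = set()
        for a, b in combinations(prev, 2):
            if a[:-1] == b[:-1]:                   # join: same (k-1)-prefix
                c = frozenset(a) | frozenset(b)
                # prune: every k-subset must be frequent (Apriori principle)
                if all(frozenset(s) in L for s in combinations(c, k)):
                    cands.add(c)
        # one DB scan counts the surviving candidates
        counts = {c: sum(1 for t in transactions if c <= t) for c in cands}
        L = {c: n for c, n in counts.items() if n >= minsup_count}
        result.update(L)
        k += 1
    return result

# e.g. apriori(transactions, 2) on the 5-transaction market-basket DB above
```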

Generate candidate 1-itemsets: with a 1st DB scan, C1 = {{A}, {B}, {C}, {D}, {E}} with their support counts.

Prune infrequent candidates in C1: all itemsets in C1 are frequent according to minsup > 1, so L1 = C1.

Generate candidates from L1: C2 = {A,B}, {A,C}, {A,D}, {A,E}, {B,C}, {B,D}, {B,E}, {C,D}, {C,E}, {D,E}.

Count support for the candidates in C2 with a 2nd DB scan.

Prune infrequent candidates in C2: {B,E} is below minsup and is discarded; the remaining pairs form L2.

Generate candidates from L2: C3 = {A,B,C}, {A,B,D}, {A,B,E}, {A,C,D}, {A,C,E}, {A,D,E}, {B,C,D}, {C,D,E}.

Apply the Apriori principle on C3: prune {A,B,E}, because its subset {B,E} is infrequent ({B,E} is not in L2).

Count support for the candidates in C3 with a 3rd DB scan: {A,C,E} and {C,D,E} turn out to be infrequent and are discarded from C3; the remaining 3-itemsets form L3.

Generate candidates from L3: C4 = {A,B,C,D}.

Apply the Apriori principle on C4: check whether {A,C,D} and {B,C,D} belong to L3. Since L3 contains all 3-itemset subsets of {A,B,C,D}, {A,B,C,D} is potentially frequent.

Count support for the candidates in C4 with a 4th DB scan.

Prune infrequent candidates in C4: {A,B,C,D} is actually infrequent, so it is discarded from C4 and L4 = ∅.

Final set of frequent itemsets: the union L1 ∪ L2 ∪ L3 extracted from the example DB.

Performance issues in Apriori:
Candidate generation: candidate sets may be huge; 2-itemset candidate generation is the most critical step; extracting long frequent itemsets requires generating all their frequent subsets.
Multiple database scans: n + 1 scans are needed when the longest frequent pattern has length n.

Factors affecting performance:
Minimum support threshold: a lower support threshold increases the number of frequent itemsets, hence a larger number of candidates and a larger (maximum) length of frequent itemsets.
Dimensionality (number of items) of the data set: more space is needed to store the support count of each item; if the number of frequent items also increases, both computation and I/O costs may increase.
Size of the database: since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
Average transaction width: transaction width increases in dense data sets; this may increase the maximum length of frequent itemsets and the number of hash-tree traversals, since the number of subsets in a transaction increases with its width.

Counting support of candidates: scanning the transaction database to count the support of each itemset is costly, because the total number of candidates may be large and one transaction may contain many candidates. Approach [Agr94]: candidate itemsets are stored in a hash tree; a leaf node of the hash tree contains a list of itemsets and counts, while an interior node contains a hash table; a subset function finds all candidates contained in a transaction by matching transaction subsets to the candidates in the hash tree.

Improving Apriori efficiency:
Hash-based itemset counting [Yu95]: a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent.
Transaction reduction [Yu95]: a transaction that does not contain any frequent k-itemset is useless in subsequent scans (see the sketch after this list).
Partitioning [Sav96]: any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB.
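The transaction-reduction idea is simple enough to sketch directly; this is an illustrative helper (assumed names), not code from [Yu95]:

```python
# Sketch of transaction reduction: drop transactions that contain no frequent
# k-itemset, since they cannot support any (k+1)-candidate in later scans.
from itertools import combinations

def reduce_transactions(transactions, Lk, k):
    """Lk: set/dict of frozensets (the frequent k-itemsets)."""
    return [t for t in transactions
            if any(frozenset(c) in Lk for c in combinations(sorted(t), k))]
```

Called once per level, this shrinks |T| and hence the cost of every subsequent scan.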

Improving Apriori efficiency (continued):
Sampling [Toi96]: mine on a subset of the given data with a lower support threshold, plus a method to determine the completeness of the result.
Dynamic Itemset Counting [Motw9]: add new candidates only when all of their subsets are estimated to be frequent.

FP-growth algorithm [Han00]

FP-growth exploits a main-memory compressed representation of the database, the FP-tree:
high compression for dense data distributions, less so for sparse data distributions;
a complete representation for frequent pattern mining;
it enforces the support constraint.
Frequent pattern mining is performed by means of FP-growth: recursive visits of the FP-tree, applying a divide-and-conquer approach that decomposes the mining task into smaller subtasks.
Only two database scans are needed: one to count item supports, and one to build the FP-tree.

FP-tree construction. Example DB: {A,B}, {A,B,C}, ..., {B,C,E} (10 transactions; minsup > 1).
(1) Count item supports and prune the items below the minsup threshold.
(2) Build the header table by sorting items in decreasing support order.
(3) Create the FP-tree: for each transaction t in the DB, order the items of t in decreasing support order (the same order as the header table) and insert transaction t into the FP-tree, using the existing path for the common prefix and creating a new branch when the path diverges.

[Figures: the FP-tree after the first insertions, e.g. transactions {A,B} and {A,B,C} reordered as {B,A} and {B,A,C}, with node counters such as B:2, A:1, C:1.]
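The two-pass construction can be sketched as follows. The class and function names (FPNode, build_fptree) are assumptions for illustration, not the slides' code:

```python
# FP-tree construction sketch: pass 1 counts item supports, pass 2 inserts each
# transaction with items re-sorted in decreasing global support order.
from collections import defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fptree(transactions, minsup_count):
    # Pass 1: item supports, pruning infrequent items
    support = defaultdict(int)
    for t in transactions:
        for i in t:
            support[i] += 1
    support = {i: c for i, c in support.items() if c >= minsup_count}
    # Header-table order: decreasing support
    order = {i: r for r, i in
             enumerate(sorted(support, key=support.get, reverse=True))}
    root = FPNode(None, None)
    header = defaultdict(list)   # item -> node-link list (item pointers)
    # Pass 2: insert transactions, sharing common prefixes
    for t in transactions:
        path = sorted((i for i in t if i in order), key=order.get)
        node = root
        for i in path:
            if i not in node.children:
                node.children[i] = FPNode(i, node)
                header[i].append(node.children[i])
            node = node.children[i]
            node.count += 1       # shared prefixes just increment counters
    return root, header, order
```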

FP-tree construction (continued): [Figures: successive snapshots of the FP-tree while further transactions, such as 5 {B,A,C}, 6 {B,A,C,D}, and {A,B,C} reordered as {B,A,C}, are inserted; shared prefixes increment the existing node counters (e.g. B:4 → B:5 → B:6, A:3 → A:4), while diverging suffixes create new branches.]

FP-tree construction (end): the last transactions, 9 {B,A,D} and 10 {B,C,E}, are inserted. [Figure: the final FP-tree with its header table; the item pointers (node-links) connect all nodes carrying the same item and are used to assist frequent itemset generation.]

FP-growth algorithm: scan the header table from the lowest-support item up. For each item i in the header table, extract the frequent itemsets including item i and the items preceding it in the header table:
(1) build the Conditional Pattern Base for item i (i-CPB), selecting the prefix paths of item i from the FP-tree;
(2) recursively invoke FP-growth on the i-CPB.

Example: consider item D and extract the frequent itemsets including D and supported combinations of items A, B, C. Summing the counts of the D nodes along its node-links gives the frequent itemset D, with sup(D) = 5.
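Extracting a conditional pattern base amounts to walking the node-links of the item and climbing to the root. A minimal sketch, built on the FPNode structure assumed above:

```python
# Sketch: collect the prefix paths of `item`, each weighted by the count of
# the item's node that ends it -- this is the i-CPB of the slides.
def conditional_pattern_base(header, item):
    cpb = []   # list of (prefix_path, count)
    for node in header[item]:
        path, p = [], node.parent
        while p is not None and p.item is not None:   # stop at the root
            path.append(p.item)
            p = p.parent
        if path:
            cpb.append((list(reversed(path)), node.count))
    return cpb
```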

Conditional Pattern Base of D:
(1) Build the D-CPB by selecting the prefix paths of item D from the FP-tree (following D's node-links). The D-CPB contains the paths {B,A,C}, {B,A}, {B,C}, {A,C}, and {C}, each weighted by the count of the D node that ends it.
(2) Recursively invoke FP-growth on the D-CPB: build the D-conditional FP-tree from these weighted paths and mine it. [Figure: the D-conditional FP-tree with its header table; within the D-CPB, item A has support 4, B has support 3, and C has support 3.]

Conditional Pattern Base of DC:
(1) Build the DC-CPB: select the prefix paths of item C from the D-conditional FP-tree. Frequent itemset: DC, sup(DC) = 3. The DC-CPB contains the prefix path {A,B} with count 2.
(2) Recursively invoke FP-growth on the DC-CPB.

Conditional Pattern Base of DCA: build the DCA-CPB by selecting the prefix paths of item A from the DC-conditional FP-tree. Frequent itemset: DCA, sup(DCA) = 2. The DCA-CPB is empty (no transactions), so the search backtracks.

Conditional Pattern Base of DCB: build the DCB-CPB by selecting the prefix paths of item B from the DC-conditional FP-tree. Frequent itemset: DCB, sup(DCB) = 2. Item A is infrequent in the DCB-CPB and is removed; the DCB-CPB becomes empty, and the search backtracks to the DC-CPB and then to the D-CPB.
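The recursion just traced for the prefix D can be written compactly by combining the two earlier sketches (build_fptree and conditional_pattern_base). This is an illustrative, assumed implementation; a real one would carry path counts instead of expanding them into repeated transactions:

```python
# Recursive FP-growth sketch: for each item (lowest support first), emit the
# itemset suffix + item, then recurse on its conditional pattern base.
def fpgrowth(transactions, minsup_count, suffix=frozenset()):
    root, header, order = build_fptree(transactions, minsup_count)
    results = {}
    for item in sorted(order, key=order.get, reverse=True):
        new_suffix = suffix | {item}
        results[new_suffix] = sum(node.count for node in header[item])
        cpb = conditional_pattern_base(header, item)
        # Naive expansion of (path, count) pairs into a conditional DB
        cond_db = [path for path, count in cpb for _ in range(count)]
        if cond_db:
            results.update(fpgrowth(cond_db, minsup_count, new_suffix))
    return results

# e.g. fpgrowth(transactions, 2) reproduces the itemsets found level-wise
```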

Conditional Pattern Base of DB: build the DB-CPB by selecting the prefix paths of item B from the D-conditional FP-tree. Frequent itemset: DB, sup(DB) = 3. The DB-CPB contains the prefix path {A} with count 2; FP-growth is recursively invoked on it.

Conditional Pattern Base of DBA: build the DBA-CPB by selecting the prefix paths of item A from the DB-conditional FP-tree. Frequent itemset: DBA, sup(DBA) = 2. The DBA-CPB is empty (no transactions), so the search backtracks to the D-CPB.

Conditional Pattern Base of DA: build the DA-CPB by selecting the prefix paths of item A from the D-conditional FP-tree. Frequent itemset: DA, sup(DA) = 4. The DA-CPB is empty (no transactions): the search ends.

Frequent itemsets with prefix D: the process has extracted all frequent itemsets including D and supported combinations of items B, A, C: D (5), DC (3), DCA (2), DCB (2), DB (3), DBA (2), DA (4), for the example DB with minsup > 1.

Other approaches

Many other approaches to frequent itemset extraction exist (some are covered later). They may exploit a different database representation, e.g., representing the tidset of each item [Zak00] (a sketch follows after this section):

Horizontal data layout:          Vertical data layout (tidsets):
TID  Items                       A: 1,4,5,6,7,8,9
1    A,B,E                       B: 1,2,5,7,8,10
2    B,C,D                       C: 2,3,4,8,9
3    C,E                         D: 2,4,5,9
4    A,C,D                       E: 1,3,6
5    A,B,C,D
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B
(From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006)

Compact representations

Some itemsets are redundant because they have identical support as their supersets. [Figure from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006: a binary transaction table over items A1..A10, B1..B10, C1..C10, in which three blocks of ten transactions each contain the same ten items.] The number of frequent itemsets is 3 × Σ_{k=1..10} C(10,k). A compact representation is needed.
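Returning to the vertical layout above: in a tidset representation, the support of an itemset is the size of the intersection of its items' tidsets. A minimal sketch (assumed helper names, not from [Zak00]):

```python
# Sketch of the vertical (tidset) representation: support via intersection.
from functools import reduce

def tidsets(transactions):
    """Map each item to the set of transaction ids containing it."""
    v = {}
    for tid, t in enumerate(transactions, start=1):
        for i in t:
            v.setdefault(i, set()).add(tid)
    return v

def support_from_tidsets(v, itemset):
    return len(reduce(set.intersection, (v[i] for i in itemset)))

# e.g. support_from_tidsets(tidsets(transactions), {"Milk", "Diapers"}) == 3
```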

Maximal frequent itemset

An itemset is maximal frequent if none of its immediate supersets is frequent. [Figure from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006: the itemset lattice with the border separating frequent from infrequent itemsets; the maximal itemsets lie just inside the border.]

Closed itemset

An itemset is closed if none of its immediate supersets has the same support as the itemset (see the sketch after this section).
Example (from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006):
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}
Supports: {A}: 4, {B}: 5, {C}: 3, {D}: 4; {A,B}: 4, {A,C}: 2, {A,D}: 3, {B,C}: 3, {B,D}: 4, {C,D}: 3; {A,B,C}: 2, {A,B,D}: 3, {A,C,D}: 2, {B,C,D}: 3; {A,B,C,D}: 2.

Maximal vs closed itemsets: [Figure from Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006: the lattice for the database TID 1: ABC, 2: ABCD, 3: BCE, 4: ACDE, 5: DE, with each itemset annotated with its supporting transaction ids; itemsets not supported by any transaction are crossed out. With minimum support = 2, some frequent itemsets are closed but not maximal, others are both closed and maximal: # closed = 9, # maximal = 4.]

Effect of support threshold

Selecting the appropriate minsup threshold is not obvious. If minsup is too high, itemsets including rare but interesting items may be lost (example: pieces of jewellery or other expensive products). If minsup is too low, extraction may become computationally very expensive and the number of frequent itemsets very large.
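Both definitions reduce to checking immediate supersets, which the following hedged sketch makes explicit (assumed function name; freq could come from the apriori sketch above):

```python
# Sketch: flag closed and maximal itemsets among the frequent ones.
def closed_and_maximal(freq):
    """freq: {frozenset: support_count} for all frequent itemsets."""
    items = set().union(*freq)
    closed, maximal = set(), set()
    for s, sup in freq.items():
        supersets = [s | {i} for i in items - s]
        # closed: no immediate superset has the same support
        if all(t not in freq or freq[t] < sup for t in supersets):
            closed.add(s)
        # maximal: no immediate superset is frequent at all
        if all(t not in freq for t in supersets):
            maximal.add(s)
    return closed, maximal
```

Every maximal itemset is closed, but not vice versa, which matches the counts in the figure (# closed = 9 ≥ # maximal = 4).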

Interestingness measures

A large number of patterns may be extracted, so patterns are ranked by their interestingness.
Objective measures rank patterns based on statistics computed from data; the initial framework [Agr94] only considered support and confidence, but other statistical measures are available.
Subjective measures rank patterns according to user interpretation [Silb9]: a pattern is interesting if it contradicts the expectation of a user, or if it is actionable.

Confidence measure: is it always reliable? 5000 high school students are surveyed: 3750 eat cereals, 3000 play basketball, and 2000 both eat cereals and play basketball. The rule "play basketball ⇒ eat cereals" has sup = 40% and conf = 66.7%, but it is misleading, because "eat cereals" alone has support 75% (> 66.7%). The problem is caused by the high frequency of the rule head, which hides a negative correlation:

              basketball   not basketball   total
cereals       2000         1750             3750
not cereals   1000          250             1250
total         3000         2000             5000

Correlation or lift

For a rule r: A ⇒ B:

corr(r) = P(A,B) / (P(A) · P(B)) = conf(r) / sup(B)

Statistical independence: correlation = 1. Positive correlation: correlation > 1. Negative correlation: correlation < 1.

Example: the rule "play basketball ⇒ eat cereals" has corr = 0.89 (negative correlation), whereas the rule "play basketball ⇒ not (eat cereals)" has corr = 1.33.
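A quick sketch recomputing the example's numbers (the variable names are illustrative; the counts come from the slides):

```python
# Recompute confidence and lift for the basketball/cereal example.
n = 5000
cereals, basket, both = 3750, 3000, 2000

conf = both / basket                                  # 0.667: looks strong...
lift = (both / n) / ((basket / n) * (cereals / n))
print(round(conf, 3), round(lift, 2))                 # 0.667 0.89 -> negative correlation

not_both = basket - both                              # players who do not eat cereals
lift_neg = (not_both / n) / ((basket / n) * ((n - cereals) / n))
print(round(lift_neg, 2))                             # 1.33 -> positive correlation
```

A lift below 1 reveals that the high-confidence rule actually pairs negatively correlated events, which is exactly why confidence alone is not a reliable quality metric.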