D B M G. Association Rules. Fundamentals. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

Similar documents
D B M G. Association Rules. Fundamentals. Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino 1. Definitions.

D B M G Data Base and Data Mining Group of Politecnico di Torino

Association Rules. Fundamentals

Data Mining Concepts & Techniques

Lecture Notes for Chapter 6. Introduction to Data Mining

CS 484 Data Mining. Association Rule Mining 2

CS 584 Data Mining. Association Rule Mining 2

COMP 5331: Knowledge Discovery and Data Mining

DATA MINING - 1DL360

DATA MINING - 1DL360

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

Data Mining: Concepts and Techniques. (3 rd ed.) Chapter 6

Introduction to Data Mining, 2 nd Edition by Tan, Steinbach, Karpatne, Kumar

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

Association Rules. Acknowledgements. Some parts of these slides are modified from. n C. Clifton & W. Aref, Purdue University

Descriptive data analysis. Example. Example. Example. Example. Data Mining MTAT (6EAP)

Association Analysis Part 2. FP Growth (Pei et al 2000)

732A61/TDDD41 Data Mining - Clustering and Association Analysis

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

Descriptive data analysis. Example. E.g. set of items. Example. Example. Data Mining MTAT (6EAP)

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example

Chapters 6 & 7, Frequent Pattern Mining

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

Meelis Kull Autumn Meelis Kull - Autumn MTAT Data Mining - Lecture 05

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example item id s

Association Rule Mining on Web

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

CSE 5243 INTRO. TO DATA MINING

Exercise 1. min_sup = 0.3. Items support Type a 0.5 C b 0.7 C c 0.5 C d 0.9 C e 0.6 F

Frequent Itemsets and Association Rule Mining. Vinay Setty Slides credit:

FP-growth and PrefixSpan

Frequent Itemset Mining

Machine Learning: Pattern Mining

Unit II Association Rules

Association Analysis. Part 1

Course Content. Association Rules Outline. Chapter 6 Objectives. Chapter 6: Mining Association Rules. Dr. Osmar R. Zaïane. University of Alberta 4

Introduction to Data Mining

CSE 5243 INTRO. TO DATA MINING

Knowledge Discovery and Data Mining I

Temporal Data Mining

Basic Data Structures and Algorithms for Data Profiling Felix Naumann

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

Association Rule Mining. Pekka Malo, Assist. Prof. (statistics) Aalto BIZ / Department of Information and Service Management

Association Rule Mining

CS5112: Algorithms and Data Structures for Applications

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

Association Analysis 1

Frequent Itemset Mining

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

1 Frequent Pattern Mining

Pattern Space Maintenance for Data Updates. and Interactive Mining

CS 412 Intro. to Data Mining

Frequent Itemset Mining

Frequent Itemset Mining

Association Analysis. Part 2

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Four Paradigms in Data Mining

A New Concise and Lossless Representation of Frequent Itemsets Using Generators and A Positive Border

Data Mining and Analysis: Fundamental Concepts and Algorithms

Handling a Concept Hierarchy

Sequential Pattern Mining

Interesting Patterns. Jilles Vreeken. 15 May 2015

FREQUENT PATTERN SPACE MAINTENANCE: THEORIES & ALGORITHMS

Positive Borders or Negative Borders: How to Make Lossless Generator Based Representations Concise

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

Apriori algorithm. Seminar of Popular Algorithms in Data Mining and Machine Learning, TKK. Presentation Lauri Lahti

COMP 5331: Knowledge Discovery and Data Mining

Encyclopedia of Machine Learning Chapter Number Book CopyRight - Year 2010 Frequent Pattern. Given Name Hannu Family Name Toivonen

Approximate counting: count-min data structure. Problem definition

Data Warehousing & Data Mining

What Is Association Rule Mining?

Data Warehousing & Data Mining

CSE4334/5334 Data Mining Association Rule Mining. Chengkai Li University of Texas at Arlington Fall 2017

OPPA European Social Fund Prague & EU: We invest in your future.

Mining Infrequent Patter ns

CPT+: A Compact Model for Accurate Sequence Prediction

CS738 Class Notes. Steve Revilak

Effective Elimination of Redundant Association Rules

Data preprocessing. DataBase and Data Mining Group 1. Data set types. Tabular Data. Document Data. Transaction Data. Ordered Data

Frequent Pattern Mining: Exercises

Mining Statistically Important Equivalence Classes and Delta-Discriminative Emerging Patterns

Mining Strong Positive and Negative Sequential Patterns

Data mining, 4 cu Lecture 5:

Density-Based Clustering

Assignment 7 (Sol.) Introduction to Data Analytics Prof. Nandan Sudarsanam & Prof. B. Ravindran

FARMER: Finding Interesting Rule Groups in Microarray Datasets

.. Cal Poly CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

3. Problem definition

DATA MINING - 1DL105, 1DL111

On Compressing Frequent Patterns

On Condensed Representations of Constrained Frequent Patterns

arxiv: v1 [cs.db] 31 Dec 2011

Data Warehousing. Wolf-Tilo Balke Silviu Homoceanu Institut für Informationssysteme Technische Universität Braunschweig

Transcription:

Association rules
Data Base and Data Mining Group of Politecnico di Torino

Objective: extraction of frequent correlations or patterns from a transactional database.
Example: tickets at a supermarket counter.

TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diapers, Milk
4 Beer, Bread, Diapers, Milk
5 Coke, Diapers, Milk

Association rule: diapers ⇒ beer. 2% of transactions contain both items; 30% of transactions containing diapers also contain beer.

Association rule mining: definitions
A collection of transactions is given: a transaction is a set of items, and items in a transaction are not ordered.
Association rule: A, B ⇒ C, where A, B are items in the rule body and C is the item in the rule head. The ⇒ means co-occurrence, not causality.
Examples: cereals, cookies ⇒ milk; age < 40, life-insurance = yes ⇒ children = yes.
Itemset: a set including one or more items. Example: {Beer, Diapers}.
k-itemset: an itemset that contains k items.
Support count (#): the frequency of occurrence of an itemset. Example: #{Beer, Diapers} = 2.
Support: the fraction of transactions that contain an itemset. Example: sup({Beer, Diapers}) = 2/5.
Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold.

Rule quality metrics
Given the association rule A ⇒ B, where A and B are itemsets:
Support is the fraction of transactions containing both A and B, #{A,B} / |T|, where |T| is the cardinality of the transactional database. It is the a priori probability of itemset AB, i.e., the rule frequency in the database.
Confidence is the frequency of B in transactions containing A, sup(A,B) / sup(A). It is the conditional probability of finding B having found A, i.e., the strength of the ⇒.

Rule quality metrics: example
From itemset {Milk, Diapers} (contained in transactions 3, 4, 5 of the table above) the following rules may be derived.
Rule: Milk ⇒ Diapers. Support: sup = #{Milk,Diapers}/#trans. = 3/5 = 60%. Confidence: conf = #{Milk,Diapers}/#{Milk} = 3/4 = 75%.
Rule: Diapers ⇒ Milk. Same support: s = 60%. Confidence: conf = #{Milk,Diapers}/#{Diapers} = 3/3 = 100%.
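
To make the two metrics concrete, here is a minimal Python sketch (helper names are mine, not from the slides) that recomputes the support and confidence values of the example above:

    # Example transactions from the slides (one set of items per transaction)
    transactions = [
        {"Bread", "Coke", "Milk"},
        {"Beer", "Bread"},
        {"Beer", "Coke", "Diapers", "Milk"},
        {"Beer", "Bread", "Diapers", "Milk"},
        {"Coke", "Diapers", "Milk"},
    ]

    def support_count(itemset, db):
        # Number of transactions containing every item of the itemset
        return sum(1 for t in db if itemset <= t)

    def support(itemset, db):
        # Fraction of transactions containing the itemset
        return support_count(itemset, db) / len(db)

    def confidence(body, head, db):
        # sup(body U head) / sup(body)
        return support(body | head, db) / support(body, db)

    print(support({"Milk", "Diapers"}, transactions))       # 0.6  (60%)
    print(confidence({"Milk"}, {"Diapers"}, transactions))  # 0.75 (75%)
    print(confidence({"Diapers"}, {"Milk"}, transactions))  # 1.0  (100%)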

Association rule extraction
Given a set of transactions T, association rule mining is the extraction of the rules satisfying the constraints: support ≥ minsup threshold and confidence ≥ minconf threshold.
The result is complete (all rules satisfying both constraints) and correct (only the rules satisfying both constraints).
Other, more complex constraints may be added.

Association rule extraction
Brute-force approach: enumerate all possible permutations (i.e., association rules), compute support and confidence for each rule, and prune the rules that do not satisfy the minsup and minconf constraints. Computationally unfeasible.
The extraction process may instead be split: first generate the frequent itemsets, next generate the rules from each frequent itemset.
Example: itemset {Milk, Diapers}, sup = 60%. Rules: Milk ⇒ Diapers (conf = 75%), Diapers ⇒ Milk (conf = 100%).

Association rule extraction
(1) Extraction of frequent itemsets: many different techniques exist (level-wise approaches such as Apriori, approaches without candidate generation such as FP-growth, and others). This is the most computationally expensive step; extraction time is limited by means of the support threshold.
(2) Extraction of association rules: generation of all possible binary partitionings of each frequent itemset, possibly enforcing a confidence threshold.

Frequent Itemset Generation
[Lattice figure: all itemsets over items A–E, from the empty set up to ABCDE. Given d items, there are 2^d possible candidate itemsets.]
From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006

Frequent Itemset Generation
Brute-force approach: each itemset in the lattice is a candidate frequent itemset; scan the database to count the support of each candidate, matching each transaction against every candidate.
Complexity ~ O(|T| 2^d w), where |T| is the number of transactions, d is the number of items, and w is the transaction length.

Improving Efficiency
Reduce the number of candidates: prune the search space (the complete set of candidates is 2^d).
Reduce the number of transactions: prune transactions as the size of the itemsets increases (reduce |T|).
Reduce the number of comparisons (equal to |T| 2^d): use efficient data structures to store the candidates or transactions.
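
To see why the brute force is unfeasible, here is a small sketch (function names are assumptions) that enumerates every itemset in the lattice and scans the database once per candidate, i.e., on the order of |T| · 2^d · w operations:

    from itertools import chain, combinations

    def all_itemsets(items):
        # Every non-empty subset of the item universe: 2^d - 1 candidates
        return chain.from_iterable(
            combinations(items, k) for k in range(1, len(items) + 1))

    def brute_force_frequent(db, minsup_count):
        items = sorted(set().union(*db))
        frequent = {}
        for candidate in all_itemsets(items):       # 2^d - 1 iterations
            c = set(candidate)
            count = sum(1 for t in db if c <= t)    # one full DB scan each time
            if count >= minsup_count:
                frequent[candidate] = count
        return frequent

    db = [{"A", "B"}, {"B", "C", "D"}, {"A", "C", "D", "E"}, {"A", "D", "E"}]
    print(brute_force_frequent(db, 2))

Already with d = 30 items this loop would enumerate about 10^9 candidates, which is why the candidate space must be pruned.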

The Apriori Principle
If an itemset is frequent, then all of its subsets must also be frequent: the support of an itemset can never exceed the support of any of its subsets.
It holds due to the anti-monotone property of the support measure: given two arbitrary itemsets A and B, if A ⊆ B then sup(A) ≥ sup(B).
It reduces the number of candidates.

The Apriori Principle
[Lattice figure: once an itemset is found to be infrequent, all of its supersets are pruned.]
From: Tan, Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006

Apriori Algorithm [Agr94]
Level-based approach: each iteration extracts the itemsets of a given length k.
Two main steps for each level:
(1) Candidate generation. Join step: generate candidates of length k+1 by joining frequent itemsets of length k. Prune step: apply the Apriori principle and prune the length k+1 candidates that contain at least one k-itemset that is not frequent.
(2) Frequent itemset generation: scan the DB to count support for the k+1 candidates and prune the candidates below minsup.

Apriori Algorithm [Agr94]: pseudo-code
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
  Ck+1 = candidates generated from Lk;
  for each transaction t in the database do
    increment the count of all candidates in Ck+1 that are contained in t;
  Lk+1 = candidates in Ck+1 satisfying minsup;
end
return ∪k Lk;

Generating Candidates
Sort the Lk candidates in lexicographical order. Self-join each candidate of length k with each candidate sharing the same (k-1)-prefix, then prune the resulting candidates by applying the Apriori principle.
Example: given L3 = {abc, abd, acd, ace, bcd}.
Self-join: abcd from abc and abd; acde from acd and ace.
Prune by applying the Apriori principle: acde is removed because ade is not in L3.
C4 = {abcd}.

Apriori Algorithm: Example
Example DB, minsup > 1:
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
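
A sketch of the join-and-prune step just described (the function name apriori_gen is mine), reproducing the L3 → C4 example:

    from itertools import combinations

    def apriori_gen(Lk):
        # Lk: length-k frequent itemsets, each a lexicographically sorted tuple
        Lk = sorted(Lk)
        k = len(Lk[0])
        joined = set()
        for a, b in combinations(Lk, 2):
            if a[:k - 1] == b[:k - 1]:          # self-join on the common (k-1)-prefix
                joined.add(tuple(sorted(set(a) | set(b))))
        frequent = set(Lk)
        # Prune step: every k-subset of a candidate must itself be frequent
        return {c for c in joined
                if all(s in frequent for s in combinations(c, k))}

    L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")}
    print(apriori_gen(L3))  # {('a','b','c','d')}: acde is pruned because ade is not in L3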

Generate candidate 1-itemsets
1st DB scan. C1 (item: support count): A: 7, B: 8, C: 7, D: 5, E: 3.

Prune infrequent candidates in C1
All the itemsets in C1 are frequent according to minsup > 1, so L1 = {A} {B} {C} {D} {E}.

Generate candidates from L1
C2 = {A,B} {A,C} {A,D} {A,E} {B,C} {B,D} {B,E} {C,D} {C,E} {D,E}

Count support for candidates in C2
2nd DB scan. {A,B}: 5, {A,C}: 4, {A,D}: 4, {A,E}: 2, {B,C}: 6, {B,D}: 3, {B,E}: 1, {C,D}: 3, {C,E}: 2, {D,E}: 2.

Prune infrequent candidates in C2
{B,E} is infrequent and is pruned.
L2 = {A,B} {A,C} {A,D} {A,E} {B,C} {B,D} {C,D} {C,E} {D,E}

Generate candidates from L2
C3 = {A,B,C} {A,B,D} {A,B,E} {A,C,D} {A,C,E} {A,D,E} {B,C,D} {C,D,E}

Apply Apriori principle on C3
Prune {A,B,E}: its subset {B,E} is infrequent ({B,E} is not in L2).

Count support for candidates in C3
3rd DB scan. {A,B,C}: 3, {A,B,D}: 2, {A,C,D}: 2, {A,C,E}: 1, {A,D,E}: 2, {B,C,D}: 2, {C,D,E}: 1.

Prune infrequent candidates in C3
{A,C,E} and {C,D,E} are actually infrequent: they are discarded from C3.
L3 = {A,B,C} {A,B,D} {A,C,D} {A,D,E} {B,C,D}

Generate candidates from L3
C4 = {A,B,C,D}

Apply Apriori principle on C4
Check whether {A,C,D} and {B,C,D} belong to L3. L3 contains all 3-itemset subsets of {A,B,C,D}, so {A,B,C,D} is potentially frequent.

Count support for candidates in C4
4th DB scan: count the support of {A,B,C,D}.

Prune infrequent candidates in C4
{A,B,C,D} is actually infrequent: it is discarded from C4. L4 = ∅.

Final set of frequent itemsets
Example DB, minsup > 1:
L1 = {A} {B} {C} {D} {E}
L2 = {A,B} {A,C} {A,D} {A,E} {B,C} {B,D} {C,D} {C,E} {D,E}
L3 = {A,B,C} {A,B,D} {A,C,D} {A,D,E} {B,C,D}

Counting Support of Candidates
Scanning the transaction database to count the support of each itemset is expensive: the total number of candidates may be large, and one transaction may contain many candidates.
Approach [Agr94]: candidate itemsets are stored in a hash-tree. A leaf node of the hash-tree contains a list of itemsets and counts; an interior node contains a hash table. The subset function finds all the candidates contained in a transaction by matching the transaction subsets against the candidates in the hash-tree.

Performance Issues in Apriori
Candidate generation: candidate sets may be huge; 2-itemset candidate generation is the most critical step; extracting long frequent itemsets requires generating all their frequent subsets.
Multiple database scans: n + 1 scans are needed when the longest frequent pattern length is n.

Factors Affecting Performance
Minimum support threshold: a lower support threshold increases the number of frequent itemsets, hence a larger number of candidates and a larger (maximum) length of the frequent itemsets.
Dimensionality (number of items) of the data set: more space is needed to store the support count of each item; if the number of frequent items also increases, both computation and I/O costs may increase.
Size of the database: since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
Average transaction width: transaction width increases in dense data sets; this may increase the maximum length of the frequent itemsets and the traversals of the hash-tree (the number of subsets in a transaction increases with its width).

Improving Apriori Efficiency
Hash-based itemset counting [Yu95]: a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent.
Transaction reduction [Yu95]: a transaction that does not contain any frequent k-itemset is useless in subsequent scans.
Partitioning [Sav96]: any itemset that is potentially frequent in the DB must be frequent in at least one of the partitions of the DB.
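
Tying the trace together, here is a compact end-to-end Apriori sketch run on the example DB with minsup > 1. It makes the pseudo-code above concrete but deliberately skips the hash-tree optimization: candidates are counted by direct subset tests, which is simpler but slower.

    from itertools import combinations

    def apriori(db, minsup_count):
        # L1: frequent 1-itemsets
        counts = {}
        for t in db:
            for i in t:
                counts[(i,)] = counts.get((i,), 0) + 1
        Lk = {c for c, n in counts.items() if n >= minsup_count}
        frequent = {c: n for c, n in counts.items() if c in Lk}
        k = 1
        while Lk:
            # Candidate generation: join on common (k-1)-prefix, then Apriori prune
            cands = set()
            for a, b in combinations(sorted(Lk), 2):
                if a[:k - 1] == b[:k - 1]:
                    c = tuple(sorted(set(a) | set(b)))
                    if all(s in Lk for s in combinations(c, k)):
                        cands.add(c)
            # One DB scan to count the surviving candidates
            counts = {c: sum(1 for t in db if set(c) <= t) for c in cands}
            Lk = {c for c, n in counts.items() if n >= minsup_count}
            frequent.update({c: counts[c] for c in Lk})
            k += 1
        return frequent

    db = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
          {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"}]
    for itemset, n in sorted(apriori(db, 2).items(), key=lambda x: (len(x[0]), x[0])):
        print(itemset, n)   # ends with L3 = {ABC, ABD, ACD, ADE, BCD}; L4 is empty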

Improving Apriori Efficiency
Sampling [Toi96]: mine a subset of the given data with a lower support threshold, plus a method to determine the completeness of the result.
Dynamic Itemset Counting [Motw98]: add new candidates only when all of their subsets are estimated to be frequent.

FP-growth Algorithm [Han00]
Exploits a main-memory compressed representation of the database, the FP-tree:
high compression for dense data distributions, less so for sparse data distributions;
complete representation for frequent pattern mining;
enforces the support constraint.
Frequent pattern mining by means of FP-growth: recursive visits of the FP-tree; applies a divide-and-conquer approach that decomposes the mining task into smaller subtasks.
Only two database scans: count the item supports + build the FP-tree.

FP-tree construction
Example DB (the same ten transactions as before), minsup > 1.
(1) Count the item supports and prune the items below the minsup threshold.
(2) Build the header table by sorting the items in decreasing support order: B: 8, A: 7, C: 7, D: 5, E: 3.
(3) Create the FP-tree. For each transaction t in the DB: order the items of t in decreasing support order (the same order as the header table) and insert t in the FP-tree, using the existing path for the common prefix and creating a new branch when the paths become different.

FP-tree construction
Transaction 1 {A,B} is reordered as {B,A} and inserted: B:1 → A:1.

FP-tree construction
[Tree snapshots: the remaining transactions are inserted one at a time, reordered as B, A, C, D, E.]
2 {B,C,D}: shares prefix B (B:2); new nodes C:1 → D:1.
3 {A,C,D,E}: no common prefix; new branch A:1 → C:1 → D:1 → E:1 from the root.
4 {A,D,E}: shares prefix A (A:2); new nodes D:1 → E:1.
5 {B,A,C}: shares prefix B,A (B:3, A:2); new node C:1.
6 {B,A,C,D}: shares prefix B,A,C (B:4, A:3, C:2); new node D:1.
7 {B,C}: entirely on an existing path (B:5, C:2).
8 {B,A,C}: entirely on an existing path (B:6, A:4, C:3).
9 {B,A,D}: shares prefix B,A (B:7, A:5); new node D:1.
10 {B,C,E}: shares prefix B,C (B:8, C:3); new node E:1.
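
A minimal FP-tree construction sketch following steps (1)-(3) above (class and function names are mine; the header-table node links are plain Python lists):

    class Node:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}

    def build_fptree(db, minsup_count):
        # (1) Count item supports and keep only the frequent items
        support = {}
        for t in db:
            for i in t:
                support[i] = support.get(i, 0) + 1
        support = {i: n for i, n in support.items() if n >= minsup_count}
        # (2) Header table: items in decreasing support order, with node links
        order = sorted(support, key=lambda i: (-support[i], i))
        links = {i: [] for i in order}
        # (3) Insert each transaction, reordered, reusing common prefixes
        root = Node(None, None)
        for t in db:
            node = root
            for i in (i for i in order if i in t):
                if i not in node.children:
                    node.children[i] = Node(i, node)
                    links[i].append(node.children[i])
                node = node.children[i]
                node.count += 1
        return root, links

    db = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
          {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"}]
    root, links = build_fptree(db, 2)
    # Summing the counts along each item's node links recovers B:8, A:7, C:7, D:5, E:3
    print({i: sum(n.count for n in links[i]) for i in links})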

construction construction 9 {B,A,D} A:4 B:7 B:6 C:2 0 {B,C,E} 0 {B,C,E} B:7 C:2 49 50 Final FP-growth Algorithm Scan from lowest support item up For each item i in extract frequent including item i and items preceding it in () build Conditional Pattern Base for item i (i-cpb) Select prefix-paths of item i from (2) recursive invocation of FP-growth on i-cpb Item pointers are used to assist frequent itemset generation 5 52 Example Consider item D and extract frequent including D and supported combinations of items A, B, C 53 Conditional Pattern Base of D () Build Select prefix-paths of item D from Frequent itemset: D, sup(d) = 5 54 Politecnico di Torino 9

Conditional Pattern Base of D
Prefix-paths of D in the FP-tree, with their counts: {B,A,C}: 1, {B,A}: 1, {B,C}: 1, {A,C}: 1, {A}: 1.
Item supports in the D-CPB: A: 4, B: 3, C: 3 (all frequent for minsup > 1).
(2) Recursive invocation of FP-growth on the D-CPB: the D-conditional FP-tree (items resorted by conditional support) is A:4 (B:2 (C:1), C:1), B:1 (C:1).

Conditional Pattern Base of DC
(1) Build the DC-CPB by selecting the prefix-paths of item C from the D-conditional FP-tree: {A,B}: 1, {A}: 1, {B}: 1.
Frequent itemset: DC, sup(DC) = 3.
Item supports in the DC-CPB: A: 2, B: 2. The DC-conditional FP-tree is A:2 (B:1), B:1.
(2) Recursive invocation of FP-growth on the DC-CPB.

Conditional Pattern Base of DCB
(1) Build the DCB-CPB by selecting the prefix-paths of item B from the DC-conditional FP-tree.
Frequent itemset: DCB, sup(DCB) = 2.
Item A is infrequent in the DCB-CPB: A is removed, and the DCB-CPB is empty.
(2) The search backtracks to the DC-CPB.

Conditional Pattern Base of DCA
(1) Build the DCA-CPB by selecting the prefix-paths of item A from the DC-conditional FP-tree.
Frequent itemset: DCA, sup(DCA) = 2. The DCA-CPB is empty (no transactions).
(2) The search backtracks to the D-CPB.

Conditional Pattern Base of DB
(1) Build the DB-CPB by selecting the prefix-paths of item B from the D-conditional FP-tree: {A}: 2.
Frequent itemset: DB, sup(DB) = 3.

Conditional Pattern Base of DC () Build DC-CPB Select prefix-paths of item C from {B,A,C} {B,A} {B,C} {A,C} 4 {B} 3 {C} 3 Frequent itemset: DC, sup(dc) = 3 B:2 A:4 B: DC-CPB {A,B} {B} 6 Conditional Pattern Base of DC () Build DC-CPB Select prefix-paths of item C from DC-CPB {A,B} {B} DC-conditional 2 {B} 2 B: DC-conditional B: (2) Recursive invocation of FP-growth on DC-CPB 62 Conditional Pattern Base of DCB () Build DCB-CPB Select prefix-paths of item B from DC-conditional Conditional Pattern Base of DCB () Build DCB-CPB Select prefix-paths of item B from DC-conditional DC-CPB {A,B} {B} DC-conditional 2 {C} {B} 2 B: DC-conditional B: DCB-CPB Item A is infrequent in DCB-CPB A is removed from DCB-CBP DCB-CDB is empty Frequent itemset: DCB, sup(dcb) = 2 DCB-CPB (2) The search backtracks to DC-CBP 63 64 Conditional Pattern Base of DCA () Build DCA-CPB Select prefix-paths of item A from DC-conditional DC-CPB {A,B} {B} DC-conditional {C} 2 {B} 2 Frequent itemset: DCA, sup(dca) = 2 B: (2) The search backtracks to D-CBP DC-conditional B: DCA-CPB is empty (no transactions) 65 Conditional Pattern Base of DB () Build DB-CPB Select prefix-paths of item B from {B,A,C} {B,A} {B,C} {A,C} 4 {C} {B} 3 {C} 3 Frequent itemset: DB, sup(db) = 3 B:2 A:4 B: DB-CPB 2 66 Politecnico di Torino

Conditional Pattern Base of DB () Build DB-CPB Select prefix-paths of item B from Conditional Pattern Base of DBA () Build DBA-CPB Select prefix-paths of item A from DB-conditional DB-CPB 2 DB-conditional 2 DB-conditional DB-CPB 2 DB-conditional {C} 2 DB-conditional (2) Recursive invocation of FP-growth on DB-CPB 67 Frequent itemset: DBA, sup(dba) = 2 (2) The search backtracks to D-CBP DBA-CPB is empty (no transactions) 68 Conditional Pattern Base of DA () Build DA-CPB Select prefix-paths of item A from {B,A,C} {B,A} {B,C} {A,C} {C} 4 {B} {C} 3 3 Frequent itemset: DA, sup(da) = 4 B:2 A:4 B: DA-CPB is empty (no transactions) The search ends 69 Frequent with prefix D Frequent including D and supported combinations of items B,A,C Example DB {A,B} 0 {B,C,E} minsup> 70 Other approaches Many other approaches to frequent itemset extraction some covered later May exploit a different database representation represent the tidset of each item [Zak00] A,B,E 2 B,C,D 3 C,E 4 A,C,D 5 A,B,C,D 6 A,E 7 A,B 8 A,B,C 9 A,C,D 0 B A B C D E 2 2 4 2 3 4 3 5 5 4 5 6 6 7 8 9 7 8 9 8 0 9 From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 7 Compact Representations Some are redundant because they have identical support as their supersets TID A A2 A3 A4 A5 A6 A7 A8 A9 A0 B B2 B3 B4 B5 B6 B7 B8 B9 B0 C C2 C3 C4 C5 C6 C7 C8 C9 C0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Number of frequent = 3 k = k A compact representation is needed From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 72 Politecnico di Torino 2

Maximal Frequent Itemset An itemset is maximal frequent if none of its immediate supersets is frequent Maximal Itemsets Infrequent Itemsets From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 Border 73 Closed Itemset An itemset is closed if none of its immediate supersets has the same support as the itemset {A,B} 3 {A,B,C,D} 4 {A,B,D} 5 {A,B,C,D} itemset sup 4 {B} 5 {C} 3 {D} 4 {A,B} 4 {A,C} 2 {A,D} 3 {B,C} 3 {B,D} 4 From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 itemset sup {A,B,C} 2 {A,B,D} 3 {B,C,D} 3 {A,B,C,D} 2 74 Maximal vs Closed Itemsets Maximal vs Closed Frequent Itemsets ABC 2 ABCD 3 BCE 4 ACDE 5 DE Ids null 24 23 234 245 345 A B C D E 2 AB 24 AC 24 AD 4 23 2 3 24 34 45 AE BC BD BE CD CE DE Minimum support = 2 null A B C D E Closed but not maximal 24 23 234 245 345 2 AB 24 AC 24 AD 4 23 2 3 24 34 45 AE BC BD BE CD CE DE Closed and maximal 2 2 24 4 4 2 3 4 ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE 2 2 24 4 4 2 3 4 ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE Not supported by any transactions 2 4 ABCD ABCE ABDE ACDE BCDE ABCDE From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 75 2 4 ABCD ABCE ABDE ACDE BCDE # Closed = 9 # Maximal = 4 ABCDE From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 76 Maximal vs Closed Itemsets Effect of Support Threshold Selection of the appropriate minsup threshold is not obvious If minsup is too high including rare but interesting items may be lost example: pieces of jewellery (or other expensive products) If minsup is too low it may become computationally very expensive the number of frequent becomes very large From: Tan,Steinbach, Kumar, Introduction to Data Mining, McGraw Hill 2006 77 78 Politecnico di Torino 3

Interestingness Measures
A large number of patterns may be extracted: rank the patterns by their interestingness.
Objective measures: rank patterns based on statistics computed from the data. The initial framework [Agr94] only considered support and confidence; other statistical measures are available.
Subjective measures: rank patterns according to user interpretation [Silb98]: a pattern is interesting if it contradicts the expectation of a user, or if it is actionable.

Confidence measure: always reliable?
5000 high school students are surveyed: 3750 eat cereals, 3000 play basketball, and 2000 eat cereals and play basketball.
The rule "play basketball ⇒ eat cereals" has sup = 40% and conf = 66.7%, but it is misleading because "eat cereals" alone has support 75% (> 66.7%).
The problem is caused by the high frequency of the rule head: the two items are negatively correlated.

            basketball  not basketball  total
cereals           2000            1750   3750
not cereals       1000             250   1250
total             3000            2000   5000

Correlation or lift
For a rule r: A ⇒ B,
correlation = P(A,B) / (P(A) P(B)) = conf(r) / sup(B).
Statistical independence: correlation = 1. Positive correlation: correlation > 1. Negative correlation: correlation < 1.

Example
The association rule "play basketball ⇒ eat cereals" has corr = 0.89: negative correlation.
The rule "play basketball ⇒ not(eat cereals)" instead has corr = 1.34.
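
A quick check of those numbers (variable names are mine):

    n = 5000
    basketball, cereals, both = 3000, 3750, 2000

    def lift(p_ab, p_a, p_b):
        # correlation (lift) = P(A,B) / (P(A) * P(B)) = conf / sup(head)
        return p_ab / (p_a * p_b)

    # play basketball => eat cereals
    print(lift(both / n, basketball / n, cereals / n))  # 0.889: negative correlation
    # play basketball => not(eat cereals)
    print(lift((basketball - both) / n, basketball / n,
               (n - cereals) / n))                      # 1.333 (the slides round to 1.34)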