Introduction to Data Mining

Similar documents
Frequent Itemsets and Association Rule Mining. Vinay Setty Slides credit:

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #11: Frequent Itemsets

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

D B M G Data Base and Data Mining Group of Politecnico di Torino

Association Rules. Fundamentals

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

D B M G. Association Rules. Fundamentals. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

D B M G. Association Rules. Fundamentals. Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino 1. Definitions.

Data Mining Concepts & Techniques

Association Rule Mining on Web

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

CS 484 Data Mining. Association Rule Mining 2

Data Mining: Concepts and Techniques. (3rd ed.) Chapter 6

CS5112: Algorithms and Data Structures for Applications

Lecture Notes for Chapter 6. Introduction to Data Mining

Machine Learning: Pattern Mining

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

732A61/TDDD41 Data Mining - Clustering and Association Analysis

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

CSE 5243 INTRO. TO DATA MINING

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

COMP 5331: Knowledge Discovery and Data Mining

DATA MINING - 1DL360

Chapters 6 & 7, Frequent Pattern Mining

DATA MINING - 1DL360

Association Analysis. Part 1

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Association Rules. Acknowledgements. Some parts of these slides are modified from C. Clifton & W. Aref, Purdue University

Meelis Kull - Autumn. MTAT Data Mining - Lecture 05

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Association Rule Mining

Introduction to Data Mining, 2nd Edition by Tan, Steinbach, Karpatne, Kumar

COMP 5331: Knowledge Discovery and Data Mining

Communities Via Laplacian Matrices. Degree, Adjacency, and Laplacian Matrices Eigenvectors of Laplacian Matrices

CSE 5243 INTRO. TO DATA MINING

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

Handling a Concept Hierarchy

12 Count-Min Sketch and Apriori Algorithm (and Bloom Filters)

Data Mining and Analysis: Fundamental Concepts and Algorithms

Frequent Itemset Mining

CS 584 Data Mining. Association Rule Mining 2

Interesting Patterns. Jilles Vreeken. 15 May 2015

Association Analysis Part 2. FP Growth (Pei et al 2000)

Approximate counting: count-min data structure. Problem definition

Frequent Itemset Mining

Encyclopedia of Machine Learning, 2010: Frequent Pattern. Hannu Toivonen

Unit II Association Rules

Mining Molecular Fragments: Finding Relevant Substructures of Molecules

Mining Data Streams. The Stream Model. The Stream Model Sliding Windows Counting 1 s

15 Introduction to Data Mining

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

Frequent Itemset Mining

Data preprocessing. DataBase and Data Mining Group 1. Data set types. Tabular Data. Document Data. Transaction Data. Ordered Data

ECLT 5810 Data Preprocessing. Prof. Wai Lam

Data mining, 4 cu Lecture 5:

DATA MINING - 1DL105, 1DL111

Data Warehousing & Data Mining

1. Data summary and visualization

Chapter 4: Frequent Itemsets and Association Rules

Mining Infrequent Patterns

Foundations of Math. Chapter 3 Packet. Table of Contents

NP-Completeness I. Lecture Overview Introduction: Reduction and Expressiveness

Data Warehousing & Data Mining

Cal Poly CSC 466: Knowledge Discovery from Data. Alexander Dekhtyar

FUZZY ASSOCIATION RULES: A TWO-SIDED APPROACH

Introduction Association Rule Mining Decision Trees Summary. SMLO12: Data Mining. Statistical Machine Learning Overview.

MULE: An Efficient Algorithm for Detecting Frequent Subgraphs in Biological Networks

Pattern Structures 1

On the Complexity of Mining Itemsets from the Crowd Using Taxonomies

Association Rules. Jones & Bartlett Learning, LLC NOT FOR SALE OR DISTRIBUTION. Jones & Bartlett Learning, LLC NOT FOR SALE OR DISTRIBUTION

Sequential Pattern Mining

Ch 1. The Language of Algebra

Data Mining: Data. Lecture Notes for Chapter 2. Introduction to Data Mining

Frequent Itemset Mining

CS246 Final Exam. March 16, :30AM - 11:30AM

OPPA European Social Fund Prague & EU: We invest in your future.

2.6 Complexity Theory for Map-Reduce. Star Joins

Cost and Preference in Recommender Systems Junhua Chen LESS IS MORE

Density-Based Clustering

Frequent Pattern Mining: Exercises

Association Rule Mining. Pekka Malo, Assist. Prof. (statistics), Aalto BIZ / Department of Information and Service Management

Mining Positive and Negative Fuzzy Association Rules

Descriptive data analysis. Example. Example. Example. Example. Data Mining MTAT (6EAP)

Algorithms and Data Structures

1 [15 points] Frequent Itemsets Generation With Map-Reduce

BAYESIAN MIXTURE MODELS FOR FREQUENT ITEMSET MINING

FP-growth and PrefixSpan

18.175: Lecture 2 Extension theorems, random variables, distributions

Mining of Massive Datasets Jure Leskovec, AnandRajaraman, Jeff Ullman Stanford University

****DRAFT**** ALGORITHMS FOR ASSOCIATION RULES

Data Mining and Knowledge Discovery. Petra Kralj Novak. 2011/11/29

Data Structures. Outline. Introduction. Andres Mendez-Vazquez. December 3, Data Manipulation Examples

CS738 Class Notes. Steve Revilak

Mining chains of relations

Transcription:

Introduction to Data Mining. Lecture #12: Frequent Itemsets. Seoul National University

In This Lecture

- Motivation of association rule mining
- Important concepts of association rules
- Naïve approaches for finding frequent itemsets

Association Rule Discovery

Supermarket shelf management with the market-basket model:
- Goal: identify items that are bought together by sufficiently many customers
- Approach: process the sales data collected with barcode scanners to find dependencies among items
- A classic rule: if someone buys diapers and milk, then he/she is likely to buy beer. Don't be surprised if you find six-packs next to diapers!

The Market-Basket Model

- A large set of items, e.g., things sold in a supermarket
- A large set of baskets; each basket is a small subset of items, e.g., the things one customer buys on one day
- We want to discover association rules: people who bought {x, y, z} tend to buy {v, w} (think Amazon!)

Input (one basket per transaction):

TID | Items
 1  | Bread, Coke, Milk
 2  | Beer, Bread
 3  | Beer, Coke, Diaper, Milk
 4  | Beer, Bread, Diaper, Milk
 5  | Coke, Diaper, Milk

Output (rules discovered):
{Milk} → {Coke}
{Diaper, Milk} → {Beer}

Applications (1)

Items = products; baskets = sets of products someone bought in one trip to the store.
- Real market baskets: chain stores keep terabytes of data about what customers buy together
- Tells how typical customers navigate stores, and lets stores position tempting items
- Suggests tie-in "tricks", e.g., run a sale on diapers and raise the price of beer
- The rule needs to occur frequently, or there is no money in it
- Amazon's "people who bought X also bought Y"

Applications (2)

Baskets = sentences; items = documents containing those sentences.
- How can we interpret items that appear together too often? Documents that share too many sentences could represent plagiarism.
- Notice that items do not have to be "in" baskets.

Baskets = patients; items = biomarkers (genes, proteins) and diseases.
- How can we interpret a frequent itemset (disease and biomarker)? A frequent itemset consisting of one disease and one or more biomarkers suggests a test for the disease.

More generally

A general many-to-many mapping (association) between two kinds of things, but we ask about connections among items, not baskets. For example: finding communities in graphs (e.g., Twitter).

Example: Finding communities in graphs (e.g., Twitter)

- Baskets = nodes; items = outgoing neighbors
- We search for complete bipartite subgraphs K_{s,t} of a big graph (s nodes on one layer connected to t nodes on the other: a dense 2-layer graph)
- How? View each node i as a basket B_i of the nodes i points to
- A K_{s,t} corresponds to a node set Y of size t that occurs in s baskets B_i
- Looking for K_{s,t} = finding all frequent sets of size t that appear in at least s baskets
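As a quick illustration, here is a minimal Python sketch of this reduction on a hypothetical toy graph (the graph and variable names are illustrative, not from the slides):

```python
from collections import Counter
from itertools import combinations

# View each node as a basket holding the nodes it points to;
# a K_{s,t} then shows up as a t-itemset with support >= s.
out_edges = {1: {4, 5}, 2: {4, 5}, 3: {4, 5}}  # toy directed graph
s, t = 3, 2

counts = Counter(Y for basket in out_edges.values()
                   for Y in combinations(sorted(basket), t))
print([set(Y) for Y, c in counts.items() if c >= s])  # [{4, 5}] -> a K_{3,2}
```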

ROADMAP

First, definitions:
- Frequent itemsets
- Association rules: confidence, support, interestingness

Then, algorithms for finding frequent itemsets:
- Finding frequent pairs
- A-Priori algorithm
- PCY algorithm + 2 refinements

Outline

- Frequent Itemsets
- Finding Frequent Itemsets

Frequent Itemsets

Simplest question: find sets of items that appear together "frequently" in baskets.

- Support for itemset I: the number of baskets containing all items in I (often expressed as a fraction of the total number of baskets)
- Given a minimum support s, the sets of items that appear in at least s baskets are called frequent itemsets

TID | Items
 1  | Bread, Coke, Milk
 2  | Beer, Bread
 3  | Beer, Coke, Diaper, Milk
 4  | Beer, Bread, Diaper, Milk
 5  | Coke, Diaper, Milk

Support of {Beer, Bread} = 2
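A minimal Python sketch of this support computation on the table above (the function name and basket encoding are my own, not from the lecture):

```python
from collections import Counter
from itertools import combinations

def support_counts(baskets, k):
    """Support of every k-itemset: the number of baskets containing it."""
    counts = Counter()
    for basket in baskets:
        for itemset in combinations(sorted(basket), k):
            counts[itemset] += 1
    return counts

baskets = [{"Bread", "Coke", "Milk"}, {"Beer", "Bread"},
           {"Beer", "Coke", "Diaper", "Milk"},
           {"Beer", "Bread", "Diaper", "Milk"}, {"Coke", "Diaper", "Milk"}]

pairs = support_counts(baskets, 2)
print(pairs[("Beer", "Bread")])  # 2, matching the slide
```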

Example: Frequent Itemsets

Items = {milk, coke, pepsi, beer, juice}; minimum support = 3 baskets.

B1 = {m, c, b}     B2 = {m, p, j}      B3 = {m, b}      B4 = {c, j}
B5 = {m, p, b}     B6 = {m, c, b, j}   B7 = {c, b, j}   B8 = {b, c}

Frequent itemsets: {m}, {c}, {b}, {j}, {m,b}, {b,c}, {c,j}

Association Rules

Association rules are if-then rules about the contents of baskets. {i_1, i_2, ..., i_k} → j means: if a basket contains all of i_1, ..., i_k, then it is likely to contain j. In practice there are many rules; we want to find the significant/interesting ones!

Confidence of an association rule is the probability of j given I = {i_1, ..., i_k}:

    conf(I → j) = support(I ∪ {j}) / support(I)

Interesting Association Rules

Not all high-confidence rules are interesting. The rule X → milk may have high confidence for many itemsets X, simply because milk is purchased very often (independently of X), so the confidence will be high.

Interest of an association rule I → j is the difference between its confidence and the fraction of baskets that contain j:

    Interest(I → j) = conf(I → j) − Pr[j]

Interesting rules are those with high positive or negative interest values (usually above 0.5 in absolute value).

Interesting Association Rules

Recall: Interest(I → j) = conf(I → j) − Pr[j], the difference between the rule's confidence and the fraction of baskets that contain j. Interesting rules are those with high positive or negative interest values (usually above 0.5 in absolute value).

E.g., can you give examples of the following cases?
- conf(I → j) is large, but Pr[j] is small
- conf(I → j) is small, but Pr[j] is large

Example: Confidence and Interest

B1 = {m, c, b}     B2 = {m, p, j}      B3 = {m, b}      B4 = {c, j}
B5 = {m, p, b}     B6 = {m, c, b, j}   B7 = {c, b, j}   B8 = {b, c}

Association rule: {m, b} → c
- Confidence = 2/4 = 0.5
- Interest = 0.5 − 5/8 = −1/8 (item c appears in 5/8 of the baskets)
- The rule is not very interesting!
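A short Python check of these numbers, under the same basket encoding as before (the helper over a global basket list is for brevity only):

```python
baskets = [{"m", "c", "b"}, {"m", "p", "j"}, {"m", "b"}, {"c", "j"},
           {"m", "p", "b"}, {"m", "c", "b", "j"}, {"c", "b", "j"}, {"b", "c"}]

def support(itemset):
    """Number of baskets containing every item of the itemset."""
    return sum(itemset <= basket for basket in baskets)

I, j = {"m", "b"}, "c"
conf = support(I | {j}) / support(I)            # 2/4 = 0.5
interest = conf - support({j}) / len(baskets)   # 0.5 - 5/8 = -0.125
print(conf, interest)
```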

Finding Association Rules

Problem: find all association rules with support ≥ s and confidence ≥ c. (Note: the support of an association rule is the support of the set of items on its left side.)

The hard part is finding the frequent itemsets! If {i_1, i_2, ..., i_k} → j has high support and confidence, then both {i_1, i_2, ..., i_k} and {i_1, i_2, ..., i_k, j} will be "frequent", since conf(I → j) = support(I ∪ {j}) / support(I).

Mining Association Rules

Step 1: Find all frequent itemsets I (we will explain this next).

Step 2: Rule generation. For every subset A of I, generate a rule A → I \ A. Since I is frequent, A is also frequent.
- Variant 1: single pass to compute the rule confidence: confidence(A,B → C,D) = support(A,B,C,D) / support(A,B)
- Variant 2: observation: if A,B,C → D is below the confidence threshold, so is A,B → C,D, so we can generate bigger rules from smaller ones!

Output the rules with confidence ≥ c. A sketch of Step 2 appears below.
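Here is a hedged Python sketch of Step 2, assuming Step 1 has already produced a dict from frozenset to support count (the data structure and names are illustrative):

```python
from itertools import combinations

def generate_rules(freq_support, min_conf):
    """Generate rules A -> I\\A from frequent itemsets, keeping conf >= min_conf."""
    rules = []
    for I, sup_I in freq_support.items():
        for r in range(1, len(I)):
            for A in map(frozenset, combinations(I, r)):
                conf = sup_I / freq_support[A]  # conf(A -> I\\A)
                if conf >= min_conf:
                    rules.append((set(A), set(I - A), conf))
    return rules
```

Note that the lookup freq_support[A] is safe precisely because of the observation on the slide: every subset A of a frequent itemset I is itself frequent.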

Example

B1 = {m, c, b}     B2 = {m, p, j}      B3 = {m, c, b, n}   B4 = {c, j}
B5 = {m, p, b}     B6 = {m, c, b, j}   B7 = {c, b, j}      B8 = {b, c}

Minimum support s = 3, confidence c = 0.75.

1) Frequent itemsets: {b,m}, {b,c}, {c,m}, {c,j}, {m,c,b}

2) Generate rules:
   b → m:   conf = 4/6        m → b:    conf = 4/5
   b → c:   conf = 5/6        b,m → c:  conf = 3/4
   b,c → m: conf = 3/5        b → c,m:  conf = 3/6

Compacting the Output

To reduce the number of rules, we can post-process them and only output:

- Maximal frequent itemsets: no superset is frequent. E.g., if {A}, {A,B}, {B,C} are the frequent itemsets, then {A,B} and {B,C} are maximal. Gives more pruning.

or

- Closed itemsets: no immediate superset has the same count (> 0). E.g., if {A} and {A,C} both have support 5, {A} is not closed. Stores not only which itemsets are frequent, but also their exact counts.

Example: Maximal/Closed

Itemset | Support | Maximal (s=3) | Closed
A       |    4    | No            | No
B       |    5    | No            | Yes
C       |    3    | No            | No
AB      |    4    | Yes           | Yes
AC      |    2    | No            | No
BC      |    3    | Yes           | Yes
ABC     |    2    | No            | Yes

- C is frequent, but superset BC is also frequent (not maximal); superset BC has the same count (not closed).
- BC is frequent, and its only superset, ABC, is not frequent (maximal); ABC has a smaller count (closed).
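A small Python sketch that reproduces the two columns of the table above (the support dict is transcribed from the table; the function names are my own):

```python
items = {"A", "B", "C"}
support = {frozenset("A"): 4, frozenset("B"): 5, frozenset("C"): 3,
           frozenset("AB"): 4, frozenset("AC"): 2, frozenset("BC"): 3,
           frozenset("ABC"): 2}

def is_maximal(I, s):
    """Frequent, and no superset is frequent (immediate supersets suffice)."""
    return support[I] >= s and all(support.get(I | {x}, 0) < s
                                   for x in items - I)

def is_closed(I):
    """No immediate superset has the same count."""
    return all(support.get(I | {x}, 0) < support[I] for x in items - I)

for I in support:
    print("".join(sorted(I)), is_maximal(I, 3), is_closed(I))
```

Checking only immediate supersets suffices for maximality because support is monotone: if any larger superset were frequent, some immediate superset would be too.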

Outline

- Frequent Itemsets
- Finding Frequent Itemsets

Itemsets: Computation Model

Back to finding frequent itemsets. Typically, data are kept in flat files rather than in a database system:
- Stored on disk, basket-by-basket
- Baskets are small, but we have many baskets and many items
- Expand baskets into pairs, triples, etc. as you read baskets: use k nested loops to generate all sets of size k

Note: we want to find frequent itemsets. To find them, we have to count them. To count them, we have to generate them.

[Figure: a flat file as one long sequence of items. Items are positive integers, and boundaries between baskets are −1.]

Computation Model

The true cost of mining disk-resident data is usually the number of disk I/Os. In practice, association-rule algorithms read the data in passes: all baskets are read in turn. We therefore measure the cost by the number of passes an algorithm makes over the data (1-pass, 2-pass, ...).

Main-Memory Bottleneck

For many frequent-itemset algorithms, main memory is the critical resource. As we read baskets, we need to count something, e.g., occurrences of pairs of items. The number of different things we can count is limited by main memory; swapping counts in/out from/to disk is a disaster (why?).

Finding Frequent Pairs

The hardest problem often turns out to be finding the frequent pairs of items {i_1, i_2}. Why? Frequent pairs are common; frequent triples are rare. So let's first concentrate on pairs, then extend to larger sets.

The approach: we always need to generate all the itemsets, but we would only like to count (keep track of) those itemsets that in the end turn out to be frequent.

Naïve Algorithm

Naïve approach to finding frequent pairs: read the file once, counting in main memory the occurrences of each pair. From each basket of n items, generate its n(n−1)/2 pairs by two nested loops.

This fails if (#items)^2 exceeds main memory. Remember: #items can be 100K (Wal-Mart) or 10B (Web pages). Suppose there are 10^5 items and counts are 4-byte integers. The number of pairs of items is 10^5 · (10^5 − 1)/2 ≈ 5·10^9, so about 2·10^10 bytes (20 gigabytes) of memory are needed.
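A minimal sketch of this naïve pass in Python (the double loop mirrors the n(n−1)/2 pair generation; names are illustrative):

```python
from collections import defaultdict

def naive_pair_counts(baskets):
    """One pass over the baskets, one in-memory counter per pair seen."""
    counts = defaultdict(int)
    for basket in baskets:
        items = sorted(basket)
        for a in range(len(items)):              # two nested loops generate
            for b in range(a + 1, len(items)):   # the n(n-1)/2 pairs
                counts[(items[a], items[b])] += 1
    return counts
```

An in-memory table like this is exactly what breaks down at the scales above; the next slides look at more compact layouts.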

Counting Pairs in Memory

Two approaches:
- Approach 1: count all pairs using a matrix. Requires only 4 bytes per pair.
- Approach 2: keep a table of triples [i, j, c], meaning the count of the pair of items {i, j} is c (where c > 0). If integers and item ids are 4 bytes, we need approximately 12 bytes per pair with count > 0, plus some additional overhead for the hash table.

Comparing the two approaches

[Figure: memory layouts for the two approaches over a (# of all items) × (# of all items) grid. Triangular matrix (Approach 1): stores all pairs (i, j) with i < j, 4 bytes per pair. Triples (Approach 2): stores only pairs (i, j) with support ≥ 1, 12 bytes per occurring pair.]

Comparing the two approaches

Approach 1: triangular matrix.
- n = total number of items
- Count the pair of items {i, j} only if i < j

How can we implement a triangular matrix in your favorite language (e.g., Python, Java)?

Comparing the two approaches

Approach 1: triangular matrix.
- n = total number of items; count the pair {i, j} only if i < j
- Use a one-dimensional array to store the triangular matrix, keeping pair counts in lexicographic order: {1,2}, {1,3}, ..., {1,n}, {2,3}, {2,4}, ..., {2,n}, {3,4}, ...
- Pair {i, j} is at position (i−1)(n − i/2) + j − i (array index starts from 1).
  Proof: the Σ_{k=1..i−1} (n − k) pairs of earlier rows, plus offset (j − i) within row i, give (i−1)(n − i/2) + j − i.
- Total number of pairs is n(n−1)/2; total bytes ≈ 2n^2

The triangular matrix requires 4 bytes per pair; Approach 2 uses 12 bytes per occurring pair (but only for pairs with count > 0). When should we prefer Approach 2 over Approach 1? A sketch of the array layout follows.
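A possible Python implementation of that layout, using the slide's 1-based formula shifted to 0-based array indexing (the class and method names are my own):

```python
class TriangularMatrix:
    """Counts for all pairs {i, j}, 1 <= i < j <= n, in one flat array."""
    def __init__(self, n):
        self.n = n
        self.counts = [0] * (n * (n - 1) // 2)

    def _index(self, i, j):
        if i > j:
            i, j = j, i
        # (i-1)(n - i/2) + j - i from the slide, written with integer
        # arithmetic and shifted to a 0-based index
        return (i - 1) * (2 * self.n - i) // 2 + (j - i) - 1

    def add(self, i, j):
        self.counts[self._index(i, j)] += 1

    def count(self, i, j):
        return self.counts[self._index(i, j)]
```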

Comparing the two approaches

The problem with the triangular matrix is that if we have too many items, the pairs do not fit into memory. Can we do better?

When should we prefer Approach 2 over Approach 1? If fewer than 1/3 of the possible pairs actually occur: 12 bytes per occurring pair beats 4 bytes per possible pair exactly when (occurring pairs)/(possible pairs) < 4/12 = 1/3.

What You Need to Know

- Motivation of association rule mining: the beginning of data mining (1993)
- Important concepts of association rules: support, confidence, interest, maximal frequent itemsets, closed itemsets
- Naïve approaches for finding frequent itemsets: fail if (#items)^2 exceeds main memory

Questions?