Frequent Itemsets and Association Rule Mining. Vinay Setty. Slides credit: http://www.mmds.org/

Similar documents
Introduction to Data Mining

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #11: Frequent Itemsets

The Market-Basket Model. Association Rules. Example. Support. Applications --- (1) Applications --- (2)

DATA MINING LECTURE 3. Frequent Itemsets Association Rules

DATA MINING LECTURE 4. Frequent Itemsets, Association Rules Evaluation Alternative Algorithms

DATA MINING LECTURE 4. Frequent Itemsets and Association Rules

Data Mining: Concepts and Techniques. (3rd ed.) Chapter 6

CS 484 Data Mining. Association Rule Mining 2

CS5112: Algorithms and Data Structures for Applications

Chapter 6. Frequent Pattern Mining: Concepts and Apriori. Meng Jiang CSE 40647/60647 Data Science Fall 2017 Introduction to Data Mining

D B M G Data Base and Data Mining Group of Politecnico di Torino

Association Rules. Fundamentals

CSE 5243 INTRO. TO DATA MINING

D B M G. Association Rules. Fundamentals. Fundamentals. Association rules. Association rule mining. Definitions. Rule quality metrics: example

D B M G. Association Rules. Fundamentals. Fundamentals. Elena Baralis, Silvia Chiusano. Politecnico di Torino 1. Definitions.

Association Rule. Lecturer: Dr. Bo Yuan. LOGO

Association Rules Information Retrieval and Data Mining. Prof. Matteo Matteucci

Data Mining Concepts & Techniques

CSE 5243 INTRO. TO DATA MINING

Association Rules. Acknowledgements. Some parts of these slides are modified from. n C. Clifton & W. Aref, Purdue University

Chapters 6 & 7, Frequent Pattern Mining

COMP 5331: Knowledge Discovery and Data Mining

Lecture Notes for Chapter 6. Introduction to Data Mining

Outline. Fast Algorithms for Mining Association Rules. Applications of Data Mining. Data Mining. Association Rule. Discussion

Association Analysis. Part 1

Association Rule Mining on Web

732A61/TDDD41 Data Mining - Clustering and Association Analysis

DATA MINING - 1DL360

ASSOCIATION ANALYSIS FREQUENT ITEMSETS MINING. Alexandre Termier, LIG

Meelis Kull - Autumn. MTAT Data Mining - Lecture 05

DATA MINING - 1DL360

12 Count-Min Sketch and Apriori Algorithm (and Bloom Filters)

Introduction to Data Mining, 2nd Edition by Tan, Steinbach, Karpatne, Kumar

Machine Learning: Pattern Mining

Association Rule Mining

CS 584 Data Mining. Association Rule Mining 2

Data Analytics Beyond OLAP. Prof. Yanlei Diao

Data Mining. Dr. Raed Ibraheem Hamed. University of Human Development, College of Science and Technology Department of Computer Science

CS246 Final Exam. March 16, :30AM - 11:30AM

Mining Infrequent Patterns

Data Mining and Analysis: Fundamental Concepts and Algorithms

Reductionist View: A Priori Algorithm and Vector-Space Text Retrieval. Sargur Srihari University at Buffalo The State University of New York

FP-growth and PrefixSpan

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany

4/26/2017. More algorithms for streams: Each element of data stream is a tuple Given a list of keys S Determine which tuples of stream are in S

Lecture Notes for Chapter 6. Introduction to Data Mining. (modified by Predrag Radivojac, 2017)

COMP 5331: Knowledge Discovery and Data Mining

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Course Content. Association Rules Outline. Chapter 6 Objectives. Chapter 6: Mining Association Rules. Dr. Osmar R. Zaïane. University of Alberta 4

Frequent Itemset Mining

Association Analysis: Basic Concepts. and Algorithms. Lecture Notes for Chapter 6. Introduction to Data Mining

Unit II Association Rules

Handling a Concept Hierarchy

10/19/2017 MIST.6060 Business Intelligence and Data Mining 1. Association Rules

CS341 info session is on Thu 3/1 5pm in Gates415. CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS 412 Intro. to Data Mining

Association Analysis Part 2. FP Growth (Pei et al 2000)

Encyclopedia of Machine Learning Chapter Number Book CopyRight - Year 2010 Frequent Pattern. Given Name Hannu Family Name Toivonen

2.6 Complexity Theory for Map-Reduce. Star Joins

Frequent Itemset Mining

Interesting Patterns. Jilles Vreeken. 15 May 2015

Apriori algorithm. Seminar of Popular Algorithms in Data Mining and Machine Learning, TKK. Presentation Lauri Lahti

ECLT 5810 Data Preprocessing. Prof. Wai Lam

Approximate counting: count-min data structure. Problem definition

On the Complexity of Mining Itemsets from the Crowd Using Taxonomies

Data mining, 4 cu Lecture 5:

Mining Molecular Fragments: Finding Relevant Substructures of Molecules

Descriptive data analysis. Example. Example. Example. Example. Data Mining MTAT (6EAP)

Data Warehousing & Data Mining

Association Analysis. Part 2

Basic Data Structures and Algorithms for Data Profiling Felix Naumann

Chapter 4: Frequent Itemsets and Association Rules

Detecting Anomalous and Exceptional Behaviour on Credit Data by means of Association Rules. M. Delgado, M.D. Ruiz, M.J. Martin-Bautista, D.

DATA MINING - 1DL105, 1DL111

Data Warehousing & Data Mining

NEGATED ITEMSETS OBTAINING METHODS FROM TREE-STRUCTURED STREAM DATA

Mining Positive and Negative Fuzzy Association Rules

Slides credits: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

15 Introduction to Data Mining

Assignment 7 (Sol.) Introduction to Data Analytics Prof. Nandan Sudarsanam & Prof. B. Ravindran

Frequent Itemset Mining

Knowledge Discovery and Data Mining I

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Mining Data Streams. The Stream Model. The Stream Model Sliding Windows Counting 1 s

OPPA European Social Fund Prague & EU: We invest in your future.

Algorithmic Methods of Data Mining, Fall 2005, Course overview 1. Course overview

High Dimensional Search Min-Hashing Locality Sensitive Hashing

Density-Based Clustering

Association Analysis 1

Computational Frameworks. MapReduce

CS425: Algorithms for Web Scale Data

CS738 Class Notes. Steve Revilak

Descriptive data analysis. E.g. set of items. Example: Items in baskets. Sort or renumber in any way. Example

Association Rule Mining. Pekka Malo, Assist. Prof. (statistics) Aalto BIZ / Department of Information and Service Management

EFFICIENT MINING OF WEIGHTED QUANTITATIVE ASSOCIATION RULES AND CHARACTERIZATION OF FREQUENT ITEMSETS

Frequent Pattern Mining: Exercises

Cal Poly CSC 466: Knowledge Discovery from Data. Alexander Dekhtyar

Descriptive data analysis. Example. E.g. set of items. Example. Example. Data Mining MTAT (6EAP)

Association Rules. Jones & Bartlett Learning, LLC NOT FOR SALE OR DISTRIBUTION. Jones & Bartlett Learning, LLC NOT FOR SALE OR DISTRIBUTION

Positive Borders or Negative Borders: How to Make Lossless Generator Based Representations Concise

Transcription:

Frequent Itemsets and Association Rule Mining Vinay Setty vinay.j.setty@uis.no Slides credit: http://www.mmds.org/

Association Rule Discovery. Supermarket shelf management, using the market-basket model. Goal: identify items that are bought together by sufficiently many customers. Approach: process the sales data collected with barcode scanners to find dependencies among items. A classic rule: if someone buys diapers and milk, then he/she is likely to buy beer. Don't be surprised if you find six-packs next to diapers!

The Market-Basket Model. A large set of items, e.g., things sold in a supermarket. A large set of baskets; each basket is a small subset of items, e.g., the things one customer buys on one day. We want to discover association rules: people who bought {x,y,z} tend to buy {v,w} (think Amazon!).

Input:

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Output, rules discovered: {Milk} --> {Coke}; {Diaper, Milk} --> {Beer}.

Applications (1). Items = products; baskets = sets of products someone bought in one trip to the store. Real market baskets: chain stores keep TBs of data about what customers buy together. This tells how typical customers navigate stores and lets them position tempting items. It also suggests tie-in "tricks", e.g., run a sale on diapers and raise the price of beer. Need the rule to occur frequently, or no $$'s. Amazon's "people who bought X also bought Y".

Applications (2). Baskets = sentences; items = documents containing those sentences. Items that appear together too often could represent plagiarism; notice that items do not have to be "in" baskets. Baskets = patients; items = drugs & side-effects. This has been used to detect combinations of drugs that result in particular side-effects, but it requires an extension: the absence of an item needs to be observed as well as its presence.

More generally. A general many-to-many mapping (association) between two kinds of things. But we ask about connections among "items", not "baskets". For example: finding communities in graphs (e.g., Twitter).

Example: Finding communities in graphs (e.g., Twitter). Baskets = nodes; items = outgoing neighbors. We are searching for complete bipartite subgraphs K_{s,t} of a big graph: dense 2-layer graphs with s nodes on one side and t nodes on the other. How? View each node i as a basket B_i of the nodes i points to. Then K_{s,t} = a set Y of size t that occurs in s baskets B_i. So looking for K_{s,t} means: set the support threshold to s and find all frequent itemsets of size t.

First: define frequent itemsets and association rules (confidence, support, interestingness). Then: algorithms for finding frequent itemsets: finding frequent pairs; the A-Priori algorithm.

Frequent Itemsets. Simplest question: find sets of items that appear together "frequently" in baskets. Support for itemset I: the number of baskets containing all items in I (often expressed as a fraction of the total number of baskets). Given a support threshold s, sets of items that appear in at least s baskets are called frequent itemsets.

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Support of {Beer, Bread} = 2.
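To make the definition concrete, here is a minimal Python sketch (not from the slides) that computes support over the example baskets above:

baskets = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset, baskets):
    # number of baskets containing all items in the itemset
    return sum(1 for b in baskets if itemset <= b)  # <= is the subset test

print(support({"Beer", "Bread"}, baskets))  # 2, as in the slide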

Example: Frequent Itemsets. Items = {milk, coke, pepsi, beer, juice}. Support threshold = 3 baskets. B1 = {m, c, b}; B2 = {m, p, j}; B3 = {m, b}; B4 = {c, j}; B5 = {m, p, b}; B6 = {m, c, b, j}; B7 = {c, b, j}; B8 = {b, c}. Frequent itemsets: {m}, {c}, {b}, {j}, {m,b}, {b,c}, {c,j}.

Association Rules. Association rules are if-then rules about the contents of baskets. {i_1, i_2, ..., i_k} → j means: if a basket contains all of i_1, ..., i_k then it is likely to contain j. In practice there are many rules, and we want to find the significant/interesting ones! The confidence of such an association rule is the probability of j given I = {i_1, ..., i_k}:

conf(I → j) = support(I ∪ {j}) / support(I)

Interesting Association Rules. Not all high-confidence rules are interesting. The rule X → milk may have high confidence for many itemsets X, because milk is just purchased very often (independent of X), so the confidence will be high. The interest of an association rule I → j is the difference between its confidence and the fraction of baskets that contain j:

Interest(I → j) = conf(I → j) − Pr[j]

Interesting rules are those with high positive or negative interest values (usually above 0.5 in absolute value).

Example: Confidence and Interest. B1 = {m, c, b}; B2 = {m, p, j}; B3 = {m, b}; B4 = {c, j}; B5 = {m, p, b}; B6 = {m, c, b, j}; B7 = {c, b, j}; B8 = {b, c}. Association rule: {m, b} → c. Confidence = 2/4 = 0.5. Interest = 0.5 − 5/8 = −1/8: item c appears in 5/8 of the baskets, so the rule is not very interesting!
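A small Python sketch (an illustration, not the authors' code) that reproduces these numbers:

def support(itemset, baskets):
    return sum(1 for b in baskets if itemset <= b)

def confidence(I, j, baskets):
    # conf(I -> j) = support(I ∪ {j}) / support(I)
    return support(I | {j}, baskets) / support(I, baskets)

def interest(I, j, baskets):
    # confidence minus the fraction of baskets containing j
    return confidence(I, j, baskets) - support({j}, baskets) / len(baskets)

baskets = [{"m","c","b"}, {"m","p","j"}, {"m","b"}, {"c","j"},
           {"m","p","b"}, {"m","c","b","j"}, {"c","b","j"}, {"b","c"}]
print(confidence({"m", "b"}, "c", baskets))  # 0.5
print(interest({"m", "b"}, "c", baskets))    # -0.125, i.e., -1/8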

Finding Association Rules. Problem: find all association rules with support ≥ s and confidence ≥ c. Note: the support of an association rule is the support of the set of items on the left side. The hard part is finding the frequent itemsets! If {i_1, i_2, ..., i_k} → j has high support and confidence, then both {i_1, i_2, ..., i_k} and {i_1, i_2, ..., i_k, j} will be frequent. Recall:

conf(I → j) = support(I ∪ {j}) / support(I)

Mining Association Rules. Step 1: find all frequent itemsets I (we will explain this next). Step 2: rule generation. For every subset A of I, generate a rule A → I \ A; since I is frequent, A is also frequent. Variant 1: a single pass to compute the rule confidence: confidence(A,B → C,D) = support(A,B,C,D) / support(A,B). Variant 2: observe that if A,B,C → D is below the confidence threshold, so is A,B → C,D, so we can generate bigger rules from smaller ones! Output the rules above the confidence threshold.
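A minimal sketch of Step 2 in Python (Variant 1). It assumes Step 1 produced a dict mapping every frequent itemset, as a frozenset, to its support count; since each subset A of a frequent I is itself frequent, its count is guaranteed to be in the dict:

from itertools import combinations

def generate_rules(supports, min_conf):
    # Yields (A, I - A, confidence) for every rule above the threshold.
    for I in supports:
        if len(I) < 2:
            continue
        for r in range(1, len(I)):
            for A in map(frozenset, combinations(I, r)):
                conf = supports[I] / supports[A]
                if conf >= min_conf:
                    yield A, I - A, conf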

Example. B1 = {m, c, b}; B2 = {m, p, j}; B3 = {m, c, b, n}; B4 = {c, j}; B5 = {m, p, b}; B6 = {m, c, b, j}; B7 = {c, b, j}; B8 = {b, c}. Support threshold s = 3, confidence threshold c = 0.75. 1) Frequent itemsets: {b,m}, {b,c}, {c,m}, {c,j}, {m,c,b}. 2) Generate rules: b → m: conf = 4/6; b → c: conf = 5/6; b,c → m: conf = 3/5; m → b: conf = 4/5; b,m → c: conf = 3/4; b → c,m: conf = 3/6.

Compacting the Output. To reduce the number of rules, we can post-process them and only output: maximal frequent itemsets: no immediate superset is frequent (gives more pruning); or closed itemsets: no immediate superset has the same count (> 0) (stores not only frequency information, but exact counts).

Example: Maximal/Closed (support threshold s = 3)

Itemset  Count  Maximal (s=3)  Closed  Reason
A        4      No             No      Superset AB has the same count.
B        5      No             Yes     Frequent, but superset BC is also frequent.
C        3      No             No      Superset BC has the same count.
AB       4      Yes            Yes     Frequent, and its only superset, ABC, is not frequent.
AC       2      No             No      Not frequent.
BC       3      Yes            Yes     Its only superset, ABC, is not frequent and has a smaller count.
ABC      2      No             Yes     Not frequent, so not maximal; no superset to match its count.
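Both labels can be checked mechanically. A sketch, assuming a dict of counts that also includes the immediate supersets (even infrequent ones, such as ABC above):

def classify(counts, s):
    # counts: dict frozenset -> support; s: support threshold
    labels = {}
    for I, cnt in counts.items():
        immediate = [J for J in counts if I < J and len(J) == len(I) + 1]
        maximal = cnt >= s and all(counts[J] < s for J in immediate)
        closed = all(counts[J] < cnt for J in immediate)
        labels[I] = (maximal, closed)
    return labels

counts = {frozenset("A"): 4, frozenset("B"): 5, frozenset("C"): 3,
          frozenset("AB"): 4, frozenset("AC"): 2, frozenset("BC"): 3,
          frozenset("ABC"): 2}
print(classify(counts, s=3))  # reproduces the table above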

A-Priori Algorithm (1). A two-pass approach called A-Priori limits the need for main memory. Key idea: monotonicity. If a set of items I appears at least s times, so does every subset J of I. Contrapositive for pairs: if item i does not appear in s baskets, then no pair including i can appear in s baskets. So, how does A-Priori find frequent pairs?

A-Priori Algorithm (2). Pass 1: read baskets and count in main memory the occurrences of each individual item. This requires only memory proportional to the number of items. Items that appear at least s times are the frequent items. Pass 2: read baskets again and count in main memory only those pairs where both elements are frequent (from Pass 1). This requires memory proportional to the square of the number of frequent items (for the counts), plus a list of the frequent items (so you know what must be counted).
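A compact Python sketch of the two passes for frequent pairs (illustrative; it uses dictionaries where a careful implementation would use the array tricks discussed next):

from collections import Counter
from itertools import combinations

def apriori_pairs(baskets, s):
    # Pass 1: count individual items
    item_counts = Counter(item for b in baskets for item in b)
    frequent = {i for i, c in item_counts.items() if c >= s}
    # Pass 2: count only pairs whose elements are both frequent
    pair_counts = Counter()
    for b in baskets:
        for pair in combinations(sorted(set(b) & frequent), 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= s}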

Main-Memory: Picture of A-Priori. [Figure: in Pass 1, main memory holds the item counts; in Pass 2, it holds the list of frequent items plus counts of pairs of frequent items (the candidate pairs).]

Detail for A-Priori. You can use the triangular-matrix method with n = the number of frequent items; this may save space compared with storing triples. Trick: re-number the frequent items 1, 2, ..., and keep a table relating the new numbers to the original item numbers. [Figure: Pass 1 holds the item counts; Pass 2 holds the frequent items, the old item numbers, and the counts of pairs of frequent items.]
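A sketch of the triangular-matrix trick: after re-numbering the frequent items 0, ..., n−1, the count of each pair (i, j) with i < j lives at a fixed offset in one flat array (the index formula below is one standard row-major layout):

class TriangularPairCounts:
    # One counter per pair (i, j), 0 <= i < j < n, in a flat array.
    def __init__(self, n):
        self.n = n
        self.counts = [0] * (n * (n - 1) // 2)

    def _index(self, i, j):
        if i > j:
            i, j = j, i  # normalize so that i < j
        return i * self.n - i * (i + 1) // 2 + (j - i - 1)

    def add(self, i, j):
        self.counts[self._index(i, j)] += 1

    def get(self, i, j):
        return self.counts[self._index(i, j)]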

Frequent Triples, Etc. For each k, we construct two sets of k-tuples (sets of size k): C_k = candidate k-tuples, those that might be frequent sets (support ≥ s) based on information from the pass for k−1; and L_k = the set of truly frequent k-tuples. [Figure: All items = C_1 → count the items, filter → L_1 → construct → C_2 (all pairs of items from L_1) → count the pairs, filter → L_2 → construct → C_3 → ...; the construct step is explained below.]

Example. Hypothetical steps of the A-Priori algorithm: C_1 = { {b} {c} {j} {m} {n} {p} }. Count the support of itemsets in C_1; prune non-frequent: L_1 = { b, c, j, m }. Generate C_2 = { {b,c} {b,j} {b,m} {c,j} {c,m} {j,m} }. Count the support of itemsets in C_2; prune non-frequent: L_2 = { {b,m} {b,c} {c,m} {c,j} }. Generate C_3 = { {b,c,m} {b,c,j} {b,m,j} {c,m,j} }. Count the support of itemsets in C_3; prune non-frequent: L_3 = { {b,c,m} }. Note: here we generate new candidates by building C_k from L_{k−1} and L_1, but one can be more careful with candidate generation. For example, in C_3 we know {b,m,j} cannot be frequent since {m,j} is not frequent.

Generating Candidates: Full Example (minsup = 2)

Database D:
TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

Scan D, count C_1: {1}: 2, {2}: 3, {3}: 3, {4}: 1, {5}: 3
Prune to L_1: {1}: 2, {2}: 3, {3}: 3, {5}: 3
Generate C_2: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}
Scan D, count C_2: {1 2}: 1, {1 3}: 2, {1 5}: 1, {2 3}: 2, {2 5}: 3, {3 5}: 2
Prune to L_2: {1 3}: 2, {2 3}: 2, {2 5}: 3, {3 5}: 2
Generate C_3: {2 3 5}
Scan D, count: L_3 = {2 3 5}: 2
C_4 is empty.

Pruning Step. For an itemset of size k, check whether all its itemsets of size k−1 are also frequent; if any of the (k−1)-sized subsets is not frequent, prune the itemset of size k. Example with L_2 = { {1 3} {2 3} {2 5} {3 5} }: candidate {2 3 5} has subsets {2 3}, {3 5}, {2 5}, which are all frequent, so it is kept; candidate {1 3 5} has subsets {1 3}, {3 5}, {1 5}, and {1 5} is not frequent, so {1 3 5} is pruned.
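A sketch of candidate generation with this pruning step, checked against the example (join L_{k−1} with itself, then drop any candidate that has an infrequent (k−1)-subset):

from itertools import combinations

def generate_candidates(L_prev, k):
    # L_prev: set of frozensets of size k-1 (the frequent (k-1)-itemsets)
    joined = {a | b for a in L_prev for b in L_prev if len(a | b) == k}
    return {c for c in joined
            if all(frozenset(sub) in L_prev
                   for sub in combinations(c, k - 1))}

L2 = {frozenset(p) for p in [(1, 3), (2, 3), (2, 5), (3, 5)]}
print(generate_candidates(L2, 3))  # {frozenset({2, 3, 5})}; {1 3 5} is pruned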

A-Priori for All Frequent Itemsets. One pass for each k (itemset size). Needs room in main memory to count each candidate k-tuple. For typical market-basket data and reasonable support (e.g., 1%), k = 2 requires the most memory. Many possible extensions: association rules with intervals (for example: men over 65 have 2 cars); association rules when items are in a taxonomy (Bread, Butter → FruitJam; BakedGoods, MilkProduct → PreservedGoods); lowering the support s as itemsets get bigger.

Frequent Itemsets in < 2 Passes. A-Priori takes k passes to find frequent itemsets of size k. Can we use fewer passes? Yes: use 2 or fewer passes for all sizes, but possibly miss some frequent itemsets. Approaches: random sampling; SON (Savasere, Omiecinski, and Navathe); Toivonen (see textbook).

Random Sampling (1). Take a random sample of the market baskets. Run A-Priori or one of its improvements in main memory, so we don't pay for disk I/O each time we increase the size of itemsets. Reduce the support threshold proportionally to match the sample size. [Figure: main memory holds a copy of the sample baskets plus space for counts.]

Random Sampling (2). Optionally, verify that the candidate pairs are truly frequent in the entire data set by a second pass (this avoids false positives). But you don't catch sets that are frequent in the whole data but not in the sample. A smaller threshold, e.g., s/125 instead of the proportional s/100 for a 1% sample, helps catch more truly frequent itemsets, but requires more space.
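A sketch of the sampling approach, with a hypothetical mine callback standing in for any in-memory frequent-itemset miner (e.g., A-Priori); the final set comprehension is the optional verification pass:

import random

def sample_frequent(baskets, s, fraction, mine, seed=42):
    rng = random.Random(seed)
    # keep each basket with probability `fraction`
    sample = [b for b in baskets if rng.random() < fraction]
    # lowered, roughly proportional threshold for the sample
    candidates = mine(sample, max(1, round(s * fraction)))
    # second pass over the full data: drop false positives
    return {c for c in candidates
            if sum(1 for b in baskets if c <= b) >= s}

Note that the verification pass removes false positives only; itemsets that are frequent overall but infrequent in the sample are still missed, exactly as the slide warns.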

SON Algorithm (1). Repeatedly read small subsets of the baskets into main memory and run an in-memory algorithm to find all frequent itemsets. Note: we are not sampling, but processing the entire file in memory-sized chunks. An itemset becomes a candidate if it is found to be frequent in any one or more subsets of the baskets.

SON Algorithm (2). On a second pass, count all the candidate itemsets and determine which are frequent in the entire set. Key monotonicity idea: an itemset cannot be frequent in the entire set of baskets unless it is frequent in at least one subset.

SON Summary. Pass 1, batch processing: scan the data on disk, repeatedly filling memory with a new batch of data and running the sampling algorithm on each batch; generate candidate frequent itemsets: an itemset is a candidate if it is frequent in some batch. Pass 2: validate the candidate itemsets. Monotonicity property: if itemset X is frequent overall, then it is frequent in at least one batch.

SON: Distributed Version. SON lends itself to distributed data mining: distribute the baskets among many nodes, compute frequent itemsets at each node, distribute the candidates to all nodes, and accumulate the counts of all candidates.

SON: Map/Reduce. Phase 1: find candidate itemsets. Map? Reduce? Phase 2: find the true frequent itemsets. Map? Reduce?
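One plausible answer, sketched as a single-machine simulation of the two phases (the mine_chunk helper is an assumption; it only mines items and pairs, for brevity):

from collections import Counter
from itertools import combinations

def mine_chunk(chunk, s_local):
    # stand-in for any in-memory miner: frequent items and pairs only
    counts = Counter()
    for b in chunk:
        for r in (1, 2):
            counts.update(map(frozenset, combinations(sorted(b), r)))
    return {I for I, c in counts.items() if c >= s_local}

def son_mapreduce(chunks, s):
    # Phase 1 map: locally frequent itemsets per chunk;
    # Phase 1 reduce: their union is the candidate set
    local_s = max(1, s // len(chunks))
    candidates = set().union(*(mine_chunk(ch, local_s) for ch in chunks))
    # Phase 2 map: count each candidate within each chunk;
    # Phase 2 reduce: sum the counts and keep those >= s
    totals = Counter()
    for ch in chunks:
        for c in candidates:
            totals[c] += sum(1 for b in ch if c <= b)
    return {c for c in candidates if totals[c] >= s}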

PCY (Park-Chen-Yu) Idea. An improvement upon A-Priori. Observation: during Pass 1, memory is mostly idle. Idea: use the idle memory for a hash table H. In Pass 1, hash the pairs from each basket b into H, incrementing a counter at each hash location; at the end, keep a bitmap of the high-frequency hash locations. In Pass 2, the bitmap gives an extra condition for candidate pairs.

Memory Usage: PCY. [Figure: in Pass 1, memory holds the candidate-item counts and the hash table; in Pass 2, it holds the frequent items, the bitmap, and the candidate-pair counts.]

PCY Algorithm. Pass 1: m counters and a hash table T. Linear scan of the baskets b: increment the counter for each item in b, and increment the hash-table counter for each item pair in b. Mark as frequent the f items with count at least s. Summarize T as a bitmap (count ≥ s → bit = 1). Pass 2: keep counters only for the F qualified pairs (X_i, X_j): both items are frequent, and the pair hashes to a frequent bucket (bit = 1). Linear scan of the baskets b: increment the counters for the candidate qualified pairs of items in b.
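A Python sketch of both passes (the bitmap is a list of booleans here; a real implementation would pack it into bits, one per bucket):

from collections import Counter
from itertools import combinations

def pcy_pairs(baskets, s, n_buckets):
    # Pass 1: item counts, plus hashed pair counts in the otherwise idle memory
    item_counts = Counter()
    buckets = [0] * n_buckets
    for b in baskets:
        item_counts.update(set(b))
        for pair in combinations(sorted(set(b)), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent = {i for i, c in item_counts.items() if c >= s}
    bitmap = [c >= s for c in buckets]
    # Pass 2: count only qualified pairs: both items frequent AND
    # the pair hashes to a frequent bucket
    pair_counts = Counter()
    for b in baskets:
        for pair in combinations(sorted(set(b) & frequent), 2):
            if bitmap[hash(pair) % n_buckets]:
                pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= s}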

Multi-Stage PCY. Problem: false positives from hashing (buckets with count ≥ s that nevertheless contain no pair of count ≥ s). New idea: multiple rounds of hashing. After Pass 1, get the list of qualified pairs. In Pass 2, hash only the qualified pairs; with fewer pairs hashing into the buckets, there are fewer false positives. In Pass 3, we are thus less likely to qualify infrequent pairs. Repetition reduces memory, but costs more passes. Failure mode: memory < O(f + F).

Multi-Stage PCY: Memory. [Figure: Pass 1 holds the candidate-item counts and Hash Table 1; Pass 2 holds the frequent items, Bitmap 1, and Hash Table 2; the final pass holds the frequent items, Bitmaps 1 and 2, and the candidate-pair counts.]

Literature. Mining of Massive Datasets, Jure Leskovec, Anand Rajaraman, Jeff Ullman, Chapter 6. http://mmds.org http://infolab.stanford.edu/~ullman/mmds/ch6.pdf