What Is Association Rule Mining?


Mining Association Rules

Outline: What is association rule mining; the Apriori algorithm; measures of rule interestingness; advanced techniques.

What Is Association Rule Mining?

Association rule mining finds frequent patterns, associations, correlations, or causal structures among sets of items in transaction databases. A typical goal is to understand customer buying habits by finding associations and correlations between the different items that customers place in their shopping baskets.

Applications: basket data analysis, cross-marketing, catalog design, loss-leader analysis, web log analysis, fraud detection (e.g., supervisor -> examiner pairs).

Rule form: Antecedent => Consequent [support, confidence], where support and confidence are user-defined measures of interestingness.

Examples:
buys(x, "computer") => buys(x, "financial management software") [0.5%, 60%]
age(x, "30..39") ^ income(x, "42..48K") => buys(x, "car") [1%, 75%]

How can Association Rules be used?

The classic diapers-and-beer story: mom was probably calling dad at work to buy diapers on the way home, and he decided to buy a six-pack as well. The retailer could move diapers and beer to separate places and position high-profit items of interest to young fathers along the path between them.

Let the rule discovered be {Bagels, ...} => {Potato Chips}.
Potato chips as consequent: can be used to determine what should be done to boost their sales.
Bagels in the antecedent: can be used to see which products would be affected if the store discontinues selling bagels.
Bagels in the antecedent and potato chips in the consequent: can be used to see what products should be sold with bagels to promote sales of potato chips.

Basic Concepts

Given: (1) a database of transactions, (2) where each transaction is a list of items purchased by a customer in a visit.
Find: all rules that correlate the presence of one set of items (an itemset) with that of another set of items, e.g., "35% of people who buy salmon also buy cheese".

TX1  Shoes, Socks, Tie, Belt
TX2  Shoes, Socks, Tie, Belt, Shirt, Hat
TX3  Shoes, Tie
TX4  Shoes, Socks, Belt

Transaction  Shoes  Socks  Tie  Belt  Shirt  Scarf  Hat
1            1      1      1    1     0      0      0
2            1      1      1    1     1      0      1
3            1      0      1    0     0      0      0
4            1      1      0    1     0      0      0

For the rule Socks => Tie: support is 50% (2/4) and confidence is 66.67% (2/3).
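As a quick sanity check, the same numbers can be read off the Boolean matrix in a few lines of base R (a sketch, not from the slides; the matrix is re-typed from the table above):

# Socks => Tie from the Boolean matrix
m <- matrix(c(1,1,1,1,0,0,0,
              1,1,1,1,1,0,1,
              1,0,1,0,0,0,0,
              1,1,0,1,0,0,0),
            nrow = 4, byrow = TRUE,
            dimnames = list(1:4, c("Shoes","Socks","Tie","Belt","Shirt","Scarf","Hat")))
support    <- mean(m[, "Socks"] & m[, "Tie"])                   # 2/4 = 0.5
confidence <- sum(m[, "Socks"] & m[, "Tie"]) / sum(m[, "Socks"]) # 2/3 = 0.667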

Rule basic Measures

A => B [s, c]

Support denotes the frequency of the rule within transactions; a high value means that the rule involves a large part of the database:
support(A => B [s, c]) = p(A, B), the fraction of transactions containing both A and B.

Confidence denotes the percentage of transactions containing A which also contain B; it is an estimate of the conditional probability:
confidence(A => B [s, c]) = p(B | A) = sup(A, B) / sup(A)

Applications

Baskets = documents; items = words in those documents. Lets us find words that appear together unusually frequently, i.e., linked concepts.

        Word 1  Word 2  Word 3  Word 4
Doc 1   1       0       1       1
Doc 2   0       0       1       1
Doc 3   1       1       1       0

Word 4 => Word 3: when word 4 occurs in a document, there is a high probability of word 3 occurring.

Baskets = sentences; items = documents containing those sentences. Items that appear together too often could represent plagiarism.

         Doc 1  Doc 2  Doc 3  Doc 4
Sent 1   1      0      1      1
Sent 2   0      0      1      1
Sent 3   1      1      1      0

Doc 4 => Doc 3: when a sentence occurs in document 4, there is a high probability of it occurring in document 3.

Applications

Baskets = web pages; items = linked pages. Pairs of pages with many common references may be about the same topic.
Baskets = web pages p_i; items = pages that link to p_i. Pages with many of the same links may be mirrors or about the same topic.

Example

Trans. Id  Purchased Items
1          A, D
2          A, C
3          A, B, C
4          B, E, F

Definitions:
An itemset is a set of items, e.g., {A, B} or {B, E, F}.
Support of an itemset: Sup(A, B) = 1, Sup(A, C) = 2.
Frequent pattern: given min. sup = 2, {A, C} is a frequent pattern.

For minimum support = 50% and minimum confidence = 50%, we have the following rules:
A => C with 50% support and 66% confidence
C => A with 50% support and 100% confidence

[Figure: SAS Enterprise Miner association-rule plots. The support of each rule is identified by the colour of the symbols and the confidence of each rule by the shape of the symbols. The items with the highest confidence are Heineken beer, crackers, chicken, and peppers. Randall Matignon, 2007, Data Mining Using SAS Enterprise Miner, Wiley.]

Mining Association Rules

What is association rule mining; Apriori algorithm; measures of rule interestingness; advanced techniques.

Boolean association rules: each transaction is converted to a Boolean vector.

An Example

Transaction ID  Items Bought
2000            A, B, C
1000            A, C
4000            A, D
5000            B, E, F

With min. support 50% and min. confidence 50%:

Frequent Itemset  Support
{A}               75%
{B}               50%
{C}               50%
{A,C}             50%

For the rule A => C:
support = support({A, C}) = 50%
confidence = support({A, C}) / support({A}) = 66.6%

Approach: Finding Association Rules
1) Find all itemsets that have high support; these are known as frequent itemsets.
2) Generate association rules from the frequent itemsets.
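This two-step computation can be reproduced with the arules package used later in these notes (a hedged sketch; the transaction IDs are the ones from the slide):

library(arules)
trans <- as(list(`2000` = c("A","B","C"),
                 `1000` = c("A","C"),
                 `4000` = c("A","D"),
                 `5000` = c("B","E","F")), "transactions")
rules <- apriori(trans, parameter = list(supp = 0.5, conf = 0.5))
inspect(rules)   # should include A => C with support 0.5, confidence 0.667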

Mining Frequent Itemsets (the Key Step)

Find the frequent itemsets: the sets of items that have (at least) minimum support.
A subset of a frequent itemset must also be a frequent itemset.
Generate length-(k+1) candidate itemsets from length-k frequent itemsets, and test the candidates against the DB to determine which are in fact frequent.

Apriori principle

Any subset of a frequent itemset must be frequent: a transaction containing {beer, diaper, nuts} also contains {beer, diaper}, so if {beer, diaper, nuts} is frequent, {beer, diaper} must also be frequent.

Conversely, no superset of any infrequent itemset needs to be generated or tested, so many item combinations can be pruned.

[Figure: the itemset lattice over items A..E, from the empty set (null) through all 1-, 2-, 3- and 4-itemsets up to ABCDE.]

Apriori principle for pruning candidates

If an itemset is infrequent, then all of its supersets must also be infrequent. [Figure: the same lattice with the subtree above an itemset found to be infrequent crossed out as pruned supersets.]

How to Generate Candidates?

Assume the items in each itemset of L(k-1) are listed in order.

Step 1: self-joining L(k-1):

insert into Ck
select p.item1, p.item2, ..., p.item(k-1), q.item(k-1)
from L(k-1) p, L(k-1) q
where p.item1 = q.item1, ..., p.item(k-2) = q.item(k-2), p.item(k-1) < q.item(k-1)

For example, p = (A, D, E) joins with q = (A, D, F): the first two items match and E < F, giving the candidate (A, D, E, F).

Step 2: pruning:

for all itemsets c in Ck do
    for all (k-1)-subsets s of c do
        if (s is not in L(k-1)) then delete c from Ck

The Apriori Algorithm: Example

Database D (min. support: 2 transactions):

TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

Scan D to count C1 and keep the frequent items as L1:

C1: {1}:2  {2}:3  {3}:3  {4}:1  {5}:3
L1: {1}:2  {2}:3  {3}:3  {5}:3

Join L1 with itself to get C2, scan D to count, and keep L2:

C2: {1 2}:1  {1 3}:2  {1 5}:1  {2 3}:2  {2 5}:3  {3 5}:2
L2: {1 3}:2  {2 3}:2  {2 5}:3  {3 5}:2

Join and prune to get C3, scan D to count, and keep L3:

C3: {2 3 5}
L3: {2 3 5}:2
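The join and prune steps can be sketched in a few lines of base R (my illustration, not the slide authors' code; itemsets are kept as sorted character vectors, so string comparison of single-character items stands in for the item order):

apriori_gen <- function(Lk) {
  k <- length(Lk[[1]])
  keys <- sapply(Lk, paste, collapse = " ")
  cands <- list()
  for (p in Lk) for (q in Lk) {
    # join: first k-1 items equal, last item of p before last item of q
    if ((k == 1 || identical(p[-k], q[-k])) && p[k] < q[k]) {
      cand <- c(p, q[k])
      # prune: every k-subset of the (k+1)-candidate must itself be frequent
      if (all(combn(cand, k, paste, collapse = " ") %in% keys))
        cands[[length(cands) + 1]] <- cand
    }
  }
  cands
}

L2 <- list(c("1","3"), c("2","3"), c("2","5"), c("3","5"))
apriori_gen(L2)   # a single candidate {2, 3, 5}, matching C3 in the trace above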

Example of Generating Candidates

L3 = {abc, abd, acd, ace, bcd}

Self-joining L3 * L3:
joining abc and abd gives abcd
joining acd and ace gives acde

Pruning (before counting support):
abcd: check whether abc, abd, acd, bcd are in L3; all four are, so abcd is kept.
acde: check whether acd, ace, ade, cde are in L3; acde is removed because ade is not in L3.

Thus C4 = {abcd}.

The Apriori Algorithm

Ck: candidate itemsets of size k; Lk: frequent itemsets of size k.
Join step: Ck is generated by joining L(k-1) with itself.
Prune step: any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset.

Algorithm:

L1 = {frequent items};
for (k = 1; Lk != empty set; k++) do begin
    C(k+1) = candidates generated from Lk;
    for each transaction t in the database do
        increment the count of all candidates in C(k+1) that are contained in t;
    L(k+1) = candidates in C(k+1) with min_support;
end
return L = union over k of Lk;

Counting Support of Candidates

Why is counting the support of candidates a problem? The total number of candidates can be very large, and one transaction may contain many candidates. Method: candidate itemsets are stored in a hash tree, and candidates are matched to the transactions supporting them.
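The naive counting baseline that the hash tree is designed to beat looks like this in base R (a sketch; D is the four-transaction database from the trace above):

count_support <- function(cands, D) {
  # test every candidate against every transaction: O(N * M) subset checks
  sapply(cands, function(cand) sum(sapply(D, function(t) all(cand %in% t))))
}

D <- list(c("1","3","4"), c("2","3","5"), c("1","2","3","5"), c("2","5"))
count_support(list(c("2","3","5"), c("1","3")), D)   # 2 and 2: both frequent at minsup 2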

Counting Support of Candidate Itemsets

Frequent itemsets are found by counting candidates. The naive way is to store the candidate itemsets in a flat structure and search for each candidate in each transaction: with N transactions and M candidates this requires O(NM) comparisons, which is expensive.

Reducing Number of Comparisons

Reduce the number of comparisons by using a hash structure to store the candidate itemsets: hash each transaction to its appropriate buckets and match the transaction only against the candidates stored in the hashed bucket(s).

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example I

I, the itemset: {cucumber, parsley, onion, tomato, salt, bread, olives, cheese, butter}

D, the set of transactions:
1 {cucumber, parsley, onion, tomato, salt, bread}
2 {tomato, cucumber, parsley}
3 {tomato, cucumber, olives, onion, parsley}
4 {tomato, cucumber, onion, bread}
5 {tomato, salt, onion}
6 {bread, cheese}
7 {tomato, cheese, cucumber}
8 {bread, butter}

For support = 30%, an itemset must appear in at least 3 transactions.

L1 = {{bread}, {cucumber}, {onion}, {parsley}, {tomato}}

C2 = {{bread, cucumber}, {bread, onion}, {bread, parsley}, {bread, tomato}, {cucumber, onion}, {cucumber, parsley}, {cucumber, tomato}, {onion, parsley}, {onion, tomato}, {parsley, tomato}}

L2 = {{cucumber, onion}, {cucumber, parsley}, {cucumber, tomato}, {onion, tomato}, {parsley, tomato}}

L3 = {{cucumber, parsley, tomato}}

L = L1 union L2 union L3

(from: http://www.cs.bilkent.edu.tr/~guvenir/courses/cs558/association%20rules.ppt)
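The worked example can be checked mechanically with arules (a hedged sketch; item names re-typed from the slide, and supp = 0.3 reproduces the at-least-3-transactions threshold):

library(arules)
D <- list(c("cucumber","parsley","onion","tomato","salt","bread"),
          c("tomato","cucumber","parsley"),
          c("tomato","cucumber","olives","onion","parsley"),
          c("tomato","cucumber","onion","bread"),
          c("tomato","salt","onion"),
          c("bread","cheese"),
          c("tomato","cheese","cucumber"),
          c("bread","butter"))
fi <- apriori(as(D, "transactions"),
              parameter = list(supp = 0.3, target = "frequent itemsets"))
inspect(fi)   # should recover L1, L2 and {cucumber, parsley, tomato}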

C2 = {{bread, cucumber}, {bread, onion}, {bread, parsley}, {bread, tomato}, {cucumber, onion}, {cucumber, parsley}, {cucumber, tomato}, {onion, parsley}, {onion, tomato}, {parsley, tomato}}

The hash tree constructed for C2 hangs each itemset under the root bucket of its first item:

bread:    {bread, cucumber}, {bread, onion}, {bread, parsley}, {bread, tomato}
cucumber: {cucumber, onion}, {cucumber, parsley}, {cucumber, tomato}
onion:    {onion, parsley}, {onion, tomato}
parsley:  {parsley, tomato}

For any itemset c contained in transaction t, the first item of c must be in t. At the root, by hashing on every item in t, we ensure that we only ignore itemsets that start with an item not in t.

(from: http://www.cs.bilkent.edu.tr/~guvenir/courses/cs558/association%20rules.ppt)

Generating AR from frequent itemsets

confidence(A => B) = P(B | A) = support_count({A, B}) / support_count({A})

For every frequent itemset x, generate all non-empty proper subsets of x.
For every such subset s, output the rule s => (x - s) if
support_count(x) / support_count(s) >= min_conf.

Rule Generation

How do we efficiently generate rules from frequent itemsets? In general, confidence does not have an anti-monotone property, but the confidence of rules generated from the same itemset does: for L = {A,B,C,D},
c(ABC => D) >= c(AB => CD) >= c(A => BCD).
Confidence is non-increasing as the number of items in the rule consequent increases.

From Frequent Itemsets to Association Rules

Q: Given the frequent itemset {A, B, E}, what are the possible association rules?
A => B, E
A, B => E
A, E => B
B => A, E
B, E => A
E => A, B
(Formally there is also the empty rule, true => A, B, E.)
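The subset-enumeration loop above can be sketched in base R (my illustration; support counts are passed in as a named vector, here the counts from the {2 3 5} trace earlier):

gen_rules <- function(x, count, min_conf) {
  # emit s => (x - s) for every non-empty proper subset s of x above min_conf
  for (m in 1:(length(x) - 1)) {
    for (s in combn(x, m, simplify = FALSE)) {
      conf <- count[paste(x, collapse = " ")] / count[paste(s, collapse = " ")]
      if (conf >= min_conf)
        cat(paste(s, collapse = ","), "=>", paste(setdiff(x, s), collapse = ","),
            " conf =", round(conf, 2), "\n")
    }
  }
}

counts <- c("2" = 3, "3" = 3, "5" = 3, "2 3" = 2, "2 5" = 3, "3 5" = 2, "2 3 5" = 2)
gen_rules(c("2","3","5"), counts, 0.75)
# e.g. 3 => 2,5 has confidence 2/3 and is dropped; 2,3 => 5 has 2/2 = 1 and is kept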

Generating Rules: example

Min_support: 60%, min_confidence: 75%.

Trans-ID  Items
1         A, C, D
2         B, C, E
3         A, B, C, E
4         B, E
5         A, B, C, E

Frequent Itemset                 Support
{B,C,E}, {A,C}, {B,C}, {C,E}, {A}  60%
{B,E}, {B}, {C}, {E}               80%

Rule           Conf.
{B,C} => {E}   100%
{B,E} => {C}   75%
{C,E} => {B}   100%
{B} => {C,E}   75%
{C} => {B,E}   75%
{E} => {B,C}   75%

Exercise

TID  Items
1    Bread, Milk, Chips, Mustard
2    Beer, Diaper, Bread, Eggs
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk, Chips
5    Coke, Bread, Diaper, Milk
6    Beer, Bread, Diaper, Milk, Mustard
7    Coke, Bread, Diaper, Milk

Convert the data to Boolean format and, for a support of 40%, apply the Apriori algorithm.

Bread  Milk  Chips  Mustard  Beer  Diaper  Eggs  Coke
1      1     1      1        0     0       0     0
1      0     0      0        1     1       1     0
0      1     0      0        1     1       0     1
1      1     1      0        1     1       0     0
1      1     0      0        0     1       0     1
1      1     0      1        1     1       0     0
1      1     0      0        0     1       0     1

Minimum count: 0.4 * 7 = 2.8, i.e., at least 3 transactions.

C1: Bread 6, Milk 6, Chips 2, Mustard 2, Beer 4, Diaper 6, Eggs 1, Coke 3
L1: Bread 6, Milk 6, Beer 4, Diaper 6, Coke 3

C2: Bread,Milk 5; Bread,Beer 3; Bread,Diaper 5; Bread,Coke 2; Milk,Beer 3; Milk,Diaper 5; Milk,Coke 3; Beer,Diaper 4; Beer,Coke 1; Diaper,Coke 3
L2: Bread,Milk 5; Bread,Beer 3; Bread,Diaper 5; Milk,Beer 3; Milk,Diaper 5; Milk,Coke 3; Beer,Diaper 4; Diaper,Coke 3

C3: Bread,Milk,Beer 2; Bread,Milk,Diaper 4; Bread,Beer,Diaper 3; Milk,Beer,Diaper 3; Milk,Diaper,Coke 3 (Milk,Beer,Coke is pruned because {Beer, Coke} is not in L2)
L3: Bread,Milk,Diaper 4; Bread,Beer,Diaper 3; Milk,Beer,Diaper 3; Milk,Diaper,Coke 3
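A hedged arules check of the exercise (supp = 0.4 on 7 transactions means a count of at least 3):

library(arules)
D <- list(c("Bread","Milk","Chips","Mustard"),
          c("Beer","Diaper","Bread","Eggs"),
          c("Beer","Coke","Diaper","Milk"),
          c("Beer","Bread","Diaper","Milk","Chips"),
          c("Coke","Bread","Diaper","Milk"),
          c("Beer","Bread","Diaper","Milk","Mustard"),
          c("Coke","Bread","Diaper","Milk"))
fi <- apriori(as(D, "transactions"),
              parameter = list(supp = 0.4, target = "frequent itemsets"))
inspect(sort(fi, by = "support"))   # should match L1, L2 and L3 above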

Interestingness Measurements

Are all of the strong association rules discovered interesting enough to present to the user? How can we measure the interestingness of a rule?

Subjective measures: a rule (pattern) is interesting if it is unexpected (surprising to the user) and/or actionable (the user can do something with it); only the user can judge the interestingness of a rule.

Objective measures of rule interest:
Support
Confidence or strength
Lift (also called interest or correlation)
Conviction
Leverage (Piatetsky-Shapiro)
Coverage

Criticism of Support and Confidence

Example 1 (Aggarwal & Yu, PODS98): among 5000 students, 3000 play basketball, 3750 eat cereal, and 2000 both play basketball and eat cereal.

             basketball   not basketball   sum(row)
cereal       2000         1750             3750 (75%)
not cereal   1000         250              1250 (25%)
sum(col.)    3000 (60%)   2000 (40%)       5000

play basketball => eat cereal [40%, 66.7%] is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%.
play basketball => not eat cereal [20%, 33.3%] is more accurate, although it has lower support and confidence.

Lift of a Rule

Lift (correlation, interest):

Lift(A => B) = sup(A, B) / (sup(A) * sup(B)) = P(B | A) / P(B)

A and B are negatively correlated if the value is less than 1, and positively correlated otherwise.

X: 1 1 1 1 0 0 0 0
Y: 1 1 0 0 0 0 0 0
Z: 0 1 1 1 1 1 1 1

Rule    Support  Lift
X => Y  25%      2.00
X => Z  37.50%   0.86
Y => Z  12.50%   0.57
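The three lifts can be checked directly from the 0/1 vectors (a base-R sketch; vectors re-typed from the slide):

X <- c(1,1,1,1,0,0,0,0)
Y <- c(1,1,0,0,0,0,0,0)
Z <- c(0,1,1,1,1,1,1,1)
lift <- function(a, b) mean(a & b) / (mean(a) * mean(b))
c(XY = lift(X, Y), XZ = lift(X, Z), YZ = lift(Y, Z))   # 2.00, 0.86, 0.57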

Example 1 (cont.)

play basketball => eat cereal [40%, 66.7%]:
Lift = (2000/5000) / ((3000/5000) * (3750/5000)) = 0.89

play basketball => not eat cereal [20%, 33.3%]:
Lift = (1000/5000) / ((3000/5000) * (1250/5000)) = 1.33

Problems With Lift

Rules that hold 100% of the time may not have the highest possible lift. For example, if 5% of people are Vietnam veterans and 90% of people are more than 5 years old, the rule Vietnam veterans => more than 5 years old gets a lift of 0.05 / (0.05 * 0.9) = 1.11, only slightly above 1.

And lift is symmetric:
not eat cereal => play basketball [20%, 80%]:
Lift = (1000/5000) / ((1250/5000) * (3000/5000)) = 1.33

Conviction of a Rule

Conv(A => B) = sup(A) * sup(not B) / sup(A, not B) = P(A) * (1 - P(B)) / (P(A) - P(A, B))

Conviction is a measure of the implication and has value 1 if the items are unrelated.

play basketball => eat cereal [40%, 66.7%]:
Conv = (3000/5000) * (1 - 3750/5000) / (3000/5000 - 2000/5000) = 0.75
(for comparison, eat cereal => play basketball has conviction 0.85)

play basketball => not eat cereal [20%, 33.3%]:
Conv = (3000/5000) * (1 - 1250/5000) / (3000/5000 - 1000/5000) = 1.125
(and not eat cereal => play basketball has conviction 1.43)

The conviction of X => Y can be interpreted as the ratio of the expected frequency that X occurs without Y (that is, the frequency with which the rule makes an incorrect prediction) if X and Y were independent, divided by the observed frequency of incorrect predictions. A conviction value of 1.2 says that the rule would be incorrect 20% more often (1.2 times as often) if the association between X and Y were purely random chance.
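The two worked conviction values can be verified with the counts from the 5000-student table (a base-R sketch):

conv <- function(pA, pB, pAB) pA * (1 - pB) / (pA - pAB)
conv(3000/5000, 3750/5000, 2000/5000)   # basketball => cereal: 0.75
conv(3000/5000, 1250/5000, 1000/5000)   # basketball => not cereal: 1.125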

Leverage of a Rule

Leverage (Piatetsky-Shapiro):

PS(A => B) = sup(A, B) - sup(A) * sup(B)

Leverage is the proportion of additional elements covered by both the premise and the consequent beyond what would be expected if they were independent.

Coverage of a Rule

coverage(A => B) = sup(A)

Comment

Traditional methods such as database queries support hypothesis verification about a relationship, such as the co-occurrence of diapers and beer. Data mining methods automatically discover significant association rules from the data: they find whatever patterns exist in the database without the user having to specify in advance what to look for (data driven), and therefore allow finding unexpected correlations.
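Looking ahead to the R section that follows: once rules have been mined with arules, all of these metrics can be computed in one call (a hedged sketch; my_rules and Groceries are defined in the next section, and the measure names should be checked against your installed arules version):

library(arules)
quality(my_rules) <- cbind(quality(my_rules),
  interestMeasure(my_rules, c("conviction", "leverage", "coverage"),
                  transactions = Groceries))
inspect(head(my_rules))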

ASSOCIATION RULES WITH R

Using Scripts in R

Menu File > New script, and write down the following commands:

a <- runif(100, 10, 20)
hist(a)

Menu File > Save as... test.r, then run the script with:

source("test.r")

Another script:

for (i in 1:5) {
    a <- runif(10 * i, 10, 20)
    hist(a, main = paste("Uniform n =", 10 * i))
    savePlot(filename = paste("Uniform n =", 10 * i), type = "jpeg")
}

Challenges of Frequent Pattern Mining Challenges Multiple scans of transaction database Huge number of candidates Tedious workload of support counting for candidates Improving Apriori: general ideas Reduce number of transaction database scans Shrink number of candidates Facilitate support counting of candidates Improving Apriori s Efficiency Problem with Apriori: every pass goes over whole data. AprioriTID: Generates candidates as apriori but DB is used for counting support only on the first pass. Needs much more memory than Apriori Builds a storage set C^k that stores in memory the frequent sets per transaction AprioriHybrid: Use Apriori in initial passes; Estimate the size sz of C^k; Switch to AprioriTid when C^k k is expected to fit in memory The switch takes time, but it is still better in most cases 61 62 TID 100 200 300 400 Items 1 3 4 2 3 5 1 2 3 5 2 5 C^1 L 1 TID Set-of-itemsets Itemset Support 100 { {1},{3},{4} } {1} 2 200 { {2},{3},{5} } {2} 3 300 { {1},{2},{3},{5} } {3} 3 400 { {2},{5} } {5} 3 Improving Apriori s Efficiency Transaction reduction: A transaction that does not contain any frequent k-itemset is useless in subsequent scans C 2 itemset {1 2} {1 3} {1 5} {2 3} {2 5} {3 5} C 3 itemset {2 3 5} C^2 TID Set-of-itemsets 100 { {1 3} } 200 { {2 3},{2 5} {3 5} } 300 { {1 2},{1 3},{1 5}, {2 3}, {2 5}, {3 5} } 400 {{2 5} }}} C^3 L 3 TID Set-of-itemsets 200 { {2 3 5} } 300 { {2 3 5} } L 2 Itemset Support {1 3} 2 {2 3} 3 {2 5} 3 {3 5} 2 Itemset Support {2 3 5} 2 63 Sampling: mining i on a subset of given data. The sample should fit in memory Use lower support threshold to reduce the probability of missing some itemsets. The rest of the DB is used to determine the actual itemset count. 64

Improving Apriori s Efficiency Improving Apriori s Efficiency Partitioning: i i Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB (2 DB scans) Trans. in D (support in a partition is lowered to be proportional to the number of elements in the partition) Divide D into n Nonoverlappin g partitions Phase I Find frequent itemsets local to each partition (parallel alg.) Combine results to form a global set of candidate itemsets Phase II Find global l frequent itemsets among candidates Freq. itemsets in D Dynamic itemset counting: partitions the DB into several blocks each marked by a start point. At each start point, DIC estimates the support of all itemsets that are currently counted and adds new itemsets to the set of candidate itemsets if all its subsets are estimated to be frequent. If DIC adds all frequent itemsets to the set of candidate itemsets during the first scan, it will have counted each itemset s t exact support at some point during the second scan; thus DIC can complete in two scans. 65 66 Association Rules Visualization Association Rules Visualization The coloured column indicates the association rule B C. Different icon colours are used to show different metadata t values of the association rule. 67 68

Association Rules Visualization Association Rules Visualization - Ball graph Size of ball equates to total support Height equates to confidence 69 70 The Ball graph Explained Association Rules Visualization A ball graph consists of a set of nodes and arrows. All the nodes are yellow, green or blue. The blue nodes are active nodes representing the items in the rule in which the user is interested. The yellow nodes are passive representing items related to the active nodes in some way. The green nodes merely assist in visualizing two or more items in either the head or the body of the rule. A circular node represents a frequent (large) data item. The volume of the ball represents the support of the item. Only those items which h occur sufficiently i frequently are shown An arrow between two nodes represents the rule implication between the two items. An arrow will be drawn only when the support of a rule is no less than the minimum support 71 72

Mining Association Rules What is Association rule mining Apriori Algorithm FP-tree Algorithm Additional Measures of rule interestingness Advanced Techniques Multiple-Level Association Rules FoodStuff Frozen Refrigerated Fresh Bakery Etc... Vegetable Fruit Dairy Etc... Banana Apple Orange Etc... Fresh Bakery [20%, 60%] Dairy Bread [6%, 50%] Fruit Bread [1%, 50%] is not valid 73 Items often form hierarchy. Flexible support settings: Items at the lower level are expected to have lower support. Transaction database can be encoded based on dimensions and levels explore shared multi-level mining 74 Multi-Dimensional Association Rules Single-dimensional rules: buys(x, milk ) buys(x, bread ) Quantitative i Association i Rules age(x, 30-34 ) income(x, 24K - 48K ) buys(x, high resolution TV ) Multi-dimensional rules: 2 dimensions or predicates Inter-dimension association rules (no repeated predicates) age(x, 19-25 ) occupation(x, student ) buys(x, coke ) hybrid-dimension association rules (repeated predicates) age(x, 19-25 ) buys(x, popcorn ) buys(x, coke Mining Sequential Patterns 10% of customers bought Foundation and Ringworld in one transaction, followed by Ringworld Engineers in another transaction. 75 76

Given Sequential Pattern Mining A database of customer transactions ordered by increasing transaction time Each transaction is a set of items A sequence is an ordered list of itemsets Example: 10% of customers bought Foundation and Ringworld" in one transaction, followed by Ringworld Engineers" in another transaction. 10% is called the support of the pattern Problem (a transaction may contain more books than those in the pattern) Find all sequential patterns supported by more than a user-specified percentage of data sequences Sequential Patterns Rules (F, R) (RE) (FE) with 3% support and 40% confidence. Confidence: 40% of occurrences of (F, R) (RE) are followed by (FE). (FE- foundation and empire) Problem Decomposition: Find all sequential patterns with minimum i support. Use the sequential patterns to generate rules. 77 78 Applications Applications of sequential pattern mining Attached mailing, e.g., customized mailings for a book club. The AprioriAll Algorithm [AgSr95] A 3-phase algorithm Phase 1: finds all frequent itemsets with minimum support, that is, with a minimum number of customers containing it Phase 2: transforms the DB s.t. each transaction ti is replaced by the set of all frequent itemsets contained in the transaction Customer satisfaction / retention Web log analysis Medical research, e.g., sequences of symptoms Phase 3: finds all sequential patterns 79 80

The AprioriAll Algorithm (cont.) The sequence phase: In each phase it uses the large sequences from previous phase to generate the candidate sequences, and then measures s the support of candidates. Customer sequences min_support = 0.4 (2 customer sequences) Maximal sequences Application Difficulties Wal-Mart knows that customers who buy Barbie dolls (it sells one every 20 seconds) have a 60% likelihood of buying one of three types of candy bars. What does Wal-Mart do with information like that? 'I don't have a clue,' says Wal-Mart's chief of merchandising, Lee Scott. See - KDnuggets 98:01 for many ideas www.kdnuggets.com/news/98/n01.html C4 81 82 Some Suggestions By increasing i the price of Barbie doll and giving i the type of candy bar free, wal-mart can reinforce the buying habits of that particular types of buyer Highest margin candy to be placed near dolls. Special promotions for Barbie dolls with candy at a slightly higher margin. Take a poorly selling product X and incorporate an offer on this which is based on buying Barbie and Candy. If the customer is likely to buy these two products anyway then why not try to increase sales on X? Probably they can not only bundle candy of type A with Barbie dolls, but can also introduce new candy of Type N in this bundle while offering discount on whole bundle. As bundle is going to sell because of Barbie dolls & candy of type A, candy of type N can get free ride to customers houses. And with the fact that you like something, if you see it often, Candy of type N can become popular. References Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, 2000 Vipin Kumar and Mahesh Joshi, Tutorial on High Performance Data Mining, 1999 Rakesh Agrawal, Ramakrishnan Srikan, Fast Algorithms for Mining Association Rules, Proc VLDB, 1994 (http://www.cs.tau.ac.il/~fiat/dmsem03/fast%20algorithms%20for%20mining%20association% /d /F l h % f % % % 20Rules.ppt) Alípio Jorge, selecção de regras: medidas de interesse e meta queries, (http://www.liacc.up.pt/~amjorge/aulas/madsad/ecd2/ecd2_aulas_ar_3_2003.pdf) 83 84

Thank you!!! 85