Association Rule Mining
Pekka Malo, Assist. Prof. (Statistics)
Aalto BIZ / Department of Information and Service Management
What is Association Rule Mining?
Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects
Proposed by Agrawal et al. (1993)
Applications: shopping basket analysis, cross-marketing, catalog design, loss-leader analysis, etc.
Assumes that all data is categorical
Extensively studied in the data mining community
Example: buys(x, pizza) => buys(x, beer)
Market basket analysis in retail
Do the demographics of the neighborhood affect what people buy? What would you expect to be in the basket and what not?
Are soft alcoholic drinks often bought with wine and chicken? Do the brands matter?
Have a pop tart before a hurricane... and a beer when it's over!
Examples of other use cases
Telecom business: what optional services are purchased by customers and how they can be bundled to maximize revenue
Banking: can the knowledge of customers' current use of banking services (e.g., checking accounts, car loans, mortgages) be used to identify their potential demand for other services
Insurance: can insurance fraud cases be detected by looking at unusual combinations of insurance claims or other attributes that characterize the cases
Medicare: do the medical histories of patients indicate risks of complications that may need attention
Basic concepts
Set of items: I = {i1, i2, ..., im}
A k-itemset is an itemset with k items; e.g. {milk, bread, butter} is a 3-itemset
Transaction t: set of items, such that t ⊆ I
Transaction database T = set of transactions = {t1, t2, ..., tn}
Association rule: A => B, where A ⊂ I, B ⊂ I, A ∩ B = ∅
An association rule is a pattern which states that if A occurs, then B occurs with a certain probability
Transaction data
In market basket analysis, transaction data is commonly point-of-sale transactions
- Information on what products customers have purchased at one visit
- Typically the transaction data is accompanied by information on basket value and customer demographics + other data sources
t1: citrus fruit, semi-finished bread, margarine, ready soups
t2: tropical fruit, yogurt, coffee
t3: whole milk
t4: pip fruit, yogurt, cream cheese, meat spreads
t5: other vegetables, whole milk, condensed milk, long life bakery product
t6: whole milk, butter, yogurt, rice, abrasive cleaner
Examples of association rules
x = customer
buys(x, mobile connection) => buys(x, phone lease and internet connection)
buys(x, tablet computer) => buys(x, tablet cover)
WARNING: an interesting-looking rule may in fact turn out to be a result of an earlier marketing campaign or product bundling
What is an interesting association rule?
Simple enough to be understood, e.g., by using background information on the customers
Unexpected (i.e., not generated by the company's marketing behavior)
Actionable: e.g., you can create a marketing campaign using this information
Measures for rule strength
Support of rule A => B is the probability of observing a transaction that contains all items, P({A,B}); i.e. in a given transaction set we have
support(A => B) = (number of tuples containing both A and B) / (total number of tuples)
Confidence of rule A => B is the conditional probability of B in transactions that contain A, P(B|A); i.e. in a given transaction set this corresponds to
confidence(A => B) = (number of tuples containing both A and B) / (number of tuples containing A)
Venn diagram
Customer who buys a tablet computer also buys a cover for it
A: Customer buys a tablet computer
B: Customer buys a tablet cover
Support = P({A,B})
Confidence = P(B|A)
Example: support and confidence
Min. support = 50%, min. confidence = 50%

Transaction ID   Items bought
t1               A, B, C
t2               A, C
t3               A, D
t4               B, E, F

Frequent itemset   Support
{A}                75%
{B}                50%
{C}                50%
{A,C}              50%

Note: support for an itemset = count of transactions with the itemset / total count
Support(A => C) = support({A,C}) = 50%
Confidence(A => C) = support({A,C}) / support({A}) = 66.6%
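The support and confidence computations on this slide can be sketched in a few lines of Python (the helper names `support` and `confidence` are my own, not part of the course material):

```python
# The four-transaction example from the slide above.
transactions = [
    {"A", "B", "C"},  # t1
    {"A", "C"},       # t2
    {"A", "D"},       # t3
    {"B", "E", "F"},  # t4
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """support(A and B together) / support(A) for the rule A => B."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

print(support({"A", "C"}, transactions))       # 0.5
print(confidence({"A"}, {"C"}, transactions))  # 0.666...
```

The same two functions are reused implicitly throughout the rest of the deck: every later measure is built from these counts.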
Presemo
http://presemo.aalto.fi/bread

TID   Items
1     Bread, Milk, Diaper, Beer
2     Bread, Milk
3     Bread, Milk, Diaper, Coke
4     Milk, Diaper, Beer, Coke
5     Bread, Diaper, Beer, Eggs

Find the support and confidence for the rule {Milk, Diaper} => Beer
Association Rule Mining Problem
Assume that you have the following:
- Description of the itemset
- Transaction database
- Analyst's choices for minimum support and confidence
Objective: find all association rules which satisfy the requirements of minimum support and confidence
Key features:
- Completeness: find all rules
- No target item(s) on the right-hand side (differs from decision trees!)
Common strategy for association mining
1. Frequent itemset generation: find all itemsets that have minimum support (i.e. frequent itemsets); can be expensive!
2. Rule generation: generate association rules based on the frequent itemsets
Step 1: Find frequent itemsets
Enumeration of all possible itemsets for I = {a, b, c, d, e}
Maximum number of potential frequent itemsets = 2^k - 1 = 31
(Lattice figure: all subsets from the 1-itemsets a, ..., e up to the full itemset abcde)
Source: "Introduction to Data Mining" by Tan et al.
Could we simply determine the support count for every candidate itemset in the lattice structure?
The brute-force approach
Compute the support count for every candidate itemset --> each candidate must be compared against every single transaction!
(Figure: M candidate itemsets matched against N transactions, e.g. TID 1: Bread, Milk; 2: Bread, Diapers, Beer, Eggs; 3: Milk, Diapers, Beer, Coke; 4: Bread, Milk, Diapers, Beer; 5: Bread, Milk, Diapers, Coke)
Computational effort needed ~ O(N x M x w), where w is the maximum transaction width
Number of possible rules R = 3^d - 2^(d+1) + 1
For six items, i.e., d = 6 => possible rules = 602
(Figure: R grows rapidly with d, reaching tens of thousands already around d = 10)
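The rule-count formula above is easy to check numerically; a small sketch (the function name is my own):

```python
# Number of possible association rules over d items: R = 3^d - 2^(d+1) + 1.
# Each item can be in the LHS, in the RHS, or absent (3^d assignments),
# minus the assignments where the LHS or the RHS ends up empty.
def num_rules(d: int) -> int:
    return 3**d - 2**(d + 1) + 1

print(num_rules(6))   # 602
print(num_rules(10))  # 57002
```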
Strategies to generate frequent itemsets
Reduce candidates (M): use pruning techniques instead of complete search
Reduce transactions (N): e.g., vertical-based mining algorithms
Reduce number of comparisons (M x N): no need to match every candidate against every transaction
The Apriori Principle
A subset of a frequent itemset must also be a frequent itemset
=> Frequent itemsets can be found iteratively by starting from 1-itemsets and progressing to k-itemsets
Holds due to the anti-monotone property of the support measure, i.e. the support s of an itemset can never be larger than the support of its subsets:
∀X, Y: (X ⊆ Y) => s(X) ≥ s(Y)
(Lattice figure: once an itemset, here {A,B}, is found to be infrequent, all of its supersets, from the 3-itemsets up to ABCDE, are pruned from the search)
Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar
Example adapted from Bing Liu
Notation: {itemset}:count; C = candidates; F = actually frequent
T = transaction database, minimum support = 0.5

TID    Items
T100   1, 3, 4
T200   2, 3, 5
T300   1, 2, 3, 5
T400   2, 5

Procedure:
1. scan T => C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
        => F1: {1}:2, {2}:3, {3}:3, {5}:3
        => C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. scan T => C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
        => F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
        => C3: {2,3,5}
3. scan T => C3: {2,3,5}:2 => F3: {2,3,5}
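A minimal Python sketch of this level-wise procedure (my own implementation, not the course's reference code) reproduces the example: with minimum support 0.5 it finds F1 and F2 as above, and finally the single frequent 3-itemset {2, 3, 5}:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent itemset generation (minimal sketch)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = {}  # frozenset -> support count
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        # One database scan per level: count each candidate.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: s for c, s in counts.items() if s / n >= minsup}
        frequent.update(level)
        # Join frequent k-itemsets into (k+1)-candidates, pruning any
        # candidate with an infrequent k-subset (the Apriori principle).
        candidates = []
        for a, b in combinations(list(level), 2):
            c = a | b
            if len(c) == k + 1 and all(frozenset(s) in level
                                       for s in combinations(c, k)):
                candidates.append(c)
        candidates = list(set(candidates))
        k += 1
    return frequent

T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
freq = apriori(T, 0.5)
print(freq[frozenset({2, 3, 5})])  # 2
```

Note that {4} (count 1) and, e.g., {1, 5} (count 1) are correctly dropped, so the superset {1, 3, 5} is never even counted.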
Issues that affect performance
Choice of minimum support threshold: lower thresholds increase the number of frequent itemsets
Dimensionality (number of items in the dataset): requires more space to store support counts; computational burden
Number of transactions: the algorithm requires multiple passes -> run time depends on the size of the database
Example: effect of lowering support
(Figure: number of candidate and frequent itemsets by itemset size, for support thresholds 0.1%, 0.2%, and 0.5%; lower thresholds produce far more itemsets)
Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar
Example: effect of average transaction width
(Figure: number of candidate and frequent itemsets by itemset size, for average transaction widths 5, 10, and 15; wider transactions produce more and larger itemsets)
Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar
Choice of support threshold
Support distributions are often skewed
If the threshold is set too high, interesting rare itemsets are missed (e.g., luxury/expensive products)
If the threshold is set too low, rule mining becomes computationally expensive and the number of frequent itemsets is too high
Use of a single support threshold may not be suitable in practice
Step 2: Generate rules from frequent itemsets
For each frequent itemset and every nonempty proper subset, create the rules that meet the minimum confidence criteria
Example: {I1, I2, I5}
- I1, I2 => I5
- I1, I5 => I2
- I2, I5 => I1
- I1 => I2, I5
- I2 => I1, I5
- I5 => I1, I2
for each frequent itemset I do
  for each nonempty proper subset C of I do
    if (support(I) / support(I - C) >= minconf) then
      output the rule (I - C) => C,
      with confidence = support(I) / support(I - C)
      and support = support(I)
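The rule-generation loop above can be sketched directly in Python, given a precomputed table of itemset supports (here, the frequent itemsets of the Bing Liu example with n = 4 transactions; the function name `generate_rules` is my own):

```python
from itertools import combinations

def generate_rules(supports, minconf):
    """For each frequent itemset, emit (I - C) => C for every nonempty
    proper subset C whose rule meets the minimum confidence."""
    rules = []
    for itemset, sup in supports.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for c in combinations(itemset, r):
                consequent = frozenset(c)
                antecedent = itemset - consequent
                conf = sup / supports[antecedent]
                if conf >= minconf:
                    rules.append((antecedent, consequent, sup, conf))
    return rules

# Supports from the earlier Apriori example (fractions of 4 transactions).
supports = {
    frozenset({1}): 0.5, frozenset({2}): 0.75,
    frozenset({3}): 0.75, frozenset({5}): 0.75,
    frozenset({1, 3}): 0.5, frozenset({2, 3}): 0.5,
    frozenset({2, 5}): 0.75, frozenset({3, 5}): 0.5,
    frozenset({2, 3, 5}): 0.5,
}
for a, c, sup, conf in generate_rules(supports, minconf=0.8):
    print(sorted(a), "=>", sorted(c), f"support={sup}, confidence={conf:.2f}")
```

With minconf = 0.8 this yields five rules, e.g. {2,3} => {5} and {3,5} => {2}, each with confidence 1.0.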
Problem: number of rules grows quickly
If {A,B,C,D} is a frequent itemset, candidate rules:
ABC => D, ABD => C, ACD => B, BCD => A, A => BCD, B => ACD, C => ABD, D => ABC,
AB => CD, AC => BD, AD => BC, BC => AD, BD => AC, CD => AB, ...
If k = size of the itemset, there are 2^k - 2 possible association rules that can be generated from the set!
Note: rules with empty sets are ignored
Rule pruning with Apriori
Though the confidence measure is not generally anti-monotone, it has this property when considering rules generated from the same itemset
I = {A,B,C,D}:
c(ABC => D) ≥ c(AB => CD) ≥ c(A => BCD)
When increasing the number of items on the right-hand side (RHS) of the rule, the confidence cannot increase.
(Lattice-of-rules figure: starting from a low-confidence rule such as BCD => A, all rules whose RHS is a superset of its RHS, e.g. CD => AB down to D => ABC, are pruned)
Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar
Other measures than confidence can also be used
Confidence difference
Confidence ratio
Information difference
Normalized chi-square
Confidence difference
Apriori offers several evaluation measures; different measures will emphasize different rules (definitions from the software's User's Guide).
Rule Confidence: the default evaluation measure for rules is simply the posterior confidence of the rule, r/a.
Confidence Difference (Absolute Confidence Difference to Prior): based on the simple difference of the posterior and prior confidence values, |r/a - c/n|, where c is the support of the consequent, a is the support of the antecedent, r is the support of the conjunction of the antecedent and the consequent, and n is the number of records in the training data.
Reduces bias when outcomes are not evenly distributed; helps to avoid obvious rules being retained
Confidence ratio
Confidence Ratio (Difference of Confidence Quotient to 1): this measure is based on the ratio of posterior confidence to prior confidence.
Similar to confidence difference: takes uneven outcome distributions into account
Should be good at finding rules that predict rare events!
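As a sketch, the two prior/posterior-based measures can be written out as below. The posterior confidence r/a and prior confidence c/n follow the definitions quoted on the previous slide; the exact form of the ratio measure (keeping the value in [0, 1] by taking the smaller of the two quotients) is my assumption about the software's formula, not something stated in the slides:

```python
# r = support count of the rule (antecedent and consequent together),
# a = antecedent support count, c = consequent support count,
# n = number of records. Prior confidence = c/n, posterior = r/a.

def confidence_difference(r, a, c, n):
    """Absolute difference of posterior and prior confidence."""
    return abs(r / a - c / n)

def confidence_ratio(r, a, c, n):
    """1 minus the smaller confidence quotient (assumed form)."""
    posterior, prior = r / a, c / n
    return 1 - min(posterior / prior, prior / posterior)

# Tea/coffee table from later in the deck: r=15, a=20, c=90, n=100.
print(round(confidence_difference(15, 20, 90, 100), 4))  # 0.15
print(round(confidence_ratio(15, 20, 90, 100), 4))
```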
Information difference
Information Difference (Information Difference to Prior): this measure is based on the information gain criterion, similar to that used in building C5.0 trees. It is calculated from r (the rule support), a (the antecedent support), c (the consequent support), and the complements of the antecedent and consequent supports.
If the probability of a particular consequent is considered as a logical value (a bit), then information gain is the proportion of that bit that can be determined based on the antecedents
Takes support into account so that rules that cover more records are preferred
Normalized chi-square
Normalized Chi-square (Normalized Chi-squared Measure): this measure is based on the chi-squared statistical test for independence of categorical data.
Measures association between antecedents and consequents
Even more strongly dependent on support than the information difference measure
Beyond basics
Sequential rule mining
Parallel algorithms
Rule interestingness and visualization
... and a lot more
Evaluation of rules
Association vs. causation
Watch out for "the rooster syndrome"
Source: http://scienceornot.net/2012/07/05/confusing-correlation-with-causation-rooster-syndrome/
Finding rules that matter
Too many patterns; what to do? Most rules are uninteresting or redundant
Need measures for interestingness (originally only support and confidence have been used)
Interestingness measures:
- Objective
- Subjective
Methods:
- Ranking
- Filtering
- Summarizing
Presemo
http://presemo.aalto.fi/coffee

         Coffee   No coffee   Total
Tea        15         5         20
No tea     75         5         80
Total      90        10        100

Tea => Coffee?
What is the confidence of the rule? Is the rule reasonable?
Lift / Interest

         Coffee   No coffee   Total
Tea        15         5         20
No tea     75         5         80
Total      90        10        100

P(coffee | tea) = 75.0%
P(coffee | no tea) = 93.8%
P(coffee) = 90%
lift = c(A => B) / s(B) = s(A, B) / (s(A) s(B))
Lift = P(coffee | tea) / P(coffee) = 83%
Lift < 1 => negative association
Lift = 1 => statistical independence
Lift > 1 => positive association
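A quick numerical check of the lift calculation on this slide (the helper name `lift` is my own):

```python
# Lift of A => B: P(A and B) / (P(A) * P(B)), equivalently P(B|A) / P(B).
def lift(p_ab, p_a, p_b):
    return p_ab / (p_a * p_b)

# Tea/coffee table: P(tea and coffee)=0.15, P(tea)=0.20, P(coffee)=0.90.
print(round(lift(0.15, 0.20, 0.90), 3))  # 0.833
```

The value below 1 confirms the negative association: tea drinkers are less likely than average to buy coffee, even though the rule's confidence (75%) looks high on its own.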
Drawback with Lift
Tea => Coffee?

         Coffee   No coffee   Total
Tea        10         0         10
No tea      0        90         90
Total      10        90        100

P(coffee | tea) = 100.0%, P(coffee | no tea) = 0.0%, P(coffee) = 10% => Lift = 10.00

         Coffee   No coffee   Total
Tea        90         0         90
No tea      0        10         10
Total      90        10        100

P(coffee | tea) = 100.0%, P(coffee | no tea) = 0.0%, P(coffee) = 90% => Lift = 1.11
Coverage (antecedent support)
coverage(A => B) = sup(A)
Leverage (Piatetsky-Shapiro)
leverage = P(A, B) - P(A) P(B)
Example:

         Swim   Not swim   Total
Bike      420      280       700
Not bike  180      120       300
Total     600      400      1000

P(Swim and Bike) = 0.42
P(Swim) = 0.6
P(Bike) = 0.7
P(Swim) x P(Bike) = 0.42
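The leverage calculation on this slide in a few lines (hypothetical helper name):

```python
# Leverage of A => B: P(A and B) - P(A) * P(B); zero under independence.
def leverage(p_ab, p_a, p_b):
    return p_ab - p_a * p_b

# Swim/bike table: P(Swim and Bike)=0.42, P(Swim)=0.6, P(Bike)=0.7.
# The result is zero (up to floating-point rounding): swimming and
# biking are statistically independent in this data.
print(leverage(0.42, 0.6, 0.7))
```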
Deployability
Deployability: what percentage of the training data satisfies the conditions of the antecedent but does not satisfy the consequent?
deployability = (antecedent support in # of records - rule support in # of records) / number of records
In product purchase terms, it basically means what percentage of the total customer base owns (or has purchased) the antecedent(s) but has not yet purchased the consequent.
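In count form, the formula above reads as follows (a sketch with hypothetical names; the example numbers reuse the swim/bike table from the previous slide):

```python
# Deployability: share of records that match the antecedent but do not
# (yet) satisfy the consequent, computed from raw counts.
def deployability(antecedent_count, rule_count, n_records):
    return (antecedent_count - rule_count) / n_records

# For Bike => Swim: 700 bike owners, 420 of whom also swim, 1000 records.
print(deployability(700, 420, 1000))  # 0.28
```

So 28% of the customer base would be the target group for a "bikers should swim" campaign.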
Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar
Subjective measures
A rule is interesting if:
- It is unexpected
- It is actionable
Interestingness can only be judged by the user
From rule discovery to profiling
Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin
Individual data
Factual/demographic + transactional data
Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin
Individual rules
Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin
Examples of validation operators
Similarity-based rule grouping
- E.g., group rules by attribute similarity (club together rules of the form "Product => Store")
- Inspect the groups of rules at once instead of evaluating individually
Template-based rule filtering
- Expert specifies accepting or rejecting rule templates
- E.g., accept all rules that have attribute "Product" in their bodies
Redundant rule elimination
- Rules that don't offer additional value or are self-evident