Association Rule Mining. Pekka Malo, Assist. Prof. (statistics), Aalto BIZ / Department of Information and Service Management

What is Association Rule Mining? Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects. Proposed by Agrawal et al. (1993). Applications: shopping basket analysis, cross-marketing, catalog design, loss-leader analysis, etc. Assumes that all data is categorical. Extensively studied in the data mining community. Example: buys(x, "pizza") → buys(x, "beer")

Market basket analysis in retail. Do the demographics of the neighborhood affect what people buy? What would you expect to be in the basket and what not? Are soft alcoholic drinks often bought with wine and chicken? Do the brands matter?

Have a pop tart before a hurricane ... and a beer when it's over!

Examples of other use cases. Telecom business: what optional services are purchased by customers, and how can they be bundled to maximize revenue? Banking: can the knowledge of customers' current use of banking services (e.g., checking accounts, car loans, mortgages) be used to identify their potential demand for other services? Insurance: can insurance fraud cases be detected by looking at unusual combinations of insurance claims or other attributes that characterize the cases? Medicare: do the medical histories of patients indicate risks of complications that may need attention?

Basic concepts. Set of items: I = {i1, i2, ..., im}. A k-itemset is an itemset with k items; e.g., {milk, bread, butter} is a 3-itemset. Transaction t: a set of items such that t ⊆ I. Transaction database T = set of transactions = {t1, t2, ..., tn}. Association rule: A → B, where A ⊂ I, B ⊂ I, A ∩ B = ∅. An association rule is a pattern which states that if A occurs, then B occurs with a certain probability.

Transaction data. In market basket analysis, transaction data is commonly point-of-sale transactions: - Information on what products customers have purchased at one visit - Typically the transaction data is accompanied by information on basket value and customer demographics, plus other data sources. Example:
t1: citrus fruit, semi-finished bread, margarine, ready soups
t2: tropical fruit, yogurt, coffee
t3: whole milk
t4: pip fruit, yogurt, cream cheese, meat spreads
t5: other vegetables, whole milk, condensed milk, long-life bakery product
t6: whole milk, butter, yogurt, rice, abrasive cleaner

Examples of association rules. x = customer: buys(x, mobile connection) → buys(x, phone lease and internet connection); buys(x, tablet computer) → buys(x, tablet cover). WARNING: an interesting-looking rule may in fact turn out to be the result of an earlier marketing campaign or product bundling.

What is an interesting association rule? Simple enough to be understood, e.g., by using background information on the customers. Unexpected (i.e., not generated by the company's own marketing behavior). Actionable: e.g., you can create a marketing campaign using this information.

Measures for rule strength. Support of rule A → B is the probability of observing a transaction that contains all items, P({A,B}); i.e., in a given transaction set we have: support(A → B) = (number of tuples containing both A and B) / (total number of tuples). Confidence of rule A → B is the conditional probability of B in transactions that contain A, P(B|A); i.e., in a given transaction set this corresponds to: confidence(A → B) = (number of tuples containing both A and B) / (number of tuples containing A)
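
A minimal Python sketch of these two measures (not from the slides; the function names and the representation of transactions as Python sets are illustrative assumptions):

def support(itemset, transactions):
    # Fraction of transactions that contain every item of `itemset`.
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # P(B | A) = support(A and B) / support(A).
    a = set(antecedent)
    return support(a | set(consequent), transactions) / support(a, transactions)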

Venn diagram. A customer who buys a tablet computer also buys a cover for it. A: customer buys a tablet computer. B: customer buys a tablet cover. Support = P({A,B}). Confidence = P(B|A)

Example: support and confidence. Min. support 50%, min. confidence 50%.
Transaction ID | Items bought
t1 | A, B, C
t2 | A, C
t3 | A, D
t4 | B, E, F
Frequent itemset | Support
{A} | 75%
{B} | 50%
{C} | 50%
{A,C} | 50%
Note: support for an itemset = (count of transactions with the itemset) / (total count).
Support(A → C) = support({A,C}) = 50%
Confidence(A → C) = support({A,C}) / support({A}) = 66.6%
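
The slide's numbers can be reproduced with the sketch above:

transactions = [{"A", "B", "C"}, {"A", "C"}, {"A", "D"}, {"B", "E", "F"}]
print(support({"A", "C"}, transactions))       # 0.5      -> 50%
print(confidence({"A"}, {"C"}, transactions))  # 0.666... -> 66.6%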

Presemo: http://presemo.aalto.fi/bread
TID | Items
1 | Bread, Milk, Diaper, Beer
2 | Bread, Milk
3 | Bread, Milk, Diaper, Coke
4 | Milk, Diaper, Beer, Coke
5 | Bread, Diaper, Beer, Eggs
Find the support and confidence for the rule {Milk, Diaper} → Beer
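
If you want to check your answer afterwards, the same functions apply to this table (a sketch; the items are simply written out as strings):

baskets = [
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk"},
    {"Bread", "Milk", "Diaper", "Coke"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Diaper", "Beer", "Eggs"},
]
print(support({"Milk", "Diaper", "Beer"}, baskets))       # 0.4      -> 40%
print(confidence({"Milk", "Diaper"}, {"Beer"}, baskets))  # 0.666... -> ~67%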

Association Rule Mining Problem. Assume that you have the following: - Description of the itemset - Transaction database - Analyst's choices for minimum support and confidence. Objective: find all association rules which satisfy the requirements of minimum support and confidence. Key features: - Completeness: find all rules - No target item(s) on the right-hand side (differs from decision trees!)

Common strategy for association mining. 1. Frequent Itemset Generation: find all itemsets that have minimum support (i.e., frequent itemsets); this can be expensive! 2. Rule Generation: generate association rules based on the frequent itemsets

Step 1: Find frequent itemsets. Enumeration of all possible itemsets for I = {a, b, c, d, e}. (Figure: the itemset lattice, from the null set through all 1-, 2-, 3- and 4-itemsets up to abcde.) Maximum number of potential frequent itemsets = 2^k − 1 = 31. Source: "Introduction to Data Mining" by Tan et al.
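
The 2^k − 1 count is easy to verify by enumerating every non-empty subset of I (a small sketch using itertools):

from itertools import combinations

items = ["a", "b", "c", "d", "e"]
itemsets = [s for r in range(1, len(items) + 1)
            for s in combinations(items, r)]
print(len(itemsets))  # 31 == 2**5 - 1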

Could we simply determine the support count for every candidate itemset in the lattice structure?

The brute-force approach. Compute the support count for every candidate itemset; each candidate must be compared against every single transaction! (Figure: M candidate itemsets matched against N transactions, e.g. TID 1: Bread, Milk; 2: Bread, Diapers, Beer, Eggs; 3: Milk, Diapers, Beer, Coke; 4: Bread, Milk, Diapers, Beer; 5: Bread, Milk, Diapers, Coke.) Computational effort needed ~ O(N × w × M), where w is the maximum transaction width.
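
A direct implementation makes the cost structure visible: every one of the M candidates is tested against every one of the N transactions, and each containment test costs up to the transaction width w (an illustrative sketch, not an algorithm from the slides; candidates must be hashable, e.g. frozensets or tuples):

def brute_force_counts(candidates, transactions):
    # counts[c] = number of transactions that contain candidate itemset c
    counts = {c: 0 for c in candidates}
    for t in transactions:        # N transactions
        for c in candidates:      # M candidates
            if set(c) <= t:       # containment test, ~O(w)
                counts[c] += 1
    return counts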

Number of possible rules: R = 3^d − 2^(d+1) + 1. For six items, i.e., d = 6, the number of possible rules is 602. (Figure: R grows steeply as d increases.)
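
The d = 6 figure can be checked directly (each item goes to the left-hand side, the right-hand side, or neither, and the cases with an empty side are subtracted):

d = 6
print(3**d - 2**(d + 1) + 1)  # 602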

Strategies to generate frequent itemsets. Reduce candidates (M): use pruning techniques instead of complete search. Reduce transactions (N): e.g., vertical-based mining algorithms. Reduce the number of comparisons (M × N): no need to match every candidate against every transaction.

The Apriori Principle. A subset of a frequent itemset must also be a frequent itemset → frequent itemsets can be found iteratively by starting from 1-itemsets and progressing to k-itemsets. This holds due to the anti-monotone property of the support measure, i.e., the support s of an itemset can never be larger than the support of its subsets: for all X, Y: (X ⊆ Y) ⟹ s(X) ≥ s(Y)

(Figure: the itemset lattice; once an itemset is found to be infrequent, all of its supersets can be pruned.) Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar

Example adapted from Bing Liu. Notation: {itemset}:count; C = candidates; F = actually frequent. T = transaction database, minimum support = 0.5.
TID | Items
T100 | 1, 3, 4
T200 | 2, 3, 5
T300 | 1, 2, 3, 5
T400 | 2, 5
Procedure:
1. scan T → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3 → F1: {1}:2, {2}:3, {3}:3, {5}:3 → C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. scan T → C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2 → F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2 → C3: {2,3,5}
3. scan T → C3: {2,3,5}:2 → F3: {2,3,5}
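
A compact Apriori sketch that reproduces this trace (the helper names are assumptions; min_sup is the fractional threshold, candidate generation joins F(k−1) with itself and prunes by the Apriori principle):

from itertools import combinations

def apriori(transactions, min_sup):
    n = len(transactions)
    count = lambda s: sum(s <= t for t in transactions)
    items = sorted({i for t in transactions for i in t})
    # F1: frequent 1-itemsets
    levels = [{frozenset([i]) for i in items if count({i}) / n >= min_sup}]
    k = 2
    while levels[-1]:
        prev = levels[-1]
        # Join step: unions of frequent (k-1)-itemsets that have size k
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune step: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev
                             for s in combinations(c, k - 1))}
        levels.append({c for c in candidates if count(c) / n >= min_sup})
        k += 1
    return [f for level in levels for f in level]

T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(T, 0.5))  # F1: {1},{2},{3},{5}; F2: {1,3},{2,3},{2,5},{3,5}; F3: {2,3,5}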

Issues that affect performance. Choice of minimum support threshold: lower thresholds increase the number of frequent itemsets. Dimensionality (number of items in the dataset): requires more space to store support counts and increases the computational burden. Number of transactions: the algorithm requires multiple passes, so run time depends on the size of the database.

Example: effect of lowering support. (Figure: number of candidate itemsets and number of frequent itemsets by itemset size, for support thresholds 0.1%, 0.2% and 0.5%; lower thresholds produce far more itemsets.) Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar

Example: effect of average transaction width. (Figure: number of candidate itemsets and number of frequent itemsets by itemset size, for average transaction widths 5, 10 and 15; wider transactions produce more itemsets.) Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar

Choice of support threshold. Support distributions are often skewed. If the threshold is set too high, interesting rare itemsets are missed (e.g., luxury/expensive products). If the threshold is set too low, rule mining becomes computationally expensive and the number of frequent itemsets grows too large. Use of a single support threshold may not be suitable in practice.

Step 2: Generate rules from frequent itemsets. For each frequent itemset and every nonempty proper subset, create the rules that meet the minimum confidence criteria. Example: {I1, I2, I5} - I1, I2 → I5 - I1, I5 → I2 - I2, I5 → I1 - I1 → I2, I5 - I2 → I1, I5 - I5 → I1, I2
for each frequent itemset I do
    for each nonempty proper subset C of I do
        if support(I) / support(I − C) >= minconf then
            output the rule (I − C) → C,
            with confidence = support(I) / support(I − C) and support = support(I)
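
A runnable Python version of the loop above (a sketch; it reuses the `support` helper defined earlier and returns all rules from one frequent itemset):

from itertools import combinations

def generate_rules(itemset, transactions, minconf):
    itemset = frozenset(itemset)
    rules = []
    for r in range(1, len(itemset)):          # every nonempty proper subset C
        for rhs in map(frozenset, combinations(itemset, r)):
            lhs = itemset - rhs
            conf = support(itemset, transactions) / support(lhs, transactions)
            if conf >= minconf:
                rules.append((lhs, rhs, conf, support(itemset, transactions)))
    return rules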

Problem: the number of rules grows quickly. If {A,B,C,D} is a frequent itemset, candidate rules: ABC → D, ABD → C, ACD → B, BCD → A, A → BCD, B → ACD, C → ABD, D → ABC, AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB, ... If k = size of the itemset, there are 2^k − 2 possible association rules that can be generated from the set! Note: rules with empty sets are ignored.

Rule pruning with Apriori. Though the confidence measure is not generally anti-monotone, it has this property when considering rules generated from the same itemset. I = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD). When increasing the number of items on the right-hand side (RHS) of the rule, the confidence cannot increase.

(Figure: the lattice of rules generated from {A,B,C,D}, from ABCD → {} down to single-antecedent rules; once a rule is found to have low confidence, all rules with larger right-hand sides derived from it are pruned.) Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar

Other measures than confidence can also be used: Confidence difference. Confidence ratio. Information difference. Normalized Chi-Square.

Confidence difference. Apriori offers several evaluation measures; different measures will emphasize different rules. Rule Confidence, the default evaluation measure for rules, is simply the posterior confidence of the rule, r/a, where c is the support of the consequent, a is the support of the antecedent, r is the support of the conjunction of the antecedent and the consequent, and n is the number of records in the training data; the prior confidence is c/n. Confidence Difference (Absolute Confidence Difference to Prior) is based on the simple difference of the posterior and prior confidence values, |r/a − c/n|. Reduces bias when outcomes are not evenly distributed; helps to avoid obvious rules from being retained.

Confidence ratio. Confidence Ratio (Difference of Confidence Quotient to 1) is based on the ratio of posterior confidence to prior confidence: 1 minus the smaller of the two quotients posterior/prior and prior/posterior. Similar to confidence difference: takes uneven outcome distributions into account. Should be good at finding rules that predict rare events!

Information difference. Information Difference (Information Difference to Prior) is based on the information gain criterion, similar to that used in building C5.0 trees. The calculation uses r, the rule support; a, the antecedent support; c, the consequent support; the complement of the antecedent support; and the complement of the consequent support. If the probability of a particular consequent is considered as a logical value (a bit), then information gain is the proportion of that bit that can be determined based on the antecedents. Takes support into account so that rules that cover more records are preferred.

Normalized Chi-Square. Normalized Chi-square (Normalized Chi-squared Measure) is based on the chi-squared statistical test for independence of categorical data. Measures the association between antecedents and consequents. Even more strongly dependent on support than the information difference measure.

Beyond basics: sequential rule mining, parallel algorithms, rule interestingness and visualization, and a lot more.

Evaluation of rules

Association vs. causation. Watch out for the "rooster syndrome". Source: http://scienceornot.net/2012/07/05/confusing-correlation-with-causation-rooster-syndrome/


Finding rules that matter. Too many patterns; what to do? Most rules are uninteresting or redundant, so measures for interestingness are needed (originally, only support and confidence were used). Interestingness measures can be objective or subjective; methods include ranking, filtering and summarizing.

Presemo: http://presemo.aalto.fi/coffee
       | Coffee | No coffee | Total
Tea    | 15     | 5         | 20
No tea | 75     | 5         | 80
Total  | 90     | 10        | 100
Tea → Coffee? What is the confidence of the rule? Is the rule reasonable?

Lift / Interest.
       | Coffee | No coffee | Total
Tea    | 15     | 5         | 20
No tea | 75     | 5         | 80
Total  | 90     | 10        | 100
P(coffee | tea) = 75.0%; P(coffee | no tea) = 93.8%; P(coffee) = 90%.
Lift(A → B) = confidence(A → B) / support(B) = P(B|A) / P(B)
Lift = P(coffee | tea) / P(coffee) = 0.75 / 0.90 ≈ 0.83
Lift < 1 → negative association; Lift = 1 → statistical independence; Lift > 1 → positive association
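
The slide's lift value, computed from the contingency table (a check, not new material):

n, tea, coffee, tea_and_coffee = 100, 20, 90, 15
conf = tea_and_coffee / tea     # P(coffee | tea) = 0.75
lift = conf / (coffee / n)      # 0.75 / 0.90 = 0.833 -> negative association
print(conf, lift)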

Drawback with Lift. Tea → Coffee?
Case 1:
       | Coffee | No coffee | Total
Tea    | 10     | 0         | 10
No tea | 0      | 90        | 90
Total  | 10     | 90        | 100
P(coffee | tea) = 100.0%; P(coffee | no tea) = 0.0%; P(coffee) = 10%; Lift = 10.00
Case 2:
       | Coffee | No coffee | Total
Tea    | 90     | 0         | 90
No tea | 0      | 10        | 10
Total  | 90     | 10        | 100
P(coffee | tea) = 100.0%; P(coffee | no tea) = 0.0%; P(coffee) = 90%; Lift = 1.11
The rule is perfect in both cases, yet lift differs by an order of magnitude simply because the consequent is rare in the first case and common in the second.

Coverage (antecedent support): coverage(A → B) = support(A)

Leverage (Piatetsky-Shapiro): leverage(A → B) = P(A and B) − P(A) × P(B). Example:
         | Swim | Not swim | Total
Bike     | 420  | 280      | 700
Not bike | 180  | 120      | 300
Total    | 600  | 400      | 1000
P(Swim and Bike) = 0.42; P(Swim) = 0.6; P(Bike) = 0.7; P(Swim) × P(Bike) = 0.42
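
The example worked out in code (values taken from the table above):

p_swim_and_bike = 420 / 1000              # 0.42
p_swim, p_bike = 600 / 1000, 700 / 1000
print(p_swim_and_bike - p_swim * p_bike)  # ~0.0 (up to floating-point rounding): exactly what independence predicts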

Deployability. Deployability: what percentage of the training data satisfies the conditions of the antecedent but does not satisfy the consequent? deployability = (antecedent support in # of records − rule support in # of records) / number of records. In product purchase terms, it basically means what percentage of the total customer base owns (or has purchased) the antecedent(s) but has not yet purchased the consequent.
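
A one-line illustration with hypothetical counts (the numbers below are made up, not from the slides):

antecedent_count, rule_count, n_records = 700, 420, 1000    # hypothetical counts
print((antecedent_count - rule_count) / n_records)          # 0.28 -> 28% of customers are potential targets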

(Figures.) Source: "Introduction to Data Mining" by Tan, Steinbach, Kumar

Subjective measures. A rule is interesting if: it is unexpected; it is actionable. Interestingness can be judged only by the user.

From rule discovery to profiling. Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin

Individual data. Factual/demographic + transactional data. Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin

Individual rules. Source: "Using Data Mining Methods to Build Customer Profiles" by G. Adomavicius, A. Tuzhilin

Examples of validation operators. Similarity-based rule grouping: e.g., group rules by attribute similarity (club together rules of the form Product → Store); inspect the groups of rules at once instead of evaluating them individually. Template-based rule filtering: an expert specifies accepting or rejecting rule templates; e.g., accept all rules that have the attribute Product in their bodies. Redundant rule elimination: remove rules that don't offer additional value or are self-evident.