Association Rule Mining on Web


What Is Association Rule Mining?

Association rule mining: finding interesting relationships among items (or objects, events) in a given data set.

Example: basket data analysis. Given a database of transactions, each transaction is a list of items (purchased by a customer in one visit to a store, or browsed by a user in one visit to a web site). You wonder: which groups of items are customers likely to purchase together on a given trip to the store, or which groups of pages are users likely to browse together on a given visit to the site?

You may find: computer ⇒ financial_management_software [support = 2%, confidence = 60%]

Action: place financial management software close to the computer display to increase sales of both items.

Data Mining Techniques - Association Rules

Supermarket example:

Transaction ID   Items Purchased
1                butter, bread, milk, beer, diaper
2                bread, milk, beer, egg, diaper
3                Coke, film, bread, butter, milk

An association rule looks like: "If a customer buys diapers, then in 60% of cases he/she also buys beer. This happens in 3% of all transactions."
60%: confidence
3%: support
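Both measures can be computed by simple counting over the transactions. The following is a minimal Python sketch (not from the lecture; the function names and the toy data are illustrative only) that estimates support and confidence for a candidate rule:

```python
# Minimal sketch: estimating support and confidence of a rule A => B
# over a list of transactions (toy data, for illustration only).

transactions = [
    {"butter", "bread", "milk", "beer", "diaper"},
    {"bread", "milk", "beer", "egg", "diaper"},
    {"coke", "film", "bread", "butter", "milk"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    count = sum(1 for t in transactions if itemset <= t)
    return count / len(transactions)

def confidence(antecedent, consequent, transactions):
    """P(consequent | antecedent), estimated from the transactions.
    Assumes the antecedent occurs at least once."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"diaper", "beer"}, transactions))       # support of {diaper, beer}
print(confidence({"diaper"}, {"beer"}, transactions))   # confidence of diaper => beer
```

On this three-transaction toy set the rule diaper ⇒ beer has support 2/3 and confidence 1.0; the 3% and 60% in the slide refer to a much larger hypothetical database.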

Association Rule: Basic Concepts

Association rule: a rule of the form A ⇒ B [support, confidence], where A ⊂ I, B ⊂ I, A ∩ B = ∅, and I is a set of items (objects or events).

support: probability that a transaction contains both A and B, i.e., P(A ∪ B), written P(AB).
confidence: conditional probability that a transaction containing A also contains B, i.e., P(B | A).

Examples:
buys(X, "diapers") ⇒ buys(X, "beers") [0.5%, 60%]
major(X, "CS") ∧ takes(X, "DB") ⇒ grade(X, "A") [1%, 75%]

Applications:
Boosting sales of a particular product (what should the store do to promote it?)
Home electronics (what other products should the store stock up on?)
Attached mailing in direct marketing

Association Rule: Basic Concepts

Given a minimum support threshold (min_sup) and a minimum confidence threshold (min_conf), an association rule is strong if it satisfies both min_sup and min_conf.

If min_sup = min_conf = 50%:
A ⇒ C        (support 50%, confidence 66.6%)   strong
C ⇒ A        (support 50%, confidence 100%)    strong
A ∧ C ⇒ B    (support 25%, confidence 50%)
A ∧ B ⇒ E    (support 0%, confidence 0%)

Itemset: a set of items. A k-itemset is an itemset that contains k items; {computer, financial_management_software} is a 2-itemset.
Support count (frequency, count) of an itemset: the number of transactions that contain the itemset.
Frequent itemset: an itemset whose support count is at least the minimum support count, where minimum support count = min_sup × total number of transactions in the data set.

Rule Measures: Support and Confidence

support(A ⇒ B) = P(A ∪ B)
confidence(A ⇒ B) = P(B | A)

Transaction ID   Items Bought
2000             A, B, C
1000             A, C
4000             A, D
5000             B, E, F

A ⇒ C        (50%, 66.6%)
C ⇒ A        (50%, 100%)
A ∧ C ⇒ B    (25%, 50%)
A ∧ B ⇒ E    (0%, 0%)

How to Mine Association Rules

A two-step process:
1. Find all frequent itemsets.
2. Generate strong association rules from the frequent itemsets.

Example: given min_sup = 50% and min_conf = 50%

Transaction ID   Items Bought
2000             A, B, C
1000             A, C
4000             A, D
5000             B, E, F

Frequent Itemset   Support
{A}                75%
{B}                50%
{C}                50%
{A, C}             50%

Generate strong rules:
{A} ⇒ {C}  [support = 50%, confidence = 66.6%]
{C} ⇒ {A}  [support = 50%, confidence = 100%]

Finding Frequent Itemsets: the Key Step

The Apriori principle: any subset of a frequent itemset must also be frequent. For example, if {A, B} is a frequent itemset, then both {A} and {B} must be frequent itemsets.

Find the frequent itemsets, i.e., the sets of items that have minimum support, by iteratively finding frequent itemsets with cardinality from 1 to k (k-itemsets):
find all frequent 1-itemsets;
find all frequent 2-itemsets using the frequent 1-itemsets;
...
find all frequent k-itemsets using the frequent (k-1)-itemsets.

The Apriori Algorithm

Based on the Apriori principle, Apriori uses an iterative, level-wise approach with candidate generation.

Notation:
C_k: the set of candidate itemsets of size k
L_k: the set of frequent itemsets of size k

Pseudo-code:
L_1 = {frequent items};                          // scan the database to find all frequent 1-itemsets
for (k = 1; L_k != ∅; k++) do begin
    C_{k+1} = candidates generated from L_k;
    for each transaction t in the database do    // scan the database to count candidate support
        increment the count of all candidates in C_{k+1} that are contained in t;
    L_{k+1} = candidates in C_{k+1} with support >= min_support;
end
return ∪_k L_k;

Generation process: L_k → C_{k+1} → L_{k+1}
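To make the level-wise loop above concrete, here is a compact Python sketch of Apriori (my own illustration, not the lecture's code; the function and variable names are invented for this example):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori sketch.  `transactions` is a list of sets of items;
    returns {frozenset(itemset): support_count} for every frequent itemset."""
    n = len(transactions)
    min_count = min_support * n

    # L1: frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_count}
    result = dict(frequent)

    k = 1
    while frequent:
        # Candidate generation: join L_k with itself, then prune every candidate
        # that has an infrequent k-subset (the Apriori principle).
        prev = list(frequent)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k + 1 and all(
                    frozenset(sub) in frequent for sub in combinations(union, k)
                ):
                    candidates.add(union)

        # One database scan to count candidate support.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1

        frequent = {c: cnt for c, cnt in counts.items() if cnt >= min_count}
        result.update(frequent)
        k += 1
    return result
```

Running this sketch on the four-transaction database in the example below with min_support = 0.5 returns exactly the L1, L2, and L3 listed there.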

The Apriori Algorithm - Example (minimum support = 0.5)

Database D:
TID   Items
100   1 3 4
200   2 3 5
300   1 2 3 5
400   2 5

Scan D to count C1: {1}: 2, {2}: 3, {3}: 3, {4}: 1, {5}: 3
L1 (frequent 1-itemsets): {1}: 2, {2}: 3, {3}: 3, {5}: 3

C2 (generated from L1): {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}
Scan D to count C2: {1 2}: 1, {1 3}: 2, {1 5}: 1, {2 3}: 2, {2 5}: 3, {3 5}: 2
L2 (frequent 2-itemsets): {1 3}: 2, {2 3}: 2, {2 5}: 3, {3 5}: 2

C3 (generated from L2): {2 3 5}
Scan D to count C3: {2 3 5}: 2
L3 (frequent 3-itemsets): {2 3 5}: 2
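A quick, self-contained way to verify the counts in this walkthrough (a brute-force check for illustration, not Apriori's candidate generation):

```python
from collections import Counter
from itertools import combinations

# The example database D, as sets of items.
D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
min_count = 2  # minimum support 0.5 on 4 transactions

# Count every 2-itemset that occurs in a transaction and keep the frequent ones (L2).
c2 = Counter(frozenset(pair) for t in D for pair in combinations(sorted(t), 2))
l2 = {s: c for s, c in c2.items() if c >= min_count}
print(l2)  # {1,3}: 2, {2,3}: 2, {2,5}: 3, {3,5}: 2

# The only candidate 3-itemset generated from L2 is {2, 3, 5}; check its count.
print(sum(1 for t in D if {2, 3, 5} <= t))  # 2
```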

Apriori Algorithm (Flow Chart) L 1 = set of frequent 1-itemset (scan DB) k=1 C k : set of candidate k- itemsets L k : set of frequent k- itemsets L k φ? Yes Compute candidate set C k+1 : C k+1 =join L k with L k prune C k+1 No Output L 1,, L k-1 Scan DB to get L k+1 from C k+1 k=k+1 Candidate set generation (get C k+1 from L k ) Prune candidate set based on Apriori principle Scan DB to get L k+1 from C k+1

Generate Association Rules from Frequent Itemsets

Naive algorithm:
for each frequent itemset l do
    support(l) = freq(l) / total
    for each nonempty proper subset c of l do
        if support(l) / support(l - c) >= min_conf then
            output the rule (l - c) ⇒ c,
            with support = support(l) and confidence = support(l) / support(l - c)   // = P(c | l - c)

Example: for a frequent itemset l = {I1, I2, I5}, the nonempty proper subsets of l are {I1, I2}, {I1, I5}, {I2, I5}, {I1}, {I2}, {I5}. The resulting association rules are:
I1 ∧ I2 ⇒ I5    confidence = 50%
I1 ∧ I5 ⇒ I2    confidence = 100%
I2 ∧ I5 ⇒ I1    confidence = 100%
I1 ⇒ I2 ∧ I5    confidence = 33%
I2 ⇒ I1 ∧ I5    confidence = 29%
I5 ⇒ I1 ∧ I2    confidence = 100%

If the minimum confidence threshold is 70%, only 3 of these rules are output.
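A direct Python rendering of this naive algorithm (my own sketch; the slide does not give the underlying support values for the {I1, I2, I5} example, so the numbers below are hypothetical and merely chosen so that the listed confidences are reproduced):

```python
from itertools import combinations

def generate_rules(supports, min_conf):
    """Naive rule generation from frequent itemsets.
    `supports` maps frozenset -> support (fraction of transactions) and must
    contain every frequent itemset together with all of its subsets."""
    rules = []
    for itemset, sup in supports.items():
        if len(itemset) < 2:
            continue
        # Every nonempty proper subset of the itemset can serve as a consequent.
        for r in range(1, len(itemset)):
            for consequent in combinations(itemset, r):
                consequent = frozenset(consequent)
                antecedent = itemset - consequent
                conf = sup / supports[antecedent]
                if conf >= min_conf:
                    rules.append((antecedent, consequent, sup, conf))
    return rules

# Hypothetical supports (fractions) for the {I1, I2, I5} example.
sup = {
    frozenset(["I1"]): 0.6, frozenset(["I2"]): 0.7, frozenset(["I5"]): 0.2,
    frozenset(["I1", "I2"]): 0.4, frozenset(["I1", "I5"]): 0.2,
    frozenset(["I2", "I5"]): 0.2, frozenset(["I1", "I2", "I5"]): 0.2,
}
# Note: rules derived from the frequent 2-itemsets are reported as well.
for a, c, s, conf in generate_rules(sup, min_conf=0.7):
    print(set(a), "=>", set(c), f"support={s:.0%} confidence={conf:.0%}")
```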

Is Apriori Fast Enough? Performance Bottlenecks

The core of the Apriori algorithm:
Use the frequent k-itemsets to generate candidate (k+1)-itemsets.
Use database scans and pattern matching to collect counts for the candidate itemsets, producing the frequent (k+1)-itemsets from the candidate (k+1)-itemsets.

The bottlenecks of Apriori:
Candidate generation can produce huge candidate sets: 10^4 frequent 1-itemsets generate more than 10^7 candidate 2-itemsets, and to discover a frequent pattern of size 100, e.g., {a_1, a_2, ..., a_100}, one needs to generate on the order of 2^100 ≈ 10^30 candidates.
Multiple scans of the database: Apriori needs n scans, where n is the length of the longest pattern.
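The two magnitudes quoted above follow from simple combinatorics (a worked check, not part of the slide):

```latex
\binom{10^4}{2} = \frac{10^4\,(10^4 - 1)}{2} \approx 5 \times 10^{7}
\quad\text{candidate 2-itemsets,}
\qquad
2^{100} - 1 \approx 1.27 \times 10^{30}
\quad\text{nonempty subsets of } \{a_1,\dots,a_{100}\}.
```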

Web Usage Mining - Clustering

User clusters: discover groups of users exhibiting similar browsing patterns.
Page clusters: discover groups of pages having related content.

Web Usage Mining - Classification

Examples of discovered classification rules:
Clients from state or government agencies who visit the site tend to be interested in the page /company/product1.
50% of clients who placed an online order for /company/product2 were in the 20-25 age group and lived on the West Coast.

Web Usage Mining - Sequential Patterns

Examples of discovered sequential patterns:
30% of clients who visited /company/products had done a search in Yahoo on keyword w within the past week.
60% of clients who placed an online order for /company/product1 also placed an online order for /company/product4 within 15 days.

Pattern Analysis

Web Usage Mining - Pattern Analysis

Not all discovered patterns are interesting, and some are downright misleading. The goal of pattern analysis is to filter out information that is not useful or interesting.

Web Mining Applications

E-commerce: generating user profiles
Targeted advertising
Fraud detection
Similar image retrieval
Building adaptive web sites by user profiling

Pattern Analysis Example 1

Interestingness measures: use interestingness measures to rank discovered rules or sequential patterns.
Pruning: prune a discovered rule or pattern if it is contained in another with a higher or comparable interestingness value, and prune uninteresting rules according to domain background knowledge.

Interestingness of Discovered Patterns

Interestingness: different people define interestingness differently; what is interesting depends on the type of knowledge discovered and on the user's beliefs.

One definition (not necessarily suitable for all situations): a pattern is interesting if it is easily understood by humans, valid on new or test data with some degree of certainty, potentially useful, and novel, or if it validates some hypothesis that the user seeks to confirm.

Interestingness measures:
Objective: based on statistics and depending on the type of pattern, e.g., support and confidence for association rules, classification accuracy for classification rules.
Subjective: based on the user's beliefs about the data, e.g., unexpectedness, novelty, actionability, or confirmation of a hypothesis.

Interestingness Measures

The following measures are used to evaluate an association rule A ⇒ B, where A and B are itemsets, or a sequential pattern A ⇒ B, where B is the last element in the sequence and A is the subsequence in front of B.

Support:      P(AB)
Confidence:   P(B | A)
IS:           P(AB) / sqrt(P(A) P(B))
CV:           P(A) P(¬B) / P(A¬B)
RI:           P(AB) - P(A) P(B)
MI:           log2 [ P(AB) / (P(A) P(B)) ]
MD:           log [ P(A | B)(1 - P(A | ¬B)) / ( P(A | ¬B)(1 - P(A | B)) ) ]
C2:           [ (P(B | A) - P(B)) / (1 - P(B)) ] × [ (1 + P(A | B)) / 2 ]
IM:           Support(AB) · log2 P(AB)
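As an illustration, the most common of these objective measures can be computed directly from the estimated probabilities P(A), P(B), and P(AB). The sketch below is my own (the lecture gives no code) and only covers the standard measures; it assumes all probabilities are strictly between 0 and 1 so every denominator is well defined:

```python
import math

def rule_measures(p_a, p_b, p_ab):
    """Objective interestingness measures for a rule A => B, computed from
    P(A), P(B), P(AB).  Illustrative sketch covering the standard measures."""
    p_b_given_a = p_ab / p_a
    p_a_notb = p_a - p_ab              # P(A, not B)
    return {
        "support": p_ab,
        "confidence": p_b_given_a,
        "IS": p_ab / math.sqrt(p_a * p_b),
        "CV": p_a * (1 - p_b) / p_a_notb,       # conviction-style measure
        "RI": p_ab - p_a * p_b,                  # rule interest (leverage)
        "MI": math.log2(p_ab / (p_a * p_b)),     # log of the lift
    }

print(rule_measures(p_a=0.4, p_b=0.5, p_ab=0.3))
```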

Pattern Analysis Example 2

The rule quantity problem in association rule mining: a large number of rules are often generated, and many of them are similar or redundant.

Solutions:
constraint-based mining
post-pruning of rules
grouping of rules

Two grouping algorithms:
Objective grouping: group rules according to rule structure; no domain knowledge is used.
Subjective grouping: domain knowledge is used.

Objective Grouping

Basic idea: recursively group rules that share common items in their antecedents and consequents; at each level of the recursive call, select the cluster with the biggest size.

Result: a tree of clusters. (The slide shows an example tree whose root cluster "All" is split into clusters of rules such as a ⇒ d, b ⇒ a, and an "other" cluster, each split further into clusters such as ab ⇒ d and ab ⇒ cd.)

Subjective Grouping

Basic idea: group rules according to the semantic distance between rules.

Domain knowledge used: a tree-structured semantic network of objects, i.e., a taxonomy or is-a hierarchy of objects. An association rule can relate objects at both leaf and non-leaf levels.

(The slide shows an example taxonomy rooted at Cloth, with internal nodes Footwear and Outerwear and leaves such as Shoes, Hiking Boots, Shirts, Jackets, and Ski Pants.)

Tagging the Semantic Tree

Objective: enable calculation of the semantic distance between rules by assigning a Relative Semantic Position (RSP) to each node of the tree, such that two objects that are semantically closer to each other are assigned closer RSPs.

Definition of the RSP of a node: (hpos, vpos), where
hpos: horizontal position of the node, i.e., the position of the node in the balanced tree's in-order traversal sequence;
vpos: vertical position of the node, i.e., the level of the node in the tree.

(Example from the slide: the root is tagged (8, 1); its children (4, 2) and (12, 2); their children (2, 3), (6, 3), (10, 3), (14, 3); and leaves such as (1, 4), (3, 4), (9, 4).)
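A small sketch of how such RSPs could be computed for a binary taxonomy (my own illustration; the lecture gives no code, and the handling of non-binary taxonomies is not specified here):

```python
class Node:
    def __init__(self, name, left=None, right=None):
        self.name, self.left, self.right = name, left, right

def assign_rsp(root):
    """Assign (hpos, vpos) to every node: hpos is the 1-based position in the
    in-order traversal, vpos is the depth (root = 1).  Returns {name: (hpos, vpos)}."""
    rsp = {}
    counter = [0]

    def visit(node, depth):
        if node is None:
            return
        visit(node.left, depth + 1)
        counter[0] += 1
        rsp[node.name] = (counter[0], depth)
        visit(node.right, depth + 1)

    visit(root, 1)
    return rsp

# Tiny balanced example: the root ends up in the middle of the hpos range.
tree = Node("root",
            Node("left", Node("ll"), Node("lr")),
            Node("right", Node("rl"), Node("rr")))
print(assign_rsp(tree))  # e.g. root -> (4, 1), left -> (2, 2), ll -> (1, 3), ...
```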

Representing Rules with RSPs

Replace the objects in a rule with their RSPs: {Jacket, Shirt} ⇒ {Shoes} can be represented as {(1, 4), (6, 3)} ⇒ {(10, 3)}.

Then calculate the mean RSPs of the antecedent and of the consequent: the above rule becomes (3.5, 3.5) ⇒ (10, 3).

(In the tagged example tree, Jacket has RSP (1, 4), Shirt has RSP (6, 3), and Shoes has RSP (10, 3).)

Representing Rules with Line Segments

A rule can then be represented by a directed line segment in a two-dimensional (hpos, vpos) space. For example, the rule (3.5, 3.5) ⇒ (10, 3) is represented as a directed segment from the point (3.5, 3.5) to the point (10, 3).

(The slide plots the tagged tree nodes and this segment, with hpos on the horizontal axis, 0-16, and vpos on the vertical axis, 0-5.)

Grouping Rules

The problem of grouping rules then becomes the problem of grouping directed line segments.

Objective of clustering: group line segments that are close to each other and have similar length and orientation. A standard clustering algorithm can be used with the distance function defined as

Distance(s1, s2) = 1 - cos(s1, s2) + Ndist(c1, c2) + Ndiff(length(s1), length(s2))

http://www.computer.org/portal/web/csdl/doi/10.1109/icdm.2002.1184048
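The slide does not spell out the normalizations Ndist and Ndiff or the meaning of c1 and c2, so the following Python sketch makes simple assumptions (c1, c2 are segment midpoints, and both normalizations divide by a fixed scale such as the hpos range); it is an illustration of the idea, not the paper's exact definition:

```python
import math

def distance(s1, s2, scale=16.0):
    """Sketch of Distance(s1, s2) = 1 - cos(s1, s2) + Ndist(c1, c2) + Ndiff(len1, len2)
    for two directed segments, each given as ((x1, y1), (x2, y2)).
    Assumes nonzero segment lengths and a fixed normalization `scale`."""
    (a1, b1), (a2, b2) = s1, s2
    v1 = (b1[0] - a1[0], b1[1] - a1[1])
    v2 = (b2[0] - a2[0], b2[1] - a2[1])
    len1, len2 = math.hypot(*v1), math.hypot(*v2)
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (len1 * len2)   # orientation similarity
    c1 = ((a1[0] + b1[0]) / 2, (a1[1] + b1[1]) / 2)          # midpoints (assumed)
    c2 = ((a2[0] + b2[0]) / 2, (a2[1] + b2[1]) / 2)
    ndist = math.hypot(c1[0] - c2[0], c1[1] - c2[1]) / scale  # normalized center distance
    ndiff = abs(len1 - len2) / scale                          # normalized length difference
    return 1 - cos + ndist + ndiff

# Example: the rule segment (3.5, 3.5) => (10, 3) compared with a similar one.
r1 = ((3.5, 3.5), (10.0, 3.0))
r2 = ((2.0, 3.0), (9.0, 4.0))
print(distance(r1, r2))
```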

Time and Location

Time: Monday, December 11th, from 14:00 to 16:00
Location: DB 1004 (or TEL 1004)

Coverage of the Final

Week 1 (Objectives and introduction)
Week 2 (CGI, forms, HTML and XML)
Week 3 (DTD, XML, XSL and servlets)
Week 4 (Tomcat, servlets and their life cycle)
Week 6 (Course project presentation week)
Week 7 (Servlets and JSP)
Week 8 (Recommendation systems and JDBC)
Week 9 (E-commerce and digital signatures)
Week 10 (Web crawlers, Web search engines and their algorithms; indexer and inverted file)
Week 11 (Information retrieval and its models, probabilistic information retrieval)
Week 12 (System evaluation, Web mining and association rule learning)

Types of Final Exam Questions

The exam is not a programming-based exam. It will last for 2 hours.

Types of questions include:
multiple choice
true or false
problem solving

You should memorize some basic concepts and measures, and understand all the material taught in class. You should focus on the lecture notes.

Data Preprocessing

Data Preprocessing Example - Session Identification

A session on the Web can be defined as a group of user activities that are related to each other so as to achieve a purpose. Session identification divides the object accesses of each user into individual sessions.

A sample of access records for session identification:

UID    Time                ObjectIDs
4570   4/29/2002-8:11:7    o14655738
4570   4/29/2002-8:11:10   o15199366, o2541625, o8272639
4570   4/29/2002-8:11:13   t12, t14, t18
4571   4/17/2002-7:37:14   o6234980
4571   4/17/2002-7:37:45   o6234980
4571   4/17/2002-7:37:52   o6234980, o8735468
4571   4/17/2002-7:37:56   o15291602
4571   4/17/2002-7:38:14   o6330745, o8759058
4571   4/17/2002-7:38:24   o13972781
4571   4/17/2002-7:38:29   o15322672

Data Preprocessing Example (Cont'd) - Session Identification Methods

Standard timeout method (with thresholds of 5, 10, 15, 20, 25 or 30 minutes): a session consists of a sequence of sets of objects requested by a single user such that no two consecutive requests are separated by an interval of more than the predefined threshold.

N-gram language modeling method: a statistical method, originally used in speech recognition, for predicting the probability of a word sequence.
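A minimal sketch of the timeout method (illustrative only; the data layout, function name, and the default threshold are assumptions):

```python
from datetime import datetime, timedelta

def timeout_sessions(requests, threshold_minutes=30):
    """Split one user's requests into sessions with the standard timeout method.
    `requests` is a time-sorted list of (timestamp, object_ids) pairs; a new
    session starts whenever the gap to the previous request exceeds the threshold."""
    threshold = timedelta(minutes=threshold_minutes)
    sessions, current = [], []
    previous_time = None
    for time, objects in requests:
        if previous_time is not None and time - previous_time > threshold:
            sessions.append(current)
            current = []
        current.append(objects)
        previous_time = time
    if current:
        sessions.append(current)
    return sessions

# Toy example for user 4571: the first two timestamps come from the sample table
# above, the third is made up to show a session break.
reqs = [
    (datetime(2002, 4, 17, 7, 37, 14), ["o6234980"]),
    (datetime(2002, 4, 17, 7, 37, 45), ["o6234980"]),
    (datetime(2002, 4, 17, 8, 20, 0),  ["o15291602"]),  # > 30 min gap: new session
]
print(timeout_sessions(reqs))
```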

Language Model in Web Session Identification

To predict the probability of an object sequence s = o_1 o_2 ... o_l:

Language model (chain rule):
P(s) = P(o_1 o_2 \dots o_l) = P(o_1) P(o_2 \mid o_1) P(o_3 \mid o_1 o_2) \cdots P(o_l \mid o_1 \dots o_{l-1}) = \prod_{i=1}^{l} P(o_i \mid o_1 \dots o_{i-1})

N-gram model (condition only on the previous n-1 objects):
P(s) \approx \prod_{i=1}^{l} P(o_i \mid o_{i-n+1} \dots o_{i-1})

Perplexity(s) = \left[ \prod_{i=1}^{l} P(o_i \mid o_{i-n+1} \dots o_{i-1}) \right]^{-1/l}

Entropy(s) = \log_2 Perplexity(s)

Session boundary detection: consider the sequence o_1 o_2 ... o_l o_{l+1}. If the difference between Entropy(o_1 o_2 ... o_l) and Entropy(o_1 o_2 ... o_l o_{l+1}) is significant, there is a session boundary between o_l and o_{l+1}.
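The following toy Python sketch shows the entropy-based boundary test for a bigram model. It is an illustration only: the conditional probabilities, the start symbol, the unseen-pair floor, and the decision threshold `delta` are all assumptions, and the actual method also relies on smoothing (next slide):

```python
import math

def entropy(sequence, bigram_prob):
    """Entropy(s) = log2 Perplexity(s) = -(1/l) * sum_i log2 P(o_i | o_{i-1}).
    `bigram_prob[(prev, cur)]` is an assumed, precomputed conditional probability;
    the first object is conditioned on a start symbol '<s>'."""
    log_prob = 0.0
    prev = "<s>"
    for obj in sequence:
        log_prob += math.log2(bigram_prob.get((prev, obj), 1e-6))  # tiny floor for unseen pairs
        prev = obj
    return -log_prob / len(sequence)

def boundary_after(sequence, next_obj, bigram_prob, delta=1.0):
    """Declare a session boundary between the last object of `sequence` and
    `next_obj` if appending `next_obj` raises the entropy by more than `delta`."""
    return entropy(sequence + [next_obj], bigram_prob) - entropy(sequence, bigram_prob) > delta

# Toy probabilities (assumed): page B usually follows A, page Z rarely follows C.
probs = {("<s>", "A"): 0.5, ("A", "B"): 0.8, ("B", "C"): 0.7, ("C", "Z"): 0.001}
print(boundary_after(["A", "B", "C"], "Z", probs))  # True: the entropy jumps
```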

Smoothing Technique: Good-Turing Estimate

The maximum likelihood estimate of an n-gram probability from a corpus is

P(o_i \mid o_{i-n+1} \dots o_{i-1}) = \frac{\#(o_{i-n+1} \dots o_i)}{\#(o_{i-n+1} \dots o_{i-1})}

For any n-gram that occurs r times, the Good-Turing estimate pretends that it occurs r* times, where

r^* = (r + 1) \frac{N_{r+1}}{N_r}

and N_r is the number of n-grams that occur exactly r times in the training data.
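A short sketch of the Good-Turing count adjustment (my own illustration; real implementations additionally smooth the N_r values, which this sketch sidesteps by keeping the raw count whenever N_{r+1} = 0):

```python
from collections import Counter

def good_turing_counts(ngram_counts):
    """Good-Turing adjusted counts: r* = (r + 1) * N_{r+1} / N_r, where N_r is the
    number of distinct n-grams seen exactly r times in the training data."""
    freq_of_freq = Counter(ngram_counts.values())          # N_r
    adjusted = {}
    for ngram, r in ngram_counts.items():
        n_r, n_r1 = freq_of_freq[r], freq_of_freq.get(r + 1, 0)
        adjusted[ngram] = (r + 1) * n_r1 / n_r if n_r1 else r   # fall back to r if N_{r+1} = 0
    return adjusted

# Toy bigram counts over page visits (hypothetical data).
counts = {("A", "B"): 3, ("B", "C"): 1, ("C", "D"): 1, ("A", "C"): 2}
print(good_turing_counts(counts))
```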

Entropy Evolution

Perplexity(s) = \left[ \prod_{i=1}^{l} P(o_i \mid o_{i-n+1} \dots o_{i-1}) \right]^{-1/l}
Entropy(s) = \log_2 Perplexity(s)

(Figure: the entropy plotted over the sequence of log entries, with the beginning and the end of a session marked on the curve.)

Empirical Evaluation

Objectives:
Evaluate the effectiveness of the language-modeling-based session detection method.
Investigate the optimal order of the n-gram language models and the influence of different smoothing techniques.

Evaluation:
Ask domain experts to evaluate the discovered association rules according to the unexpectedness and actionability of the rules.
Analyze the entropy evolution curves of the different smoothing methods.

Comparisons of Language Modeling and Timeout Methods for Association Rule Learning

(Two plots of average precision (%) for the top 10, top 20, and top 30 rules: one as a function of the timeout threshold (5 to 40 minutes) for the timeout method, and one comparing the timeout method with language models using different smoothing techniques (ABS, GT, LIN, WB).)