Handling a Concept Hierarchy


Data Mining: Association Rules

Example concept hierarchy:

Food
  Bread: Wheat, White
  Milk: Skim (Foremost, Kemps), 2%
Electronics
  Computers: Desktop, Laptop, Accessory (Printer, Scanner)
  Home: TV, DVD

Why should we incorporate a concept hierarchy?
- Rules at lower levels may not have enough support to appear in any frequent itemsets.
- Rules at lower levels of the hierarchy are overly specific: e.g., skim milk → white bread, 2% milk → wheat bread, skim milk → wheat bread, etc. are all indicative of an association between milk and bread.

How do support and confidence vary as we traverse the concept hierarchy?
- If X is the parent item for both X1 and X2, then σ(X) ≥ σ(X1) + σ(X2).
- If σ(X1 ∪ Y1) ≥ minsup, and X is the parent of X1 and Y is the parent of Y1, then σ(X ∪ Y1) ≥ minsup, σ(X1 ∪ Y) ≥ minsup, and σ(X ∪ Y) ≥ minsup.
- If conf(X1 → Y1) ≥ minconf, then conf(X1 → Y) ≥ minconf.

Approach 1: Extend the current association rule formulation by augmenting each transaction with higher-level items.
- Original transaction: {skim milk, wheat bread}
- Augmented transaction: {skim milk, wheat bread, milk, bread, food}
Issues:
- Items that reside at higher levels have much higher support counts; if the support threshold is low, there are too many frequent patterns involving items from the higher levels.
- Increased dimensionality of the data.

Approach 2: Generate frequent patterns at the highest level first; then generate frequent patterns at the next highest level, and so on.
Issues:
- I/O requirements increase dramatically because we need to perform more passes over the data.
- May miss some potentially interesting cross-level association patterns.
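The transaction augmentation of Approach 1 can be sketched in a few lines of Python. This is a minimal sketch; the hierarchy dictionary and item names are illustrative, not from a real dataset:

```python
# Sketch of Approach 1: augment each transaction with all ancestors
# of its items under the concept hierarchy.
# The child -> parent mapping below is a made-up example.
PARENT = {
    "skim milk": "milk", "2% milk": "milk",
    "wheat bread": "bread", "white bread": "bread",
    "milk": "food", "bread": "food",
}

def augment(transaction):
    """Return the transaction extended with every ancestor item."""
    items = set(transaction)
    for item in transaction:
        while item in PARENT:          # walk up to the root of the hierarchy
            item = PARENT[item]
            items.add(item)
    return items

print(sorted(augment({"skim milk", "wheat bread"})))
# ['bread', 'food', 'milk', 'skim milk', 'wheat bread']
```

Note how the issue mentioned above shows up here: every augmented transaction contains "food", so top-level items accumulate very high support counts.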

Sequence Data

Sequence database:

Object  Timestamp  Events
A       10         2, 3, 5
A       20         6, 1
A       23         1
B       11         4, 5, 6
B       17         2
B       21         7, 8, 1, 2
B       28         1, 6
C       14         1, 8, 7

[Timeline figure: the events above plotted on a time axis from 10 to 35 for objects A, B, and C.]

Examples of Sequence Data
- Customer data: sequence = purchase history of a given customer; element (transaction) = a set of items bought by a customer at time t; events (items) = books, dairy products, CDs, etc.
- Web data: sequence = browsing activity of a particular Web visitor; element = a collection of files viewed by a Web visitor after a single mouse click; events = home page, index page, contact info, etc.
- Event data: sequence = history of events generated by a given sensor; element = events triggered by a sensor at time t; events = types of alarms generated by sensors.
- Genome sequences: sequence = DNA sequence of a particular species; element = an element of the DNA sequence; events = bases A, T, G, C.

Formal Definition of a Sequence
- A sequence is an ordered list of elements (transactions): s = <e1 e2 e3 ...>
- Each element contains a collection of events (items): ei = {i1, i2, ..., ik}
- Each element is attributed to a specific time or location.
- The length of a sequence, |s|, is given by the number of elements of the sequence.
- A k-sequence is a sequence that contains k events (items).

Examples of Sequences
- Web sequence: <{Homepage} {Electronics} {Digital Cameras} {Canon Digital Camera} {Shopping Cart} {Order Confirmation} {Return to Shopping}>
- Sequence of books checked out at a library (or films rented at a video store): <{Fellowship of the Ring} {The Two Towers, Return of the King}>

Formal Definition of a Subsequence
A sequence <a1 a2 ... an> is contained in another sequence <b1 b2 ... bm> (m ≥ n) if there exist integers i1 < i2 < ... < in such that a1 ⊆ bi1, a2 ⊆ bi2, ..., an ⊆ bin.

Data sequence           Subsequence    Contain?
<{2,4} {3,5,6} {8}>     <{2} {3,5}>    Yes
<{1,2} {3,4}>           <{1} {2}>      No
<{2,4} {2,4} {3,5}>     <{2} {4}>      Yes

The support of a subsequence w is defined as the fraction of data sequences that contain w. A sequential pattern is a frequent subsequence (i.e., a subsequence whose support is ≥ minsup).
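The containment test in this definition translates directly into code. A minimal sketch in Python, representing each element as a set; for plain containment (no timing constraints), greedily matching each element of the subsequence at the earliest feasible position is sufficient:

```python
def contains(d, s):
    """True if sequence s (a list of sets) is a subsequence of
    data sequence d (also a list of sets)."""
    i = 0                               # next element of d to try
    for elem in s:
        while i < len(d) and not elem <= d[i]:
            i += 1                      # scan forward for a superset
        if i == len(d):
            return False
        i += 1                          # each element of d is used at most once
    return True

print(contains([{2, 4}, {3, 5, 6}, {8}], [{2}, {3, 5}]))  # True
print(contains([{1, 2}, {3, 4}], [{1}, {2}]))             # False
```

The support of w is then the fraction of database sequences d for which contains(d, w) is true.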

Sequential Pattern Mining: Definition
Given:
- a database of sequences
- a user-specified minimum support threshold, minsup
Task: find all subsequences with support ≥ minsup.

Sequential Pattern Mining: Challenge
Given a sequence: <{a b} {c d e} {f} {g h i}>
Examples of subsequences: <{a} {c d} {f} {g}>, <{c d e}>, <{b} {g}>, etc.
How many k-subsequences can be extracted from a given n-sequence?
<{a b} {c d e} {f} {g h i}> has n = 9 events. For k = 4:

  a b c d e f g h i
  Y _ _ Y Y _ _ _ Y   →  <{a} {d e} {i}>

Answer: C(n, k) = C(9, 4) = 126.

Sequential Pattern Mining: Example
Sequence database:

Object  Timestamp  Events
A       1          1, 2, 4
A       2          2, 3
A       3          5
B       1          1, 2
B       2          2, 3, 4
C       1          1, 2
C       2          2, 3, 4
C       3          2, 4, 5
D       1          2
D       2          3, 4
D       3          4, 5
E       1          1, 3
E       2          2, 4, 5

Minsup = 50%. Examples of frequent subsequences:
<{1,2}>        s = 60%
<{2,3}>        s = 60%
<{2,4}>        s = 80%
<{3} {5}>      s = 80%
<{1} {2}>      s = 80%
<{2} {2}>      s = 60%
<{1} {2,3}>    s = 60%
<{2} {2,3}>    s = 60%
<{1,2} {2,3}>  s = 60%

Extracting Sequential Patterns: Brute-force
Given n events i1, i2, i3, ..., in:
- Candidate 1-subsequences: <{i1}>, <{i2}>, <{i3}>, ..., <{in}>
- Candidate 2-subsequences: <{i1, i2}>, <{i1, i3}>, ..., <{i1} {i1}>, <{i1} {i2}>, ..., <{in-1} {in}>, ...
- Candidate 3-subsequences: <{i1, i2, i3}>, <{i1, i2, i4}>, ..., <{i1, i2} {i1}>, <{i1, i2} {i2}>, ..., <{i1} {i1, i2}>, <{i1} {i1, i3}>, ..., <{i1} {i1} {i1}>, <{i1} {i1} {i2}>, ...

Candidate Generation Algorithm
Base case (k = 2): merging two frequent 1-sequences <{i1}> and <{i2}> will produce two candidate 2-sequences: <{i1} {i2}> and <{i1 i2}>.
General case (k > 2): two frequent (k-1)-sequences w1 and w2 are merged together to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2. The resulting candidate is the sequence w1 extended with the last event of w2:
- If the last two events in w2 belong to the same element, then the last event in w2 becomes part of the last element in w1.
- Otherwise, the last event in w2 becomes a separate element appended to the end of w1.

Candidate Generation Examples
- Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> will produce the candidate sequence <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element.
- Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> will produce the candidate sequence <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element.
- We do not have to merge w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>, because if the latter is a viable candidate, then it can be obtained by merging w1 with <{2 6} {4 5}>.
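The merge rule for the general case (k > 2) can be sketched as follows. This is a minimal Python version under assumed representations (a sequence is a list of tuples, events within an element kept sorted; the helper names are made up for illustration):

```python
def drop_first_event(seq):
    """Remove the first event of a sequence (list of tuples)."""
    head = seq[0][1:]
    return ([head] if head else []) + list(seq[1:])

def drop_last_event(seq):
    """Remove the last event of a sequence (list of tuples)."""
    tail = seq[-1][:-1]
    return list(seq[:-1]) + ([tail] if tail else [])

def merge(w1, w2):
    """Return the candidate k-sequence from merging w1 and w2,
    or None if the merge condition does not hold."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None
    last = w2[-1][-1]
    if len(w2[-1]) > 1:                  # last two events of w2 share an element
        return w1[:-1] + [w1[-1] + (last,)]
    return w1 + [(last,)]                # last event starts a new element

w1 = [(1,), (2, 3), (4,)]
print(merge(w1, [(2, 3), (4, 5)]))      # [(1,), (2, 3), (4, 5)]
print(merge(w1, [(2, 3), (4,), (5,)]))  # [(1,), (2, 3), (4,), (5,)]
```

The two printed results reproduce the first two merge examples above.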

Generalized Sequential Pattern (GSP)
Step 1: Make the first pass over the sequence database D to yield all the 1-element frequent sequences.
Step 2: Repeat until no new frequent sequences are found:
- Candidate generation: merge pairs of frequent subsequences found in the (k-1)th pass to generate candidate sequences that contain k items.
- Candidate pruning: prune candidate k-sequences that contain infrequent (k-1)-subsequences.
- Support counting: make a new pass over the sequence database D to find the support for these candidate sequences.
- Candidate elimination: eliminate candidate k-sequences whose actual support is less than minsup.

GSP Example
Frequent 3-sequences:
<{1} {2} {3}>
<{1} {2 5}>
<{1} {5} {3}>
<{2} {3} {4}>
<{2 5} {3}>
<{3} {4} {5}>
<{5} {3 4}>

Candidate generation:
<{1} {2} {3} {4}>
<{1} {2 5} {3}>
<{1} {5} {3 4}>
<{2} {3} {4} {5}>
<{2 5} {3 4}>

Candidate pruning leaves only <{1} {2 5} {3}>; every other candidate contains an infrequent 3-subsequence.

Timing Constraints (I)
Pattern: {A B} {C} {D E}
- x_g: max-gap — the gap between consecutive elements must be ≤ x_g
- n_g: min-gap — the gap must be > n_g
- m_s: maximum span — the time between the first and last element must be ≤ m_s

Example: x_g = 2, n_g = 0, m_s = 4

Data sequence                        Subsequence      Contain?
<{2,4} {3,5,6} {4,7} {4,5} {8}>      <{6} {5}>        Yes
<{1} {2} {3} {4} {5}>                <{1} {4}>        No
<{1} {2,3} {3,4} {4,5}>              <{2} {3} {5}>    Yes
<{1,2} {3} {2,3} {3,4} {2,4} {4,5}>  <{1,2} {5}>      No

Mining Sequential Patterns with Timing Constraints
- Approach 1: mine sequential patterns without timing constraints, then postprocess the discovered patterns.
- Approach 2: modify GSP to directly prune candidates that violate the timing constraints. Question: does the Apriori principle still hold?

Apriori Principle for Sequence Data
Using the sequence database from the example above, suppose:
x_g = 1 (max-gap), n_g = 0 (min-gap), m_s = 5 (maximum span), minsup = 60%
Then <{2} {5}> has support = 40%, but <{2} {3} {5}> has support = 60%!
The problem exists because of the max-gap constraint; no such problem arises if the max-gap is infinite.

Contiguous Subsequences
s is a contiguous subsequence of w = <e1 e2 ... ek> if any of the following conditions hold:
1. s is obtained from w by deleting an item from either e1 or ek
2. s is obtained from w by deleting an item from any element ei that contains 2 or more items
3. s is a contiguous subsequence of s′ and s′ is a contiguous subsequence of w (recursive definition)

Examples: s = <{1} {2}>
- is a contiguous subsequence of <{1} {2 3}>, <{1 2} {2} {3}>, and <{3 4} {1 2} {2 3} {4}>
- is not a contiguous subsequence of <{1} {3} {2}> and <{2} {1} {3} {2}>
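Containment under the max-gap, min-gap, and maximum-span constraints can be sketched with a small backtracking search. Greedy left-to-right matching is no longer sufficient once a max-gap is imposed, which is why the sketch backtracks. This minimal Python version (assumed representation: element position used as the timestamp; real data would carry explicit timestamps) is not from the slides:

```python
def contains_timed(d, s, max_gap, min_gap, max_span):
    """True if subsequence s occurs in data sequence d while
    respecting max-gap, min-gap, and maximum-span constraints."""
    def search(si, prev_t, first_t):
        if si == len(s):
            return True
        for t in range(prev_t + 1, len(d)):
            gap = t - prev_t
            if prev_t >= 0 and (gap <= min_gap or gap > max_gap):
                continue                 # violates min-gap or max-gap
            start = t if first_t < 0 else first_t
            if t - start > max_span:
                break                    # span only grows from here on
            if s[si] <= d[t] and search(si + 1, t, start):
                return True
        return False
    return search(0, -1, -1)

d = [{2, 4}, {3, 5, 6}, {4, 7}, {4, 5}, {8}]
print(contains_timed(d, [{6}, {5}], max_gap=2, min_gap=0, max_span=4))   # True
print(contains_timed([{1}, {2}, {3}, {4}, {5}], [{1}, {4}], 2, 0, 4))    # False
```

The two calls reproduce the first two rows of the Timing Constraints (I) table.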

Modified Candidate Pruning Step
- Without the max-gap constraint: a candidate k-sequence is pruned if at least one of its (k-1)-subsequences is infrequent.
- With the max-gap constraint: a candidate k-sequence is pruned if at least one of its contiguous (k-1)-subsequences is infrequent.

Timing Constraints (II)
Pattern: {A B} {C} {D E}
- x_g: max-gap, n_g: min-gap, m_s: maximum span (as before)
- ws: window size — the events forming one element may be drawn from timestamps at most ws apart

Example: x_g = 2, n_g = 0, ws = 1, m_s = 5

Data sequence                    Subsequence      Contain?
<{2,4} {3,5,6} {4,7} {4,6} {8}>  <{3} {5}>        No
<{1,2,3,4} {5} {6}>              <{1,4} {5}>      Yes
<{1,2} {2,3} {3,4} {4,5}>        <{1,2} {3,4}>    Yes

Modified Support Counting Step
Given a candidate pattern <{a, c}>, any data sequence that contains
- <{a c}>,
- <{a} {c}> where time({c}) − time({a}) ≤ ws, or
- <{c} {a}> where time({a}) − time({c}) ≤ ws
will contribute to the support count of the pattern.

Other Formulation
In some domains, we may have only one very long time series, for example:
- monitoring network traffic events for attacks
- monitoring telecommunication alarm signals
The goal is to find frequent sequences of events in the time series. This problem is also known as frequent episode mining.
[Figure: a single event stream of events E1 ... E5 on a timeline, with occurrences of the frequent episode <E1> <E3> highlighted.]
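The windowed matching rule in the modified support-counting step can be illustrated with a tiny helper. This is a made-up sketch, not part of the GSP algorithm itself; a real implementation would embed this check inside the support-counting pass:

```python
def matches_window(times_a, times_c, ws):
    """True if some occurrence of event a and some occurrence of
    event c lie within ws time units of each other (either order),
    so that they can be merged into one element {a, c}."""
    return any(abs(ta - tc) <= ws for ta in times_a for tc in times_c)

# a occurs at times 1 and 4, c at time 5; with ws = 1 the
# occurrences at times 4 and 5 can form a single element {a, c}.
print(matches_window([1, 4], [5], ws=1))  # True
print(matches_window([1], [5], ws=1))     # False
```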