Data Mining Association Rules: Advanced Concepts and Algorithms. Lecture Notes for Chapter 7. Introduction to Data Mining


by Tan, Steinbach, Kumar

Continuous and Categorical Attributes

How to apply the association analysis formulation to non-asymmetric binary variables?

Session Id | Country   | Session Length (sec) | Number of Web Pages viewed | Gender | Browser Type | Buy
1          | USA       | 982                  | 8                          | Male   | IE           | No
2          | China     | 811                  | 10                         | Female | Netscape     | No
3          | USA       | 2125                 | 45                         | Female | Mozilla      | Yes
4          | Germany   | 596                  | 4                          | Male   | IE           | Yes
5          | Australia | 123                  | 9                          | Male   | Mozilla      | No

Example of an association rule:
{Number of Pages ∈ [5,10), (Browser = Mozilla)} → {Buy = No}

Handling Categorical Attributes

- Transform a categorical attribute into asymmetric binary variables
- Introduce a new item for each distinct attribute-value pair
  - Example: replace the Browser Type attribute with
    - Browser Type = Internet Explorer
    - Browser Type = Mozilla
    - Browser Type = Netscape
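This transformation is easy to sketch in code. The snippet below is a minimal illustration, not code from the slides; the record fields and the helper name categorical_to_items are invented for the example.

```python
# Minimal sketch (not from the slides): turn each (attribute, value) pair
# into its own asymmetric binary item so standard itemset mining applies.
def categorical_to_items(records, attributes):
    """Convert categorical records into transactions of 'Attr=value' items."""
    return [{f"{a}={rec[a]}" for a in attributes} for rec in records]

sessions = [  # illustrative records, not the slide's full table
    {"Browser Type": "IE", "Gender": "Male", "Buy": "No"},
    {"Browser Type": "Mozilla", "Gender": "Female", "Buy": "Yes"},
]
print(categorical_to_items(sessions, ["Browser Type", "Gender", "Buy"]))
# [{'Browser Type=IE', 'Gender=Male', 'Buy=No'}, {'Browser Type=Mozilla', ...}]
```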

Handling Categorical Attributes

Potential issues:
- What if the attribute has many possible values?
  - Example: the country attribute has more than 200 possible values
  - Many of the attribute values may have very low support
  - Potential solution: aggregate the low-support attribute values
- What if the distribution of attribute values is highly skewed?
  - Example: 95% of the visitors have Buy = No
  - Most of the items will be associated with the (Buy = No) item
  - Potential solution: drop the highly frequent items

Handling Continuous Attributes

Different kinds of rules:
- Age ∈ [21,35) ∧ Salary ∈ [70k,120k) → Buy
- Salary ∈ [70k,120k) ∧ Buy → Age: µ=28, σ=4

Different methods:
- Discretization-based
- Statistics-based
- Non-discretization based (min-Apriori)

Handling Continuous Attributes

Use discretization:
- Unsupervised: equal-width binning, equal-depth binning, clustering
- Supervised (class counts per attribute value):

  Class     | v1  | v2  | v3 | v4 | v5 | v6  | v7  | v8  | v9
  Anomalous | 0   | 0   | 20 | 10 | 20 | 0   | 0   | 0   | 0
  Normal    | 150 | 100 | 0  | 0  | 0  | 100 | 100 | 150 | 100

  (bin1 = v1-v3, bin2 = v4-v6, bin3 = v7-v9)
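The two unsupervised binning schemes named above can be sketched as follows; this is an illustrative implementation with invented sample data, not code from the lecture notes.

```python
# Illustrative sketch of equal-width vs. equal-depth binning, using NumPy.
import numpy as np

def equal_width_bins(values, k):
    """Split the value range into k intervals of equal width."""
    lo, hi = min(values), max(values)
    edges = [lo + i * (hi - lo) / k for i in range(1, k)]
    return np.digitize(values, edges)      # bin index for each value

def equal_depth_bins(values, k):
    """Split the values into k bins holding roughly equal numbers of points."""
    edges = np.quantile(values, [i / k for i in range(1, k)])
    return np.digitize(values, edges)

ages = [21, 22, 25, 28, 33, 41, 42, 60]    # invented sample data
print(equal_width_bins(ages, 3))           # width-based: the tail bin is sparse
print(equal_depth_bins(ages, 3))           # depth-based: roughly equal counts
```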

Discretization Issues

The size of the discretized intervals affects support & confidence:
- {Refund = No, (Income = $51,250)} → {Cheat = No}
- {Refund = No, (60K ≤ Income ≤ 80K)} → {Cheat = No}
- {Refund = No, (0K ≤ Income ≤ 1B)} → {Cheat = No}

- If the intervals are too small, rules may not have enough support
- If the intervals are too large, rules may not have enough confidence
- Potential solution: use all possible intervals

Discretization Issues

Execution time:
- If an interval contains n values, there are on average O(n²) possible ranges

Too many rules:
- {Refund = No, (Income = $51,250)} → {Cheat = No}
- {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
- {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}

Approach by Srikant & Agrawal

- Preprocess the data
  - Discretize attributes using equi-depth partitioning
  - Use the partial completeness measure to determine the number of partitions
  - Merge adjacent intervals as long as support is less than max-support
- Apply existing association rule mining algorithms
- Determine interesting rules in the output

Approach by Srikant & Agrawal

- Discretization will lose information: X is approximated by X′
- Use the partial completeness measure to determine how much information is lost
  - C: frequent itemsets obtained by considering all ranges of attribute values
  - P: frequent itemsets obtained by considering all ranges over the partitions
  - P is K-complete w.r.t. C if P ⊆ C, and for every X ∈ C there exists X′ ∈ P such that:
    1. X′ is a generalization of X and support(X′) ≤ K × support(X), where K ≥ 1
    2. for every Y ⊆ X there exists Y′ ⊆ X′ such that support(Y′) ≤ K × support(Y)
- Given K (the partial completeness level), the number of intervals (N) can be determined

Interestingness Measure

- {Refund = No, (Income = $51,250)} → {Cheat = No}
- {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
- {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}

Given an itemset Z = {z1, z2, ..., zk} and its generalization Z′ = {z1′, z2′, ..., zk′}:
- P(Z): support of Z
- E_Z′(Z): expected support of Z based on Z′

  E_Z′(Z) = [P(z1)/P(z1′)] × [P(z2)/P(z2′)] × ... × [P(zk)/P(zk′)] × P(Z′)

Z is R-interesting w.r.t. Z′ if P(Z) ≥ R × E_Z′(Z).

Interestingness Measure

For a rule S: X → Y and its generalization S′: X′ → Y′:
- P(Y|X): confidence of X → Y
- P(Y′|X′): confidence of X′ → Y′
- E_S′(Y|X): expected confidence of S based on S′

  E(Y|X) = [P(y1)/P(y1′)] × [P(y2)/P(y2′)] × ... × [P(yk)/P(yk′)] × P(Y′|X′)

Rule S is R-interesting w.r.t. its ancestor rule S′ if
- support: P(S) ≥ R × E_S′(S), or
- confidence: P(Y|X) ≥ R × E_S′(Y|X)

Statistics-based Methods

Example: {Browser = Mozilla, Buy = Yes} → Age: µ=23

The rule consequent consists of a continuous variable, characterized by its statistics (mean, median, standard deviation, etc.).

Approach:
- Withhold the target variable from the rest of the data
- Apply existing frequent itemset generation on the rest of the data
- For each frequent itemset, compute the descriptive statistics of the corresponding target variable
  - The frequent itemset becomes a rule by introducing the target variable as the rule consequent
- Apply a statistical test to determine the interestingness of the rule

Statistics-based Methods

How to determine whether an association rule is interesting?
- Compare the statistics for the segment of the population covered by the rule (A → B: µ) versus the segment not covered by it (µ′)
- Statistical hypothesis testing:
  - Null hypothesis H0: µ′ = µ + Δ
  - Alternative hypothesis H1: µ′ > µ + Δ

    Z = (µ′ − µ − Δ) / sqrt(s1²/n1 + s2²/n2)

  - Z has zero mean and variance 1 under the null hypothesis

Statistics-based Methods

Example: r: {Browser = Mozilla, Buy = Yes} → Age: µ=23
- The rule is interesting if the difference between µ and µ′ is greater than 5 years (i.e., Δ = 5)
- For r, suppose n1 = 50, s1 = 3.5
- For r′ (the complement): n2 = 250, s2 = 6.5, µ′ = 30

  Z = (30 − 23 − 5) / sqrt(3.5²/50 + 6.5²/250) = 3.11

- For a 1-sided test at the 95% confidence level, the critical Z-value for rejecting the null hypothesis is 1.64. Since Z is greater than 1.64, r is an interesting rule.
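A quick sketch, not from the slides, that reproduces the arithmetic of this example; the function name and argument order are my own.

```python
# Sketch reproducing the example's Z statistic.
from math import sqrt

def z_stat(mu_comp, mu_rule, delta, s1, n1, s2, n2):
    """Z = (mu' - mu - delta) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (mu_comp - mu_rule - delta) / sqrt(s1**2 / n1 + s2**2 / n2)

# mu' = 30 (complement), mu = 23 (covered by r), delta = 5,
# n1 = 50, s1 = 3.5 (covered); n2 = 250, s2 = 6.5 (complement).
z = z_stat(30, 23, 5, 3.5, 50, 6.5, 250)
print(round(z, 2))   # 3.11 > 1.64, so r is interesting at the 95% level
```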

Min-Apriori (Han et al.)

Document-term matrix:

TID | W1 | W2 | W3 | W4 | W5
D1  | 2  | 2  | 0  | 0  | 1
D2  | 0  | 0  | 1  | 2  | 2
D3  | 2  | 3  | 0  | 0  | 0
D4  | 0  | 0  | 1  | 0  | 1
D5  | 1  | 1  | 1  | 0  | 2

Example: W1 and W2 tend to appear together in the same document.

Min-Apriori

- The data contains only continuous attributes of the same type, e.g., frequency of words in a document
- Potential solution: convert the data into a 0/1 matrix and then apply existing algorithms
  - But this loses the word frequency information
- Discretization does not apply, as users want associations among words, not among ranges of words

Min-Apriori

How to determine the support of a word?
- If we simply sum up its frequency, the support count will be greater than the total number of documents!
- Normalize the word vectors, e.g., using the L1 norm, so that each word has a support equal to 1:

TID | W1   | W2   | W3   | W4   | W5
D1  | 0.40 | 0.33 | 0.00 | 0.00 | 0.17
D2  | 0.00 | 0.00 | 0.33 | 1.00 | 0.33
D3  | 0.40 | 0.50 | 0.00 | 0.00 | 0.00
D4  | 0.00 | 0.00 | 0.33 | 0.00 | 0.17
D5  | 0.20 | 0.17 | 0.33 | 0.00 | 0.33

Min-Apriori

New definition of support:

  sup(C) = Σ_{i ∈ T} min_{j ∈ C} D(i, j)

Example (using the normalized matrix above):
  Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17

Anti-monotone Property of Support

Example (using the normalized matrix above):
  Sup(W1)         = 0.4 + 0 + 0.4 + 0 + 0.2    = 1.0
  Sup(W1, W2)     = 0.33 + 0 + 0.4 + 0 + 0.17  = 0.9
  Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17       = 0.17
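The min-based support and its anti-monotone behavior can be checked in a few lines. This sketch hard-codes the normalized document-term matrix from the slides; the support function follows the sup(C) definition above.

```python
# Sketch checking the min-Apriori supports above; D holds the normalized
# document-term matrix, one list of row values (D1..D5) per word.
D = {
    "W1": [0.40, 0.00, 0.40, 0.00, 0.20],
    "W2": [0.33, 0.00, 0.50, 0.00, 0.17],
    "W3": [0.00, 0.33, 0.00, 0.33, 0.33],
    "W4": [0.00, 1.00, 0.00, 0.00, 0.00],
    "W5": [0.17, 0.33, 0.00, 0.17, 0.33],
}

def support(itemset):
    """sup(C) = sum over documents of the min weight over the words in C."""
    n_docs = len(next(iter(D.values())))
    return sum(min(D[w][i] for w in itemset) for i in range(n_docs))

print(round(support(["W1"]), 2))               # 1.0
print(round(support(["W1", "W2"]), 2))         # 0.9
print(round(support(["W1", "W2", "W3"]), 2))   # 0.17 -- support only shrinks
```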

Multi-level Association Rules

Concept hierarchy:
- Food
  - Bread: Wheat, White
  - Milk: Skim, 2% (brands: Foremost, Kemps)
- Electronics
  - Computers: Desktop, Laptop, Accessory (Printer, Scanner)
  - Home: TV, DVD

Multi-level Association Rules

Why should we incorporate a concept hierarchy?
- Rules at lower levels may not have enough support to appear in any frequent itemsets
- Rules at lower levels of the hierarchy are overly specific
  - e.g., skim milk → white bread, 2% milk → wheat bread, skim milk → wheat bread, etc. are all indicative of an association between milk and bread

Multi-level Association Rules

How do support and confidence vary as we traverse the concept hierarchy?
- If X is the parent item for both X1 and X2, then σ(X) ≥ σ(X1) + σ(X2) (when no transaction contains both X1 and X2)
- If σ(X1 ∪ Y1) ≥ minsup, and X is the parent of X1 and Y is the parent of Y1, then σ(X ∪ Y1) ≥ minsup, σ(X1 ∪ Y) ≥ minsup, and σ(X ∪ Y) ≥ minsup
- If conf(X1 ⇒ Y1) ≥ minconf, then conf(X1 ⇒ Y) ≥ minconf

Multi-level Association Rules

Approach 1:
- Extend the current association rule formulation by augmenting each transaction with higher-level items
  - Original transaction: {skim milk, wheat bread}
  - Augmented transaction: {skim milk, wheat bread, milk, bread, food}

Issues:
- Items that reside at higher levels have much higher support counts
  - If the support threshold is low, there will be too many frequent patterns involving items from the higher levels
- Increased dimensionality of the data
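The augmentation step is mechanical; here is a minimal sketch (not from the slides) where the hierarchy dictionary is illustrative and only covers the example items.

```python
# Sketch of Approach 1's transaction augmentation.
hierarchy = {"skim milk": "milk", "wheat bread": "bread",
             "milk": "food", "bread": "food"}     # illustrative child -> parent map

def augment(transaction):
    """Add every ancestor of each item to the transaction."""
    items = set(transaction)
    frontier = list(transaction)
    while frontier:
        parent = hierarchy.get(frontier.pop())
        if parent and parent not in items:
            items.add(parent)
            frontier.append(parent)
    return items

print(sorted(augment({"skim milk", "wheat bread"})))
# ['bread', 'food', 'milk', 'skim milk', 'wheat bread']
```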

Multi-level Association Rules

Approach 2:
- Generate frequent patterns at the highest level first
- Then generate frequent patterns at the next highest level, and so on

Issues:
- I/O requirements will increase dramatically because we need to perform more passes over the data
- May miss some potentially interesting cross-level association patterns

Sequence Data

Sequence database:

Object | Timestamp | Events
A      | 10        | 2, 3, 5
A      | 20        | 6, 1
A      | 23        | 1
B      | 11        | 4, 5, 6
B      | 17        | 2
B      | 21        | 7, 8, 1, 2
B      | 28        | 1, 6
C      | 14        | 1, 8, 7

Examples of Sequence Data

Sequence database | Sequence                                      | Element (transaction)                                        | Event (item)
Customer          | Purchase history of a given customer          | A set of items bought by a customer at time t                | Books, dairy products, CDs, etc.
Web data          | Browsing activity of a particular Web visitor | A collection of files viewed after a single mouse click      | Home page, index page, contact info, etc.
Event data        | History of events generated by a given sensor | Events triggered by a sensor at time t                       | Types of alarms generated by sensors
Genome sequences  | DNA sequence of a particular species          | An element of the DNA sequence                               | Bases A, T, G, C

Formal Definition of a Sequence

- A sequence is an ordered list of elements (transactions): s = ⟨e1 e2 e3 ...⟩
- Each element contains a collection of events (items): ei = {i1, i2, ..., ik}
- Each element is attributed to a specific time or location
- The length of a sequence, |s|, is given by the number of elements in the sequence
- A k-sequence is a sequence that contains k events (items)

Examples of Sequences

- Web sequence:
  ⟨ {Homepage} {Electronics} {Digital Cameras} {Canon Digital Camera} {Shopping Cart} {Order Confirmation} {Return to Shopping} ⟩
- Sequence of initiating events causing the nuclear accident at Three Mile Island
  (http://stellar-one.com/nuclear/staff_reports/summary_soe_the_initiating_event.htm):
  ⟨ {clogged resin} {outlet valve closure} {loss of feedwater} {condenser polisher outlet valve shut} {booster pumps trip} {main waterpump trips} {main turbine trips} {reactor pressure increases} ⟩
- Sequence of books checked out at a library:
  ⟨ {Fellowship of the Ring} {The Two Towers} {Return of the King} ⟩

Formal Definition of a Subsequence

A sequence ⟨a1 a2 ... an⟩ is contained in another sequence ⟨b1 b2 ... bm⟩ (m ≥ n) if there exist integers i1 < i2 < ... < in such that a1 ⊆ b_i1, a2 ⊆ b_i2, ..., an ⊆ b_in.

Data sequence         | Subsequence  | Contained?
⟨{2,4} {3,5,6} {8}⟩   | ⟨{2} {3,5}⟩  | Yes
⟨{1,2} {3,4}⟩         | ⟨{1} {2}⟩    | No
⟨{2,4} {2,4} {2,5}⟩   | ⟨{2} {4}⟩    | Yes

- The support of a subsequence w is defined as the fraction of data sequences that contain w
- A sequential pattern is a frequent subsequence (i.e., a subsequence whose support is ≥ minsup)
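A minimal sketch of this containment test (not from the slides): greedily match each element of the candidate subsequence against the earliest compatible element of the data sequence. Greedy earliest matching suffices because elements only need to map in order.

```python
# Containment test for sequences whose elements are sets of events.
def contains(data_seq, sub_seq):
    """True if sub_seq is contained in data_seq."""
    i = 0                                     # next position to try in data_seq
    for a in sub_seq:
        while i < len(data_seq) and not set(a) <= set(data_seq[i]):
            i += 1
        if i == len(data_seq):
            return False
        i += 1                                # move past the matched element
    return True

print(contains([{2, 4}, {3, 5, 6}, {8}], [{2}, {3, 5}]))   # True
print(contains([{1, 2}, {3, 4}], [{1}, {2}]))              # False
print(contains([{2, 4}, {2, 4}, {2, 5}], [{2}, {4}]))      # True
```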

Sequential Pattern Mining: Definition

Given:
- a database of sequences
- a user-specified minimum support threshold, minsup

Task:
- Find all subsequences with support ≥ minsup

Sequential Pattern Mining: Challenge

Given a sequence ⟨{a b} {c d e} {f} {g h i}⟩:
- Examples of subsequences: ⟨{a} {c d} {f} {g}⟩, ⟨{c d e}⟩, ⟨{b} {g}⟩, etc.

How many k-subsequences can be extracted from a given n-sequence?
- ⟨{a b} {c d e} {f} {g h i}⟩ has n = 9 events
- For k = 4, one choice keeps a, d, e, i, giving ⟨{a} {d e} {i}⟩
- Answer: C(n, k) = C(9, 4) = 126
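The count is a plain binomial coefficient, quickly verified:

```python
# Verifying the count above: choose which k of the n events to keep.
from math import comb

print(comb(9, 4))   # 126
```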

Sequential Pattern Mining: Example

Object | Timestamp | Events
A      | 1         | 1, 2, 4
A      | 2         | 2, 3
A      | 3         | 5
B      | 1         | 1, 2
B      | 2         | 2, 3, 4
C      | 1         | 1, 2
C      | 2         | 2, 3, 4
C      | 3         | 2, 4, 5
D      | 1         | 2
D      | 2         | 3, 4
D      | 3         | 4, 5
E      | 1         | 1, 3
E      | 2         | 2, 4, 5

Minsup = 50%

Examples of frequent subsequences:
⟨{1,2}⟩        s = 60%
⟨{2,3}⟩        s = 60%
⟨{2,4}⟩        s = 80%
⟨{3} {5}⟩      s = 80%
⟨{1} {2}⟩      s = 80%
⟨{2} {2}⟩      s = 60%
⟨{1} {2,3}⟩    s = 60%
⟨{2} {2,3}⟩    s = 60%
⟨{1,2} {2,3}⟩  s = 60%

Extracting Sequential Patterns

Given n events i1, i2, i3, ..., in:
- Candidate 1-subsequences: ⟨{i1}⟩, ⟨{i2}⟩, ⟨{i3}⟩, ..., ⟨{in}⟩
- Candidate 2-subsequences: ⟨{i1, i2}⟩, ⟨{i1, i3}⟩, ..., ⟨{i1} {i1}⟩, ⟨{i1} {i2}⟩, ..., ⟨{in-1} {in}⟩
- Candidate 3-subsequences: ⟨{i1, i2, i3}⟩, ⟨{i1, i2, i4}⟩, ..., ⟨{i1, i2} {i1}⟩, ⟨{i1, i2} {i2}⟩, ..., ⟨{i1} {i1, i2}⟩, ⟨{i1} {i1, i3}⟩, ..., ⟨{i1} {i1} {i1}⟩, ⟨{i1} {i1} {i2}⟩, ...

Generalized Sequential Pattern (GSP) Algorithm

Step 1: Make the first pass over the sequence database D to yield all the frequent 1-element sequences.

Step 2: Repeat until no new frequent sequences are found:
- Candidate generation: merge pairs of frequent subsequences found in the (k−1)-th pass to generate candidate sequences that contain k items
- Candidate pruning: prune candidate k-sequences that contain infrequent (k−1)-subsequences
- Support counting: make a new pass over the sequence database D to find the support of these candidate sequences
- Candidate elimination: eliminate candidate k-sequences whose actual support is less than minsup

Candidate Generation

Base case (k = 2):
- Merging two frequent 1-sequences ⟨{i1}⟩ and ⟨{i2}⟩ produces two candidate 2-sequences: ⟨{i1} {i2}⟩ and ⟨{i1 i2}⟩

General case (k > 2):
- A frequent (k−1)-sequence w1 is merged with another frequent (k−1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2
- The resulting candidate is the sequence w1 extended with the last event of w2:
  - If the last two events in w2 belong to the same element, the last event in w2 becomes part of the last element in w1
  - Otherwise, the last event in w2 becomes a separate element appended to the end of w1

Candidate Generation Examples

- Merging w1 = ⟨{1} {2 3} {4}⟩ and w2 = ⟨{2 3} {4 5}⟩ produces the candidate ⟨{1} {2 3} {4 5}⟩, because the last two events in w2 (4 and 5) belong to the same element
- Merging w1 = ⟨{1} {2 3} {4}⟩ and w2 = ⟨{2 3} {4} {5}⟩ produces the candidate ⟨{1} {2 3} {4} {5}⟩, because the last two events in w2 (4 and 5) do not belong to the same element
- We do not have to merge w1 = ⟨{1} {2 6} {4}⟩ and w2 = ⟨{1} {2} {4 5}⟩ to produce the candidate ⟨{1} {2 6} {4 5}⟩, because if the latter is a viable candidate, it can be obtained by merging w1 with ⟨{1} {2 6} {5}⟩
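The merge test and the two extension rules above can be sketched as follows; the representation (a sequence as a list of event tuples) and the helper names are my own choices, not from GSP's original description.

```python
# Sketch of the GSP merge condition and candidate construction.
def drop_first_event(seq):
    head = seq[0][1:]
    return ([head] if head else []) + seq[1:]

def drop_last_event(seq):
    tail = seq[-1][:-1]
    return seq[:-1] + ([tail] if tail else [])

def merge(w1, w2):
    """Return the candidate k-sequence, or None if w1 and w2 don't merge."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None
    last = w2[-1][-1]
    if len(w2[-1]) > 1:              # last two events of w2 share an element
        return w1[:-1] + [w1[-1] + (last,)]
    return w1 + [(last,)]            # last event of w2 starts a new element

print(merge([(1,), (2, 3), (4,)], [(2, 3), (4, 5)]))
# [(1,), (2, 3), (4, 5)]
print(merge([(1,), (2, 3), (4,)], [(2, 3), (4,), (5,)]))
# [(1,), (2, 3), (4,), (5,)]
```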

GSP Example (figure)

Timing Constraints (I)

Pattern: {A B} {C} {D E}, with
- n_g (min-gap): the gap between consecutive elements must be > n_g
- x_g (max-gap): the gap between consecutive elements must be ≤ x_g
- m_s (maximum span): the time between the first and last elements must be ≤ m_s

Example with x_g = 2, n_g = 0, m_s = 4:

Data sequence                          | Subsequence    | Contained?
⟨{2,4} {3,5,6} {4,7} {4,5} {8}⟩        | ⟨{6} {5}⟩      | Yes
⟨{1} {2} {3} {4} {5}⟩                  | ⟨{1} {4}⟩      | No
⟨{1} {2,3} {3,4} {4,5}⟩                | ⟨{2} {3} {5}⟩  | Yes
⟨{1,2} {3} {2,3} {3,4} {2,4} {4,5}⟩    | ⟨{1,2} {5}⟩    | No

Mining Sequential Patterns with Timing Constraints

Approach 1:
- Mine sequential patterns without timing constraints
- Post-process the discovered patterns

Approach 2:
- Modify GSP to directly prune candidates that violate the timing constraints
- Question: does the Apriori principle still hold?

Apriori Principle for Sequence Data

(Using the sequence database from the earlier example.)

Suppose:
- x_g = 1 (max-gap)
- n_g = 0 (min-gap)
- m_s = 5 (maximum span)
- minsup = 60%

Then ⟨{2} {5}⟩ has support = 40%, but ⟨{2} {3} {5}⟩ has support = 60%.

- The problem exists because of the max-gap constraint
- No such problem arises if the max-gap is infinite

Contiguous Subsequences

s is a contiguous subsequence of w = ⟨e1⟩⟨e2⟩...⟨ek⟩ if any of the following conditions hold:
1. s is obtained from w by deleting an item from either e1 or ek
2. s is obtained from w by deleting an item from any element ei that contains more than 2 items
3. s is a contiguous subsequence of s′, and s′ is a contiguous subsequence of w (recursive definition)

Examples: s = ⟨{1} {2}⟩
- is a contiguous subsequence of ⟨{1} {2 3}⟩, ⟨{1 2} {2} {3}⟩, and ⟨{3 4} {1 2} {2 3} {4}⟩
- is not a contiguous subsequence of ⟨{1} {3} {2}⟩ and ⟨{2} {1} {3} {2}⟩

Modified Candidate Pruning Step

- Without the max-gap constraint: a candidate k-sequence is pruned if at least one of its (k−1)-subsequences is infrequent
- With the max-gap constraint: a candidate k-sequence is pruned if at least one of its contiguous (k−1)-subsequences is infrequent
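A sketch, not from the slides, that enumerates the contiguous subsequences produced directly by rules 1 and 2 of the definition above (rule 3 only closes the relation transitively); these are the subsequences the modified pruning step checks.

```python
# Enumerate contiguous subsequences: drop one event from the first or last
# element, or from any element that has more than 2 events.
def contiguous_subsequences(seq):
    """seq: list of event tuples; yields each directly derivable subsequence."""
    for i, elem in enumerate(seq):
        if i not in (0, len(seq) - 1) and len(elem) <= 2:
            continue                       # middle element with <= 2 events
        for j in range(len(elem)):
            rest = elem[:j] + elem[j + 1:]
            yield seq[:i] + ([rest] if rest else []) + seq[i + 1:]

for s in contiguous_subsequences([(1,), (2, 3), (4,)]):
    print(s)
# [(2, 3), (4,)]  and  [(1,), (2, 3)]
```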

Timing Constraints (II)

Pattern: {A B} {C} {D E}, with
- x_g: max-gap
- n_g: min-gap
- ws: window size (events within a window of size ws may be treated as occurring together)
- m_s: maximum span

Example with x_g = 2, n_g = 0, ws = 1, m_s = 5:

Data sequence                      | Subsequence     | Contained?
⟨{2,4} {3,5,6} {4,7} {4,6} {8}⟩    | ⟨{3} {5}⟩       | No
⟨{1} {2} {3} {4} {5}⟩              | ⟨{1,2} {3}⟩     | Yes
⟨{1,2} {2,3} {3,4} {4,5}⟩          | ⟨{1,2} {3,4}⟩   | Yes

Modified Support Counting Step

Given a candidate pattern ⟨{a, c}⟩, any data sequence that contains
- ⟨{a c}⟩,
- ⟨{a} {c}⟩ (where time({c}) − time({a}) ≤ ws), or
- ⟨{c} {a}⟩ (where time({a}) − time({c}) ≤ ws)
will contribute to the support count of the candidate pattern.

Other Formulation

- In some domains, we may have only one very long time series
- Examples: monitoring network traffic events for attacks; monitoring telecommunication alarm signals
- The goal is to find frequent sequences of events in the time series
- This problem is also known as frequent episode mining

Event stream: E1 E3 E1 E1 E2 E4 E1 E2 E1 E2 E4 E2 E3 E4 E2 E3 E5 E2 E3 E5 E2 E3 E1
Pattern: ⟨E1⟩ ⟨E3⟩

General Support Counting Schemes

Assume:
- x_g = 2 (max-gap)
- n_g = 0 (min-gap)
- ws = 0 (window size)
- m_s = 2 (maximum span)

Frequent Subgraph Mining

- Extends association rule mining to finding frequent subgraphs
- Useful for Web mining, computational chemistry, bioinformatics, spatial data sets, etc.
- Example: a Web graph with pages Homepage, Research, Artificial Intelligence, Databases, and Data Mining as vertices

Graph Definitions

(Figure: (a) a labeled graph, (b) a subgraph, and (c) an induced subgraph, with vertex labels a, b, c and edge labels p, q, r, s, t.)

Representing Transactions as Graphs

Each transaction is a clique of items.

Transaction Id | Items
1              | {A,B,C,D}
2              | {A,B,E}
3              | {B,C}
4              | {A,B,D,E}
5              | {B,C,D}

Example: TID = 1 corresponds to the clique over the vertices A, B, C, D.
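A small sketch, not from the slides, of the transaction-to-clique mapping; transaction_to_clique is an invented helper.

```python
# Every pair of items in a transaction becomes an edge, producing a clique.
from itertools import combinations

def transaction_to_clique(items):
    """Return the edge set of the clique over the transaction's items."""
    return {frozenset(pair) for pair in combinations(sorted(items), 2)}

edges = transaction_to_clique({"A", "B", "C", "D"})   # TID = 1
print(sorted(tuple(sorted(e)) for e in edges))
# [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
```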

Representing Graphs as Transactions

Each labeled graph is mapped to a transaction whose items are (vertex label, vertex label, edge label) triples, such as (a,b,p), (a,b,q), (a,b,r), (b,c,p), (b,c,q), (b,c,r), (d,e,r); the example graphs G1, G2, G3 then become rows of a 0/1 table over these items (figure).

Challenges

- A node may contain duplicate labels
- Support and confidence: how to define them?
- Additional constraints imposed by the pattern structure
  - Support and confidence are not the only constraints
  - Assumption: frequent subgraphs must be connected
- Apriori-like approach: use frequent k-subgraphs to generate frequent (k+1)-subgraphs
  - What is k?

Challenges

- Support: the number of graphs that contain a particular subgraph
  - The Apriori principle still holds
- Level-wise (Apriori-like) approach:
  - Vertex growing: k is the number of vertices
  - Edge growing: k is the number of edges

Vertex Growing

(Figure: two graphs that share all but one vertex are merged by combining their adjacency matrices M_G1 and M_G2, with edge labels p, q, r, into the larger matrix M_G3 of the candidate.)

Edge Growing (figure)

Apriori-like Algorithm

- Find frequent 1-subgraphs
- Repeat:
  - Candidate generation: use frequent (k−1)-subgraphs to generate candidate k-subgraphs
  - Candidate pruning: prune candidate subgraphs that contain infrequent (k−1)-subgraphs
  - Support counting: count the support of each remaining candidate
  - Candidate elimination: eliminate candidate k-subgraphs that are infrequent

In practice, it is not this easy; there are many other issues.

Example: Dataset

(Figure: example graphs and their 0/1 transaction table over the items (a,b,p), (a,b,q), (a,b,r), (b,c,p), (b,c,q), (b,c,r), (d,e,r).)


Candidate Generation

- In Apriori: merging two frequent k-itemsets produces a candidate (k+1)-itemset
- In frequent subgraph mining (vertex/edge growing): merging two frequent k-subgraphs may produce more than one candidate (k+1)-subgraph

Multiplicity of Candidates (Vertex Growing)

(Figure: when the adjacency matrices M_G1 and M_G2 are merged into M_G3, the entry between the two non-core vertices is undetermined ("?"), so the merge yields more than one candidate.)

Multiplicity of Candidates (Edge Growing)

Case 1: identical vertex labels (figure: merging two graphs over vertices labeled a, b, c, e yields multiple candidate subgraphs).

Multiplicity of Candidates (Edge Growing)

Case 2: the core contains identical labels (figure).
- Core: the (k−1)-subgraph that is common between the joined graphs

Multiplicity of Candidates (Edge Growing)

Case 3: core multiplicity (figure).

Adjacency Matrix Representation

The same graph can be represented in many ways: with vertices labeled A(1)–A(4) and B(5)–B(8), different vertex orderings give different adjacency matrices.

(Figure: two 8×8 adjacency matrices of the same graph under two different vertex orderings.)

Graph Isomorphism

A graph is isomorphic to another graph if the two are topologically equivalent.

Graph Isomorphism

A test for graph isomorphism is needed:
- During candidate generation, to determine whether a candidate has already been generated
- During candidate pruning, to check whether its (k−1)-subgraphs are frequent
- During support counting, to check whether a candidate is contained within another graph

Graph Isomorphism

- Use canonical labeling to handle isomorphism
- Map each graph into an ordered string representation (known as its code) such that two isomorphic graphs are mapped to the same canonical encoding
- Example: use the string obtained by concatenating the entries of the adjacency matrix, and choose the lexicographically largest such string over all vertex orderings as the canonical code

(Figure: an example adjacency matrix, its string encoding, and the corresponding canonical encoding.)
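Canonical labeling is easy to sketch by brute force, assuming an unlabeled graph with a 0/1 adjacency matrix (the slides' version also carries vertex and edge labels): enumerate all vertex orderings and keep the lexicographically largest code. Real systems prune this search heavily.

```python
# Brute-force canonical labeling sketch; O(n!) so tiny graphs only.
from itertools import permutations

def canonical_code(adj):
    """adj: n x n symmetric 0/1 adjacency matrix as a list of lists."""
    n = len(adj)
    best = None
    for perm in permutations(range(n)):
        # Read off the upper triangle under this vertex ordering.
        code = "".join(str(adj[perm[i]][perm[j]])
                       for i in range(n) for j in range(i + 1, n))
        if best is None or code > best:
            best = code
    return best

# Two different adjacency matrices of the same 3-vertex path graph:
g1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]    # path 0-1-2
g2 = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]    # path 0-2-1
print(canonical_code(g1) == canonical_code(g2))   # True: same canonical code
```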