Data Mining Association Rules: Advanced Concepts and Algorithms. Lecture Notes for Chapter 7. Introduction to Data Mining


1 Data Mining Association Rules: Advanced Concepts and Algorithms. Lecture Notes for Chapter 7, Introduction to Data Mining, by Tan, Steinbach, Kumar.

2 Continuous and Categorical Attributes. How to apply the association analysis formulation to non-asymmetric binary variables? Example session table with attributes: Session Id, Country, Session Length (sec), Number of Web Pages viewed, Gender, Browser Type, Buy. Rows (Session Id, Country, Gender, Browser Type, Buy): (1, USA, Male, IE, No); (2, China, Female, Netscape, No); (3, USA, Female, Mozilla, Yes); (4, Germany, Male, IE, Yes); (5, Australia, Male, Mozilla, No). Example of an association rule: {Number of Pages ∈ [5,10), (Browser=Mozilla)} → {Buy = No}

3 Handling Categorical Attributes. Transform each categorical attribute into asymmetric binary variables: introduce a new item for each distinct attribute-value pair. Example: replace the Browser Type attribute with the items Browser Type = Internet Explorer, Browser Type = Mozilla, Browser Type = Netscape.
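A minimal sketch of this transformation in Python, assuming pandas is available and using a small made-up session table (the values below are illustrative, not the full data set from the example slide):

```python
import pandas as pd

# Toy session data (hypothetical values for illustration only).
sessions = pd.DataFrame({
    "Country": ["USA", "China", "USA", "Germany", "Australia"],
    "Browser Type": ["IE", "Netscape", "Mozilla", "IE", "Mozilla"],
    "Buy": ["No", "No", "Yes", "Yes", "No"],
})

# One asymmetric binary item per distinct attribute-value pair, e.g.
# "Browser Type=Mozilla" is 1 only for sessions where that value occurs.
items = pd.get_dummies(sessions, prefix_sep="=")
print(list(items.columns))
```

Each resulting 0/1 column can then be treated as an ordinary item by a standard frequent itemset algorithm.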

4 Handling Categorical Attributes: Potential Issues. What if the attribute has many possible values? Example: the attribute Country has more than 200 possible values, and many of the attribute values may have very low support. Potential solution: aggregate the low-support attribute values. What if the distribution of attribute values is highly skewed? Example: 95% of the visitors have Buy = No, so most of the items will be associated with the (Buy=No) item. Potential solution: drop the highly frequent items.

5 Handling Continuous Attributes. Different kinds of rules: Age ∈ [21,35) ∧ Salary ∈ [70k,120k) → Buy; Salary ∈ [70k,120k) ∧ Buy → Age: µ=28, σ=4. Different methods: discretization-based, statistics-based, and non-discretization based (Min-Apriori).

6 Handling Continuous Attributes. Use discretization. Unsupervised: equal-width binning, equal-depth binning, clustering. Supervised: use the class labels, e.g., group the attribute values v1, ..., v9 into bin1, bin2, bin3 according to the classes (Anomalous vs. Normal).
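A small sketch of the two unsupervised binning schemes, assuming pandas and an illustrative list of attribute values:

```python
import pandas as pd

values = pd.Series([13, 15, 16, 19, 20, 21, 22, 25, 30, 33, 35, 36, 40, 45, 46, 52, 70])

# Equal-width binning: intervals of equal length.
equal_width = pd.cut(values, bins=3)

# Equal-depth (equal-frequency) binning: intervals holding roughly equal counts.
equal_depth = pd.qcut(values, q=3)

print(equal_width.value_counts().sort_index())
print(equal_depth.value_counts().sort_index())
```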

7 Discretization Issues. The size of the discretized intervals affects support and confidence: {Refund = No, (Income = $51,250)} → {Cheat = No}; {Refund = No, (60K ≤ Income ≤ 80K)} → {Cheat = No}; {Refund = No, (0K ≤ Income ≤ 1B)} → {Cheat = No}. If the intervals are too small, rules may not have enough support; if the intervals are too large, rules may not have enough confidence. Potential solution: use all possible intervals.

8 Discretization Issues. Execution time: if an interval contains n values, there are on average O(n²) possible ranges. Too many rules: {Refund = No, (Income = $51,250)} → {Cheat = No}; {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}; {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}.

9 Approach by Srikant & Agrawal. Preprocess the data: discretize attributes using equi-depth partitioning; use the partial completeness measure to determine the number of partitions; merge adjacent intervals as long as their support is less than max-support. Apply existing association rule mining algorithms. Determine the interesting rules in the output.

10 Approach by Srikant & Agrawal. Discretization will lose information: an itemset X is approximated by a coarser itemset X'. Use the partial completeness measure to determine how much information is lost. C: frequent itemsets obtained by considering all ranges of attribute values. P: frequent itemsets obtained by considering all ranges over the partitions. P is K-complete w.r.t. C if P ⊆ C and, for every X ∈ C, there exists a generalization X' ∈ P such that: 1. support(X') ≤ K × support(X), where K ≥ 1; 2. for every Y ⊆ X there exists Y' ⊆ X' such that support(Y') ≤ K × support(Y). Given K (the partial completeness level), one can determine the number of intervals (N).

11 Interestingness Measure. {Refund = No, (Income = $51,250)} → {Cheat = No}; {Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}; {Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}. Given an itemset Z = {z1, z2, ..., zk} and its generalization Z' = {z1', z2', ..., zk'}: P(Z) is the support of Z, and the expected support of Z based on Z' is E_{Z'}(Z) = [P(z1)/P(z1')] × [P(z2)/P(z2')] × ... × [P(zk)/P(zk')] × P(Z'). Z is R-interesting w.r.t. Z' if P(Z) ≥ R × E_{Z'}(Z).
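A sketch of the R-interestingness test for itemsets, coding the expected-support formula above directly; the supports in the example call are made-up numbers:

```python
def expected_support(item_supports, gen_item_supports, gen_support):
    """E_{Z'}(Z) = P(z1)/P(z1') * ... * P(zk)/P(zk') * P(Z')."""
    e = gen_support
    for p, p_gen in zip(item_supports, gen_item_supports):
        e *= p / p_gen
    return e

def is_r_interesting(support_z, item_supports, gen_item_supports, gen_support, r):
    return support_z >= r * expected_support(item_supports, gen_item_supports, gen_support)

# Hypothetical numbers: Z = {skim milk, wheat bread}, Z' = {milk, bread}.
print(is_r_interesting(support_z=0.05,
                       item_supports=[0.10, 0.20],      # P(skim milk), P(wheat bread)
                       gen_item_supports=[0.40, 0.50],  # P(milk), P(bread)
                       gen_support=0.30,                # P(Z')
                       r=1.5))                          # True: 0.05 >= 1.5 * 0.03
```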

12 Interestingness Measure. For a rule S: X → Y and its generalization S': X' → Y': P(Y|X) is the confidence of X → Y, P(Y'|X') is the confidence of X' → Y', and the expected confidence of X → Y based on S' is E_{S'}(Y|X) = [P(y1)/P(y1')] × [P(y2)/P(y2')] × ... × [P(yk)/P(yk')] × P(Y'|X'). Rule S is R-interesting w.r.t. its ancestor rule S' if its support satisfies P(S) ≥ R × E_{S'}(S) or its confidence satisfies P(Y|X) ≥ R × E_{S'}(Y|X).

13 Statistics-based Methods. Example: Browser=Mozilla ∧ Buy=Yes → Age: µ=23. The rule consequent consists of a continuous variable characterized by its statistics: mean, median, standard deviation, etc. Approach: withhold the target variable from the rest of the data; apply existing frequent itemset generation on the rest of the data; for each frequent itemset, compute the descriptive statistics of the corresponding target variable; the frequent itemset becomes a rule by introducing the target variable as the rule consequent; apply a statistical test to determine the interestingness of the rule.

14 Statistics-based Methods. How to determine whether an association rule is interesting? Compare the statistics for the segment of the population covered by the rule (mean µ) against the segment of the population not covered by the rule (mean µ'). Statistical hypothesis testing: null hypothesis H0: µ' = µ + Δ; alternative hypothesis H1: µ' > µ + Δ; test statistic Z = (µ' − µ − Δ) / sqrt(s1²/n1 + s2²/n2), which has zero mean and variance 1 under the null hypothesis.

15 Statistics-based Methods. Example: r: Browser=Mozilla ∧ Buy=Yes → Age: µ=23. The rule is interesting if the difference between µ and µ' is greater than 5 years (i.e., Δ = 5). For r, suppose n1 = 50, s1 = 3.5; for r' (the complement, with µ' = 30): n2 = 250, s2 = 6.5. Then Z = (µ' − µ − Δ) / sqrt(s1²/n1 + s2²/n2) = (30 − 23 − 5) / sqrt(3.5²/50 + 6.5²/250) = 3.11. For a one-sided test at the 95% confidence level, the critical Z-value for rejecting the null hypothesis is 1.64. Since Z is greater than 1.64, r is an interesting rule.
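The same calculation as a short script, using only the sample statistics quoted above (with 30 as the mean age of the complement segment):

```python
from math import sqrt

mu, mu_prime = 23.0, 30.0   # mean age of the segment covered by r vs. its complement r'
n1, s1 = 50, 3.5            # size and std. dev. of the covered segment
n2, s2 = 250, 6.5           # size and std. dev. of the complement
delta = 5.0                 # required difference in years

z = (mu_prime - mu - delta) / sqrt(s1**2 / n1 + s2**2 / n2)
print(round(z, 2))          # 3.11, above the one-sided 95% critical value of 1.64
```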

16 Min-Apriori (Han et al). Document-term matrix (rows = documents D1-D5, columns = words W1-W5): D1: 2 2 0 0 1; D2: 0 0 1 2 2; D3: 2 3 0 0 0; D4: 0 0 1 0 1; D5: 1 1 1 0 2. Example: W1 and W2 tend to appear together in the same document.

17 Min-Apriori. The data contains only continuous attributes of the same type, e.g., the frequency of words in a document (the document-term matrix above). Potential solution: convert it into a 0/1 matrix and then apply existing algorithms, but this loses the word frequency information. Discretization does not apply, since users want associations among words, not among ranges of word frequencies.

18 Min-Apriori. How to determine the support of a word? If we simply sum up its frequency, the support count will be greater than the total number of documents! Normalize the word vectors, e.g., using the L1 norm, so that each word has a support equal to 1. Normalized matrix: D1: 0.40 0.33 0.00 0.00 0.17; D2: 0.00 0.00 0.33 1.00 0.33; D3: 0.40 0.50 0.00 0.00 0.00; D4: 0.00 0.00 0.33 0.00 0.17; D5: 0.20 0.17 0.33 0.00 0.33.

19 Min-Apriori. New definition of support: sup(C) = Σ_{i ∈ T} min_{j ∈ C} D(i, j), i.e., the sum over all documents of the smallest normalized frequency among the words in C. Example (using the normalized matrix above): Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17.
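A sketch of this support definition with numpy, using the document-term matrix from slide 16: each word column is L1-normalized, and the support of an itemset is the sum over documents of the row-wise minimum:

```python
import numpy as np

# Document-term matrix (rows = documents D1..D5, columns = words W1..W5).
D = np.array([[2, 2, 0, 0, 1],
              [0, 0, 1, 2, 2],
              [2, 3, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 0, 2]], dtype=float)

D = D / D.sum(axis=0)                 # L1-normalize each word column

def min_support(word_columns):
    """sup(C) = sum over documents of the min over the words in C."""
    return D[:, word_columns].min(axis=1).sum()

print(round(min_support([0]), 2))         # Sup(W1)         = 1.0
print(round(min_support([0, 1]), 2))      # Sup(W1, W2)     = 0.9
print(round(min_support([0, 1, 2]), 2))   # Sup(W1, W2, W3) = 0.17
```

The anti-monotone property shown on the next slide follows because adding a word to C can only lower each row-wise minimum.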

20 Anti-monotone Property of Support. Example: Sup(W1) = 0.4 + 0 + 0.4 + 0 + 0.2 = 1; Sup(W1, W2) = 0.33 + 0 + 0.4 + 0 + 0.17 = 0.9; Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17.

21 Multi-level Association Rules. Concept hierarchy: Food → Bread → {Wheat, White}; Food → Milk → {Skim, 2%} → {Foremost, Kemps}; Electronics → Computers → {Desktop, Laptop, Accessory → {Printer, Scanner}}; Electronics → Home → {TV, DVD}.

22 Multi-level Association Rules. Why should we incorporate a concept hierarchy? Rules at lower levels may not have enough support to appear in any frequent itemsets. Rules at lower levels of the hierarchy are overly specific: e.g., skim milk → white bread, 2% milk → wheat bread, skim milk → wheat bread, etc., are all indicative of an association between milk and bread.

23 Multi-level Association Rules. How do support and confidence vary as we traverse the concept hierarchy? If X is the parent item for both X1 and X2, then σ(X) ≥ σ(X1) + σ(X2). If σ(X1 ∪ Y1) ≥ minsup, X is the parent of X1, and Y is the parent of Y1, then σ(X ∪ Y1) ≥ minsup, σ(X1 ∪ Y) ≥ minsup, and σ(X ∪ Y) ≥ minsup. If conf(X1 ⇒ Y1) ≥ minconf, then conf(X1 ⇒ Y) ≥ minconf.

24 Multi-level Association Rules. Approach 1: extend the current association rule formulation by augmenting each transaction with higher-level items. Original transaction: {skim milk, wheat bread}. Augmented transaction: {skim milk, wheat bread, milk, bread, food}. Issues: items that reside at higher levels have much higher support counts; if the support threshold is low, there are too many frequent patterns involving items from the higher levels; increased dimensionality of the data.
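A sketch of Approach 1, assuming a small hand-written parent map covering part of the concept hierarchy shown earlier:

```python
# Hypothetical parent map (child -> parent) for part of the hierarchy.
parent = {
    "skim milk": "milk", "2% milk": "milk",
    "wheat bread": "bread", "white bread": "bread",
    "milk": "food", "bread": "food",
}

def augment(transaction):
    """Add every ancestor of every item in the transaction."""
    items = set(transaction)
    for item in transaction:
        while item in parent:
            item = parent[item]
            items.add(item)
    return items

print(sorted(augment({"skim milk", "wheat bread"})))
# ['bread', 'food', 'milk', 'skim milk', 'wheat bread']
```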

25 Multi-level Association Rules. Approach 2: generate frequent patterns at the highest level first, then generate frequent patterns at the next highest level, and so on. Issues: I/O requirements will increase dramatically because we need to perform more passes over the data; we may miss some potentially interesting cross-level association patterns.

26 Sequence Data. Sequence database (Object, Timestamp, Events): A, 10, {2, 3, 5}; A, 20, {6, 1}; A, 23, {1}; B, 11, {4, 5, 6}; B, 17, {2}; B, 21, {7, 8, 1, 2}; B, 28, {1, 6}; C, 14, {1, 8, 7}.

27 Examples of Sequence Data. Customer: sequence = purchase history of a given customer; element (transaction) = the set of items bought by the customer at time t; event (item) = books, dairy products, CDs, etc. Web data: sequence = browsing activity of a particular Web visitor; element = the collection of files viewed by the visitor after a single mouse click; event = home page, index page, contact info, etc. Event data: sequence = history of events generated by a given sensor; element = the events triggered by the sensor at time t; event = types of alarms generated by sensors. Genome sequences: sequence = DNA sequence of a particular species; element = an element of the DNA sequence; event = bases A, T, G, C. (Figure: a sequence drawn as an ordered list of elements (transactions), each containing one or more events (items) E1, E2, E3, E4.)

28 Formal Definition of a Sequence. A sequence is an ordered list of elements (transactions): s = <e1 e2 e3 ...>. Each element contains a collection of events (items): ei = {i1, i2, ..., ik}. Each element is attributed to a specific time or location. The length of a sequence, |s|, is given by the number of elements in the sequence. A k-sequence is a sequence that contains k events (items).

29 Examples of Sequences. Web sequence: <{Homepage} {Electronics} {Digital Cameras} {Canon Digital Camera} {Shopping Cart} {Order Confirmation} {Return to Shopping}>. Sequence of initiating events causing the nuclear accident at Three Mile Island (http://stellar-one.com/nuclear/staff_reports/summary_soe_the_initiating_event.htm): <{clogged resin} {outlet valve closure} {loss of feedwater} {condenser polisher outlet valve shut} {booster pumps trip} {main waterpump trips} {main turbine trips} {reactor pressure increases}>. Sequence of books checked out at a library: <{Fellowship of the Ring} {The Two Towers} {Return of the King}>.

30 Formal Definition of a Subsequence. A sequence <a1 a2 ... an> is contained in another sequence <b1 b2 ... bm> (m ≥ n) if there exist integers i1 < i2 < ... < in such that a1 ⊆ b_{i1}, a2 ⊆ b_{i2}, ..., an ⊆ b_{in}. Examples (data sequence, subsequence, contained?): <{2,4} {3,5,6} {8}>, <{2} {3,5}>, yes; <{1,2} {3,4}>, <{1} {2}>, no; <{2,4} {2,4} {2,5}>, <{2} {4}>, yes. The support of a subsequence w is defined as the fraction of data sequences that contain w. A sequential pattern is a frequent subsequence (i.e., a subsequence whose support is ≥ minsup).
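A sketch of the containment test, representing each element as a Python set; a greedy left-to-right scan over the data sequence is enough here:

```python
def contains(data_seq, sub_seq):
    """True if sub_seq is contained in data_seq: every element of sub_seq is a
    subset of a distinct element of data_seq, in the same order."""
    i = 0
    for element in data_seq:
        if i < len(sub_seq) and sub_seq[i] <= element:   # subset test
            i += 1
    return i == len(sub_seq)

print(contains([{2, 4}, {3, 5, 6}, {8}],   [{2}, {3, 5}]))   # True
print(contains([{1, 2}, {3, 4}],           [{1}, {2}]))      # False
print(contains([{2, 4}, {2, 4}, {2, 5}],   [{2}, {4}]))      # True
```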

31 Sequential Pattern Mining: Definition. Given a database of sequences and a user-specified minimum support threshold minsup, the task is to find all subsequences with support ≥ minsup.

32 Sequential Pattern Mining: Challenge. Given a sequence <{a b} {c d e} {f} {g h i}>, examples of subsequences are <{a} {c d} {f} {g}>, <{c d e}>, <{b} {g}>, etc. How many k-subsequences can be extracted from a given n-sequence? For <{a b} {c d e} {f} {g h i}>, n = 9; for k = 4, one choice of four events (e.g., a, d, e, i) yields the subsequence <{a} {d e} {i}>, and the total count is the binomial coefficient C(n, k) = C(9, 4) = 126.
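The count is just a binomial coefficient, so it can be checked directly:

```python
from math import comb

# Number of 4-subsequences of a sequence with n = 9 events.
print(comb(9, 4))   # 126
```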

33 Sequential Pattern Mining: Example. Sequence database (Object, Timestamp, Events): A, 1, {1,2,4}; A, 2, {2,3}; A, 3, {5}; B, 1, {1,2}; B, 2, {2,3,4}; C, 1, {1,2}; C, 2, {2,3,4}; C, 3, {2,4,5}; D, 1, {2}; D, 2, {3,4}; D, 3, {4,5}; E, 1, {1,3}; E, 2, {2,4,5}. With minsup = 50%, examples of frequent subsequences are: <{1,2}> (s=60%), <{2,3}> (s=60%), <{2,4}> (s=80%), <{3} {5}> (s=80%), <{1} {2}> (s=80%), <{2} {2}> (s=60%), <{1} {2,3}> (s=60%), <{2} {2,3}> (s=60%), <{1,2} {2,3}> (s=60%).

34 Extracting Sequential Patterns. Given n events i1, i2, i3, ..., in. Candidate 1-subsequences: <{i1}>, <{i2}>, <{i3}>, ..., <{in}>. Candidate 2-subsequences: <{i1, i2}>, <{i1, i3}>, ..., <{i1} {i1}>, <{i1} {i2}>, ..., <{in-1} {in}>. Candidate 3-subsequences: <{i1, i2, i3}>, <{i1, i2, i4}>, ..., <{i1, i2} {i1}>, <{i1, i2} {i2}>, ..., <{i1} {i1, i2}>, <{i1} {i1, i3}>, ..., <{i1} {i1} {i1}>, <{i1} {i1} {i2}>, ...

35 Generalized Sequential Pattern (GSP). Step 1: make the first pass over the sequence database D to yield all the 1-element frequent sequences. Step 2: repeat until no new frequent sequences are found. Candidate generation: merge pairs of frequent subsequences found in the (k-1)th pass to generate candidate sequences that contain k items. Candidate pruning: prune candidate k-sequences that contain infrequent (k-1)-subsequences. Support counting: make a new pass over the sequence database D to find the support for these candidate sequences. Candidate elimination: eliminate candidate k-sequences whose actual support is less than minsup.

36 Candidate Generation. Base case (k=2): merging two frequent 1-sequences <{i1}> and <{i2}> produces two candidate 2-sequences: <{i1} {i2}> and <{i1 i2}>. General case (k>2): a frequent (k-1)-sequence w1 is merged with another frequent (k-1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2. The resulting candidate is w1 extended with the last event of w2: if the last two events in w2 belong to the same element, the last event of w2 becomes part of the last element of w1; otherwise, it becomes a separate element appended to the end of w1.

37 Candidate Generation Examples. Merging the sequences w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> produces the candidate sequence <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element. Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> produces the candidate sequence <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element. We do not have to merge w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>, because if the latter is a viable candidate, it can be obtained by merging w1 with <{2 6} {4 5}>.
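A sketch of this merge step, representing a sequence as a list of lists of events; the first two merges above can be reproduced with it:

```python
def drop_first(seq):
    """Remove the first event of the first element."""
    head, *rest = seq
    return ([head[1:]] if len(head) > 1 else []) + rest

def drop_last(seq):
    """Remove the last event of the last element."""
    *rest, tail = seq
    return rest + ([tail[:-1]] if len(tail) > 1 else [])

def merge(w1, w2):
    """Return the GSP candidate obtained by merging w1 and w2, or None."""
    if drop_first(w1) != drop_last(w2):
        return None
    last_event = w2[-1][-1]
    if len(w2[-1]) > 1:                       # last two events of w2 share an element
        return w1[:-1] + [w1[-1] + [last_event]]
    return w1 + [[last_event]]                # last event of w2 starts a new element

w1 = [[1], [2, 3], [4]]
print(merge(w1, [[2, 3], [4, 5]]))    # [[1], [2, 3], [4, 5]]
print(merge(w1, [[2, 3], [4], [5]]))  # [[1], [2, 3], [4], [5]]
```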

38 GSP Example

39 Timing Constraints (I). Pattern {A B} {C} {D E} with constraints: the gap between consecutive elements must be > ng (min-gap) and ≤ xg (max-gap), and the overall span must be ≤ ms (maximum span). Example with xg = 2, ng = 0, ms = 4 (data sequence, subsequence, contained?): <{2,4} {3,5,6} {4,7} {4,5} {8}>, <{6} {5}>, yes; <{1} {2} {3} {4} {5}>, <{1} {4}>, no; <{1} {2,3} {3,4} {4,5}>, <{2} {3} {5}>, yes; <{1,2} {3} {2,3} {3,4} {2,4} {4,5}>, <{1,2} {5}>, no.
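A sketch of a containment check under the min-gap, max-gap, and maximum-span constraints, assuming each element's timestamp is simply its position in the data sequence (as in the examples above):

```python
def contains_timed(data_seq, pattern, max_gap, min_gap, max_span):
    """Backtracking check that `pattern` occurs in `data_seq` while respecting
    the gap constraints between consecutive matched elements and the overall span."""
    def search(p, prev_t, first_t):
        if p == len(pattern):
            return True
        for t, element in enumerate(data_seq, start=1):
            if prev_t is not None:
                gap = t - prev_t
                if gap <= min_gap or gap > max_gap:
                    continue
            if first_t is not None and t - first_t > max_span:
                break
            if pattern[p] <= element and search(p + 1, t, t if first_t is None else first_t):
                return True
        return False
    return search(0, None, None)

seq = [{2, 4}, {3, 5, 6}, {4, 7}, {4, 5}, {8}]
print(contains_timed(seq, [{6}, {5}], max_gap=2, min_gap=0, max_span=4))   # True
print(contains_timed([{1}, {2}, {3}, {4}, {5}],
                     [{1}, {4}], max_gap=2, min_gap=0, max_span=4))        # False
```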

40 Mining Sequential Patterns with Timing Constraints. Approach 1: mine sequential patterns without timing constraints, then postprocess the discovered patterns. Approach 2: modify GSP to directly prune candidates that violate the timing constraints. Question: does the Apriori principle still hold?

41 Apriori Principle for Sequence Data. Using the sequence database from the earlier example (slide 33), suppose xg = 1 (max-gap), ng = 0 (min-gap), ms = 5 (maximum span), and minsup = 60%. Then <{2} {5}> has support = 40%, but <{2} {3} {5}> has support = 60%. The problem exists because of the max-gap constraint; there is no such problem if the max-gap is infinite.

42 Contiguous Subsequences. s is a contiguous subsequence of w = <e1 e2 ... ek> if any of the following conditions hold: 1. s is obtained from w by deleting an item from either e1 or ek; 2. s is obtained from w by deleting an item from any element ei that contains more than 2 items; 3. s is a contiguous subsequence of s', and s' is a contiguous subsequence of w (recursive definition). Examples: s = <{1} {2}> is a contiguous subsequence of <{1} {2 3}>, <{1 2} {2} {3}>, and <{3 4} {1 2} {2 3} {4}>, but is not a contiguous subsequence of <{1} {3} {2}> or <{2} {1} {3} {2}>.

43 Modified Candidate Pruning Step. Without the max-gap constraint: a candidate k-sequence is pruned if at least one of its (k-1)-subsequences is infrequent. With the max-gap constraint: a candidate k-sequence is pruned if at least one of its contiguous (k-1)-subsequences is infrequent.

44 Timing Constraints (II). Pattern {A B} {C} {D E} with constraints: the gap between consecutive elements must be > ng (min-gap) and ≤ xg (max-gap), the events of one pattern element may be spread over a window of size ≤ ws (window size), and the overall span must be ≤ ms (maximum span). Example with xg = 2, ng = 0, ws = 1, ms = 5 (data sequence, subsequence, contained?): <{2,4} {3,5,6} {4,7} {4,6} {8}>, <{3} {5}>, no; <{1} {2} {3} {4} {5}>, <{1,2} {3}>, yes; <{1,2} {2,3} {3,4} {4,5}>, <{1,2} {3,4}>, yes.

45 Modified Support Counting Step. Given a candidate pattern <{a, c}>, any data sequence that contains <{a c}>, or <{a} {c}> where time({c}) − time({a}) ≤ ws, or <{c} {a}> where time({a}) − time({c}) ≤ ws, will contribute to the support count of the candidate pattern.

46 Other Formulation. In some domains, we may have only one very long time series, for example when monitoring network traffic events for attacks or monitoring telecommunication alarm signals. The goal is to find frequent sequences of events in the time series; this problem is also known as frequent episode mining. Example event stream: E1 E3 E1 E1 E2 E4 E1 E2 E1 E2 E4 E2 E3 E4 E2 E3 E5 E2 E3 E5 E2 E3 E1, with pattern <E1> <E3>.

47 General Support Counting Schemes. Assume: xg = 2 (max-gap), ng = 0 (min-gap), ws = 0 (window size), ms = 2 (maximum span).

48 Frequent Subgraph Mining. Extend association rule mining to finding frequent subgraphs. Useful for Web mining, computational chemistry, bioinformatics, spatial data sets, etc. (Figure: a Web graph linking the pages Homepage, Research, Artificial Intelligence, Databases, Data Mining.)

49 Graph Definitions. (Figure: (a) a labeled graph with vertex labels a, b, c and edge labels p, q, r, s, t; (b) a subgraph of it; (c) an induced subgraph of it.)

50 Representing Transactions as Graphs. Each transaction is a clique of items. Transactions (Id, Items): 1, {A,B,C,D}; 2, {A,B,E}; 3, {B,C}; 4, {A,B,D,E}; 5, {B,C,D}. (Figure: transaction TID = 1 drawn as a clique connecting the items A, B, C, D.)
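A sketch of this encoding in plain Python, using the transaction table above; each transaction becomes the edge set of a clique over its items:

```python
from itertools import combinations

transactions = {1: {"A", "B", "C", "D"}, 2: {"A", "B", "E"}, 3: {"B", "C"},
                4: {"A", "B", "D", "E"}, 5: {"B", "C", "D"}}

def to_clique(items):
    """Edge set connecting every pair of items in the transaction."""
    return {frozenset(pair) for pair in combinations(sorted(items), 2)}

graphs = {tid: to_clique(items) for tid, items in transactions.items()}
print(sorted(tuple(sorted(edge)) for edge in graphs[1]))
# [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
```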

51 Representing Graphs as Transactions. (Figure: three edge-labeled graphs G1, G2, G3 over vertices labeled a, b, c, d, e.) Each graph becomes a transaction whose items are edge descriptors (vertex label, vertex label, edge label), e.g., (a,b,p), (a,b,q), (a,b,r), (b,c,p), (b,c,q), (b,c,r), ..., (d,e,r); a table then records with a 0/1 entry which descriptors occur in G1, G2, and G3.

52 Challenges. Nodes may contain duplicate labels. Support and confidence: how to define them? Additional constraints are imposed by the pattern structure: support and confidence are not the only constraints, and a common assumption is that frequent subgraphs must be connected. Apriori-like approach: use frequent k-subgraphs to generate frequent (k+1)-subgraphs. What is k?

53 Challenges. Support: the number of graphs that contain a particular subgraph. The Apriori principle still holds. Level-wise (Apriori-like) approach: vertex growing, where k is the number of vertices; edge growing, where k is the number of edges.

54 Vertex Growing. (Figure: the adjacency matrices M_G1 and M_G2 of two graphs sharing a common core are merged into the adjacency matrix M_G3 of a candidate graph with one additional vertex; the matrix entries are the edge labels p, q, r or 0.)

55 Edge Growing

56 Apriori-like Algorithm. Find frequent 1-subgraphs. Repeat: candidate generation: use frequent (k-1)-subgraphs to generate candidate k-subgraphs; candidate pruning: prune candidate subgraphs that contain infrequent (k-1)-subgraphs; support counting: count the support of each remaining candidate; eliminate candidate k-subgraphs that are infrequent. In practice, it is not that easy; there are many other issues.

57 Example: Dataset. (Table: graphs G1-G4 represented as transactions over the edge descriptors (a,b,p), (a,b,q), (a,b,r), (b,c,p), (b,c,q), (b,c,r), (d,e,r), with a 0/1 entry per graph and descriptor.)

58 Example

59 Candidate Generation. In Apriori, merging two frequent k-itemsets produces a candidate (k+1)-itemset. In frequent subgraph mining (vertex or edge growing), merging two frequent k-subgraphs may produce more than one candidate (k+1)-subgraph.

60 Multiplicity of Candidates (Vertex Growing). (Figure: merging the adjacency matrices M_G1 and M_G2 yields a candidate matrix M_G3 in which the entry between the two non-core vertices is undetermined, marked "?", so several candidate graphs are possible.)

61 Multiplicity of Candidates (Edge Growing). Case 1: identical vertex labels. (Figure: merging two subgraphs whose vertices carry identical labels, e.g., vertices labeled a, b, c, e, can yield more than one candidate graph.)

62 Multiplicity of Candidates (Edge Growing). Case 2: the core contains identical labels. Core: the (k-1)-subgraph that is common to the two graphs being joined.

63 Multiplicity of Candidates (Edge Growing). Case 3: core multiplicity. (Figure: the common core occurs in multiple positions within the graphs being merged, e.g., graphs whose vertices are all labeled a or b, which again leads to several candidates.)

64 Adjacency Matrix Representation. (Figure: two different adjacency matrices for the same graph, obtained by numbering the vertices A(1)-A(4) and B(5)-B(8) in two different orders.) The same graph can be represented in many ways.

65 Graph Isomorphism. A graph is isomorphic to another graph if it is topologically equivalent to it.

66 Graph Isomorphism. A test for graph isomorphism is needed: during the candidate generation step, to determine whether a candidate has already been generated; during the candidate pruning step, to check whether its (k-1)-subgraphs are frequent; during candidate counting, to check whether a candidate is contained within another graph.

67 Graph Isomorphism. Use canonical labeling to handle isomorphism: map each graph into an ordered string representation (known as its code) such that two isomorphic graphs are mapped to the same canonical encoding. Example: use the string obtained from the lexicographically largest adjacency matrix as the canonical code.
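A brute-force sketch of such a canonical code for a small edge-labeled, undirected graph; vertex labels are ignored here for brevity, and real implementations avoid trying every vertex ordering:

```python
from itertools import permutations

def canonical_code(vertices, edges):
    """Try every vertex ordering, build the flattened upper-triangle adjacency
    string (edge label or '0'), and keep the lexicographically largest one."""
    best = None
    for order in permutations(vertices):
        code = "".join(edges.get(frozenset((u, v)), "0")
                       for i, u in enumerate(order)
                       for v in order[i + 1:])
        if best is None or code > best:
            best = code
    return best

# Two different write-ups of the same labeled triangle get the same code.
g = {frozenset(("x", "y")): "p", frozenset(("y", "z")): "q", frozenset(("x", "z")): "r"}
print(canonical_code(["x", "y", "z"], g))   # 'rqp'
print(canonical_code(["z", "x", "y"], g))   # 'rqp' again
```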
