Talk 2: Graph Mining Tools - SVD, ranking, proximity. Christos Faloutsos CMU


1 Talk 2: Graph Mining Tools - SVD, ranking, proximity Christos Faloutsos CMU

2 Outline Introduction Motivation Task 1: Node importance Task 2: Recommendations Task 3: Connection sub-graphs Conclusions Lipari 2010 (C) 2010, C. Faloutsos 2

3 Node importance - Motivation: Given a graph (eg., web pages containing the desirable query word) Q: Which node is the most important? Lipari 2010 (C) 2010, C. Faloutsos 3

4 Node importance - Motivation: Given a graph (eg., web pages containing the desirable query word) Q: Which node is the most important? A1: HITS (SVD = Singular Value Decomposition) A2: eigenvector (PageRank) Lipari 2010 (C) 2010, C. Faloutsos 4

5 Node importance - motivation SVD and eigenvector analysis: very closely related Lipari 2010 (C) 2010, C. Faloutsos 5

6 SVD - Detailed outline Motivation Definition - properties Interpretation Complexity Case studies Lipari 2010 (C) 2010, C. Faloutsos 6

7 SVD - Motivation problem #1: text - LSI: find concepts problem #2: compression / dim. reduction Lipari 2010 (C) 2010, C. Faloutsos 7

8 SVD - Motivation problem #1: text - LSI: find concepts Lipari 2010 (C) 2010, C. Faloutsos 8

9 SVD - Motivation Customer-product, for recommendation system: vegetarians meat eaters Lipari 2010 (C) 2010, C. Faloutsos 9

10 SVD - Motivation problem #2: compress / reduce dimensionality Lipari 2010 (C) 2010, C. Faloutsos 10

11 Problem - specs ~10**6 rows; ~10**3 columns; no updates; random access to any cell(s) ; small error: OK Lipari 2010 (C) 2010, C. Faloutsos 11

12 SVD - Motivation Lipari 2010 (C) 2010, C. Faloutsos 12

13 SVD - Motivation Lipari 2010 (C) 2010, C. Faloutsos 13

14 SVD - Detailed outline Motivation Definition - properties Interpretation Complexity Case studies Additional properties Lipari 2010 (C) 2010, C. Faloutsos 14

15 SVD - Definition (reminder: matrix multiplication) [3 x 2] x [2 x 1] = ... Lipari 2010 (C) 2010, C. Faloutsos 15

16 SVD - Definition (reminder: matrix multiplication) [3 x 2] x [2 x 1] = [3 x 1] Lipari 2010 (C) 2010, C. Faloutsos 16

17 SVD - Definition (reminder: matrix multiplication) [3 x 2] x [2 x 1] = [3 x 1] Lipari 2010 (C) 2010, C. Faloutsos 17

18 SVD - Definition (reminder: matrix multiplication) [3 x 2] x [2 x 1] = [3 x 1] Lipari 2010 (C) 2010, C. Faloutsos 18

19 SVD - Definition (reminder: matrix multiplication) Lipari 2010 (C) 2010, C. Faloutsos 19

20 SVD - Definition A [n x m] = U [n x r] Λ [r x r] (V [m x r])^T A: n x m matrix (eg., n documents, m terms) U: n x r matrix (n documents, r concepts) Λ: r x r diagonal matrix (strength of each concept) (r: rank of the matrix) V: m x r matrix (m terms, r concepts) Lipari 2010 (C) 2010, C. Faloutsos 20
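A minimal numpy sketch of this definition, on a small hypothetical document-term matrix (not the talk's example), showing the shapes of U, Λ (returned as the vector s of singular values) and V^T:

```python
import numpy as np

# hypothetical 7-document x 5-term count matrix (illustration only)
A = np.array([[1, 1, 1, 0, 0],
              [2, 2, 2, 0, 0],
              [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0],
              [0, 0, 0, 2, 2],
              [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)

# economy-size SVD: A = U * diag(s) * Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)          # (7, 5) (5,) (5, 5)

# reconstruction check: A is recovered exactly (up to round-off)
print(np.allclose(A, U @ np.diag(s) @ Vt))
```

Here A has rank 2, so only the first two singular values are non-zero (up to round-off); the remaining columns of U and rows of Vt correspond to zero singular values.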

21 SVD - Definition A = U Λ V T - example: Lipari 2010 (C) 2010, C. Faloutsos 21

22 SVD - Properties THEOREM [Press+92]: it is always possible to decompose a matrix A into A = U Λ V^T, where U, Λ, V: unique (*) U, V: column orthonormal (ie., columns are unit vectors, orthogonal to each other) U^T U = I; V^T V = I (I: identity matrix) Λ: singular values are positive, and sorted in decreasing order Lipari 2010 (C) 2010, C. Faloutsos 22
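A quick numerical check of these properties (a sketch with a random matrix, not the talk's example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))                   # any real matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(U.T @ U, np.eye(4)))            # U^T U = I  (column-orthonormal)
print(np.allclose(Vt @ Vt.T, np.eye(4)))          # V^T V = I
print(np.all(s >= 0), np.all(s[:-1] >= s[1:]))    # non-negative, decreasing order
```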

23 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 23

24 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung CS-concept MD-concept CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 24

25 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung doc-to-concept CS-concept similarity matrix MD-concept CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 25

26 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung strength of CS-concept CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 26

27 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung CS-concept term-to-concept similarity matrix CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 27

28 SVD - Example A = U Λ V T - example: data inf. retrieval brain lung CS-concept term-to-concept similarity matrix CS = x x MD Lipari 2010 (C) 2010, C. Faloutsos 28

29 SVD - Detailed outline Motivation Definition - properties Interpretation Complexity Case studies Additional properties Lipari 2010 (C) 2010, C. Faloutsos 29

30 SVD - Interpretation #1 documents, terms and concepts : U: document-to-concept similarity matrix V: term-to-concept sim. matrix Λ: its diagonal elements: strength of each concept Lipari 2010 (C) 2010, C. Faloutsos 30

31 SVD Interpretation #1 documents, terms and concepts: Q: if A is the document-to-term matrix, what is A^T A? A: Q: A A^T? A: Lipari 2010 (C) 2010, C. Faloutsos 31

32 SVD Interpretation #1 documents, terms and concepts: Q: if A is the document-to-term matrix, what is A^T A? A: term-to-term ([m x m]) similarity matrix Q: A A^T? A: document-to-document ([n x n]) similarity matrix Lipari 2010 (C) 2010, C. Faloutsos 32

33 SVD properties V are the eigenvectors of the covariance matrix A^T A U are the eigenvectors of the Gram (inner-product) matrix A A^T Further reading: 1. Ian T. Jolliffe, Principal Component Analysis (2nd ed.), Springer, 2002. 2. Gilbert Strang, Linear Algebra and Its Applications (4th ed.), Brooks Cole. Lipari 2010 (C) 2010, C. Faloutsos 33

34 SVD - Interpretation #2 best axis to project on: ( best = min sum of squares of projection errors) Lipari 2010 (C) 2010, C. Faloutsos 34

35 SVD - Motivation Lipari 2010 (C) 2010, C. Faloutsos 35

36 SVD - interpretation #2 SVD: gives best axis to project first singular vector v1 minimum RMS error Lipari 2010 (C) 2010, C. Faloutsos 36

37 SVD - Interpretation #2 Lipari 2010 (C) 2010, C. Faloutsos 37

38 SVD - Interpretation #2 A = U Λ V T - example: = x x v1 Lipari 2010 (C) 2010, C. Faloutsos 38

39 SVD - Interpretation #2 A = U Λ V T - example: variance ( spread ) on the v1 axis = x x Lipari 2010 (C) 2010, C. Faloutsos 39

40 SVD - Interpretation #2 A = U Λ V T - example: U Λ gives the coordinates of the points in the projection axis = x x Lipari 2010 (C) 2010, C. Faloutsos 40

41 SVD - Interpretation #2 More details Q: how exactly is dim. reduction done? = x x Lipari 2010 (C) 2010, C. Faloutsos 41

42 SVD - Interpretation #2 More details Q: how exactly is dim. reduction done? A: set the smallest singular values to zero: = x x Lipari 2010 (C) 2010, C. Faloutsos 42
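In code, "setting the smallest singular values to zero" is just truncating the three factors; a small numpy sketch (random data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 30))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5                                            # keep the k strongest concepts
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # rank-k approximation of A

# the Frobenius error equals the energy of the discarded singular values
err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))
```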

43 SVD - Interpretation #2 ~ x x Lipari 2010 (C) 2010, C. Faloutsos 43

44 SVD - Interpretation #2 ~ x x Lipari 2010 (C) 2010, C. Faloutsos 44

45 SVD - Interpretation #2 ~ x x Lipari 2010 (C) 2010, C. Faloutsos 45

46 SVD - Interpretation #2 ~ Lipari 2010 (C) 2010, C. Faloutsos 46

47 SVD - Interpretation #2 Exactly equivalent: spectral decomposition of the matrix: A = U x Λ x V^T Lipari 2010 (C) 2010, C. Faloutsos 47

48 SVD - Interpretation #2 Exactly equivalent: spectral decomposition of the matrix: A = [u1 u2 ...] x diag(λ1, λ2, ...) x [v1 v2 ...]^T Lipari 2010 (C) 2010, C. Faloutsos 48

49 SVD - Interpretation #2 Exactly equivalent: spectral decomposition of the matrix: A [n x m] = λ1 u1 v1^T + λ2 u2 v2^T + ... Lipari 2010 (C) 2010, C. Faloutsos 49

50 SVD - Interpretation #2 Exactly equivalent: spectral decomposition of the matrix: A [n x m] = λ1 u1 v1^T + λ2 u2 v2^T + ... (r terms; each u_i is n x 1, each v_i^T is 1 x m) Lipari 2010 (C) 2010, C. Faloutsos 50

51 SVD - Interpretation #2 approximation / dim. reduction: by keeping the first few terms (Q: how many?) A [n x m] = λ1 u1 v1^T + λ2 u2 v2^T + ... assume: λ1 >= λ2 >= ... Lipari 2010 (C) 2010, C. Faloutsos 51

52 SVD - Interpretation #2 A (heuristic - [Fukunaga]): keep 80-90% of the energy (= sum of squares of the λi's) A [n x m] = λ1 u1 v1^T + λ2 u2 v2^T + ... assume: λ1 >= λ2 >= ... Lipari 2010 (C) 2010, C. Faloutsos 52
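A sketch of the energy heuristic (the 80-90% threshold is the slide's rule of thumb; the singular values below are made up):

```python
import numpy as np

def rank_for_energy(s, target=0.9):
    """Smallest k such that the first k singular values keep
    `target` of the total energy (sum of squares)."""
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, target) + 1)

s = np.array([10.0, 6.0, 2.0, 0.5, 0.1])   # hypothetical spectrum
print(rank_for_energy(s, 0.9))             # -> 2: two concepts keep ~90% of the energy
```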

53 SVD - Detailed outline Motivation Definition - properties Interpretation #1: documents/terms/concepts #2: dim. reduction #3: picking non-zero, rectangular blobs Complexity Case studies Additional properties Lipari 2010 (C) 2010, C. Faloutsos 53

54 SVD - Interpretation #3 finds non-zero blobs in a data matrix = x x Lipari 2010 (C) 2010, C. Faloutsos 54

55 SVD - Interpretation #3 finds non-zero blobs in a data matrix = x x Lipari 2010 (C) 2010, C. Faloutsos 55

56 SVD - Interpretation #3 finds non-zero blobs in a data matrix = communities (bi-partite cores, here) Row 1 Row 4 Row 5 Row 7 Col 1 Col 3 Col 4 Lipari 2010 (C) 2010, C. Faloutsos 56

57 SVD - Detailed outline Motivation Definition - properties Interpretation Complexity Case studies Additional properties Lipari 2010 (C) 2010, C. Faloutsos 57

58 SVD - Complexity O( n * m * m) or O( n * n * m) (whichever is less) less work, if we just want singular values or if we want first k singular vectors or if the matrix is sparse [Berry] Implemented: in any linear algebra package (LINPACK, matlab, Splus, mathematica...) Lipari 2010 (C) 2010, C. Faloutsos 58
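For sparse matrices, when only the first k singular vectors are needed, a truncated solver is the usual choice; a sketch using scipy (the matrix here is random and purely illustrative):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# a large, very sparse matrix standing in for a document-term matrix
A = sparse_random(10_000, 1_000, density=0.001, format='csr', random_state=0)

k = 10                                   # only the k strongest singular triplets
U, s, Vt = svds(A, k=k)                  # svds returns singular values in ascending order
order = np.argsort(s)[::-1]              # re-sort them in decreasing order
U, s, Vt = U[:, order], s[order], Vt[order, :]
print(s.round(4))
```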

59 SVD - conclusions so far SVD: A= U Λ V T : unique (*) U: document-to-concept similarities V: term-to-concept similarities Λ: strength of each concept dim. reduction: keep the first few strongest singular values (80-90% of energy ) SVD: picks up linear correlations SVD: picks up non-zero blobs Lipari 2010 (C) 2010, C. Faloutsos 59

60 SVD - Detailed outline Motivation Definition - properties Interpretation Complexity SVD properties Case studies Conclusions Lipari 2010 (C) 2010, C. Faloutsos 60

61 SVD - Other properties - summary can produce orthogonal basis (obvious) (who cares?) can solve over- and under-determined linear problems (see C(1) property) can compute fixed points (= steady state prob. in Markov chains ) (see C(4) property) Lipari 2010 (C) 2010, C. Faloutsos 61

62 SVD -outline of properties (A): obvious (B): less obvious (C): least obvious (and most powerful!) Lipari 2010 (C) 2010, C. Faloutsos 62

63 Properties - by defn.: A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] A(1): U^T [r x n] U [n x r] = I [r x r] (identity matrix) A(2): V^T [r x m] V [m x r] = I [r x r] A(3): Λ^k = diag(λ1^k, λ2^k, ..., λr^k) (k: ANY real number) A(4): A^T = V Λ U^T Lipari 2010 (C) 2010, C. Faloutsos 63

64 Less obvious properties A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] B(1): A [n x m] (A^T) [m x n] = ?? Lipari 2010 (C) 2010, C. Faloutsos 64

65 Less obvious properties A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] B(1): A [n x m] (A^T) [m x n] = U Λ^2 U^T symmetric; Intuition? Lipari 2010 (C) 2010, C. Faloutsos 65

66 Less obvious properties A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] B(1): A [n x m] (A^T) [m x n] = U Λ^2 U^T symmetric; Intuition? document-to-document similarity matrix B(2): symmetrically, for V: (A^T) [m x n] A [n x m] = V Λ^2 V^T Intuition? Lipari 2010 (C) 2010, C. Faloutsos 66

67 Less obvious properties A: term-to-term similarity matrix B(3): ((A^T) [m x n] A [n x m])^k = V Λ^(2k) V^T and B(4): (A^T A)^k ~ v1 λ1^(2k) v1^T for k>>1 where v1: [m x 1] first column (singular-vector) of V λ1: strongest singular value Lipari 2010 (C) 2010, C. Faloutsos 67

68 Less obvious properties B(4): (A^T A)^k ~ v1 λ1^(2k) v1^T for k>>1 B(5): (A^T A)^k v ~ (constant) v1 ie., for (almost) any v, it converges to a vector parallel to v1 Thus, useful to compute the first singular vector/value (as well as the next ones, too...) Lipari 2010 (C) 2010, C. Faloutsos 68

69 Less obvious properties - repeated: A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] B(1): A [n x m] (A^T) [m x n] = U Λ^2 U^T B(2): (A^T) [m x n] A [n x m] = V Λ^2 V^T B(3): ((A^T) [m x n] A [n x m])^k = V Λ^(2k) V^T B(4): (A^T A)^k ~ v1 λ1^(2k) v1^T B(5): (A^T A)^k v ~ (constant) v1 Lipari 2010 (C) 2010, C. Faloutsos 69
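Property B(5) is exactly the power method; a minimal sketch that uses it to recover the strongest singular vector and value (the small matrix is made up):

```python
import numpy as np

def first_singular_vector(A, iters=100, seed=0):
    """Power iteration on A^T A (property B(5)): for almost any start,
    the iterate converges to v1, the strongest right singular vector."""
    v = np.random.default_rng(seed).standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # multiply by A^T A
        v /= np.linalg.norm(v)     # re-normalize
    sigma1 = np.linalg.norm(A @ v) # corresponding strongest singular value
    return v, sigma1

A = np.array([[2., 0.], [0., 1.], [1., 1.]])
v1, s1 = first_singular_vector(A)
print(s1, np.linalg.svd(A, compute_uv=False)[0])   # the two values should match
```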

70 Least obvious properties - cont'd A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] C(2): A [n x m] v1 [m x 1] = λ1 u1 [n x 1] where v1, u1 are the first (column) vectors of V, U (v1 == right-singular-vector) C(3): symmetrically: u1^T A = λ1 v1^T u1 == left-singular-vector Therefore: Lipari 2010 (C) 2010, C. Faloutsos 70

71 Least obvious properties - cont'd A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] C(4): A^T A v1 = λ1^2 v1 (fixed point - the dfn of eigenvector for a symmetric matrix) Lipari 2010 (C) 2010, C. Faloutsos 71

72 Least obvious properties - altogether A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] C(1): if A [n x m] x [m x 1] = b [n x 1] then x0 = V Λ^(-1) U^T b: shortest, actual or least-squares solution C(2): A [n x m] v1 [m x 1] = λ1 u1 [n x 1] C(3): u1^T A = λ1 v1^T C(4): A^T A v1 = λ1^2 v1 Lipari 2010 (C) 2010, C. Faloutsos 72

73 Properties - conclusions A(0): A [n x m] = U [n x r] Λ [r x r] V^T [r x m] B(5): (A^T A)^k v ~ (constant) v1 C(1): if A [n x m] x [m x 1] = b [n x 1] then x0 = V Λ^(-1) U^T b: shortest, actual or least-squares solution C(4): A^T A v1 = λ1^2 v1 Lipari 2010 (C) 2010, C. Faloutsos 73

74 SVD - detailed outline... SVD properties case studies Kleinberg's algorithm Google's algorithm Conclusions Lipari 2010 (C) 2010, C. Faloutsos 74

75 Kleinberg's algo (HITS) Kleinberg, Jon (1998). Authoritative sources in a hyperlinked environment. Proc. 9th ACM-SIAM Symposium on Discrete Algorithms. Lipari 2010 (C) 2010, C. Faloutsos 75

76 Recall: problem dfn Given a graph (eg., web pages containing the desirable query word) Q: Which node is the most important? Lipari 2010 (C) 2010, C. Faloutsos 76

77 Kleinberg's algorithm Problem dfn: given the web and a query find the most authoritative web pages for this query Step 0: find all pages containing the query terms Step 1: expand by one move forward and backward Lipari 2010 (C) 2010, C. Faloutsos 77

78 Kleinberg's algorithm Step 1: expand by one move forward and backward Lipari 2010 (C) 2010, C. Faloutsos 78

79 Kleinberg's algorithm on the resulting graph, give a high score (= 'authorities') to nodes that many important nodes point to; give a high importance score ('hubs') to nodes that point to good authorities Lipari 2010 (C) 2010, C. Faloutsos 79

80 Kleinberg's algorithm - observations recursive definition! each node (say, the i-th node) has both an authoritativeness score a_i and a hubness score h_i Lipari 2010 (C) 2010, C. Faloutsos 80

81 Kleinberg's algorithm Let E be the set of edges and A be the adjacency matrix: the (i,j) entry is 1 if the edge from i to j exists Let h and a be [n x 1] vectors with the hubness and authoritativeness scores. Then: Lipari 2010 (C) 2010, C. Faloutsos 81

82 Kleinberg's algorithm Then: a_i = h_k + h_l + h_m (for the nodes k, l, m that point to i) that is a_i = Sum (h_j) over all j such that the edge (j,i) exists or a = A^T h Lipari 2010 (C) 2010, C. Faloutsos 82

83 Kleinberg's algorithm symmetrically, for the 'hubness': h_i = a_n + a_p + a_q (for the nodes n, p, q that i points to) that is h_i = Sum (a_j) over all j such that the edge (i,j) exists or h = A a Lipari 2010 (C) 2010, C. Faloutsos 83

84 Kleinberg's algorithm In conclusion, we want vectors h and a such that: h = A a a = A^T h Recall properties: C(2): A [n x m] v1 [m x 1] = λ1 u1 [n x 1] C(3): u1^T A = λ1 v1^T Lipari 2010 (C) 2010, C. Faloutsos 84

85 Kleinberg's algorithm In short, the solutions to h = A a a = A^T h are the left- and right- singular-vectors of the adjacency matrix A. Starting from random a and iterating, we'll eventually converge (Q: to which of all the singular-vectors? why?) Lipari 2010 (C) 2010, C. Faloutsos 85

86 Kleinberg's algorithm (Q: to which of all the singular-vectors? why?) A: to the ones of the strongest singular-value, because of property B(5): B(5): (A^T A)^k v ~ (constant) v1 Lipari 2010 (C) 2010, C. Faloutsos 86
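A minimal sketch of the HITS iteration on a tiny hypothetical web graph; normalizing after each step, h and a converge to the strongest left and right singular vectors, as the slides argue:

```python
import numpy as np

def hits(A, iters=100):
    """Iterate h = A a, a = A^T h with re-normalization."""
    n = A.shape[0]
    a, h = np.ones(n), np.ones(n)
    for _ in range(iters):
        a = A.T @ h
        a /= np.linalg.norm(a)
        h = A @ a
        h /= np.linalg.norm(h)
    return h, a                      # hub scores, authority scores

# hypothetical adjacency matrix: A[i, j] = 1 if the edge i -> j exists
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
h, a = hits(A)
print("hubs:", h.round(3), " authorities:", a.round(3))
```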

87 Kleinberg's algorithm - results Eg., for the query 'java': java.sun.com ('the java developer') Lipari 2010 (C) 2010, C. Faloutsos 87

88 Kleinberg's algorithm - discussion 'authority' score can be used to find 'similar pages' (how?) Lipari 2010 (C) 2010, C. Faloutsos 88

89 SVD - detailed outline... Complexity SVD properties Case studies Kleinberg's algorithm (HITS) Google's algorithm Conclusions Lipari 2010 (C) 2010, C. Faloutsos 89

90 PageRank (google) Brin, Sergey and Lawrence Page (1998). Anatomy of a Large-Scale Hypertextual Web Search Engine. 7th Intl World Wide Web Conf. Larry Page Sergey Brin Lipari 2010 (C) 2010, C. Faloutsos 90

91 Problem: PageRank Given a directed graph, find its most interesting/central node A node is important, if it is connected with important nodes (recursive, but OK!) Lipari 2010 (C) 2010, C. Faloutsos 91

92 Problem: PageRank - solution Given a directed graph, find its most interesting/central node Proposed solution: Random walk; spot most popular node (-> steady state prob. (ssp)) A node has high ssp, if it is connected with high ssp nodes (recursive, but OK!) Lipari 2010 (C) 2010, C. Faloutsos 92

93 (Simplified) PageRank algorithm Let A be the adjacency matrix; let B be the transition matrix: transpose, column-normalized - then: Lipari 2010 (C) 2010, C. Faloutsos 93

94 (Simplified) PageRank algorithm B p = p Lipari 2010 (C) 2010, C. Faloutsos 94

95 Definitions A: Adjacency matrix (from-to) D: Degree matrix = diag(d1, d2, ..., dn) B: Transition matrix: to-from, column normalized B = A^T D^(-1) Lipari 2010 (C) 2010, C. Faloutsos 95
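A sketch of these definitions in numpy (tiny made-up graph; assumes every node has at least one out-edge so that D is invertible):

```python
import numpy as np

# hypothetical from-to adjacency matrix: row i lists the out-edges of node i
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = A.sum(axis=1)                    # out-degrees (diagonal of D)
B = A.T @ np.diag(1.0 / d)           # B = A^T D^(-1): to-from, column-normalized
print(B.sum(axis=0))                 # every column sums to 1
```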

96 (Simplified) PageRank algorithm B p = 1 * p thus, p is the eigenvector that corresponds to the highest eigenvalue (=1, since the matrix is column-normalized) Why does such a p exist? p exists if B is n x n, nonnegative, irreducible [Perron-Frobenius theorem] Lipari 2010 (C) 2010, C. Faloutsos 96

97 (Simplified) PageRank algorithm In short: imagine a particle randomly moving along the edges compute its steady-state probabilities (ssp) Full version of algo: with occasional random jumps Why? To make the matrix irreducible Lipari 2010 (C) 2010, C. Faloutsos 97

98 Full Algorithm With probability 1-c, fly-out to a random node Then, we have p = c B p + (1-c)/n 1 => p = (1-c)/n [I - c B]^(-1) 1 Lipari 2010 (C) 2010, C. Faloutsos 98
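A sketch of the full algorithm, both as power iteration and via the closed form on the slide (the graph and c = 0.85 are illustrative choices):

```python
import numpy as np

def pagerank(B, c=0.85, iters=100):
    """Power iteration for p = c B p + (1-c)/n 1,
    with B the column-normalized to-from transition matrix."""
    n = B.shape[0]
    p = np.ones(n) / n
    for _ in range(iters):
        p = c * (B @ p) + (1 - c) / n
    return p

A = np.array([[0, 1, 1, 0],          # hypothetical from-to adjacency matrix
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
B = A.T @ np.diag(1.0 / A.sum(axis=1))

p_iter = pagerank(B)
# closed form: p = (1-c)/n [I - c B]^(-1) 1
n, c = B.shape[0], 0.85
p_direct = (1 - c) / n * np.linalg.solve(np.eye(n) - c * B, np.ones(n))
print(p_iter.round(4), np.allclose(p_iter, p_direct, atol=1e-6))
```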

99 Alternative notation M: Modified transition matrix M = c B + (1-c)/n 1 1^T Then p = M p That is: the steady state probabilities = PageRank scores form the first eigenvector of the modified transition matrix Lipari 2010 (C) 2010, C. Faloutsos 99

100 Parenthesis: intuition behind eigenvectors Lipari 2010 (C) 2010, C. Faloutsos 100

101 Formal definition If A is a (n x n) square matrix (λ, x) is an eigenvalue/eigenvector pair of A if A x = λ x CLOSELY related to singular values: Lipari 2010 (C) 2010, C. Faloutsos 101

102 Property #1: Eigen- vs singular-values if B [n x m] = U [n x r] Λ [r x r] (V [m x r])^T then A = (B^T B) is symmetric and C(4): B^T B v_i = λ_i^2 v_i ie., v1, v2, ...: eigenvectors of A = (B^T B) Lipari 2010 (C) 2010, C. Faloutsos 102
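A quick numerical check of Property #1 (random matrix, for illustration): the eigenvalues of B^T B are the squared singular values of B.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 4))
s = np.linalg.svd(B, compute_uv=False)        # singular values of B

evals = np.linalg.eigvalsh(B.T @ B)           # eigenvalues of the symmetric B^T B
print(np.allclose(np.sort(evals)[::-1], s ** 2))
```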

103 Property #2 If A [nxn] is a real, symmetric matrix Then it has n real eigenvalues (if A is not symmetric, some eigenvalues may be complex) Lipari 2010 (C) 2010, C. Faloutsos 103

104 Property #3 If A [nxn] is a real, symmetric matrix Then it has n real eigenvalues And they agree with its n singular values, except possibly for the sign Lipari 2010 (C) 2010, C. Faloutsos 104

105 Intuition A as a vector transformation: x' = A x Lipari 2010 (C) 2010, C. Faloutsos 105

106 Intuition By defn., eigenvectors remain parallel to themselves ('fixed points'): A v1 = λ1 v1 Lipari 2010 (C) 2010, C. Faloutsos 106

107 Usually, fast: Convergence Lipari 2010 (C) 2010, C. Faloutsos 107

108 Usually, fast: Convergence Lipari 2010 (C) 2010, C. Faloutsos 108

109 Convergence Usually, fast: depends on the ratio λ1 : λ2 Lipari 2010 (C) 2010, C. Faloutsos 109

110 Kleinberg/google - conclusions SVD helps in graph analysis: hub/authority scores: strongest left- and right-singular-vectors of the adjacency matrix random walk on a graph: steady state probabilities are given by the strongest eigenvector of the (modified) transition matrix Lipari 2010 (C) 2010, C. Faloutsos 110

111 Conclusions SVD: a valuable tool given a document-term matrix, it finds concepts (LSI)... and can find fixed-points or steady-state probabilities (google/ Kleinberg/ Markov Chains) Lipari 2010 (C) 2010, C. Faloutsos 111

112 Conclusions cont'd (We didn't discuss/elaborate, but, SVD... can reduce dimensionality (KL)... and can find rules (PCA; RatioRules)... and can solve optimally over- and under-constrained linear systems (least squares / query feedbacks) Lipari 2010 (C) 2010, C. Faloutsos 112

113 References Berry, Michael: Brin, S. and L. Page (1998). Anatomy of a Large-Scale Hypertextual Web Search Engine. 7th Intl World Wide Web Conf. Lipari 2010 (C) 2010, C. Faloutsos 113

114 References Christos Faloutsos, Searching Multimedia Databases by Content, Springer (App. D). Fukunaga, K. (1990). Introduction to Statistical Pattern Recognition, Academic Press. I.T. Jolliffe, Principal Component Analysis, Springer, 2002 (2nd ed.). Lipari 2010 (C) 2010, C. Faloutsos 114

115 References cont'd Kleinberg, J. (1998). Authoritative sources in a hyperlinked environment. Proc. 9th ACM-SIAM Symposium on Discrete Algorithms. Press, W. H., S. A. Teukolsky, et al. (1992). Numerical Recipes in C, Cambridge University Press. Lipari 2010 (C) 2010, C. Faloutsos 115

116 Outline Introduction Motivation Task 1: Node importance Task 2: Recommendations & proximity Task 3: Connection sub-graphs Conclusions Lipari 2010 (C) 2010, C. Faloutsos 116

117 Acknowledgement: Most of the foils in Task 2 are by Hanghang TONG Lipari 2010 (C) 2010, C. Faloutsos 117

118 Detailed outline Problem dfn and motivation Solution: Random walk with restarts Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 118

119 Motivation: Link Prediction A? B Should we introduce Mr. A to Mr. B? Lipari 2010 (C) 2010, C. Faloutsos 119

120 Motivation - recommendations customers Products / movies smith Terminator 2?? Lipari 2010 (C) 2010, C. Faloutsos 120

121 Answer: proximity yes, if A and B are close yes, if smith and terminator 2 are close QUESTIONS in this part: - How to measure closeness /proximity? - How to do it quickly? - What else can we do, given proximity scores? Lipari 2010 (C) 2010, C. Faloutsos 121

122 How close is A to B? a.k.a Relevance, Closeness, Similarity Lipari 2010 (C) 2010, C. Faloutsos 122

123 Why is it useful? Recommendation And many more Image captioning [Pan+] Conn. / CenterPiece subgraphs [Faloutsos+], [Tong+], [Koren +] and Link prediction [Liben-Nowell+], [Tong+] Ranking [Haveliwala], [Chakrabarti+] Management [Minkov+] Neighborhood Formulation [Sun+] Pattern matching [Tong+] Collaborative Filtering [Fouss+] Lipari 2010 (C) 2010, C. Faloutsos 123

124 Automatic Image Captioning [figure: Region - Image - Keyword graph; test image; keywords: Sea Sun Sky Wave Cat Forest Tiger Grass] Q: How to assign keywords to the test image? A: Proximity! [Pan+ 2004] Lipari 2010 (C) 2010, C. Faloutsos 124

125 Center-Piece Subgraph(CePS) Input Output CePS guy Original Graph CePS Q: How to find hub for the black nodes? A: Proximity! [Tong+ KDD 2006] Lipari 2010 (C) 2010, C. Faloutsos 125

126 Detailed outline Problem dfn and motivation Solution: Random walk with restarts Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 126

127 How close is A to B? Should be close, if they have - many, - short - heavy paths Lipari 2010 (C) 2010, C. Faloutsos 127

128 Why not shortest path? Some 'bad' proximities A: pizza delivery guy problem Lipari 2010 (C) 2010, C. Faloutsos 128

129 Why not max. netflow? Some 'bad' proximities A: No penalty for long paths Lipari 2010 (C) 2010, C. Faloutsos 129

130 What is a ``good Proximity? Multiple Connections Quality of connection Direct & In-directed Conns Length, Degree, Weight Lipari 2010 (C) 2010, C. Faloutsos 130

131 Random walk with restart [Haveliwala 02] Lipari 2010 (C) 2010, C. Faloutsos 131

132 Random walk with restart Nearby nodes, higher scores More red, more relevant [figure: graph with per-node ranking vector] Lipari 2010 (C) 2010, C. Faloutsos 132

133 Why is RWR a good score? (adjacency matrix; c: damping factor) The score sums contributions from all paths from i to j: paths of length 1, length 2, length 3, ..., with longer paths discounted geometrically by the damping factor c. Lipari 2010 (C) 2010, C. Faloutsos 133

134 Detailed outline Problem dfn and motivation Solution: Random walk with restarts variants Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 134

135 Variant: escape probability Define Random Walk (RW) on the graph Esc_Prob(CMU -> Paris) = Prob(starting at CMU, reaching Paris before returning to CMU) Esc_Prob = Pr(smile before cry) Lipari 2010 (C) 2010, C. Faloutsos 135

136 Other Variants Other measures by RWs: Commute Time/Hitting Time [Fouss+], SimRank [Jeh+] Equivalence of Random Walks - Electric Networks: EC [Doyle+]; SAEC [Faloutsos+]; CFEC [Koren+] Spring Systems: Katz [Katz], [Huang+], [Scholkopf+] Matrix-Forest-based Alg. [Chebotarev+] Lipari 2010 (C) 2010, C. Faloutsos 136

137 Other Variants Other measures by RWs: Commute Time/Hitting Time [Fouss+], SimRank [Jeh+] Equivalence of Random Walks - Electric Networks: EC [Doyle+]; SAEC [Faloutsos+]; CFEC [Koren+] Spring Systems: Katz [Katz], [Huang+], [Scholkopf+] Matrix-Forest-based Alg. [Chebotarev+] All of these are related to, or similar to, random walk with restart! Lipari 2010 (C) 2010, C. Faloutsos 137

138 Map of proximity measurements [diagram relating RWR, esc_prob, Katz, Esc_Prob + Sink, Hitting Time / Commute Time, Effective Conductance, Harmonic Func.; physical models: spring system, voltage = position; mathematical tools: regularized unconstrained / constrained quadratic optimization] Lipari 2010 (C) 2010, C. Faloutsos 138

139 Notice: Asymmetry (even in undirected graphs) C -> A: high A -> C: low Lipari 2010 (C) 2010, C. Faloutsos 139

140 Summary of Proximity Definitions Goal: Summarize multiple relationships Solutions Basic: Random Walk with Restarts [Haveliwala 02] [Pan+ 2004] [Sun+ 2006] [Tong+ 2006] Properties: Asymmetry [Koren+ 2006] [Tong+ 2007] [Tong+ 2008] Variants: Esc_Prob and many others [Faloutsos+ 2004] [Koren+ 2006] [Tong+ 2007] Lipari 2010 (C) 2010, C. Faloutsos 140

141 Detailed outline Problem dfn and motivation Solution: Random walk with restarts Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 141

142 Reminder: PageRank With probability 1-c, fly-out to a random node Then, we have p = c B p + (1-c)/n 1 => p = (1-c)/n [I - c B]^(-1) 1 Lipari 2010 (C) 2010, C. Faloutsos 142

143 The only difference from PageRank: in p = c B p + (1-c)/n 1, the uniform restart vector 1/n is replaced by the starting vector of the query node, i.e., p = c B p + (1-c) e_i (p: ranking vector; B: normalized adjacency matrix; e_i: restart / starting vector) Lipari 2010 (C) 2010, C. Faloutsos 143

144 Computing RWR: p = c B p + (1-c) e_i, with p: [n x 1] ranking vector, B: [n x n] normalized adjacency matrix, e_i: [n x 1] starting vector for query node i Lipari 2010 (C) 2010, C. Faloutsos 144
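A minimal sketch of computing RWR by iteration, mirroring the formula above (tiny undirected toy graph; c = 0.9 is used here just for illustration):

```python
import numpy as np

def rwr(B, query, c=0.9, iters=200):
    """Random walk with restart: iterate r = c B r + (1-c) e_query."""
    n = B.shape[0]
    e = np.zeros(n)
    e[query] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = c * (B @ r) + (1 - c) * e
    return r                          # proximity of every node to the query node

A = np.array([[0, 1, 1, 0, 0],        # hypothetical undirected graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
B = A / A.sum(axis=0)                 # column-normalized transition matrix
print(rwr(B, query=0).round(3))       # nodes near node 0 get the higher scores
```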

145 Q: Given query node i, how do we solve it? Lipari 2010 (C) 2010, C. Faloutsos 145

146 On-the-fly: No pre-computation / light storage Slow on-line response: O(mE) Lipari 2010 (C) 2010, C. Faloutsos 146

147 PreCompute R: Q c x Q Lipari 2010 (C) 2010, C. Faloutsos 147

148 PreCompute: Fast on-line response Heavy pre-computation/storage cost: O(n^3) time, O(n^2) storage Lipari 2010 (C) 2010, C. Faloutsos 148
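A sketch of the pre-compute option: invert once (O(n^3)), store the n x n result (O(n^2)), and answer any query by reading one column (matrix and c are illustrative):

```python
import numpy as np

def precompute_rwr(B, c=0.9):
    """Pre-compute Q = (1-c) (I - c B)^(-1); afterwards the RWR vector
    for query node i is simply the i-th column of Q."""
    n = B.shape[0]
    return (1 - c) * np.linalg.inv(np.eye(n) - c * B)

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
B = A / A.sum(axis=0)
Q = precompute_rwr(B)
print(Q[:, 0].round(3))               # on-line answer for query node 0
```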

149 Q: How to Balance? Off-line On-line Lipari 2010 (C) 2010, C. Faloutsos 149

150 How to balance? Idea ('B-Lin'): Break into communities Pre-compute all, within a community Adjust (with Sherman-Morrison) for bridge edges H. Tong, C. Faloutsos, & J.Y. Pan. Fast Random Walk with Restart and Its Applications. ICDM, 2006. Lipari 2010 (C) 2010, C. Faloutsos 150

151 Detailed outline Problem dfn and motivation Solution: Random walk with restarts Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 151

152 gcap: Automatic Image Captioning { Sea Sun Sky Wave } { Cat Forest Grass Tiger } Q: {?, ?, ?} A: Proximity! [Pan+ KDD2004] Lipari 2010 (C) 2010, C. Faloutsos 152

153 [figure: Region - Image - Keyword graph, with the test image linked to its regions; keywords: Sea Sun Sky Wave Cat Forest Tiger Grass] Lipari 2010 (C) 2010, C. Faloutsos 153

154 [figure: the same graph; by proximity, the test image is captioned {Grass, Forest, Cat, Tiger}] Lipari 2010 (C) 2010, C. Faloutsos 154

155 C-DEM (Screen-shot) Lipari 2010 (C) 2010, C. Faloutsos 155

156 C-DEM: Multi-Modal Query System for Drosophila Embryo Databases [Fan+ VLDB 2008] Lipari 2010 (C) 2010, C. Faloutsos 156

157 Detailed outline Problem dfn and motivation Solution: Random walk with restarts Efficient computation Case study: image auto-captioning Extensions: bi-partite graphs; tracking Conclusions Lipari 2010 (C) 2010, C. Faloutsos 157

158 Problem: update E edges changed Involves n authors, m conferences Lipari 2010 (C) 2010, C. Faloutsos 158

159 Solution: Use Sherman-Morrison Lemma to quickly update the inverse matrix Lipari 2010 (C) 2010, C. Faloutsos 159
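A sketch of the Sherman-Morrison lemma itself, which is what makes the cheap update possible: given M^(-1), the inverse after a rank-1 change M + u v^T can be patched without re-inverting (random data; assumes 1 + v^T M^(-1) u != 0):

```python
import numpy as np

def sherman_morrison(M_inv, u, v):
    """Return (M + u v^T)^(-1) from M^(-1) in O(n^2) operations."""
    Mu = M_inv @ u                    # M^(-1) u
    vM = v @ M_inv                    # v^T M^(-1)
    return M_inv - np.outer(Mu, vM) / (1.0 + v @ Mu)

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # well-conditioned test matrix
u, v = rng.standard_normal(5), rng.standard_normal(5)

updated = sherman_morrison(np.linalg.inv(M), u, v)
print(np.allclose(updated, np.linalg.inv(M + np.outer(u, v))))
```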

160 Fast-Single-Update: 176x speedup, 40x speedup [plot: log(time) in seconds per dataset; 'our method' highlighted] Lipari 2010 (C) 2010, C. Faloutsos 160

161 ptrack: Philip S. Yu's Top-5 conferences up to each year [table columns: ICDE ICDCS SIGMETRICS PDIS VLDB | CIKM ICDCS ICDE SIGMETRICS ICMCS | KDD SIGMOD ICDM CIKM ICDCS | ICDM KDD ICDE SDM VLDB; research areas over time: Databases, Performance, Distributed Sys., Databases, Data Mining] DBLP: (Au. x Conf.) - 400k authors, 3.5k confs, 20 yrs Lipari 2010 (C) 2010, C. Faloutsos 161

162 ptrack: Philip S. Yu's Top-5 conferences up to each year [table columns: ICDE ICDCS SIGMETRICS PDIS VLDB | CIKM ICDCS ICDE SIGMETRICS ICMCS | KDD SIGMOD ICDM CIKM ICDCS | ICDM KDD ICDE SDM VLDB; research areas over time: Databases, Performance, Distributed Sys., Databases, Data Mining] DBLP: (Au. x Conf.) - 400k authors, 3.5k confs, 20 yrs Lipari 2010 (C) 2010, C. Faloutsos 162

163 KDD's rank w.r.t. VLDB over the years: Data Mining and Databases are getting closer & closer [plot: proximity rank vs. year] Lipari 2010 (C) 2010, C. Faloutsos 163

164 ctrack: 10 most influential authors in the NIPS community up to each year T. Sejnowski M. Jordan Author-paper bipartite graph from NIPS: k papers, 2037 authors, spread over 13 years Lipari 2010 (C) 2010, C. Faloutsos 164

165 Conclusions - Take-home messages Proximity Definitions RWR and a lot of variants Computation Sherman Morrison Lemma Fast Incremental Computation Applications Recommendations; auto-captioning; tracking Center-piece Subgraphs (next) management; anomaly detection, Lipari 2010 (C) 2010, C. Faloutsos 165

166 References L. Page, S. Brin, R. Motwani, & T. Winograd. (1998) The PageRank Citation Ranking: Bringing Order to the Web. Technical report, Stanford Library. T.H. Haveliwala. (2002) Topic-Sensitive PageRank. In WWW, 2002. J.Y. Pan, H.J. Yang, C. Faloutsos & P. Duygulu. (2004) Automatic multimedia cross-modal correlation discovery. In KDD, 2004. Lipari 2010 (C) 2010, C. Faloutsos 166

167 References C. Faloutsos, K. S. McCurley & A. Tomkins. (2004) Fast discovery of connection subgraphs. In KDD, 2004. J. Sun, H. Qu, D. Chakrabarti & C. Faloutsos. (2005) Neighborhood Formation and Anomaly Detection in Bipartite Graphs. In ICDM, 2005. W. Cohen. (2007) Graph Walks and Graphical Models. Draft. Lipari 2010 (C) 2010, C. Faloutsos 167

168 References P. Doyle & J. Snell. (1984) Random Walks and Electric Networks, volume 22. Mathematical Association of America, New York. Y. Koren, S. C. North, and C. Volinsky. (2006) Measuring and extracting proximity in networks. In KDD, 2006. A. Agarwal, S. Chakrabarti & S. Aggarwal. (2006) Learning to rank networked entities. In KDD, 14-23, 2006. Lipari 2010 (C) 2010, C. Faloutsos 168

169 References S. Chakrabarti. (2007) Dynamic personalized pagerank in entity-relation graphs. In WWW, 2007. F. Fouss, A. Pirotte, J.-M. Renders, & M. Saerens. (2007) Random-Walk Computation of Similarities between Nodes of a Graph with Application to Collaborative Recommendation. IEEE Trans. Knowl. Data Eng. 19(3). Lipari 2010 (C) 2010, C. Faloutsos 169

170 References H. Tong & C. Faloutsos. (2006) Center-piece subgraphs: problem definition and fast solutions. In KDD, 2006. H. Tong, C. Faloutsos, & J.Y. Pan. (2006) Fast Random Walk with Restart and Its Applications. In ICDM, 2006. H. Tong, Y. Koren, & C. Faloutsos. (2007) Fast direction-aware proximity for graph mining. In KDD, 2007. Lipari 2010 (C) 2010, C. Faloutsos 170

171 References H. Tong, B. Gallagher, C. Faloutsos, & T. Eliassi-Rad. (2007) Fast best-effort pattern matching in large attributed graphs. In KDD, 2007. H. Tong, S. Papadimitriou, P.S. Yu & C. Faloutsos. (2008) Proximity Tracking on Time-Evolving Bipartite Graphs. SDM 2008. Lipari 2010 (C) 2010, C. Faloutsos 171

172 References B. Gallagher, H. Tong, T. Eliassi-Rad, C. Faloutsos. (2008) Using Ghost Edges for Classification in Sparsely Labeled Networks. KDD 2008. H. Tong, Y. Sakurai, T. Eliassi-Rad, and C. Faloutsos. (2008) Fast Mining of Complex Time-Stamped Events. CIKM 2008. H. Tong, H. Qu, and H. Jamjoom. (2008) Measuring Proximity on Graphs with Side Information. ICDM 2008. Lipari 2010 (C) 2010, C. Faloutsos 172

173 Resources For software, papers, and ppt of presentations cikm_tutorial.html For the CIKM 08 tutorial on graphs and proximity Again, thanks to Hanghang TONG for permission to use his foils in this part Lipari 2010 (C) 2010, C. Faloutsos 173

174 Outline Introduction Motivation Task 1: Node importance Task 2: Recommendations & proximity Task 3: Connection sub-graphs Conclusions Lipari 2010 (C) 2010, C. Faloutsos 174

175 Detailed outline Problem definition Solution Results H. Tong & C. Faloutsos. Center-piece subgraphs: problem definition and fast solutions. In KDD, 2006. Lipari 2010 (C) 2010, C. Faloutsos 175

176 Center-Piece Subgraph (CePS) Given Q query nodes Find the Center-piece (b) Input of CePS: Q query nodes; budget b; k_softAND number Applications: Social Networks, Law Enforcement, Gene Networks Lipari 2010 (C) 2010, C. Faloutsos 176

177 Challenges in Ceps Q1: How to measure importance? (Q2: How to extract connection subgraph? Q3: How to do it efficiently?) Lipari 2010 (C) 2010, C. Faloutsos 177

178 Challenges in Ceps Q1: How to measure importance? A: proximity but how to combine scores? (Q2: How to extract connection subgraph? Q3: How to do it efficiently?) Lipari 2010 (C) 2010, C. Faloutsos 178

179 AND: Combine Scores Q: How to combine scores? Lipari 2010 (C) 2010, C. Faloutsos 179

180 AND: Combine Scores Q: How to combine scores? A: Multiply = prob. 3 random particles coincide on node j Lipari 2010 (C) 2010, C. Faloutsos 180
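A sketch of the AND combination: with a pre-computed RWR matrix Q (as in the earlier sketch), multiply the proximity columns of the query nodes (graph and query set are made up):

```python
import numpy as np

def and_scores(Q, query_nodes):
    """Multiply the RWR proximity vectors of all query nodes:
    the probability that independent particles coincide on each node."""
    scores = np.ones(Q.shape[0])
    for q in query_nodes:
        scores *= Q[:, q]
    return scores

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
B = A / A.sum(axis=0)
Q = 0.1 * np.linalg.inv(np.eye(5) - 0.9 * B)      # (1-c)(I - cB)^(-1), c = 0.9
print(and_scores(Q, [0, 4]).round(4))             # high score = good meeting point
```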

181 Detailed outline Problem definition Solution Results Lipari 2010 (C) 2010, C. Faloutsos 181

182 Case Study: AND query Lipari 2010 (C) 2010, C. Faloutsos 182

183 Case Study: AND query Lipari 2010 (C) 2010, C. Faloutsos 183

184 Conclusions Proximity (e.g., w/ RWR) helps answer AND and k_softand queries Lipari 2010 (C) 2010, C. Faloutsos 184

185 Overall conclusions SVD: a powerful tool HITS/ pagerank (dimensionality reduction) Proximity: Random Walk with Restarts Recommendation systems Auto-captioning Center-Piece Subgraphs Lipari 2010 (C) 2010, C. Faloutsos 185


Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University. Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University http://www.mmds.org #1: C4.5 Decision Tree - Classification (61 votes) #2: K-Means - Clustering

More information

Wiki Definition. Reputation Systems I. Outline. Introduction to Reputations. Yury Lifshits. HITS, PageRank, SALSA, ebay, EigenTrust, VKontakte

Wiki Definition. Reputation Systems I. Outline. Introduction to Reputations. Yury Lifshits. HITS, PageRank, SALSA, ebay, EigenTrust, VKontakte Reputation Systems I HITS, PageRank, SALSA, ebay, EigenTrust, VKontakte Yury Lifshits Wiki Definition Reputation is the opinion (more technically, a social evaluation) of the public toward a person, a

More information

Web Ranking. Classification (manual, automatic) Link Analysis (today s lesson)

Web Ranking. Classification (manual, automatic) Link Analysis (today s lesson) Link Analysis Web Ranking Documents on the web are first ranked according to their relevance vrs the query Additional ranking methods are needed to cope with huge amount of information Additional ranking

More information

Lecture: Face Recognition and Feature Reduction

Lecture: Face Recognition and Feature Reduction Lecture: Face Recognition and Feature Reduction Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 Recap - Curse of dimensionality Assume 5000 points uniformly distributed in the

More information

Affine iterations on nonnegative vectors

Affine iterations on nonnegative vectors Affine iterations on nonnegative vectors V. Blondel L. Ninove P. Van Dooren CESAME Université catholique de Louvain Av. G. Lemaître 4 B-348 Louvain-la-Neuve Belgium Introduction In this paper we consider

More information

Finding central nodes in large networks

Finding central nodes in large networks Finding central nodes in large networks Nelly Litvak University of Twente Eindhoven University of Technology, The Netherlands Woudschoten Conference 2017 Complex networks Networks: Internet, WWW, social

More information

Idea: Select and rank nodes w.r.t. their relevance or interestingness in large networks.

Idea: Select and rank nodes w.r.t. their relevance or interestingness in large networks. Graph Databases and Linked Data So far: Obects are considered as iid (independent and identical diributed) the meaning of obects depends exclusively on the description obects do not influence each other

More information

Machine Learning. Principal Components Analysis. Le Song. CSE6740/CS7641/ISYE6740, Fall 2012

Machine Learning. Principal Components Analysis. Le Song. CSE6740/CS7641/ISYE6740, Fall 2012 Machine Learning CSE6740/CS7641/ISYE6740, Fall 2012 Principal Components Analysis Le Song Lecture 22, Nov 13, 2012 Based on slides from Eric Xing, CMU Reading: Chap 12.1, CB book 1 2 Factor or Component

More information

Data Mining Techniques

Data Mining Techniques Data Mining Techniques CS 622 - Section 2 - Spring 27 Pre-final Review Jan-Willem van de Meent Feedback Feedback https://goo.gl/er7eo8 (also posted on Piazza) Also, please fill out your TRACE evaluations!

More information

Markov Chains and Spectral Clustering

Markov Chains and Spectral Clustering Markov Chains and Spectral Clustering Ning Liu 1,2 and William J. Stewart 1,3 1 Department of Computer Science North Carolina State University, Raleigh, NC 27695-8206, USA. 2 nliu@ncsu.edu, 3 billy@ncsu.edu

More information

Information Retrieval

Information Retrieval Introduction to Information CS276: Information and Web Search Christopher Manning and Pandu Nayak Lecture 13: Latent Semantic Indexing Ch. 18 Today s topic Latent Semantic Indexing Term-document matrices

More information

Spectral Graph Theory Tools. Analysis of Complex Networks

Spectral Graph Theory Tools. Analysis of Complex Networks Spectral Graph Theory Tools for the Department of Mathematics and Computer Science Emory University Atlanta, GA 30322, USA Acknowledgments Christine Klymko (Emory) Ernesto Estrada (Strathclyde, UK) Support:

More information

0.1 Naive formulation of PageRank

0.1 Naive formulation of PageRank PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more

More information

Information Retrieval and Search. Web Linkage Mining. Miłosz Kadziński

Information Retrieval and Search. Web Linkage Mining. Miłosz Kadziński Web Linkage Analysis D24 D4 : Web Linkage Mining Miłosz Kadziński Institute of Computing Science Poznan University of Technology, Poland www.cs.put.poznan.pl/mkadzinski/wpi Web mining: Web Mining Discovery

More information

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS LINK ANALYSIS Dr. Gjergji Kasneci Introduction to Information Retrieval WS 2012-13 1 Outline Intro Basics of probability and information theory Retrieval models Retrieval evaluation Link analysis Models

More information

Complex Networks CSYS/MATH 303, Spring, Prof. Peter Dodds

Complex Networks CSYS/MATH 303, Spring, Prof. Peter Dodds Complex Networks CSYS/MATH 303, Spring, 2011 Prof. Peter Dodds Department of Mathematics & Statistics Center for Complex Systems Vermont Advanced Computing Center University of Vermont Licensed under the

More information

Learning from Labeled and Unlabeled Data: Semi-supervised Learning and Ranking p. 1/31

Learning from Labeled and Unlabeled Data: Semi-supervised Learning and Ranking p. 1/31 Learning from Labeled and Unlabeled Data: Semi-supervised Learning and Ranking Dengyong Zhou zhou@tuebingen.mpg.de Dept. Schölkopf, Max Planck Institute for Biological Cybernetics, Germany Learning from

More information

Fast Direction-Aware Proximity for Graph Mining

Fast Direction-Aware Proximity for Graph Mining Fast Direction-Aware Proximity for Graph Mining Hanghang Tong Carnegie Mellon University htong@cs.cmu.edu Yehuda Koren AT&T Labs Research yehuda@research.att.com Christos Faloutsos Carnegie Mellon University

More information

Algebraic Representation of Networks

Algebraic Representation of Networks Algebraic Representation of Networks 0 1 2 1 1 0 0 1 2 0 0 1 1 1 1 1 Hiroki Sayama sayama@binghamton.edu Describing networks with matrices (1) Adjacency matrix A matrix with rows and columns labeled by

More information

Introduction to Data Mining

Introduction to Data Mining Introduction to Data Mining Lecture #9: Link Analysis Seoul National University 1 In This Lecture Motivation for link analysis Pagerank: an important graph ranking algorithm Flow and random walk formulation

More information

Discovering Contexts and Contextual Outliers Using Random Walks in Graphs

Discovering Contexts and Contextual Outliers Using Random Walks in Graphs Discovering Contexts and Contextual Outliers Using Random Walks in Graphs Xiang Wang Ian Davidson Abstract Introduction. Motivation The identifying of contextual outliers allows the discovery of anomalous

More information

Outline : Multimedia Databases and Data Mining. Indexing - similarity search. Indexing - similarity search. C.

Outline : Multimedia Databases and Data Mining. Indexing - similarity search. Indexing - similarity search. C. Outline 15-826: Multimedia Databases and Data Mining Lecture #30: Conclusions C. Faloutsos Goal: Find similar / interesting things Intro to DB Indexing - similarity search Points Text Time sequences; images

More information

Machine learning for pervasive systems Classification in high-dimensional spaces

Machine learning for pervasive systems Classification in high-dimensional spaces Machine learning for pervasive systems Classification in high-dimensional spaces Department of Communications and Networking Aalto University, School of Electrical Engineering stephan.sigg@aalto.fi Version

More information

Introduction to Information Retrieval

Introduction to Information Retrieval Introduction to Information Retrieval http://informationretrieval.org IIR 18: Latent Semantic Indexing Hinrich Schütze Center for Information and Language Processing, University of Munich 2013-07-10 1/43

More information

The Google Markov Chain: convergence speed and eigenvalues

The Google Markov Chain: convergence speed and eigenvalues U.U.D.M. Project Report 2012:14 The Google Markov Chain: convergence speed and eigenvalues Fredrik Backåker Examensarbete i matematik, 15 hp Handledare och examinator: Jakob Björnberg Juni 2012 Department

More information

Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology

Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology M. Soleymani Fall 2014 Most slides have been adapted from: Profs. Manning, Nayak & Raghavan (CS-276,

More information

Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology

Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology Latent Semantic Indexing (LSI) CE-324: Modern Information Retrieval Sharif University of Technology M. Soleymani Fall 2016 Most slides have been adapted from: Profs. Manning, Nayak & Raghavan (CS-276,

More information