
LINK ANALYSIS
Dr. Gjergji Kasneci, Introduction to Information Retrieval, WS 2012-13

Outline
- Intro
- Basics of probability and information theory
- Retrieval models
- Retrieval evaluation
- Link analysis
  - Models for the web graph
  - HITS
  - PageRank
  - SALSA
- From queries to top-k results
- Social search

The web graph
Study of Broder et al.: Graph Structure in the Web. WWW 2000
- Web graph structure
- In-degree distribution (a similar distribution holds for the out-degree)

Possible models to explain the web graph
- Random graph (i.e., the Erdős–Rényi model): an edge between any two nodes is included with some fixed probability p, independently of all other edges.
- Preferential attachment (Barabási & Albert, Science 1999): new nodes in the network join with high probability those nodes that already have a high (in)degree ("rich get richer"). Specifically, a new node x with m links chooses node y to link to with probability proportional to the degree of y:
  P(x → y) = degree(y) / Σ_{y'} degree(y')
- Link copying & preferential attachment: new nodes copy the links of an existing node with probability α, or follow preferential attachment with probability (1 − α).

Node authority
How can we measure the authority of a node in the web graph? What about citation measures in academic work?
- E.g., ImpactFactor(J, Y): average number of references from articles published in journal J in year Y to articles published in J in years Y − 1 and Y − 2 (not suitable for cross-journal references).
- Other measures consider the following important structures in the citation graph: co-citations & co-referrals.
[Figure: citation graph over nodes A-E; through co-citation, publication A gets higher authority; through co-referral, D becomes a better hub.]

Web page links vs. citations
Similarity between web graph and citation graph:
- Citation of related work y by publication x increases the importance of y (i.e., x endorses y); links on the web can be viewed as endorsements as well.
Differences:
- Web pages with high in- or out-degree could be portals or spam.
- Company websites don't point to their competitors.

Hyperlink-Induced Topic Search (HITS)
Introduced by Jon Kleinberg in 1999. Every node v has
- an authority score A(v)
- a hubness score H(v)
A web page has a high authority score if it has many links from nodes with high hubness scores; it has a high hubness score if it has many links to nodes with high authority scores.
Example: if A and E already have high authority, D becomes a better hub; if B and C are good hubs, A gets higher authority.

Computing authority and hubness scores (1)
[Figure: node v1 with incoming links from v2, v3, v4 and outgoing links to v5, v6]
a(v1) = h(v2) + h(v3) + h(v4)
h(v1) = a(v5) + a(v6)
Compute scores recursively:
A(v) = Σ_{u → v} H(u)
H(v) = Σ_{v → w} A(w)

Computing authority and hubness scores (2)
Let a = (a(v1), a(v2), ..., a(vn))^T and h = (h(v1), h(v2), ..., h(vn))^T.
We can write a = M^T h and h = M a, where M is the adjacency matrix with m_ij = 1 iff v_i → v_j.
By substitution:
a = M^T M a
h = M M^T h
Interpretation:
- (M^T M)_ij: number of nodes pointing to both i and j
- (M M^T)_ij: number of nodes to which both i and j point

Eigenvectors and Eigenvalues
For an n × n matrix A, an n × 1 vector v and a scalar λ that satisfy Av = λv are called an eigenvector and an eigenvalue of A.
Eigenvalues are the roots of the characteristic function f(λ) = det(A − λI_n), where the determinant can be computed by Laplace expansion along any row i:
det A = Σ_{j=1}^n (−1)^{i+j} a_ij det(A^{\ij})
(A^{\ij} is A without the i-th row and the j-th column.)
If A is symmetric, all eigenvalues are real!
The principal eigenvector gives the direction of highest variability.
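As a small illustration of these facts, the eigenvalues of a symmetric matrix such as M^T M are indeed real, and the principal eigenvector can be read off directly; a minimal NumPy sketch (the adjacency matrix is a made-up toy example, not one from the slides):

```python
import numpy as np

# Hypothetical adjacency matrix of a small directed graph.
M = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [0., 1., 0.]])

# S = M^T M is symmetric, so all its eigenvalues are real.
S = M.T @ M
eigvals, eigvecs = np.linalg.eigh(S)   # eigh is specialized for symmetric matrices

# eigh returns eigenvalues in ascending order, so the principal
# eigenvector (for the largest eigenvalue) is the last column.
principal = eigvecs[:, -1]
print(eigvals[-1], principal)
```

Since S is symmetric, `np.linalg.eigh` is both faster and numerically safer here than the general `np.linalg.eig`.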

Authority and hubness scores revisited
a = M^T M a → a is an eigenvector of M^T M
h = M M^T h → h is an eigenvector of M M^T
(M^T M and M M^T can be quite dense.)
Algorithm:
1. Start with a^(0) = h^(0) = (1/n, 1/n, ..., 1/n)^T, where n is the number of nodes.
2. Repeat until convergence: a^(i+1) = M^T h^(i), h^(i+1) = M a^(i); L1-normalize a^(i+1) and h^(i+1).
(Note: the algorithm returns the principal eigenvectors of M^T M and M M^T.)
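The iteration above can be sketched in a few lines of Python; the three-node graph used below is a hypothetical example for illustration:

```python
import numpy as np

def hits(M, tol=1e-10, max_iter=1000):
    """Power iteration for HITS; M is the adjacency matrix, m_ij = 1 iff v_i -> v_j."""
    n = M.shape[0]
    a = np.full(n, 1.0 / n)   # authority scores a(0)
    h = np.full(n, 1.0 / n)   # hubness scores h(0)
    for _ in range(max_iter):
        a_new = M.T @ h       # a(i+1) = M^T h(i)
        h_new = M @ a         # h(i+1) = M a(i)
        a_new /= a_new.sum()  # L1-normalize
        h_new /= h_new.sum()
        if np.abs(a_new - a).sum() < tol and np.abs(h_new - h).sum() < tol:
            return a_new, h_new
        a, h = a_new, h_new
    return a, h

# Tiny example graph: v0 -> v1, v0 -> v2, v1 -> v2.
M = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])
a, h = hits(M)
print("authorities:", a)   # v2 collects the highest authority
print("hubs:", h)          # v0 is the best hub
```

As the slide notes, `a` and `h` converge to the principal eigenvectors of M^T M and M M^T, respectively.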

The HITS algorithm
1. Start with a root set of pages relevant to a topic (or information need) (~hundreds of nodes).
2. Create a base subgraph (~thousands of nodes) by adding to the root set:
   - all pages that have incoming links from pages in the root set
   - for each page in the root set: up to k pages pointing to it
3. Compute authority and hubness scores for all pages in the base subgraph.
4. Rank pages by decreasing authority scores.
Source: http://www.cs.cornell.edu/home/kleinber/auth.pdf

Drawbacks of HITS
- Relevance of documents in the root set is not addressed.
- Documents may contain multiple identical links to the same document (on some other host).
- Link spamming is a problem.
- Bias towards bipartite subgraphs; danger of topic drift: the found pages may not be related to the original query.
- HITS scores need to be computed at query time (i.e., on the base set); too expensive in most query scenarios.
- Rank is not stable to graph perturbations.

Let's think again
- Incoming links reflect endorsements (authority ↑)
- Outgoing links reflect outflowing authority (authority ↓)
[Figure: directed graph over nodes v1-v6]
Probabilistic view: what is the probability of being at node v_i?
P(v_i) = Σ_{v_j} P(v_i | v_j) P(v_j)
Restrict to direct predecessors only:
P(v_i) = Σ_{v_j → v_i} P(v_i | v_j) P(v_j)
(assuming uniform probability of choosing a successor: P(v_i | v_j) = 1 / Outdegree(v_j))

Markov chains
A chain of discrete random variables X_1 → X_2 → ... → X_{n−1} → X_n.
Markov assumption: a variable is independent of all its non-descendants given its parents:
P(X_i | X_1, X_2, ..., X_{i−1}) = P(X_i | X_{i−1})
Example (source: Wikipedia): two states E and A with transition probabilities; a possible instantiation of the Markov chain is X_1 = E, X_2 = E, X_3 = A.
P(X_i = E) = P(X_{i−1} = E) P(X_i = E | X_{i−1} = E) + P(X_{i−1} = A) P(X_i = E | X_{i−1} = A)
e.g., with P(X_1 = E) = P(X_1 = A) = 1/2
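To make the update rule concrete, here is a small simulation of such a two-state chain. The transition probabilities are assumed values; the closed-form stationary probability follows from solving π_E = π_E P(E|E) + (1 − π_E) P(E|A):

```python
# Two-state Markov chain over states E and A, using the slide's update rule:
# P(X_i = E) = P(X_{i-1} = E) P(E|E) + P(X_{i-1} = A) P(E|A)

p_EE, p_EA = 0.3, 0.4   # P(E|E) and P(E|A) -- assumed example values
p_E = 0.5               # P(X_1 = E) = 1/2

for _ in range(50):
    p_E = p_E * p_EE + (1 - p_E) * p_EA

# This finite chain is irreducible and aperiodic, hence ergodic, so p_E
# converges to the stationary probability pi_E = p_EA / (1 - p_EE + p_EA).
pi_E = p_EA / (1 - p_EE + p_EA)
print(p_E, pi_E)
```

After a handful of iterations the simulated probability agrees with the stationary value to machine precision, which previews the power-iteration view of PageRank below.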

Markov chain model of walking on the web graph
Stationary probability at node v:
P(X_i = v) = Σ_{u → v} P(X_{i−1} = u) P(X_i = v | X_{i−1} = u)
Uniform transition probability across successors:
P(X_i = v | X_{i−1} = u) = 1 / Outdegree(u)

Markov chain properties (1)
- Homogeneous: the transition probabilities P(X_i = v | X_{i−1} = u) are independent of i.
- Irreducible: every state is reachable from any other state (with probability > 0).
- Aperiodic: for every state v, the greatest common divisor of all (recurrence) values l with P(X_l = v, X_k ≠ v for k = 1, 2, ..., l − 1 | X_0 = v) > 0 is 1.
- Positive recurrent: for every state, the expected number of steps after which it will be reached again is finite.

Markov chain properties (2)
- Ergodic: homogeneous, irreducible, aperiodic, and positive recurrent.
Theorem 1: A finite-state irreducible Markov chain is ergodic if it has an aperiodic state.
Theorem 2: For every ergodic Markov chain there exist stationary state probabilities.
Goal: a Markov-chain model to compute stationary probabilities for the nodes in the web graph. For the raw web graph:
- Finite number of nodes (i.e., finite number of states)
- Not irreducible (i.e., not every node can be reached from every node)
- States need to be aperiodic

PageRank: a random walk model on the web graph
P(v) = (1 − α) rj(v) + α Σ_{u → v} P(u) P(v | u)
- Uniform random-jump probability: rj(v) = 1/n, where n is the number of nodes.
- Uniform transition probability: P(v | u) = 1 / Outdegree(u).
The random walker reaches v by following one of the outgoing links of the predecessors of v with probability α, or by randomly jumping to v with probability 1 − α (the random-jump probability).
Notes:
1. Every node (i.e., state) can be reached from every other node (through random jumps).
2. Because of transition cycles of length 1 (from a node back to the same node with probability > 0), every node (i.e., state) is aperiodic.
⇒ A valid ergodic Markov chain model (i.e., with existing stationary probabilities for each state).

Computing PageRank scores
Authority(v) = PageRank(v) = P(v) = (1 − α) rj(v) + α Σ_{u → v} P(u) P(v | u)
Jacobi power iteration method:
1. Initialize p^(0) = rj = (1/n, 1/n, ..., 1/n)^T.
2. Repeat until convergence: p^(i+1) = α M p^(i) + (1 − α) rj
where M is the transition matrix with
m_{v,u} = P(v | u) = 1/Outdegree(u) if u → v, and 0 otherwise.
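A minimal sketch of this power iteration, assuming every node has at least one outgoing link (dangling nodes are not handled here); the example graph is hypothetical:

```python
import numpy as np

def pagerank(out_links, alpha=0.85, tol=1e-12):
    """Jacobi power iteration p(i+1) = alpha*M*p(i) + (1-alpha)*rj,
    with m_{v,u} = 1/Outdegree(u) if u -> v, else 0."""
    n = len(out_links)
    M = np.zeros((n, n))
    for u, succs in out_links.items():
        for v in succs:
            M[v, u] = 1.0 / len(succs)   # column-stochastic transition matrix
    rj = np.full(n, 1.0 / n)             # uniform random-jump vector
    p = rj.copy()                        # p(0) = rj
    while True:
        p_next = alpha * (M @ p) + (1 - alpha) * rj
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Hypothetical 3-node graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
p = pagerank({0: [1, 2], 1: [2], 2: [0]})
print(p, p.sum())   # the scores form a probability distribution
```

Because M is column-stochastic and the jump vector sums to 1, every iterate remains a probability distribution, so no renormalization step is needed.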

Remarks on the Jacobi power iteration for PageRank
p^(i+1) = α M p^(i) + (1 − α) rj
M is a stochastic matrix (i.e., its columns sum up to 1).
Theorem 3: For every stochastic matrix M, all eigenvalues λ have the property |λ| ≤ 1, and there is an eigenvector x with eigenvalue λ_1 = 1 such that x ≥ 0 and ||x||_1 = 1. x is called the principal eigenvector.
Theorem 4: Power iteration converges to the principal eigenvector with convergence rate |λ_2|/|λ_1| (= α for the PageRank matrix), where λ_2 is the second-largest eigenvalue.

Topic-sensitive and personalized PageRank
PageRank(v) = P(v) = (1 − α) rj(v) + α Σ_{u → v} P(u) P(v | u)
Idea: bias the random walk towards pages of a certain topic or pages liked by the user. How could this be done?
- Possibility 1: introduce a classification process into the random walk, e.g., page v could be visited with probability proportional to a linear combination of P(v) and P(v | T) → difficult to scale.
- Possibility 2: bias the random jump towards the set T of target pages (much simpler):
  rj(v) = 1/|T| if v ∈ T, and 0 otherwise
  and run Jacobi iterations for p^(i+1) = α M p^(i) + (1 − α) rj.

Topic-sensitive PageRank
Algorithm by Haveliwala, TKDE 2003:
1. Given multiple classes, precompute for each class T_k a topic-sensitive PageRank vector p_k through Jacobi iterations: p_k^(i+1) = α M p_k^(i) + (1 − α) rj_k.
2. For the user query q, compute P(T_k | q).
3. Compute the authority score of a page v as Σ_k P(T_k | q) p_k(v).
Theorem (linearity): Let rj_1 and rj_2 be biased random-jump vectors and let p_1 and p_2 denote the corresponding biased PageRank vectors. For all β_1, β_2 ≥ 0 with β_1 + β_2 = 1 it holds:
p = β_1 p_1 + β_2 p_2 = α M (β_1 p_1 + β_2 p_2) + (1 − α)(β_1 rj_1 + β_2 rj_2)
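The linearity property, which justifies combining precomputed topic vectors at query time, can be checked numerically; the transition matrix and topic sets below are made-up examples:

```python
import numpy as np

def biased_pagerank(M, rj, alpha=0.85, iters=200):
    # Jacobi iterations p(i+1) = alpha*M*p(i) + (1-alpha)*rj
    p = rj.copy()
    for _ in range(iters):
        p = alpha * (M @ p) + (1 - alpha) * rj
    return p

# Hypothetical 4-node column-stochastic transition matrix.
M = np.array([[0.0, 0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0]])

# Two topic-biased random-jump vectors (uniform over each topic's pages).
rj1 = np.array([0.5, 0.5, 0.0, 0.0])   # topic T1 = {0, 1}
rj2 = np.array([0.0, 0.0, 0.5, 0.5])   # topic T2 = {2, 3}

p1 = biased_pagerank(M, rj1)            # precomputed offline, per topic
p2 = biased_pagerank(M, rj2)

# Linearity: a convex combination of precomputed topic vectors equals the
# PageRank vector for the correspondingly combined jump vector.
b1, b2 = 0.3, 0.7
combined = biased_pagerank(M, b1 * rj1 + b2 * rj2)
assert np.allclose(b1 * p1 + b2 * p2, combined)
print(combined)
```

This is exactly why step 3 of Haveliwala's algorithm only needs a weighted sum of precomputed vectors rather than a fresh power iteration per query.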

Topic-sensitive PageRank evaluation
[Figure: precision@k results]
Source: Haveliwala, Topic-Sensitive PageRank: A Context-Sensitive Ranking Algorithm for Web Search. TKDE 2003

Distributed PageRank computation
Efficient PageRank: compute PageRank for each page within a domain and combine it with PageRank-like authority propagation on the domains.
Compute PageRank by using MapReduce:
- Map: (k1, val1) → list(k2, val2) // a list of key-value pairs from one domain mapped to a list of key-value pairs of another domain
- Reduce: (k2, list(val2)) → list(val3) // compute results from each group in parallel

General framework for MapReduce
[Figure: MapReduce execution overview]
Source: J. Dean et al., MapReduce: Simplified Data Processing on Large Clusters. CACM 2008
MapReduce implementations: Pig (Yahoo), Hadoop (Apache), DryadLINQ (Microsoft)

MapReduce iteration for PageRank
Initial MAP: (url, content) → {(url, (PR_0, list(urls pointed to by url)))} // set the initial PageRank value
Initial REDUCE: return tuples unchanged
MAP: (url, (PR(url), list(n urls pointed to by url))) → {(linked_url_1, PR(url)/n), ..., (linked_url_n, PR(url)/n), (url, list(n urls pointed to by url))}
REDUCE: (url, {PR shares emitted by the predecessors of url} ∪ {(url, list(urls pointed to by url))}) → {(url, (PR(url), list(urls pointed to by url)))}
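The MAP and REDUCE steps of one iteration can be simulated in plain Python. Function names and the random-jump correction in the reducer are illustrative additions (the slide's map/reduce signatures only pass PR shares; the correction follows the iteration formula p^(i+1) = αMp^(i) + (1 − α)rj):

```python
from collections import defaultdict

def pr_map(url, pr, out_links):
    """MAP: emit a PR share for each linked page, plus the link list itself."""
    n = len(out_links)
    for linked in out_links:
        yield linked, ("share", pr / n)
    yield url, ("links", out_links)

def pr_reduce(url, values, alpha=0.85, n_pages=3):
    """REDUCE: sum incoming shares and reassemble (url, (PR, out_links))."""
    pr, out_links = 0.0, []
    for kind, val in values:
        if kind == "share":
            pr += val
        else:
            out_links = val
    pr = alpha * pr + (1 - alpha) / n_pages   # random-jump correction
    return url, (pr, out_links)

# Hypothetical 3-page graph with initial PR_0 = 1/3 each.
graph = {"a": (1/3, ["b", "c"]), "b": (1/3, ["c"]), "c": (1/3, ["a"])}

shuffled = defaultdict(list)              # stands in for the shuffle phase
for url, (pr, links) in graph.items():
    for key, value in pr_map(url, pr, links):
        shuffled[key].append(value)

graph = dict(pr_reduce(url, vals) for url, vals in shuffled.items())
print(graph)
```

Running this map-shuffle-reduce cycle repeatedly converges to the same scores as the centralized power iteration, which is the point of the distributed formulation.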

PageRank: strengths & weaknesses
Strengths:
- Elegant theoretical foundation with nice interpretations (e.g., stochastic matrices, principal eigenvectors, stationary state probabilities in a random walk model / Markov-chain process)
- Can be extended to topic-based and personalized models
- Static measure (i.e., can be precomputed)
- Relatively straightforward to scale
- PageRank scores are relatively stable to graph perturbations
Weaknesses:
- Rank is not stable to perturbations
- Random-jump extension is needed to make the corresponding Markov process ergodic on arbitrary directed graphs
- Query independence (no direct relation to relevance)

SALSA: stochastic approach for link-structure analysis
A 2-hop random walk model for HITS on connected directed graphs.
The likelihood of reaching w from u in two hops (u → v → w) is
Σ_{v ∈ V: u → v → w} (1/Outdegree(u)) (1/Outdegree(v))
(assuming uniform probability of choosing a successor).
From the original graph G(V, E), construct an undirected bipartite graph G'(V', E'):
V' = {u_h | u ∈ V, Outdegree(u) > 0} ∪ {u_a | u ∈ V, Indegree(u) > 0}
E' = {{u_h, v_a} | u → v ∈ E}
Construct from G' the hub matrix H and the authority matrix A:
h_uw = Σ_{v ∈ V: {u_h, v_a} ∈ E' and {w_h, v_a} ∈ E'} (1/Degree(u_h)) (1/Degree(v_a))
a_uw = Σ_{v ∈ V: {u_a, v_h} ∈ E' and {w_a, v_h} ∈ E'} (1/Degree(u_a)) (1/Degree(v_h))
Can be extended with random-jump probabilities.
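A sketch of constructing H and A for a small, hypothetical graph. It also checks the claim that the matrices are stochastic: each row of H (resp. A) for a node with positive out-degree (resp. in-degree) sums to 1:

```python
import numpy as np

# Hypothetical directed graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 1.
edges = {0: [1, 2], 1: [2], 2: [1]}
n = 3
M = np.zeros((n, n))
for u, succs in edges.items():
    for v in succs:
        M[u, v] = 1.0

outdeg = M.sum(axis=1)   # Degree(u_h) in the bipartite graph
indeg = M.sum(axis=0)    # Degree(u_a) in the bipartite graph

H = np.zeros((n, n))     # h_uw: u -> v <- w (shared successor v)
A = np.zeros((n, n))     # a_uw: u <- v -> w (shared predecessor v)
for u in range(n):
    for w in range(n):
        for v in range(n):
            if M[u, v] and M[w, v]:
                H[u, w] += (1 / outdeg[u]) * (1 / indeg[v])
            if M[v, u] and M[v, w]:
                A[u, w] += (1 / indeg[u]) * (1 / outdeg[v])

print("H row sums:", H.sum(axis=1))
print("A row sums:", A.sum(axis=1))
```

Since the rows are stochastic, the stationary distributions of these two chains exist and give the SALSA hub and authority scores.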

SALSA: strengths and weaknesses
Strengths:
- Probabilistic, HITS-based (i.e., random-walk) interpretation
- Matrices H and A are stochastic ⇒ their principal eigenvectors hold stationary state probabilities
Weaknesses:
- Random-jump extension is definitely needed to make the corresponding Markov process ergodic on arbitrary directed graphs
- Authority scores are proportional to the indegrees of nodes (in the original graph); the same holds for hubness scores and outdegrees ⇒ suggests a much simpler ranking, e.g., based on in-degrees only

Precision@k ranking comparison
[Figure: precision@k comparison of link-analysis ranking algorithms]
Source: Borodin et al., Link analysis ranking: algorithms, theory, and experiments. TOIT 2005

Summary
- Web links vs. citation links
- Hubs and authorities → HITS
- Random walks on the web graph → PageRank (topic-sensitive, personalized, distributed)
- Two-hop random walks between hubs and authorities → SALSA