PV211: Introduction to Information Retrieval https://www.fi.muni.cz/~sojka/pv211 IIR 18: Latent Semantic Indexing Handout version Petr Sojka, Hinrich Schütze et al. Faculty of Informatics, Masaryk University, Brno Center for Information and Language Processing, University of Munich 2018-05-09ex Sojka, IIR Group: PV211: Latent Semantic Indexing 1 / 44

Overview 1 Recap 2 Latent semantic indexing 3 Dimensionality reduction 4 LSI in information retrieval 5 Clustering Sojka, IIR Group: PV211: Latent Semantic Indexing 2 / 44

Indexing anchor text Anchor text is often a better description of a page's content than the page itself. Anchor text can be weighted more highly than the text on the page. A Google bomb is a search with bad results due to maliciously manipulated anchor text. [dangerous cult] on Google, Bing, Yahoo Sojka, IIR Group: PV211: Latent Semantic Indexing 4 / 44

PageRank Model: a web surfer doing a random walk on the web. Formalization: Markov chain. PageRank is the long-term visit rate of the random surfer, or the steady-state distribution. Need teleportation to ensure well-defined PageRank. Power method to compute PageRank. PageRank is the principal left eigenvector of the transition probability matrix. Sojka, IIR Group: PV211: Latent Semantic Indexing 5 / 44

Computing PageRank: Power method

Transition probabilities: P11 = 0.1, P12 = 0.9, P21 = 0.3, P22 = 0.7

       x1 = P_t(d1)   x2 = P_t(d2)
t0     0              1               xP   = (0.3, 0.7)
t1     0.3            0.7             xP^2 = (0.24, 0.76)
t2     0.24           0.76            xP^3 = (0.252, 0.748)
t3     0.252          0.748           xP^4 = (0.2496, 0.7504)
...
t∞     0.25           0.75            xP^∞ = (0.25, 0.75)

PageRank vector = π = (π1, π2) = (0.25, 0.75)
P_t(d1) = P_{t-1}(d1) · P11 + P_{t-1}(d2) · P21
P_t(d2) = P_{t-1}(d1) · P12 + P_{t-1}(d2) · P22
Sojka, IIR Group: PV211: Latent Semantic Indexing 6 / 44
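
Below is a minimal power-iteration sketch in Python/NumPy that reproduces the example above; the transition matrix and the starting distribution are taken from the slide.

    import numpy as np

    # Transition probability matrix from the slide (two pages d1, d2).
    P = np.array([[0.1, 0.9],
                  [0.3, 0.7]])

    # Start the random surfer on d2, as in the example (x = (0, 1)).
    x = np.array([0.0, 1.0])

    # Power method: repeatedly multiply the distribution by P until it stops changing.
    for _ in range(100):
        x_next = x @ P
        if np.allclose(x_next, x, atol=1e-12):
            break
        x = x_next

    print(x)  # converges to the PageRank vector pi = (0.25, 0.75)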

HITS: Hubs and authorities hubs www.bestfares.com authorities www.aa.com www.airlinesquality.com www.delta.com blogs.usatoday.com/sky aviationblog.dallasnews.com www.united.com Sojka, IIR Group: PV211: Latent Semantic Indexing 7 / 44

HITS update rules A: link matrix. h: vector of hub scores. a: vector of authority scores. HITS algorithm: Compute h = A a. Compute a = A^T h. Iterate until convergence. Output (i) list of hubs ranked according to hub score and (ii) list of authorities ranked according to authority score. Sojka, IIR Group: PV211: Latent Semantic Indexing 8 / 44
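
Below is a minimal sketch of these update rules in Python/NumPy. The link matrix A is a made-up toy graph (A[i, j] = 1 if page i links to page j), not one taken from the slides.

    import numpy as np

    # Toy link matrix (hypothetical 4-page web graph).
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    n = A.shape[0]
    h = np.ones(n)  # hub scores
    a = np.ones(n)  # authority scores

    for _ in range(100):
        h = A @ a      # h = A a
        a = A.T @ h    # a = A^T h
        # Normalize so the scores stay bounded; the ranking is unaffected.
        h /= np.linalg.norm(h)
        a /= np.linalg.norm(a)

    print("hubs ranked:       ", np.argsort(-h))
    print("authorities ranked:", np.argsort(-a))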

Take-away today Latent Semantic Indexing (LSI) / Singular Value Decomposition: The math SVD used for dimensionality reduction LSI: SVD in information retrieval LSI as clustering gensim: Topic modelling for humans (practical use of LSI et al.) Sojka, IIR Group: PV211: Latent Semantic Indexing 9 / 44

Recall: Term-document matrix

             Anthony and  Julius  The      Hamlet  Othello  Macbeth
             Cleopatra    Caesar  Tempest
anthony      5.25         3.18    0.0      0.0     0.0      0.35
brutus       1.21         6.10    0.0      1.0     0.0      0.0
caesar       8.59         2.54    0.0      1.51    0.25     0.0
calpurnia    0.0          1.54    0.0      0.0     0.0      0.0
cleopatra    2.85         0.0     0.0      0.0     0.0      0.0
mercy        1.51         0.0     1.90     0.12    5.25     0.88
worser       1.37         0.0     0.11     4.15    0.25     1.95
...

This matrix is the basis for computing the similarity between documents and queries. Today: Can we transform this matrix, so that we get a better measure of similarity between documents and queries? Sojka, IIR Group: PV211: Latent Semantic Indexing 11 / 44

Latent semantic indexing: Overview We will decompose the term-document matrix into a product of matrices. The particular decomposition we'll use: singular value decomposition (SVD). SVD: C = UΣV^T (where C = term-document matrix). We will then use the SVD to compute a new, improved term-document matrix C'. We'll get better similarity values out of C' (compared to C). Using SVD for this purpose is called latent semantic indexing or LSI. Sojka, IIR Group: PV211: Latent Semantic Indexing 12 / 44

Example of C = UΣV^T: The matrix C

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

This is a standard term-document matrix. Actually, we use a non-weighted matrix here to simplify the example. Sojka, IIR Group: PV211: Latent Semantic Indexing 13 / 44
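
A sketch of the decomposition in Python/NumPy, using the example matrix C above. Note that numpy.linalg.svd returns the singular values as a vector, and that singular vectors are only determined up to a sign flip, so the printed U and V^T may differ from the slide tables in sign.

    import numpy as np

    # Term-document matrix C (rows: ship, boat, ocean, wood, tree; columns: d1..d6).
    C = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)

    # Singular value decomposition C = U Sigma V^T.
    # full_matrices=False gives the "thin" SVD with min(M, N) = 5 dimensions.
    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

    print(np.round(U, 2))      # one row per term, one column per semantic dimension
    print(np.round(sigma, 2))  # singular values (approx. 2.16, 1.59, 1.28, 1.00, 0.39)
    print(np.round(Vt, 2))     # one column per document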

Example of C = UΣV^T: The matrix U

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

One row per term, one column per min(M, N), where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) Row vectors have unit length. (ii) Any two distinct row vectors are orthogonal to each other. Think of the dimensions as semantic dimensions that capture distinct topics like politics, sports, economics. (In this example, dimension 2 = land/water.) Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j. Sojka, IIR Group: PV211: Latent Semantic Indexing 14 / 44

Example of C = UΣV^T: The matrix Σ

Σ     1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   1.28   0.00   0.00
4     0.00   0.00   0.00   1.00   0.00
5     0.00   0.00   0.00   0.00   0.39

This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of C. The magnitude of the singular value measures the importance of the corresponding semantic dimension. We'll make use of this by omitting unimportant dimensions. Sojka, IIR Group: PV211: Latent Semantic Indexing 15 / 44

Example of C = UΣV^T: The matrix V^T

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

One column per document, one row per min(M, N), where M is the number of terms and N is the number of documents. Again: This is an orthonormal matrix: (i) Column vectors have unit length. (ii) Any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from matrices U and Σ that capture distinct topics like politics, sports, economics. Each number v_ij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j. Sojka, IIR Group: PV211: Latent Semantic Indexing 16 / 44

Example of C = UΣV^T: All four matrices

Recall unreduced decomposition C = UΣV^T. Exercise: Why can this be viewed as soft clustering?

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

= U Σ V^T with

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

Σ     1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   1.28   0.00   0.00
4     0.00   0.00   0.00   1.00   0.00
5     0.00   0.00   0.00   0.00   0.39

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

LSI is a decomposition of C into a representation of the terms, a representation of the documents, and a representation of the importance of the semantic dimensions. Sojka, IIR Group: PV211: Latent Semantic Indexing 17 / 44

LSI: Summary We've decomposed the term-document matrix C into a product of three matrices: UΣV^T. The term matrix U consists of one (row) vector for each term. The document matrix V^T consists of one (column) vector for each document. The singular value matrix Σ is a diagonal matrix with singular values, reflecting the importance of each dimension. Next: Why are we doing this? Sojka, IIR Group: PV211: Latent Semantic Indexing 18 / 44

Exercise

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

Verify that the first document has unit length. Verify that the first two documents are orthogonal.
0.75^2 + 0.29^2 + 0.28^2 + 0.00^2 + 0.53^2 = 1.0059
0.75 · 0.28 + 0.29 · 0.53 + 0.28 · 0.75 + 0.00 · 0.00 + 0.53 · 0.29 = 0
Sojka, IIR Group: PV211: Latent Semantic Indexing 19 / 44

How we use the SVD in LSI Key property: Each singular value tells us how important its dimension is. By setting less important dimensions to zero, we keep the important information, but get rid of the details. These details may (i) be noise: in that case, the reduced LSI matrix is a better representation because it is less noisy; or (ii) make things dissimilar that should be similar: again, the reduced LSI representation is a better representation because it represents similarity better. Analogy for "fewer details is better": an image of a blue flower and an image of a yellow flower; omitting color makes it easier to see the similarity. Sojka, IIR Group: PV211: Latent Semantic Indexing 21 / 44

Reducing the dimensionality to 2

U       1      2      3      4      5
ship    0.44   0.30   0.00   0.00   0.00
boat    0.13   0.33   0.00   0.00   0.00
ocean   0.48   0.51   0.00   0.00   0.00
wood    0.70   0.35   0.00   0.00   0.00
tree    0.26   0.65   0.00   0.00   0.00

Σ_2   1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   0.00   0.00   0.00
4     0.00   0.00   0.00   0.00   0.00
5     0.00   0.00   0.00   0.00   0.00

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.00   0.00   0.00   0.00   0.00   0.00
4      0.00   0.00   0.00   0.00   0.00   0.00
5      0.00   0.00   0.00   0.00   0.00   0.00

Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and V^T to zero when computing the product C_2 = U Σ_2 V^T. Sojka, IIR Group: PV211: Latent Semantic Indexing 22 / 44
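
A sketch of this truncation in Python/NumPy: compute the SVD, zero out all but the k = 2 largest singular values, and multiply the three matrices back together. Up to rounding (and the sign convention of the SVD routine), the product should match the reduced matrix C_2 shown on the next slide.

    import numpy as np

    # Term-document matrix C from the running example.
    C = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)

    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

    # Keep only the k = 2 largest singular values; zero out the rest.
    k = 2
    sigma_k = np.zeros_like(sigma)
    sigma_k[:k] = sigma[:k]

    # Reduced matrix C_2 = U Sigma_2 V^T.
    C2 = U @ np.diag(sigma_k) @ Vt
    print(np.round(C2, 2))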

Reducing the dimensionality to 2

C_2     d1     d2     d3     d4     d5     d6
ship    0.85   0.52   0.28   0.13   0.21   0.08
boat    0.36   0.36   0.16   0.20   0.02   0.18
ocean   1.01   0.72   0.36   0.04   0.16   0.21
wood    0.97   0.12   0.20   1.03   0.62   0.41
tree    0.12   0.39   0.08   0.90   0.41   0.49

= U Σ_2 V^T with

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

Σ_2   1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   0.00   0.00   0.00
4     0.00   0.00   0.00   0.00   0.00
5     0.00   0.00   0.00   0.00   0.00

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

Sojka, IIR Group: PV211: Latent Semantic Indexing 23 / 44

Example of C = UΣV^T: All four matrices

Recall unreduced decomposition C = UΣV^T. Exercise: Why can this be viewed as soft clustering?

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

= U Σ V^T with

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

Σ     1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   1.28   0.00   0.00
4     0.00   0.00   0.00   1.00   0.00
5     0.00   0.00   0.00   0.00   0.39

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

LSI is a decomposition of C into a representation of the terms, a representation of the documents, and a representation of the importance of the semantic dimensions. Sojka, IIR Group: PV211: Latent Semantic Indexing 24 / 44

Original matrix C vs. reduced C_2 = U Σ_2 V^T

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

C_2     d1     d2     d3     d4     d5     d6
ship    0.85   0.52   0.28   0.13   0.21   0.08
boat    0.36   0.36   0.16   0.20   0.02   0.18
ocean   1.01   0.72   0.36   0.04   0.16   0.21
wood    0.97   0.12   0.20   1.03   0.62   0.41
tree    0.12   0.39   0.08   0.90   0.41   0.49

We can view C_2 as a two-dimensional representation of the matrix C. We have performed a dimensionality reduction to two dimensions. Sojka, IIR Group: PV211: Latent Semantic Indexing 25 / 44

Exercise

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

C_2     d1     d2     d3     d4     d5     d6
ship    0.85   0.52   0.28   0.13   0.21   0.08
boat    0.36   0.36   0.16   0.20   0.02   0.18
ocean   1.01   0.72   0.36   0.04   0.16   0.21
wood    0.97   0.12   0.20   1.03   0.62   0.41
tree    0.12   0.39   0.08   0.90   0.41   0.49

Compute the similarity between d2 and d3 for the original matrix and for the reduced matrix. Sojka, IIR Group: PV211: Latent Semantic Indexing 26 / 44

Why the reduced matrix C_2 is better than C

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

C_2     d1     d2     d3     d4     d5     d6
ship    0.85   0.52   0.28   0.13   0.21   0.08
boat    0.36   0.36   0.16   0.20   0.02   0.18
ocean   1.01   0.72   0.36   0.04   0.16   0.21
wood    0.97   0.12   0.20   1.03   0.62   0.41
tree    0.12   0.39   0.08   0.90   0.41   0.49

Similarity of d2 and d3 in the original space: 0. Similarity of d2 and d3 in the reduced space: 0.52 · 0.28 + 0.36 · 0.16 + 0.72 · 0.36 + 0.12 · 0.20 + 0.39 · 0.08 ≈ 0.52. "boat" and "ship" are semantically similar. The reduced similarity measure reflects this. What property of the SVD reduction is responsible for the improved similarity? Sojka, IIR Group: PV211: Latent Semantic Indexing 27 / 44
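
The same comparison as a small Python/NumPy check; the reduced column values for d2 and d3 are copied from the C_2 table above.

    import numpy as np

    # Columns d2 and d3 of the original matrix C (rows: ship, boat, ocean, wood, tree).
    d2 = np.array([0, 1, 1, 0, 0], dtype=float)
    d3 = np.array([1, 0, 0, 0, 0], dtype=float)

    # The same two columns of the reduced matrix C_2.
    d2_red = np.array([0.52, 0.36, 0.72, 0.12, 0.39])
    d3_red = np.array([0.28, 0.16, 0.36, 0.20, 0.08])

    print(d2 @ d3)          # 0.0: no terms in common in the original space
    print(d2_red @ d3_red)  # about 0.52: similar in the reduced space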

Exercise: Compute matrix product

Compute the product U Σ_2 V^T for the matrices below, where Σ_2 keeps only the second singular value.

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

Σ_2   1      2      3      4      5
1     0.00   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   0.00   0.00   0.00
4     0.00   0.00   0.00   0.00   0.00
5     0.00   0.00   0.00   0.00   0.00

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

Result:

C_2     d1      d2      d3      d4      d5      d6
ship    0.09    0.16    0.06   -0.19   -0.07   -0.12
boat    0.10    0.17    0.06   -0.21   -0.07   -0.14
ocean   0.15    0.27    0.10   -0.32   -0.11   -0.21
wood   -0.10   -0.19   -0.07    0.22    0.08    0.14
tree   -0.19   -0.34   -0.12    0.41    0.14    0.27

Sojka, IIR Group: PV211: Latent Semantic Indexing 28 / 44

Why we use LSI in information retrieval LSI takes documents that are semantically similar (= talk about the same topics), but are not similar in the vector space (because they use different words), and re-represents them in a reduced vector space in which they have higher similarity. Thus, LSI addresses the problems of synonymy and semantic relatedness. Standard vector space: Synonyms contribute nothing to document similarity. Desired effect of LSI: Synonyms contribute strongly to document similarity. Sojka, IIR Group: PV211: Latent Semantic Indexing 30 / 44

How LSI addresses synonymy and semantic relatedness The dimensionality reduction forces us to omit a lot of detail. We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space. The cost of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words. SVD selects the least costly mapping (see below). Thus, it will map synonyms to the same dimension. But it will avoid doing that for unrelated words. Sojka, IIR Group: PV211: Latent Semantic Indexing 31 / 44

LSI: Comparison to other approaches Recap: Relevance feedback and query expansion are used to increase recall in information retrieval if query and documents have no terms in common (or, more commonly, too few terms in common for a high similarity score). LSI increases recall and hurts precision. Thus, it addresses the same problems as (pseudo) relevance feedback and query expansion, and it has the same problems. Sojka, IIR Group: PV211: Latent Semantic Indexing 32 / 44

Implementation Compute SVD of term-document matrix. Reduce the space and compute reduced document representations. Map the query into the reduced space: q_k = Σ_k^{-1} U_k^T q. This follows from C_k = U_k Σ_k V_k^T, hence Σ_k^{-1} U_k^T C = V_k^T. Compute similarity of q_k with all reduced documents in V_k. Output ranked list of documents as usual. Exercise: What is the fundamental problem with this approach? Sojka, IIR Group: PV211: Latent Semantic Indexing 33 / 44
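
A sketch of this pipeline in Python/NumPy, following the formula above; the query "ship ocean" is a hypothetical example, not taken from the slides.

    import numpy as np

    # Term-document matrix C (rows: ship, boat, ocean, wood, tree; columns: d1..d6).
    C = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)

    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

    # Keep k = 2 semantic dimensions.
    k = 2
    U_k = U[:, :k]
    Sigma_k = np.diag(sigma[:k])
    V_k = Vt[:k, :].T              # one k-dimensional row per document

    # Map the query into the reduced space: q_k = Sigma_k^{-1} U_k^T q.
    q = np.array([1, 0, 1, 0, 0], dtype=float)  # query "ship ocean" (hypothetical)
    q_k = np.linalg.inv(Sigma_k) @ U_k.T @ q

    # Rank documents by cosine similarity with the reduced query.
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    scores = [cosine(q_k, d) for d in V_k]
    print(np.argsort(scores)[::-1])  # document indices, best match first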

Optimality SVD is optimal in the following sense. Keeping the k largest singular values and setting all others to zero gives you the optimal approximation of the original matrix C (Eckart-Young theorem). Optimal: no other matrix of the same rank (= with the same underlying dimensionality) approximates C better. Measure of approximation is the Frobenius norm: ||C||_F = sqrt(Σ_ij c_ij^2). So LSI uses the best possible matrix. There is only one best possible matrix: unique solution (modulo signs). Caveat: There is only a tenuous relationship between the Frobenius norm and cosine similarity between documents. Sojka, IIR Group: PV211: Latent Semantic Indexing 34 / 44
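
A small numerical illustration in Python/NumPy: the Frobenius error of the rank-k truncation equals the square root of the sum of the squared discarded singular values, which is the sense in which the truncated SVD is the best rank-k approximation.

    import numpy as np

    C = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)

    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

    k = 2
    C_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

    # Frobenius norm of the approximation error ...
    err = np.linalg.norm(C - C_k, ord="fro")
    # ... equals the square root of the sum of the squared discarded singular values.
    print(err, np.sqrt(np.sum(sigma[k:] ** 2)))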

Data for graphical illustration of LSI

c1: Human machine interface for lab abc computer applications
c2: A survey of user opinion of computer system response time
c3: The EPS user interface management system
c4: System and human system engineering testing of EPS
c5: Relation of user perceived response time to error measurement
m1: The generation of random binary unordered trees
m2: The intersection graph of paths in trees
m3: Graph minors IV: Widths of trees and well quasi ordering
m4: Graph minors: A survey

The matrix C

           c1  c2  c3  c4  c5  m1  m2  m3  m4
human       1   0   0   1   0   0   0   0   0
interface   1   0   1   0   0   0   0   0   0
computer    1   1   0   0   0   0   0   0   0
user        0   1   1   0   1   0   0   0   0
system      0   1   1   2   0   0   0   0   0
response    0   1   0   0   1   0   0   0   0
time        0   1   0   0   1   0   0   0   0
EPS         0   0   1   1   0   0   0   0   0
survey      0   1   0   0   0   0   0   0   1
trees       0   0   0   0   0   1   1   1   0
graph       0   0   0   0   0   0   1   1   1
minors      0   0   0   0   0   0   0   1   1

Sojka, IIR Group: PV211: Latent Semantic Indexing 35 / 44

Graphical illustration of LSI: Plot of C_2 2-dimensional plot of C_2 (scaled dimensions). Circles = terms. Open squares = documents (component terms in parentheses). q = query "human computer interaction". The dotted cone represents the region whose points are within a cosine of .9 from q. All documents about human-computer interaction (c1-c5) are near q, even c3/c5, although they share no terms with the query. None of the graph theory documents (m1-m4) are near q. Sojka, IIR Group: PV211: Latent Semantic Indexing 36 / 44

Exercise What happens when we rank documents according to cosine similarity in the original vector space? What happens when we rank documents according to cosine similarity in the reduced vector space? Sojka, IIR Group: PV211: Latent Semantic Indexing 37 / 44

LSI performs better than vector space on MED collection LSI-100 = LSI reduced to 100 dimensions; SMART = SMART implementation of vector space model Sojka, IIR Group: PV211: Latent Semantic Indexing 38 / 44

Example of C = UΣV^T: All four matrices

Recall unreduced decomposition C = UΣV^T. Exercise: Why can this be viewed as soft clustering?

C       d1   d2   d3   d4   d5   d6
ship     1    0    1    0    0    0
boat     0    1    0    0    0    0
ocean    1    1    0    0    0    0
wood     1    0    0    1    1    0
tree     0    0    0    1    0    1

= U Σ V^T with

U       1      2      3      4      5
ship    0.44   0.30   0.57   0.58   0.25
boat    0.13   0.33   0.59   0.00   0.73
ocean   0.48   0.51   0.37   0.00   0.61
wood    0.70   0.35   0.15   0.58   0.16
tree    0.26   0.65   0.41   0.58   0.09

Σ     1      2      3      4      5
1     2.16   0.00   0.00   0.00   0.00
2     0.00   1.59   0.00   0.00   0.00
3     0.00   0.00   1.28   0.00   0.00
4     0.00   0.00   0.00   1.00   0.00
5     0.00   0.00   0.00   0.00   0.39

V^T    d1     d2     d3     d4     d5     d6
1      0.75   0.28   0.20   0.45   0.33   0.12
2      0.29   0.53   0.19   0.63   0.22   0.41
3      0.28   0.75   0.45   0.20   0.12   0.33
4      0.00   0.00   0.58   0.00   0.58   0.58
5      0.53   0.29   0.63   0.19   0.41   0.22

LSI is a decomposition of C into a representation of the terms, a representation of the documents, and a representation of the importance of the semantic dimensions. Sojka, IIR Group: PV211: Latent Semantic Indexing 40 / 44

Why LSI can be viewed as soft clustering Each of the k dimensions of the reduced space is one cluster. If the value of the LSI representation of document d on dimension k is x, then x is the soft membership of d in topic k. This soft membership can be positive or negative. Example: Dimension 2 in our SVD decomposition. This dimension/cluster corresponds to the water/earth dichotomy. ship, boat, ocean have negative values. wood, tree have positive values. d1, d2, d3 have negative values (most of their terms are water terms). d4, d5, d6 have positive values (all of their terms are earth terms). Sojka, IIR Group: PV211: Latent Semantic Indexing 41 / 44
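
A quick way to inspect this soft clustering in Python/NumPy is to look at dimension 2 of U and of V^T. The SVD is only determined up to a sign flip per dimension, so the whole column may come out negated; what matters is that ship/boat/ocean end up with one sign and wood/tree with the other (and likewise d1-d3 vs. d4-d6).

    import numpy as np

    terms = ["ship", "boat", "ocean", "wood", "tree"]
    docs = ["d1", "d2", "d3", "d4", "d5", "d6"]
    C = np.array([[1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)

    U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

    # Soft cluster memberships on semantic dimension 2 (index 1).
    print(dict(zip(terms, np.round(U[:, 1], 2))))
    print(dict(zip(docs, np.round(Vt[1, :], 2))))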

Semantic indexing and clustering with Gensim Gensim: an open-source vector space modeling and topic modeling toolkit, implemented in the Python programming language. Tutorial examples of topic modelling for humans (LSI): http://radimrehurek.com/gensim/tut2.html DML-CZ similarity example: http://dml.cz/handle/10338.dmlcz/500114/similararticles (cf. papers similar to Otakar Borůvka's famous paper). Go forth and create masterpieces of semantic indexing applications (with gensim, as others already have ;-)! Sojka, IIR Group: PV211: Latent Semantic Indexing 42 / 44
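
A condensed sketch along the lines of the linked gensim tutorial; the toy token lists below (a simplified version of the Deerwester titles) are assumed for illustration.

    from gensim import corpora, models, similarities

    texts = [["human", "machine", "interface"],
             ["survey", "user", "computer", "system", "response", "time"],
             ["eps", "user", "interface", "system"],
             ["system", "human", "system", "eps"],
             ["user", "response", "time"],
             ["trees"],
             ["graph", "trees"],
             ["graph", "minors", "trees"],
             ["graph", "minors", "survey"]]

    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]

    # LSI model with 2 latent (semantic) dimensions.
    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

    # Index all documents in the reduced LSI space.
    index = similarities.MatrixSimilarity(lsi[corpus])

    # Map a query into the LSI space and rank documents by cosine similarity.
    query = "human computer interaction"
    vec_lsi = lsi[dictionary.doc2bow(query.lower().split())]
    print(sorted(enumerate(index[vec_lsi]), key=lambda s: -s[1]))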

Take-away today Latent Semantic Indexing (LSI) / Singular Value Decomposition: The math SVD used for dimensionality reduction LSI: SVD in information retrieval LSI as clustering gensim: Topic modelling for humans (practical use of LSI et al.) Sojka, IIR Group: PV211: Latent Semantic Indexing 43 / 44

Resources Chapter 18 of IIR Resources at https://www.fi.muni.cz/~sojka/pv211/ and http://cislmu.org, materials in MU IS and FI MU library Original paper on latent semantic indexing by Deerwester et al. Paper on probabilistic LSI by Thomas Hofmann Word space: LSI for words Sojka, IIR Group: PV211: Latent Semantic Indexing 44 / 44