Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Similar documents
CS425: Algorithms for Web Scale Data

CS 175: Project in Artificial Intelligence. Slides 4: Collaborative Filtering

Matrix Factorization Techniques for Recommender Systems

Introduction to Computational Advertising

Collaborative Filtering. Radek Pelánek

Matrix Factorization Techniques For Recommender Systems. Collaborative Filtering

Recommender Systems. Dipanjan Das Language Technologies Institute Carnegie Mellon University. 20 November, 2007

Preliminaries. Data Mining. The art of extracting knowledge from large bodies of structured data. Let's put it to use!

Slides credits: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Collaborative Filtering Applied to Educational Data Mining

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Restricted Boltzmann Machines for Collaborative Filtering

Recommendation Systems

Binary Principal Component Analysis in the Netflix Collaborative Filtering Task

Recommender Systems EE448, Big Data Mining, Lecture 10. Weinan Zhang Shanghai Jiao Tong University

CS425: Algorithms for Web Scale Data

Algorithms for Collaborative Filtering

Collaborative Filtering

Recommendation Systems

Using SVD to Recommend Movies

Data Mining Techniques

Data Mining and Matrices

Matrix Factorization In Recommender Systems. Yong Zheng, PhDc Center for Web Intelligence, DePaul University, USA March 4, 2015

Andriy Mnih and Ruslan Salakhutdinov

4/26/2017. More algorithms for streams: Each element of data stream is a tuple Given a list of keys S Determine which tuples of stream are in S

Large-scale Ordinal Collaborative Filtering

Matrix Factorization Techniques for Recommender Systems

Matrix Factorization and Collaborative Filtering

The BellKor Solution to the Netflix Grand Prize

Matrix Factorization and Recommendation Systems

Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University.

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Recommendation Systems

Jeffrey D. Ullman Stanford University

Collaborative Filtering on Ordinal User Feedback

Machine Learning Techniques

Linear Regression (continued)

CS60021: Scalable Data Mining. Large Scale Machine Learning

a Short Introduction

Collaborative Recommendation with Multiclass Preference Context

Data Science Mastery Program

Large-scale Collaborative Ranking in Near-Linear Time

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Lessons Learned from the Netflix Contest. Arthur Dunbar

INFO 4300 / CS4300 Information Retrieval. Slides adapted from Hinrich Schütze's, linked from

Collaborative Filtering Matrix Completion Alternating Least Squares

Large-Scale Matrix Factorization with Distributed Stochastic Gradient Descent

Matrix and Tensor Factorization from a Machine Learning Perspective

Recommender System for Yelp Dataset CS6220 Data Mining Northeastern University

Scaling Neighbourhood Methods

Decoupled Collaborative Ranking

Generative Models for Discrete Data

Least Mean Squares Regression

A Modified PMF Model Incorporating Implicit Item Associations

The Pragmatic Theory solution to the Netflix Grand Prize

6.034 Introduction to Artificial Intelligence

Impact of Data Characteristics on Recommender Systems Performance

High Dimensional Search: Min-Hashing, Locality Sensitive Hashing

Context-aware Ensemble of Multifaceted Factorization Models for Recommendation Prediction in Social Networks

Matrix Factorization and Factorization Machines for Recommender Systems

Gradient Descent. Sargur Srihari

Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Recommendation. Tobias Scheffer

CSC 411: Lecture 02: Linear Regression

Sparse vectors recap. ANLP Lecture 22 Lexical Semantics with Dense Vectors. Before density, another approach to normalisation.

ANLP Lecture 22 Lexical Semantics with Dense Vectors

CPSC 340: Machine Learning and Data Mining. Stochastic Gradient Fall 2017

Collaborative Filtering

The BigChaos Solution to the Netflix Prize 2008

Mixed Membership Matrix Factorization

CS246 Final Exam, Winter 2011

Behavioral Data Mining. Lecture 7 Linear and Logistic Regression

Introduction to Logistic Regression

CS60021: Scalable Data Mining. Dimensionality Reduction

Mixed Membership Matrix Factorization

Domokos Miklós Kelen. Online Recommendation Systems. Eötvös Loránd University. Faculty of Natural Sciences. Advisor:

Machine Learning and Data Mining. Dimensionality Reduction; PCA & SVD. Kalev Kask

SQL-Rank: A Listwise Approach to Collaborative Ranking

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Lecture Notes 10: Matrix Factorization

Linear classifiers: Overfitting and regularization

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function.

Lecture: Matrix Completion

Lecture 6 Optimization for Deep Neural Networks

Online Videos FERPA. Sign waiver or sit on the sides or in the back. Off camera question time before and after lecture. Questions?

Least Mean Squares Regression. Machine Learning Fall 2018

Algorithms other than SGD. CS6787 Lecture 10 Fall 2017

Classification with Perceptrons. Reading:

Machine Learning and Data Mining. Dimensionality Reduction; PCA & SVD. Prof. Alexander Ihler

Modeling Data with Linear Combinations of Basis Functions. Read Chapter 3 in the text by Bishop

Matrix completion: Fundamental limits and efficient algorithms. Sewoong Oh Stanford University

Section 29: What's an Inverse?

Statistical Problem. We may have an underlying evolving system: (new state) = f(old state, noise). Input data: series of observations X_1, X_2, ..., X_t

Introduction PCA classic Generative models Beyond and summary. PCA, ICA and beyond

ECS171: Machine Learning

SCMF: Sparse Covariance Matrix Factorization for Collaborative Filtering

CS341 info session is on Thu 3/1 5pm in Gates415. CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Graphical Models for Collaborative Filtering

Overlapping Communities

Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams

Transcription:

Note to other teachers and users of these slides: We would be delighted if you found our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site: http://www.mmds.org
Mining of Massive Datasets. Jure Leskovec, Anand Rajaraman, Jeff Ullman. Stanford University. http://www.mmds.org

Training data: 100 million ratings, 480,000 users, 17,770 movies. 6 years of data: 2000-2005. Test data: last few ratings of each user (2.8 million). Evaluation criterion: Root Mean Square Error (RMSE). Netflix's system RMSE: 0.9514. Competition: 2,700+ teams, $1 million prize for 10% improvement on Netflix's result.

[Figure: the utility matrix R, 17,770 movies × 480,000 users.]

[Figure: the matrix R split into a training data set and a test data set; test entries are shown as '?'.] $r_{xi}$ = true rating of user x on item i, $\hat{r}_{xi}$ = predicted rating. $\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} (\hat{r}_{xi} - r_{xi})^2}$, where R is the set of rated entries.
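To make the metric concrete, here is a minimal Python sketch of the RMSE computation; the `ratings` triples and the `predict` callable are illustrative stand-ins, not part of the original slides.

    import numpy as np

    def rmse(ratings, predict):
        # ratings: iterable of (x, i, r_xi) triples; predict(x, i) -> predicted rating
        errors = [(predict(x, i) - r) ** 2 for (x, i, r) in ratings]
        return float(np.sqrt(np.mean(errors)))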

The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, regional modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing regional effects. Collaborative filtering: extract local patterns. [Diagram: Global effects → Factorization → Collaborative filtering]

Global: mean movie rating: 3.7 stars. The Sixth Sense is 0.5 stars above average. Joe rates 0.2 stars below average. Baseline estimation: Joe will rate The Sixth Sense 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. Final estimate: Joe will rate The Sixth Sense 3.8 stars.

Earliest and most popular collaborative filtering method: derive unknown ratings from those of similar movies (item-item variant). Define a similarity measure $s_{ij}$ of items i and j. Select the k nearest neighbors and compute the rating: $\hat{r}_{xi} = \frac{\sum_{j \in N(i;x)} s_{ij}\, r_{xj}}{\sum_{j \in N(i;x)} s_{ij}}$, where $s_{ij}$ is the similarity of items i and j, $r_{xj}$ is the rating of user x on item j, and N(i;x) is the set of items most similar to item i that were rated by x.
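As an illustration of the item-item formula above, a small Python sketch follows; `sim` (a precomputed item-item similarity matrix), `R` (a dict of known ratings keyed by (user, item)), and `k` are assumed names for this sketch, not from the slides.

    import numpy as np

    def predict_item_item(x, i, R, sim, k=20):
        # N(i; x): the k items most similar to i among those user x has rated
        rated = [j for (u, j) in R if u == x]
        neighbors = sorted(rated, key=lambda j: sim[i][j], reverse=True)[:k]
        num = sum(sim[i][j] * R[(x, j)] for j in neighbors)
        den = sum(sim[i][j] for j in neighbors)
        return num / den if den > 0 else np.nan   # undefined if no usable neighbors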

In practice we get better estimates if we model deviations: $\hat{r}_{xi} = b_{xi} + \frac{\sum_{j \in N(i;x)} s_{ij}\,(r_{xj} - b_{xj})}{\sum_{j \in N(i;x)} s_{ij}}$, where $b_{xi} = \mu + b_x + b_i$ is the baseline estimate for $r_{xi}$; $\mu$ = overall mean rating; $b_x$ = rating deviation of user x = (avg. rating of user x) − $\mu$; $b_i$ = (avg. rating of movie i) − $\mu$. Problems/issues: 1) similarity measures are arbitrary; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of $s_{ij}$, use weights $w_{ij}$ that we estimate directly from data.

Use a weighted sum rather than a weighted average: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$. A few notes: N(i;x) is the set of movies rated by user x that are similar to movie i; $w_{ij}$ is the interpolation weight (some real number); we allow $\sum_{j \in N(i;x)} w_{ij} \neq 1$; $w_{ij}$ models the interaction between pairs of movies (it does not depend on user x).

How to set $w_{ij}$? Remember, the error metric is RMSE, or equivalently SSE: $\sum_{(x,i) \in R} (\hat{r}_{xi} - r_{xi})^2$. Find the $w_{ij}$ that minimize SSE on training data! They model relationships between item i and its neighbors j. $w_{ij}$ can be learned/estimated based on x and all other users that rated i. Why is this a good idea?

Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We want to make good recommendations on items that the user has not yet seen. We can't really evaluate that directly! So let's build a system that works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well.

Idea: let's set the values w so that they work well on known (user, item) ratings. How to find such values w? Idea: define an objective function and solve the optimization problem. Find the $w_{ij}$ that minimize SSE on training data: $J(w) = \sum_{x,i} \big( b_{xi} + \sum_{j \in N(i;x)} w_{ij}(r_{xj} - b_{xj}) - r_{xi} \big)^2$, where the inner expression is the predicted rating and $r_{xi}$ is the true rating. Think of w as a vector of numbers.

A simple way to minimize a function f: compute its derivative (gradient) $\nabla f$; start at some point y and evaluate $\nabla f(y)$; make a step in the reverse direction of the gradient: $y \leftarrow y - \eta \nabla f(y)$; repeat until converged.

We have the optimization problem, now what? Gradient descent: iterate until convergence: $w \leftarrow w - \eta \nabla_w J$, where $\eta$ is the learning rate and $\nabla_w J$ is the gradient (the derivative evaluated on the data): $\frac{\partial J(w)}{\partial w_{ij}} = 2 \sum_{x} \big( b_{xi} + \sum_{k \in N(i;x)} w_{ik}(r_{xk} - b_{xk}) - r_{xi} \big)(r_{xj} - b_{xj})$ for $j \in N(i;x)$, and 0 otherwise. Note: we fix movie i, go over all ratings $r_{xi}$, and for every movie $j \in N(i;x)$ compute the partial derivative. In pseudocode: repeat $w_{new} = w_{old} - \eta\,\nabla J(w_{old})$ until $|w_{new} - w_{old}| \le \varepsilon$.
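A minimal gradient-descent loop matching the pseudocode above; `grad_J`, `eta`, and `eps` are assumed names for the gradient function, learning rate, and stopping tolerance.

    import numpy as np

    def fit_weights(w, grad_J, eta=0.01, eps=1e-6, max_iters=10000):
        for _ in range(max_iters):
            step = eta * grad_J(w)            # gradient evaluated on the training data
            w = w - step                      # move against the gradient
            if np.linalg.norm(step) < eps:    # |w_new - w_old| small: converged
                break
        return w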

So far: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}(r_{xj} - b_{xj})$. The weights $w_{ij}$ are derived from their role in the model; no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$). They explicitly account for interrelationships among the neighboring movies. Next: the latent factor model, which extracts regional correlations. [Diagram: Global effects → Factorization → CF/NN]

Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. CF + biases + learned weights: 0.91. Grand Prize: 0.8563.

[Figure: movies placed in a two-dimensional space. Horizontal axis: geared towards females (The Color Purple, Sense and Sensibility, The Princess Diaries) ↔ geared towards males (Braveheart, Lethal Weapon, Independence Day). Vertical axis: serious (Amadeus, Braveheart) ↔ funny (The Princess Diaries, Dumb and Dumber). Ocean's 11 and The Lion King sit near the middle.]

SVD on Netflix data: R ≈ Q · Pᵀ. [Figure: the users × items rating matrix R approximated by a thin items × factors matrix Q times a thin factors × users matrix Pᵀ; SVD: A = U Σ Vᵀ.] For now let's assume we can approximate the rating matrix R as a product of thin matrices Q · Pᵀ. R has missing entries, but let's ignore that for now! Basically, we will want the reconstruction error to be small on known ratings, and we don't care about the values on the missing ones.

How to estimate the missing rating of user x for item i? $\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$, where $q_i$ = row i of Q, $p_x$ = column x of Pᵀ, and f indexes the factors. [Figure (three slides): the unknown entry '?' of R expressed as the dot product of a row of Q and a column of Pᵀ.]

[Figure (two slides): the movies from the earlier illustration placed in the latent factor space, with Factor 1 on the horizontal axis (geared towards females ↔ geared towards males) and Factor 2 on the vertical axis (serious ↔ funny).]

Remember SVD: A = U Σ Vᵀ, where A is the m × n input data matrix, U holds the left singular vectors, V the right singular vectors, and Σ the singular values. So in our case, SVD on Netflix data gives R ≈ Q · Pᵀ with A = R, Q = U, and Pᵀ = Σ Vᵀ.
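To check the claimed identity numerically, a small NumPy sketch; this only works because the toy matrix A is fully observed, unlike the Netflix R.

    import numpy as np

    A = np.random.rand(6, 4)                      # stand-in for a fully observed R
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Q, Pt = U, np.diag(s) @ Vt                    # Q = U, P^T = Sigma V^T
    assert np.allclose(A, Q @ Pt)                 # exact reconstruction at full rank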

We already know that SVD gives the minimum reconstruction error (sum of squared errors): $\min_{U,\Sigma,V} \sum_{ij \in A} (A_{ij} - [U \Sigma V^T]_{ij})^2$. Note two things: SSE and RMSE are monotonically related, so, great news: SVD is minimizing RMSE. Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating), but our R has missing entries!
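Spelling out the monotonicity step, with c the (fixed) number of rated entries:

    \text{RMSE} = \sqrt{\tfrac{1}{c}\,\text{SSE}}, \qquad
    \text{SSE}_1 < \text{SSE}_2 \;\Longleftrightarrow\; \sqrt{\text{SSE}_1/c} < \sqrt{\text{SSE}_2/c}

Since $t \mapsto \sqrt{t/c}$ is strictly increasing, any P, Q that minimize SSE also minimize RMSE.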

SVD isn't defined when entries are missing! Use specialized methods to find P, Q: $\min_{P,Q} \sum_{(i,x) \in R} (r_{xi} - q_i \cdot p_x)^2$. Notes: we don't require the columns of P, Q to be orthogonal or of unit length; P and Q map users/movies to a latent space. This was the most popular model among Netflix contestants.

Our goal is to find P and Q such that: $\min_{P,Q} \sum_{(i,x) \in R} (r_{xi} - q_i \cdot p_x)^2$.
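A minimal sketch of the resulting predictor; the toy sizes and the random initialization are illustrative assumptions, not from the slides.

    import numpy as np

    n_items, n_users, k = 7, 5, 2                  # toy sizes; k = number of factors
    rng = np.random.default_rng(0)
    Q = rng.normal(scale=0.1, size=(n_items, k))   # one row q_i per item
    P = rng.normal(scale=0.1, size=(n_users, k))   # one row p_x per user (P^T has users as columns)

    def predict(x, i):
        return Q[i] @ P[x]                         # r_hat_xi = q_i . p_x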

We want to minimize SSE for unseen test data. Idea: minimize SSE on training data. We want a large k (number of factors) to capture all the signals. But SSE on test data begins to rise beyond a certain k. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize well to unseen test data.

To solve overfitting we introduce regularization: allow a rich model where there is sufficient data; shrink aggressively where data are scarce. $\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$. The first term is the error, the others penalize the (squared) lengths; $\lambda_1, \lambda_2$ are user-set regularization parameters. Note: we do not care about the raw value of the objective function, but we do care about the P, Q that achieve the minimum of the objective.

[Figure (four slides): the movies in the latent factor space again, illustrating how the regularized objective $\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i p_x)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$ (error + length) pulls sparsely rated movies and users towards the origin.]

We want to find matrices P and Q: $\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$. Gradient descent: initialize P and Q (e.g., using SVD, pretending the missing ratings are 0); then do gradient descent: P ← P − η·∇P, Q ← Q − η·∇Q, where ∇Q is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ with $\nabla q_{if} = \sum_{x,i} -2 (r_{xi} - q_i \cdot p_x)\, p_{xf} + 2 \lambda_2 q_{if}$. Here $q_{if}$ is entry f of row $q_i$ of matrix Q. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing gradients is slow!
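A vectorized sketch of the full-batch gradient with respect to Q, assuming R is stored dense with a 0/1 mask M of observed entries; the dense storage is an illustration-only assumption (real implementations use sparse structures).

    import numpy as np

    def grad_Q(R, M, P, Q, lam2):
        # R, M: items x users; Q: items x factors; P: users x factors
        E = M * (R - Q @ P.T)              # residuals, zeroed on unobserved entries
        return -2 * E @ P + 2 * lam2 * Q   # one partial derivative per entry q_if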

Gradient descent (GD) vs. stochastic gradient descent (SGD). Observation: the full gradient is a sum of per-rating contributions, $\nabla Q = \sum_{r_{xi}} \nabla Q(r_{xi})$, where $q_{if}$ is entry f of row $q_i$ of matrix Q. Idea: instead of evaluating the gradient over all ratings, evaluate it for each individual rating and make a step. GD: $Q \leftarrow Q - \eta \sum_{r_{xi}} \nabla Q(r_{xi})$. SGD: $Q \leftarrow Q - \eta\, \nabla Q(r_{xi})$. Faster convergence! SGD needs more steps, but each step is computed much faster.

Convergence of GD vs. SGD. [Plot: value of the objective function vs. iteration/step.] GD improves the value of the objective function at every step. SGD improves the value too, but in a noisy way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!

Stochastic gradient descent: initialize P and Q (e.g., using SVD, pretending the missing ratings are 0). Then iterate over the ratings (multiple times if necessary) and update the factors. For each $r_{xi}$: $\varepsilon_{xi} = 2(r_{xi} - q_i \cdot p_x)$ (derivative of the error); $q_i \leftarrow q_i + \eta_1 (\varepsilon_{xi}\, p_x - 2 \lambda_2\, q_i)$ (update equation); $p_x \leftarrow p_x + \eta_2 (\varepsilon_{xi}\, q_i - 2 \lambda_1\, p_x)$ (update equation). $\eta_1, \eta_2$ = learning rates. Two nested loops: for each pass until convergence, for each $r_{xi}$ compute the gradient and do a step.
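Putting the update equations into a runnable sketch; as simplifying assumptions, a single learning rate `eta` and a single regularizer `lam` stand in for the slides' η₁, η₂, λ₁, λ₂, and random initialization replaces the SVD warm start.

    import numpy as np

    def sgd_factorize(ratings, n_users, n_items, k=10, eta=0.01, lam=0.1, epochs=20):
        # ratings: list of (x, i, r_xi) triples
        rng = np.random.default_rng(0)
        P = rng.normal(scale=0.1, size=(n_users, k))
        Q = rng.normal(scale=0.1, size=(n_items, k))
        for _ in range(epochs):
            rng.shuffle(ratings)                       # visit ratings in random order
            for x, i, r in ratings:
                eps_xi = 2 * (r - Q[i] @ P[x])         # derivative of the error
                Q[i] += eta * (eps_xi * P[x] - 2 * lam * Q[i])
                P[x] += eta * (eps_xi * Q[i] - 2 * lam * P[x])
        return P, Q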

Koren, Bell, Volinsky, IEEE Computer, 2009.

$\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$: user bias + movie bias + user-movie interaction. The baseline predictor ($\mu + b_x + b_i$): separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition. The user-movie interaction ($q_i \cdot p_x$): characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations. $\mu$ = overall mean rating, $b_x$ = bias of user x, $b_i$ = bias of movie i.

We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i: the rating scale of user x; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie i; selection bias, related to the number of ratings the user gave on the same day ("frequency").

$\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$ (overall mean rating + bias for user x + bias for movie i + user-movie interaction). Example: mean rating $\mu$ = 3.7. You are a critical reviewer: your ratings are 1 star lower than the mean, so $b_x$ = −1. Star Wars gets a mean rating 0.5 higher than the average movie, so $b_i$ = +0.5. Predicted rating for you on Star Wars: 3.7 − 1 + 0.5 = 3.2.

Solve: $\min_{Q,P,b} \sum_{(x,i) \in R} \big( r_{xi} - (\mu + b_x + b_i + q_i \cdot p_x) \big)^2 + \lambda_1 \sum_i \lVert q_i \rVert^2 + \lambda_2 \sum_x \lVert p_x \rVert^2 + \lambda_3 \sum_x b_x^2 + \lambda_4 \sum_i b_i^2$ (goodness of fit + regularization). Use stochastic gradient descent to find the parameters. Note: both the biases $b_x, b_i$ and the interactions $q_i, p_x$ are treated as parameters (we estimate them). $\lambda$ is selected via grid search on a validation set.
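A sketch extending the SGD loop with the bias terms; as an assumption for brevity, one shared `lam` stands in for the slides' $\lambda_1 \dots \lambda_4$, which would be tuned separately by grid search.

    import numpy as np

    def sgd_with_biases(ratings, n_users, n_items, k=10, eta=0.005, lam=0.05, epochs=20):
        mu = np.mean([r for (_, _, r) in ratings])   # overall mean rating
        b_x = np.zeros(n_users)                      # user biases
        b_i = np.zeros(n_items)                      # movie biases
        rng = np.random.default_rng(0)
        P = rng.normal(scale=0.1, size=(n_users, k))
        Q = rng.normal(scale=0.1, size=(n_items, k))
        for _ in range(epochs):
            for x, i, r in ratings:
                e = r - (mu + b_x[x] + b_i[i] + Q[i] @ P[x])   # prediction error
                b_x[x] += eta * (e - lam * b_x[x])
                b_i[i] += eta * (e - lam * b_i[i])
                Q[i] += eta * (e * P[x] - lam * Q[i])
                P[x] += eta * (e * Q[i] - lam * P[x])
        return mu, b_x, b_i, P, Q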

[Plot: RMSE (y-axis) vs. millions of parameters (x-axis) for three models: CF (no time bias), Basic Latent Factors, Latent Factors w/ Biases.]

Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. Collaborative filtering++: 0.91. Latent factors: 0.90. Latent factors + biases: 0.89. Grand Prize: 0.8563.

Sudden rise in the average movie rating (early 2004): improvements in Netflix (GUI improvements; the meaning of a rating changed). Movie age: users prefer new movies without any particular reason, and older movies are just inherently better than newer ones. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09.

Original model: $r_{xi} = \mu + b_x + b_i + q_i \cdot p_x$. Add time dependence to the biases: $r_{xi} = \mu + b_x(t) + b_i(t) + q_i \cdot p_x$. Make the parameters $b_x$ and $b_i$ depend on time: (1) parameterize the time-dependence by linear trends; (2) use bins, where each bin corresponds to 10 consecutive weeks. Add temporal dependence to the factors: $p_x(t)$ = user preference vector on day t. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09.
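One plausible way to realize the 10-week bins for $b_i(t)$; the seconds-based timestamps and `t0` (the dataset's start time) are illustrative assumptions, not the paper's exact parameterization.

    WEEK_SECONDS = 7 * 24 * 3600

    def time_bin(t, t0, weeks_per_bin=10):
        # Map timestamp t (seconds) to its bin index, counting from dataset start t0.
        return int((t - t0) // (weeks_per_bin * WEEK_SECONDS))

    # The movie bias then becomes: b_i(t) = b_i[i] + b_i_bin[i, time_bin(t, t0)]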

[Plot: RMSE (y-axis) vs. millions of parameters (x-axis) for CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF.]

Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. Collaborative filtering++: 0.91. Latent factors: 0.90. Latent factors + biases: 0.89. Latent factors + biases + time: 0.876. Still no prize! Getting desperate. Try a kitchen-sink approach! Grand Prize: 0.8563.


June 26th submission triggers a 30-day "last call."

Ensemble team formed: a group of other teams on the leaderboard forms a new team that relies on combining their models; they quickly also get a qualifying score of over 10%. BellKor: continue to get small improvements in their scores; realize they are in direct competition with Ensemble. Strategy: both teams carefully monitor the leaderboard. The only sure way to check for improvement is to submit a set of predictions, but this alerts the other team to your latest score.

Submissions were limited to 1 a day; only the final submission could be made in the last 24 hours before the deadline. A BellKor team member in Austria notices (by chance) that Ensemble has posted a score slightly better than BellKor's. Frantic last 24 hours for both teams: much computer time spent on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 minutes before the deadline; Ensemble submits their final entry 20 minutes later. ... and everyone waits.



Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth. Further reading: Y. Koren, Collaborative filtering with temporal dynamics, KDD '09; http://www.research.att.com/~volinsky/netflix/bpc.html; http://www.the-ensemble.com/