Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University


1 Note to other teachers and users of these slides: We would be delighted if you found our material useful for giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site: http://www.mmds.org Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

2 Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data: 2000-2005. Test data: the last few ratings of each user (2.8 million). Evaluation criterion: Root Mean Square Error (RMSE). Netflix's system RMSE: 0.9514. Competition: 2,700+ teams, $1 million prize for a 10% improvement on Netflix's system.

3 Matrix R: 17,700 movies × 480,000 users.

4 Matrix R: 17,700 movies × 480,000 users. The training data set consists of the known entries; the test data set of held-out entries marked '?'. $r_{xi}$ = true rating of user $x$ on item $i$; $\hat{r}_{xi}$ = predicted rating. $$\text{RMSE} = \sqrt{\frac{1}{|R|}\sum_{(i,x)\in R}\left(\hat{r}_{xi}-r_{xi}\right)^2}$$
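As a quick sanity check of the metric, here is a minimal NumPy sketch; the arrays `r_true` and `r_pred` are hypothetical stand-ins for held-out test ratings and a model's predictions:

```python
import numpy as np

# Hypothetical held-out ratings and model predictions (1-5 stars).
r_true = np.array([4.0, 3.0, 5.0, 2.0])
r_pred = np.array([3.8, 3.2, 4.5, 2.5])

# RMSE = sqrt( (1/|R|) * sum of squared prediction errors )
rmse = np.sqrt(np.mean((r_pred - r_true) ** 2))
print(f"RMSE = {rmse:.4f}")
```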

5 The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, regional modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing regional effects. Collaborative filtering: extract local patterns. [Diagram: Global effects → Factorization → Collaborative filtering.]

6 Global: mean movie rating: 3.7 stars. The Sixth Sense is 0.5 stars above avg.; Joe rates 0.2 stars below avg. Baseline estimation: Joe will rate The Sixth Sense 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. Final estimate: Joe will rate The Sixth Sense 3.8 stars.

7 The earliest and most popular collaborative filtering method: derive unknown ratings from those of similar movies (item-item variant). Define a similarity measure $s_{ij}$ of items $i$ and $j$, select the $k$ nearest neighbors, and compute the rating, where $N(i;x)$ is the set of items most similar to $i$ that were rated by $x$: $$\hat{r}_{xi} = \frac{\sum_{j\in N(i;x)} s_{ij}\, r_{xj}}{\sum_{j\in N(i;x)} s_{ij}}$$ ($s_{ij}$ = similarity of items $i$ and $j$; $r_{xj}$ = rating of user $x$ on item $j$.)
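A minimal sketch of this weighted average, assuming precomputed similarities; the dictionary layout below is hypothetical:

```python
def predict_item_item(x, i, ratings, sim, k=2):
    """Predict user x's rating of item i as a similarity-weighted
    average over the k items most similar to i that x has rated.

    ratings: dict (user, item) -> rating
    sim:     dict (item, item) -> similarity s_ij (assumed symmetric)
    """
    rated = [j for (u, j) in ratings if u == x and j != i]
    # N(i; x): the k items most similar to i among those x rated.
    nbrs = sorted(rated, key=lambda j: sim.get((i, j), 0.0), reverse=True)[:k]
    num = sum(sim.get((i, j), 0.0) * ratings[(x, j)] for j in nbrs)
    den = sum(sim.get((i, j), 0.0) for j in nbrs)
    return num / den if den else None
```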

8 In practice we get better estimates if we model deviations: $$\hat{r}_{xi} = b_{xi} + \frac{\sum_{j\in N(i;x)} s_{ij}\,(r_{xj}-b_{xj})}{\sum_{j\in N(i;x)} s_{ij}}$$ where $b_{xi} = \mu + b_x + b_i$ is the baseline estimate for $r_{xi}$: $\mu$ = overall mean rating; $b_x$ = rating deviation of user $x$ = (avg. rating of user $x$) − $\mu$; $b_i$ = (avg. rating of movie $i$) − $\mu$. Problems/Issues: 1) similarity measures are arbitrary; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of $s_{ij}$ use weights $w_{ij}$ that we estimate directly from data.

9 Use a weighted sum rather than a weighted avg.: $$\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj}-b_{xj})$$ A few notes: $N(i;x)$ is the set of movies rated by user $x$ that are similar to movie $i$; $w_{ij}$ is the interpolation weight (some real number); we allow $\sum_{j\in N(i;x)} w_{ij} \neq 1$; $w_{ij}$ models the interaction between pairs of movies (it does not depend on user $x$).

10 How to set $w_{ij}$? Remember, the error metric is $\sqrt{\tfrac{1}{|R|}\sum_{(i,x)\in R}(\hat{r}_{xi}-r_{xi})^2}$, or equivalently the SSE: $\sum_{(i,x)\in R}(\hat{r}_{xi}-r_{xi})^2$. Find the $w_{ij}$ that minimize SSE on training data! The weights model relationships between item $i$ and its neighbors $j$; $w_{ij}$ can be learned/estimated based on $x$ and all other users that rated $i$. Why is this a good idea?
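As code, this objective over the training ratings might look like the following; the data layout (plain dictionaries) is an assumption for illustration:

```python
def sse_for_weights(ratings, baseline, neighbors, w):
    """SSE of the weighted-deviation predictor on training data.

    ratings:   dict (x, i) -> r_xi over the training set
    baseline:  dict (x, i) -> baseline estimate b_xi
    neighbors: dict (x, i) -> items j in N(i; x), all rated by x
    w:         dict (i, j) -> interpolation weight w_ij
    """
    sse = 0.0
    for (x, i), r in ratings.items():
        # prediction: baseline plus weighted sum of neighbor deviations
        pred = baseline[(x, i)] + sum(
            w[(i, j)] * (ratings[(x, j)] - baseline[(x, j)])
            for j in neighbors[(x, i)])
        sse += (pred - r) ** 2
    return sse
```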

11 Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We want to make good recommendations on items that the user has not yet seen. We can't really measure that! So let's build a system that works well on the known (user, item) ratings, and hope that the system will also predict the unknown ratings well.

12 Idea: set the values $w$ such that they work well on known (user, item) ratings. How to find such values $w$? Idea: define an objective function and solve the optimization problem. Find the $w_{ij}$ that minimize SSE on training data: $$J(w) = \sum_{(x,i)\in R}\Big(\underbrace{\big[b_{xi} + \textstyle\sum_{j\in N(i;x)} w_{ij}(r_{xj}-b_{xj})\big]}_{\text{predicted rating}} - \underbrace{r_{xi}}_{\text{true rating}}\Big)^2$$ Think of $w$ as a vector of numbers.

13 A simple way to minimize a function $f$: compute the derivative $\nabla f$; start at some point $y$ and evaluate $\nabla f(y)$; make a step in the reverse direction of the gradient: $y \leftarrow y - \eta\,\nabla f(y)$; repeat until converged.
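To make the recipe concrete, a tiny sketch minimizing a one-dimensional quadratic; the function, starting point, and learning rate are all made up for illustration:

```python
def gradient_descent(grad, y0, eta=0.1, eps=1e-8, max_steps=10_000):
    """Minimize a function given its gradient: repeatedly step
    against the gradient until the update becomes tiny."""
    y = y0
    for _ in range(max_steps):
        step = eta * grad(y)
        y -= step
        if abs(step) < eps:  # converged
            break
    return y

# Example: f(y) = (y - 3)^2, so grad f(y) = 2*(y - 3); minimum at y = 3.
print(gradient_descent(lambda y: 2 * (y - 3), y0=0.0))  # ~3.0
```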

14 We have the optimization problem, now what? Gradient descent: iterate until convergence: $w \leftarrow w - \eta\,\nabla_w J$, where $\nabla_w J = \big[\tfrac{\partial J(w)}{\partial w_{ij}}\big]$ is the gradient (the derivative evaluated on the data): $$\frac{\partial J(w)}{\partial w_{ij}} = 2\sum_{x}\Big(\big[b_{xi} + \textstyle\sum_{k\in N(i;x)} w_{ik}(r_{xk}-b_{xk})\big] - r_{xi}\Big)(r_{xj}-b_{xj}) \ \text{ for } j\in N(i;x), \text{ and } 0 \text{ otherwise.}$$ Note: we fix movie $i$, go over all $r_{xi}$, and for every movie $j\in N(i;x)$ we compute $\tfrac{\partial J(w)}{\partial w_{ij}}$. $\eta$ is the learning rate. In pseudo-code: while $\|w_{\text{new}} - w_{\text{old}}\| > \varepsilon$: $w_{\text{old}} \leftarrow w_{\text{new}}$; $w_{\text{new}} \leftarrow w_{\text{old}} - \eta\,\nabla_w J(w_{\text{old}})$.

15 So far: $\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}(r_{xj}-b_{xj})$. The weights $w_{ij}$ are derived from their role in the objective, with no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$), and we explicitly account for interrelationships among the neighboring movies. Next: the latent factor model, which extracts regional correlations. [Diagram: Global effects → Factorization → CF/NN.]

16 Performance so far (RMSE): Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; CF+Biases+learned weights: 0.91; Grand Prize: 0.8563.

17 [Figure: movies placed in a two-dimensional space, with one axis running from 'Geared towards females' to 'Geared towards males' and the other from 'Serious' to 'Funny'; examples include The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, and Dumb and Dumber.]

18 SVD on Netflix data: $R \approx Q\cdot P^T$, with $R$ the (items × users) rating matrix, $Q$ an items × factors matrix, and $P^T$ a factors × users matrix. For now let's assume we can approximate the rating matrix $R$ as a product of 'thin' matrices $Q\cdot P^T$. $R$ has missing entries, but let's ignore that for now! Basically, we will want the reconstruction error to be small on the known ratings, and we don't care about the values on the missing ones. (Recall SVD: $A = U\,\Sigma\,V^T$.)

19 How do we estimate the missing rating of user $x$ for item $i$? $$\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$$ where $q_i$ = row $i$ of $Q$, $p_x$ = column $x$ of $P^T$, and $f$ ranges over the latent factors.

22 [Figure: the same movie map, now spanned by the latent dimensions Factor 1 ('Geared towards females' ↔ 'Geared towards males') and Factor 2 ('Serious' ↔ 'Funny').]

24 Remember SVD: $A = U\,\Sigma\,V^T$, where $A$ is the $m \times n$ input data matrix, $U$ contains the left singular vectors, $V$ the right singular vectors, and $\Sigma$ the (diagonal matrix of) singular values. So in our case, 'SVD on Netflix data' $R \approx Q\cdot P^T$ means: $A = R$, $Q = U$, $P^T = \Sigma\, V^T$.
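A minimal NumPy sketch of this rank-$k$ approximation on a fully observed toy matrix; the matrix and $k$ are made up, and handling missing entries comes next:

```python
import numpy as np

# Toy fully observed "ratings" matrix R (items x users).
R = np.array([[5, 4, 1, 1],
              [4, 5, 1, 2],
              [1, 1, 5, 4],
              [2, 1, 4, 5],
              [3, 3, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                          # number of latent factors to keep
Q = U[:, :k]                   # items x factors   (Q = U)
Pt = np.diag(s[:k]) @ Vt[:k]   # factors x users   (P^T = Sigma V^T)

R_hat = Q @ Pt                 # rank-k reconstruction of R
print(np.sqrt(np.mean((R - R_hat) ** 2)))  # reconstruction RMSE
```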

25 We already know that SVD gives the minimum reconstruction error (sum of squared errors): $$\min_{U,\Sigma,V} \sum_{ij\in A} \big(A_{ij} - [U\Sigma V^T]_{ij}\big)^2$$ Note two things: SSE and RMSE are monotonically related, $\text{RMSE} = \sqrt{\text{SSE}/c}$ with $c$ the number of entries, so (great news!) SVD is minimizing RMSE. Complication: the sum in the SVD error term is over all entries (no rating is interpreted as a zero rating), but our $R$ has missing entries!

26 SVD isn't defined when entries are missing! Use specialized methods to find $P$, $Q$: $$\min_{P,Q} \sum_{(i,x)\in R} \big(r_{xi} - q_i\cdot p_x\big)^2$$ Note: we don't require the columns of $P$, $Q$ to be orthogonal or of unit length. $P$, $Q$ map users/movies to a latent space. This was the most popular model among Netflix contestants.

27

28 Our goal is to find $P$ and $Q$ such that: $$\min_{P,Q} \sum_{(i,x)\in R} \big(r_{xi} - q_i\cdot p_x\big)^2$$ ($Q$: items × factors; $P^T$: factors × users.)

29 We want to minimize SSE on unseen test data. Idea: minimize SSE on training data. We would like a large $k$ (number of factors) to capture all the signals, but SSE on test data begins to rise for $k > 2$. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise; that is, it fits the training data too well and thus does not generalize to unseen test data.

30 To solve overfitting we introduce regularization: allow a rich model where there are sufficient data; shrink aggressively where data are scarce. $$\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i\cdot p_x)^2 + \lambda_1\sum_x \|p_x\|^2 + \lambda_2\sum_i \|q_i\|^2$$ The first term is the 'error', the others the 'length'; $\lambda_1, \lambda_2$ are user-set regularization parameters. Note: we do not care about the raw value of the objective function; we care about the $P, Q$ that achieve the minimum of the objective.
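A small sketch of this objective as code, to make the 'error + length' split explicit; the mask-based layout is an illustrative convention:

```python
import numpy as np

def regularized_sse(R, mask, P, Q, lam1, lam2):
    """Objective = squared error on observed ratings
                 + lam1 * total squared length of user vectors
                 + lam2 * total squared length of item vectors.

    R:    items x users ratings (arbitrary values where unobserved)
    mask: boolean items x users array, True where a rating is known
    Q:    items x factors;  P: users x factors
    """
    error = ((R - Q @ P.T) ** 2)[mask].sum()
    length = lam1 * (P ** 2).sum() + lam2 * (Q ** 2).sum()
    return error + length
```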

31-34 [Figure sequence: the factor-space movie map under the regularized objective $\min_{P,Q}\sum_{\text{training}}(r_{xi}-q_i\cdot p_x)^2 + \lambda\big[\sum_x\|p_x\|^2 + \sum_i\|q_i\|^2\big]$ ('error' + 'length'), with movies such as The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, and Dumb and Dumber; successive frames illustrate how increasing the regularization pulls sparsely rated movies toward the origin.]

35 Want to find matrices $P$ and $Q$: $$\min_{P,Q}\sum_{\text{training}} (r_{xi}-q_i\cdot p_x)^2 + \lambda_1\sum_x\|p_x\|^2 + \lambda_2\sum_i\|q_i\|^2$$ Gradient descent: initialize $P$ and $Q$ (e.g. using SVD, pretending the missing ratings are 0); then do gradient descent: $P \leftarrow P-\eta\,\nabla P$, $Q \leftarrow Q-\eta\,\nabla Q$, where $\nabla Q$ is the gradient/derivative of matrix $Q$: $\nabla Q = [\nabla q_{if}]$ with $$\nabla q_{if} = \sum_{x:(x,i)\in R} -2\,(r_{xi} - q_i\cdot p_x)\,p_{xf} + 2\lambda_2\, q_{if},$$ where $q_{if}$ is entry $f$ of row $q_i$ of matrix $Q$. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing gradients is slow!

36 Gradient descent (GD) vs. stochastic gradient descent (SGD). Observation: $\nabla Q = \sum_{r_{xi}} \nabla Q(r_{xi})$, where $\nabla Q(r_{xi})$ is the contribution of a single rating to the gradient ($q_{if}$ is entry $f$ of row $q_i$ of matrix $Q$). GD: $Q \leftarrow Q - \eta \sum_{r_{xi}} \nabla Q(r_{xi})$. Idea: instead of evaluating the gradient over all ratings, evaluate it for each individual rating and make a step. SGD: $Q \leftarrow Q - \mu\,\nabla Q(r_{xi})$. Faster convergence! We need more steps, but each step is computed much faster.

37 Convergence of GD vs. SGD. [Plot: value of the objective function vs. iteration/step.] GD improves the value of the objective function at every step. SGD improves the value too, but in a 'noisy' way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!

38 Stochastic gradient descent: initialize $P$ and $Q$ (e.g. using SVD, pretending the missing ratings are 0); then iterate over the ratings (multiple times if necessary) and update the factors. For each $r_{xi}$: $$\varepsilon_{xi} = 2\,(r_{xi} - q_i\cdot p_x) \qquad \text{(derivative of the error)}$$ $$q_i \leftarrow q_i + \mu_1\,(\varepsilon_{xi}\, p_x - 2\lambda_2\, q_i) \qquad \text{(update equation)}$$ $$p_x \leftarrow p_x + \mu_2\,(\varepsilon_{xi}\, q_i - 2\lambda_1\, p_x) \qquad \text{(update equation)}$$ $\mu$ is the learning rate. Two nested loops: until convergence, for each $r_{xi}$ compute the gradient and make a step.
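Putting the update equations together, a compact sketch of SGD training for the plain latent factor model; random initialization replaces the SVD warm start, and all hyperparameters are illustrative:

```python
import numpy as np

def train_latent_factors(ratings, n_items, n_users, k=10,
                         mu=0.01, lam=0.1, epochs=20, seed=0):
    """SGD for min sum (r_xi - q_i.p_x)^2 + lam*(|q_i|^2 + |p_x|^2).

    ratings: list of (item i, user x, rating r_xi) over observed entries.
    Returns Q (items x k) and P (users x k).
    """
    rng = np.random.default_rng(seed)
    Q = rng.normal(scale=0.1, size=(n_items, k))
    P = rng.normal(scale=0.1, size=(n_users, k))
    for _ in range(epochs):
        for i, x, r in ratings:
            err = 2 * (r - Q[i] @ P[x])   # eps_xi
            qi = Q[i].copy()              # pre-update q_i for p_x's step
            Q[i] += mu * (err * P[x] - 2 * lam * qi)
            P[x] += mu * (err * qi - 2 * lam * P[x])
    return Q, P

# Toy usage: 2 items x 2 users, three observed ratings.
Q, P = train_latent_factors([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 1.0)], 2, 2, k=2)
print(Q[1] @ P[1])  # prediction for the one missing (item, user) cell
```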

39 Koren, Bell, Volinsky, IEEE Computer, 2009.

40

41 $\hat{r}_{xi} = \mu + b_x + b_i + q_i\cdot p_x$: user bias, movie bias, user-movie interaction. Baseline predictor ($\mu + b_x + b_i$): separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition. User-movie interaction ($q_i\cdot p_x$): characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations. $\mu$ = overall mean rating; $b_x$ = bias of user $x$; $b_i$ = bias of movie $i$.

42 We have expectations on the rating by user $x$ of movie $i$, even without estimating $x$'s attitude towards movies like $i$: the rating scale of user $x$; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie $i$; selection bias, related to the number of ratings the user gave on the same day ('frequency').

43 $\hat{r}_{xi} = \mu + b_x + b_i + q_i\cdot p_x$: overall mean rating, bias for user $x$, bias for movie $i$, user-movie interaction. Example: mean rating $\mu = 3.7$. You are a critical reviewer: your ratings are 1 star lower than the mean, so $b_x = -1$. Star Wars gets a mean rating 0.5 higher than the average movie, so $b_i = +0.5$. Predicted rating for you on Star Wars: $3.7 - 1 + 0.5 = 3.2$ stars.
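The same arithmetic as a tiny helper, using the slide's example values:

```python
def baseline(mu, b_x, b_i):
    """Baseline estimate: overall mean + user bias + movie bias."""
    return mu + b_x + b_i

print(baseline(mu=3.7, b_x=-1.0, b_i=0.5))  # 3.2, matching the example
```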

44 Solve: $$\min_{Q,P}\ \sum_{(x,i)\in R}\big(r_{xi} - (\mu + b_x + b_i + q_i\cdot p_x)\big)^2 + \Big(\lambda_1\sum_i\|q_i\|^2 + \lambda_2\sum_x\|p_x\|^2 + \lambda_3\sum_x b_x^2 + \lambda_4\sum_i b_i^2\Big)$$ (goodness of fit plus regularization). Use stochastic gradient descent to find the parameters. Note: both the biases $b_x, b_i$ and the interactions $q_i, p_x$ are treated as parameters (we estimate them). $\lambda$ is selected via grid search on a validation set.
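Extending the earlier SGD sketch, one SGD step under the biased model might look like this; it is derived from the objective above, with a single shared `lam` for brevity (the slide allows separate $\lambda_1,\dots,\lambda_4$) and illustrative defaults:

```python
def sgd_step(r, x, i, mu, bx, bi, P, Q, eta=0.01, lam=0.1):
    """One SGD update for a single observed rating r = r_xi under
    r_hat = mu + b_x + b_i + q_i . p_x; bx, bi are 1-D bias arrays,
    P (users x k) and Q (items x k) are NumPy factor matrices."""
    err = 2 * (r - (mu + bx[x] + bi[i] + Q[i] @ P[x]))
    qi = Q[i].copy()                  # pre-update q_i for p_x's step
    bx[x] += eta * (err - 2 * lam * bx[x])
    bi[i] += eta * (err - 2 * lam * bi[i])
    Q[i] += eta * (err * P[x] - 2 * lam * qi)
    P[x] += eta * (err * qi - 2 * lam * P[x])
```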

45 [Plot: RMSE vs. millions of parameters for three models: CF (no time bias), basic latent factors, and latent factors w/ biases.]

46 Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+biases: 0.89; Grand Prize: 0.8563.

47

48 Sudden rise in the average movie rating (early 2004). Improvements in Netflix: GUI improvements; the meaning of a rating changed. Movie age: users prefer new movies without any reasons; older movies are just inherently better than newer ones. [Y. Koren, Collaborative filtering with temporal dynamics, KDD '09]

49 Original model: $\hat{r}_{xi} = \mu + b_x + b_i + q_i\cdot p_x$. Add time dependence to the biases: $\hat{r}_{xi} = \mu + b_x(t) + b_i(t) + q_i\cdot p_x$. Make the parameters $b_x$ and $b_i$ depend on time: (1) parameterize the time-dependence by linear trends; (2) bin the data, with each bin corresponding to 10 consecutive weeks. Add temporal dependence to the factors: $p_x(t)$ = user preference vector on day $t$. [Y. Koren, Collaborative filtering with temporal dynamics, KDD '09]
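One simple way to realize the binned bias, as a sketch; the bin width follows the 10-consecutive-weeks description, while the lookup scheme and names are assumptions:

```python
BIN_WEEKS = 10  # each bin covers 10 consecutive weeks

def time_bin(t_days, t0_days=0):
    """Map a rating date (days since dataset start t0) to a bin index."""
    return (t_days - t0_days) // (7 * BIN_WEEKS)

def movie_bias(i, t_days, bi, bi_bin):
    """b_i(t) = static movie bias + a learned per-bin offset.
    bi: dict movie -> static bias; bi_bin: dict (movie, bin) -> offset."""
    return bi[i] + bi_bin.get((i, time_bin(t_days)), 0.0)
```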

50 [Plot: RMSE vs. millions of parameters, comparing CF (no time bias), CF (time bias), basic latent factors, latent factors w/ biases, + linear time factors, + per-day user biases, + CF.]

51 Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+biases: 0.89; Latent factors+biases+time: 0.876. Still no prize! Getting desperate. Try a 'kitchen sink' approach! Grand Prize: 0.8563.

52

53 The June 26th submission triggers a 30-day 'last call'.

54 The Ensemble team is formed: a group of other teams on the leaderboard forms a new team that relies on combining their models, and quickly also gets a qualifying score over 10%. BellKor: continues to get small improvements in their scores; realizes that they are in direct competition with The Ensemble. Strategy: both teams carefully monitor the leaderboard; the only sure way to check for an improvement is to submit a set of predictions, which alerts the other team to your latest score.

55 Submissions were limited to one a day; only the final submission could be made in the last 24 hours before the deadline. A BellKor team member in Austria notices (by chance) that The Ensemble has posted a score slightly better than BellKor's. Frantic last hours for both teams: much computer time is spent on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 minutes before the deadline; The Ensemble submits their final entry 20 minutes later. And everyone waits...

56

57

58 Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth. Further reading: Y. Koren, Collaborative filtering with temporal dynamics, KDD '09.
