CS425: Algorithms for Web Scale Data


1 CS425: Algorithms for Web Scale Data. Most of the slides are from the Mining of Massive Datasets book. These slides have been modified for CS425. The original slides can be accessed at: www.mmds.org

2 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data (2000-2005). Test data: the last few ratings of each user (2.8 million). Evaluation criterion: Root Mean Square Error, RMSE = sqrt( (1/|R|) Σ_{(i,x) in R} (r̂_xi - r_xi)^2 ). Netflix's system RMSE: 0.9514. Competition: 2,700+ teams, $1 million prize for a 10% improvement on Netflix's system.
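
As a concrete illustration of the evaluation criterion, here is a minimal sketch (mine, not from the slides) of computing RMSE over a set of held-out ratings; the rating values and predictions below are made up for the example.

```python
import numpy as np

def rmse(true_ratings, predicted_ratings):
    """Root Mean Square Error over a set R of held-out (user, item) ratings."""
    true_ratings = np.asarray(true_ratings, dtype=float)
    predicted_ratings = np.asarray(predicted_ratings, dtype=float)
    return np.sqrt(np.mean((predicted_ratings - true_ratings) ** 2))

# Toy example: 5 held-out ratings and a system's predictions for them.
r_true = [4, 3, 5, 2, 4]
r_pred = [3.8, 3.4, 4.6, 2.5, 4.1]
print(rmse(r_true, r_pred))  # about 0.35
```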

3 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Matrix R: 480,000 users x 17,770 movies.

4 Matrix R (480,000 users x 17,770 movies), split into a Training Data Set (known ratings) and a Test Data Set (held-out entries shown as '?'). r_xi = true rating of user x on item i; r̂_xi = predicted rating. RMSE = sqrt( (1/|R|) Σ_{(i,x) in R} (r̂_xi - r_xi)^2 ). J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

5 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, regional modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing regional effects. Collaborative filtering: extract local patterns.

6 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Global: mean movie rating is 3.7 stars; The Sixth Sense is 0.5 stars above average; Joe rates 0.2 stars below average. Baseline estimation: Joe will rate The Sixth Sense 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. Final estimate: Joe will rate The Sixth Sense 3.8 stars.

7 Earliest and most popular collaborative filtering method. Derive unknown ratings from those of similar movies (item-item variant). Define a similarity measure s_ij of items i and j. Select the k nearest neighbors and compute the rating: N(i;x) = set of items most similar to i that were rated by x. r̂_xi = Σ_{j in N(i;x)} s_ij r_xj / Σ_{j in N(i;x)} s_ij, where s_ij is the similarity of items i and j and r_xj is the rating of user x on item j. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
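
A minimal sketch of the weighted-average formula above, assuming the similarities s_ij and the neighbors' ratings are already available (the numbers are made up; in practice s_ij would come from a similarity measure such as centered cosine).

```python
import numpy as np

def predict_item_item(similarities, neighbor_ratings):
    """Weighted average of the user's ratings r_xj of the most similar items,
    weighted by the similarities s_ij, as in the formula on this slide."""
    s = np.asarray(similarities, dtype=float)
    r = np.asarray(neighbor_ratings, dtype=float)
    return np.dot(s, r) / np.sum(s)

# Toy example: item i has 3 neighbors in N(i;x) rated by user x.
s_ij = [0.9, 0.5, 0.3]   # similarities of the neighbors j to item i
r_xj = [4.0, 3.0, 5.0]   # user x's ratings of those neighbors
print(predict_item_item(s_ij, r_xj))  # (0.9*4 + 0.5*3 + 0.3*5) / 1.7, roughly 3.88
```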

8 In practice we get better estimates if we model deviations: r̂_xi = b_xi + Σ_{j in N(i;x)} s_ij (r_xj - b_xj) / Σ_{j in N(i;x)} s_ij, where b_xi is the baseline estimate for r_xi: b_xi = μ + b_x + b_i, with μ = overall mean rating, b_x = rating deviation of user x = (avg. rating of user x) - μ, and b_i = rating deviation of movie i = (avg. rating of movie i) - μ. Problems/Issues: 1) Similarity measures are arbitrary. 2) Pairwise similarities neglect interdependencies among users. 3) Taking a weighted average can be restricting. Solution: instead of s_ij, use weights w_ij that we estimate directly from the data. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
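
To make the baseline b_xi = μ + b_x + b_i concrete, here is a small sketch (my own, with made-up ratings) that estimates the overall mean and the user/movie deviations from (user, movie, rating) triples.

```python
from collections import defaultdict

def baseline_estimates(ratings):
    """ratings: list of (user, movie, rating) triples.
    Returns mu, b_x (per-user deviation), and b_i (per-movie deviation)."""
    mu = sum(r for _, _, r in ratings) / len(ratings)   # overall mean rating

    user_ratings, movie_ratings = defaultdict(list), defaultdict(list)
    for x, i, r in ratings:
        user_ratings[x].append(r)
        movie_ratings[i].append(r)

    b_x = {x: sum(rs) / len(rs) - mu for x, rs in user_ratings.items()}
    b_i = {i: sum(rs) / len(rs) - mu for i, rs in movie_ratings.items()}
    return mu, b_x, b_i

# Toy data (made up): baseline estimate b_xi = mu + b_x + b_i for Joe on Signs.
ratings = [("joe", "sixth_sense", 4), ("joe", "signs", 2), ("ann", "sixth_sense", 5)]
mu, b_x, b_i = baseline_estimates(ratings)
print(mu + b_x["joe"] + b_i["signs"])
```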

9 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Use a weighted sum rather than a weighted average: r̂_xi = b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj). A few notes: N(i;x) is the set of movies rated by user x that are similar to movie i; w_ij is the interpolation weight (some real number); we allow Σ_{j in N(i;x)} w_ij ≠ 1; w_ij models the interaction between pairs of movies (it does not depend on the user x).

10 r̂_xi = b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj). How to set w_ij? Remember, the error metric is RMSE = sqrt( (1/|R|) Σ_{(i,x) in R} (r̂_xi - r_xi)^2 ), or equivalently SSE: Σ_{(i,x) in R} (r̂_xi - r_xi)^2. Find the w_ij that minimize SSE on training data! This models the relationships between item i and its neighbors j; w_ij can be learned/estimated based on x and all other users that rated i. Why is this a good idea? J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

11 Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We want to make good recommendations on items that the user has not yet seen, but we can't measure that directly! So let's build a system that works well on known (user, item) ratings, and hope that the system will also predict the unknown ratings well. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

12 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Idea: let's set the values w so that they work well on known (user, item) ratings. How to find such values w? Define an objective function and solve the optimization problem: find the w_ij that minimize SSE on training data. J(w) = Σ_{x,i} ( [b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj)] - r_xi )^2, where the bracketed term is the predicted rating and r_xi is the true rating. Think of w as a vector of numbers.

13 A simple way to minimize a function f(x): compute the derivative ∇f; start at some point y and evaluate ∇f(y); make a step in the reverse direction of the gradient: y = y - η ∇f(y); repeat until converged. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
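
A one-dimensional sketch of this procedure (my illustration, not from the slides): minimize f(y) = (y - 3)^2 by repeatedly stepping against its derivative.

```python
def gradient_descent(df, y0, eta=0.1, eps=1e-6, max_iters=10000):
    """Repeat y <- y - eta * f'(y) until the step becomes negligible."""
    y = y0
    for _ in range(max_iters):
        step = eta * df(y)
        y -= step
        if abs(step) < eps:
            break
    return y

# f(y) = (y - 3)^2, so f'(y) = 2 * (y - 3); the minimum is at y = 3.
print(gradient_descent(lambda y: 2 * (y - 3), y0=0.0))  # converges to about 3.0
```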

14 Example: Formulation. Assume we have a dataset with a single user x and items 0, 1, and 2. We are given all ratings, and we want to compute the weights w_01, w_02, and w_12. Rating estimate: r̂_xi = b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj). The training dataset already has the correct r_xi values; we use the estimation formula to compute the unknown weights w_01, w_02, and w_12. Optimization problem: compute the w_ij values that minimize Σ_{(i,x) in R} (r̂_xi - r_xi)^2. Plugging in the formulas: minimize J(w) = (b_x0 + w_01(r_x1 - b_x1) + w_02(r_x2 - b_x2) - r_x0)^2 + (b_x1 + w_01(r_x0 - b_x0) + w_12(r_x2 - b_x2) - r_x1)^2 + (b_x2 + w_02(r_x0 - b_x0) + w_12(r_x1 - b_x1) - r_x2)^2. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

15 Example: Algorithm. Initialize the unknown variables: w_new = [w_01, w_02, w_12]^T to some initial values. Iterate: while ||w_new - w_old|| > ε: w_old = w_new; w_new = w_old - η ∇J(w_old). Here η is the learning rate (a parameter). How to compute ∇J(w_old)? CS425 Lecture 9, Mustafa Ozdal, Bilkent University

16 Example: Gradient-Based Update. J(w) = (b_x0 + w_01(r_x1 - b_x1) + w_02(r_x2 - b_x2) - r_x0)^2 + (b_x1 + w_01(r_x0 - b_x0) + w_12(r_x2 - b_x2) - r_x1)^2 + (b_x2 + w_02(r_x0 - b_x0) + w_12(r_x1 - b_x1) - r_x2)^2. ∇J(w) = [∂J(w)/∂w_01, ∂J(w)/∂w_02, ∂J(w)/∂w_12]^T. Update: [w_01, w_02, w_12]^T_new = [w_01, w_02, w_12]^T_old - η [∂J(w)/∂w_01, ∂J(w)/∂w_02, ∂J(w)/∂w_12]^T, where each partial derivative is evaluated at w_old. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

17 Example: Computing Partial Derivatives. J(w) = (b_x0 + w_01(r_x1 - b_x1) + w_02(r_x2 - b_x2) - r_x0)^2 + (b_x1 + w_01(r_x0 - b_x0) + w_12(r_x2 - b_x2) - r_x1)^2 + (b_x2 + w_02(r_x0 - b_x0) + w_12(r_x1 - b_x1) - r_x2)^2. Reminder: ∂(ax + b)^2 / ∂x = 2(ax + b) a. Then ∂J(w)/∂w_01 = 2(b_x0 + w_01(r_x1 - b_x1) + w_02(r_x2 - b_x2) - r_x0)(r_x1 - b_x1) + 2(b_x1 + w_01(r_x0 - b_x0) + w_12(r_x2 - b_x2) - r_x1)(r_x0 - b_x0). Evaluate each partial derivative at w_old to compute the gradient direction. CS425 Lecture 9, Mustafa Ozdal, Bilkent University
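
Putting slides 14-17 together, here is a sketch (my own, with made-up ratings and baseline values) of gradient descent on the three interpolation weights w_01, w_02, w_12 for a single user x. The gradient of each squared term follows the (ax+b)^2 reminder above.

```python
import numpy as np

# Made-up data for one user x and items 0, 1, 2 (assumptions, not from the slides).
r = np.array([4.0, 3.0, 5.0])   # true ratings r_x0, r_x1, r_x2
b = np.array([3.5, 3.2, 4.1])   # baseline estimates b_x0, b_x1, b_x2
d = r - b                       # deviations r_xj - b_xj

def gradient(w):
    """Gradient of J(w) = e0^2 + e1^2 + e2^2 with respect to (w_01, w_02, w_12)."""
    w01, w02, w12 = w
    # Each item's predicted rating uses the other two items' deviations.
    e0 = (b[0] + w01 * d[1] + w02 * d[2]) - r[0]
    e1 = (b[1] + w01 * d[0] + w12 * d[2]) - r[1]
    e2 = (b[2] + w02 * d[0] + w12 * d[1]) - r[2]
    # d/dx (ax + b)^2 = 2(ax + b) * a, applied term by term.
    return np.array([2 * (e0 * d[1] + e1 * d[0]),
                     2 * (e0 * d[2] + e2 * d[0]),
                     2 * (e1 * d[2] + e2 * d[1])])

w = np.zeros(3)      # initialize w_01, w_02, w_12
eta = 0.1            # learning rate
for _ in range(10000):
    g = gradient(w)
    w = w - eta * g
    if np.linalg.norm(eta * g) < 1e-8:   # stop when the step becomes negligible
        break
print(w)             # learned interpolation weights
```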

18 We have the optimization problem, now what? Gradient descent: iterate until convergence: w ← w - η ∇_w J, where η is the learning rate and ∇_w J is the gradient (the derivative evaluated on the data): ∂J(w)/∂w_ij = 2 Σ_{x,i} ( [b_xi + Σ_{k in N(i;x)} w_ik (r_xk - b_xk)] - r_xi ) (r_xj - b_xj) for j in N(i;x), over all i, x; otherwise ∂J(w)/∂w_ij = 0. Note: we fix movie i, go over all r_xi, and for every movie j in N(i;x) we compute ∂J(w)/∂w_ij. Here J(w) = Σ_x ( b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj) - r_xi )^2. In pseudocode: while ||w_new - w_old|| > ε: w_old = w_new; w_new = w_old - η ∇J(w_old). J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

19 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. So far: r̂_xi = b_xi + Σ_{j in N(i;x)} w_ij (r_xj - b_xj). The weights w_ij are derived based on their role; no use of an arbitrary similarity measure (w_ij ≠ s_ij), and they explicitly account for interrelationships among the neighboring movies. Next: latent factor model, which extracts regional correlations. (Global effects, Factorization, CF/NN)

20 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. RMSE so far: Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; CF+Biases+learned weights: 0.91; Grand Prize: 0.8563.

21 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. [Figure: movies placed on two axes, 'Geared towards females' vs. 'Geared towards males' and 'Serious' vs. 'Funny': The Color Purple, Amadeus, Sense and Sensibility, The Princess Diaries, Braveheart, Ocean's 11, Lethal Weapon, The Lion King, Independence Day, Dumb and Dumber.]

22 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. SVD on Netflix data: R ≈ Q P^T, where R is the items x users rating matrix, Q is items x factors, and P^T is factors x users. For now let's assume we can approximate the rating matrix R as a product of "thin" matrices Q P^T. R has missing entries, but let's ignore that for now! Basically, we want the reconstruction error to be small on known ratings, and we don't care about the values on the missing ones. (SVD: A = U Σ V^T)

23 How to estimate the missing rating of user x for item i? r̂_xi = q_i · p_x = Σ_f q_if p_xf, where q_i = row i of Q and p_x = column x of P^T. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

24 How to estimate the missing rating of user x for item i? (continued) r̂_xi = q_i · p_x = Σ_f q_if p_xf, where q_i = row i of Q and p_x = column x of P^T. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

25 How to estimate the missing rating of user x for item i, with f factors? r̂_xi = q_i · p_x = Σ_f q_if p_xf, where q_i = row i of Q and p_x = column x of P^T. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
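
To make the estimate r̂_xi = q_i · p_x concrete, here is a tiny sketch with made-up two-factor vectors (illustration only, not from the slides).

```python
import numpy as np

# Made-up 2-factor vectors for one item i and one user x (assumptions for illustration).
q_i = np.array([1.2, -0.3])   # row i of Q: how strongly item i expresses each factor
p_x = np.array([0.8, 0.5])    # column x of P^T: how much user x likes each factor

r_hat_xi = np.dot(q_i, p_x)   # r_hat_xi = sum over f of q_if * p_xf
print(r_hat_xi)               # 1.2*0.8 + (-0.3)*0.5 = 0.81
```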

26 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. [Figure: the same movies plotted against latent dimensions, Factor 1 ('geared towards males' vs. 'geared towards females') and Factor 2 ('serious' vs. 'funny').]

27 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. [Same two-factor figure as the previous slide.]

28 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. FYI, SVD: A = U Σ V^T, where A is the input data matrix (m x n), U holds the left singular vectors, V the right singular vectors, and Σ the singular values. So in our case, "SVD" on Netflix data: R ≈ Q P^T, with A = R, Q = U, P^T = Σ V^T, and r̂_xi = q_i · p_x.

29 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. We already know that SVD gives the minimum reconstruction error (sum of squared errors): min_{U,Σ,V} Σ_{ij in A} (A_ij - [U Σ V^T]_ij)^2. Note two things: SSE and RMSE are monotonically related, RMSE = sqrt(SSE / c) with c the number of entries, so, great news, SVD is minimizing RMSE. Complication: the sum in the SVD error term is over all entries (a missing rating is interpreted as a zero rating), but our R has missing entries!

30 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. SVD isn't defined when entries are missing! Use specialized methods to find P, Q: min_{P,Q} Σ_{(i,x) in R} (r_xi - q_i · p_x)^2, with r̂_xi = q_i · p_x. Note: we don't require the columns of P, Q to be orthogonal or of unit length; P, Q map users/movies to a latent space. This was the most popular model among Netflix contestants.

31

32 General Concept: Overfitting. Almost-linear data is fit with a linear function and with a polynomial function. The polynomial model fits the training data perfectly; the linear model has some error on the training set. The linear model is expected to perform better on test data, because it filters out noise. Image source: Wikipedia. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

33 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Our goal is to find P and Q such that: min_{P,Q} Σ_{(i,x) in R} (r_xi - q_i · p_x)^2.

34 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. We want to minimize SSE on unseen test data. Idea: minimize SSE on training data. We would like a large k (# of factors) to capture all the signals, but SSE on test data begins to rise for k > 2. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize well to unseen test data.

35 To solve overfitting we introduce regularization: allow a rich model where there is sufficient data, and shrink aggressively where data are scarce. min_{P,Q} Σ_{training} (r_xi - q_i p_x)^2 + λ1 Σ_x ||p_x||^2 + λ2 Σ_i ||q_i||^2. The first term is the error; the λ terms penalize the "length" of the factors; λ1, λ2 are user-set regularization parameters. Note: we do not care about the raw value of the objective function, but we do care about the P, Q that achieve the minimum of the objective. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

36 Regularization. min_{P,Q} Σ_{training} (r_xi - q_i p_x)^2 + λ1 Σ_x ||p_x||^2 + λ2 Σ_i ||q_i||^2 (error + length; λ1, λ2 are user-set regularization parameters). What happens if user x has rated hundreds of movies? The error term will dominate, and we'll get a rich model; noise is less of an issue because we have lots of data. What happens if user x has rated only a few movies? The length term for p_x will have more effect, and we'll get a simple model. The same argument applies for items. CS425 Lecture 9, Mustafa Ozdal, Bilkent University
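
A sketch (my own) of evaluating the regularized objective above on the known entries of a small ratings matrix; np.nan marks missing ratings, and a single λ is used for both the user and item length terms for simplicity.

```python
import numpy as np

def regularized_sse(R, Q, P, lam):
    """Sum of squared errors over known ratings plus lam * (squared lengths of all p_x and q_i).
    R: items x users matrix with np.nan for missing entries; Q: items x factors; P: users x factors."""
    known = ~np.isnan(R)
    errors = (R - Q @ P.T)[known]
    return np.sum(errors ** 2) + lam * (np.sum(P ** 2) + np.sum(Q ** 2))

# Toy example: 3 items x 4 users, 2 factors (all numbers made up).
R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, np.nan, 1.0],
              [1.0, 1.0, np.nan, 5.0]])
rng = np.random.default_rng(0)
Q = rng.random((3, 2))
P = rng.random((4, 2))
print(regularized_sse(R, Q, P, lam=0.1))
```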

37 [Figure: the movies from the earlier factor plot, shown with the regularized objective min_{P,Q} Σ_{training} (r_xi - q_i p_x)^2 + λ (Σ_x ||p_x||^2 + Σ_i ||q_i||^2), i.e. "min factors error + length".] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

38 [Same figure and objective, next animation step.] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

39 [Same figure and objective, next animation step.] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

40 [Same figure and objective, final animation step.] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

41 Want to find matrices P and Q: min_{P,Q} Σ_{training} (r_xi - q_i p_x)^2 + λ1 Σ_x ||p_x||^2 + λ2 Σ_i ||q_i||^2. Gradient descent: initialize P and Q (using SVD, pretending missing ratings are 0), then do gradient descent: P ← P - η ∇P, Q ← Q - η ∇Q, where ∇Q is the gradient/derivative of matrix Q: ∇Q = [∇q_if] and ∇q_if = Σ_{x,i} -2 (r_xi - q_i p_x) p_xf + 2 λ2 q_if. Here q_if is entry f of row q_i of matrix Q. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing gradients is slow! J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

42 Example. min_{P,Q} Σ_{training} (r_xi - q_i p_x)^2 + λ1 Σ_x ||p_x||^2 + λ2 Σ_i ||q_i||^2. Rewrite the objective assuming we want 3 factors per user and item: p_x = [p_x0, p_x1, p_x2]^T, q_i = [q_i0, q_i1, q_i2]^T. min Σ_{x,i} (r_xi - (q_i0 p_x0 + q_i1 p_x1 + q_i2 p_x2))^2 + λ1 Σ_x (p_x0^2 + p_x1^2 + p_x2^2) + λ2 Σ_i (q_i0^2 + q_i1^2 + q_i2^2). CS425 Lecture 9, Mustafa Ozdal, Bilkent University

43 Example. min Σ_{x,i} (r_xi - (q_i0 p_x0 + q_i1 p_x1 + q_i2 p_x2))^2 + λ1 Σ_x (p_x0^2 + p_x1^2 + p_x2^2) + λ2 Σ_i (q_i0^2 + q_i1^2 + q_i2^2), with p_x = [p_x0, p_x1, p_x2]^T and q_i = [q_i0, q_i1, q_i2]^T. Compute the gradient for variable q_i0: ∇q_i0 = Σ_{x,i} -2 (r_xi - (q_i0 p_x0 + q_i1 p_x1 + q_i2 p_x2)) p_x0 + 2 λ2 q_i0. Do the same for every free variable. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

44 Gradient Descent: Computation Cost. ∇Q = [∇q_if] and ∇q_if = Σ_{x,i} -2 (r_xi - q_i p_x) p_xf + 2 λ2 q_if. How many free variables do we have? (# of users + # of items) x (# of factors). Which ratings do we process to compute ∇q_if? All ratings for item i. Which ratings do we process to compute ∇p_xf? All ratings for user x. What is the complexity of one iteration? O(# of ratings x # of factors). CS425 Lecture 9, Mustafa Ozdal, Bilkent University

45 Stochastic Gradient Descent. Gradient Descent (GD): update all free variables in one step; need to process all ratings. Stochastic Gradient Descent (SGD): update the free variables associated with a single rating in one step; needs many more steps to converge, but each step is much faster. In practice, SGD is much faster than GD. GD: Q ← Q - η Σ_{r_xi} ∇Q(r_xi); SGD: Q ← Q - μ ∇Q(r_xi). CS425 Lecture 9, Mustafa Ozdal, Bilkent University

46 Stochastic Gradient Descent. ∇q_if = Σ_{x,i} -2 (r_xi - q_i p_x) p_xf + 2 λ2 q_if; ∇p_xf = Σ_{x,i} -2 (r_xi - q_i p_x) q_if + 2 λ1 p_xf. Which free variables are associated with rating r_xi? p_x = [p_x0, p_x1, ..., p_xk]^T and q_i = [q_i0, q_i1, ..., q_ik]^T. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

47 Stochastic Gradient Descent. ∇q_if = Σ_{x,i} -2 (r_xi - q_i p_x) p_xf + 2 λ2 q_if; ∇p_xf = Σ_{x,i} -2 (r_xi - q_i p_x) q_if + 2 λ1 p_xf. For each r_xi: ε_xi = 2 (r_xi - q_i · p_x) (derivative of the error); q_i ← q_i + μ1 (ε_xi p_x - 2 λ2 q_i) (update equation); p_x ← p_x + μ2 (ε_xi q_i - 2 λ1 p_x) (update equation). μ is the learning rate. Note: the operations above are vector operations. CS425 Lecture 9, Mustafa Ozdal, Bilkent University

48 Stochastic gradient descent: initialize P and Q (using SVD, pretending missing ratings are 0), then iterate over the ratings (multiple times if necessary) and update the factors. For each r_xi: ε_xi = 2 (r_xi - q_i · p_x) (derivative of the error); q_i ← q_i + μ1 (ε_xi p_x - 2 λ2 q_i) (update equation); p_x ← p_x + μ2 (ε_xi q_i - 2 λ1 p_x) (update equation). μ is the learning rate. Two for loops: for until convergence: for each r_xi: compute the gradient, do a step. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
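
Here is a compact sketch of this SGD loop (my own implementation of the update equations above, with random initialization instead of an SVD-based one, and made-up toy ratings; the learning rate, regularization constants, and epoch count are arbitrary choices).

```python
import numpy as np

def latent_factor_sgd(ratings, n_users, n_items, k=2, mu=0.01,
                      lam1=0.1, lam2=0.1, epochs=500, seed=0):
    """ratings: list of (x, i, r_xi) triples with known ratings.
    Learns P (users x k) and Q (items x k) by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # random init (the slide suggests SVD init)
    Q = 0.1 * rng.standard_normal((n_items, k))

    for _ in range(epochs):                        # "for until convergence"
        for x, i, r_xi in ratings:                 # "for each r_xi"
            eps_xi = 2 * (r_xi - Q[i] @ P[x])               # derivative of the squared error
            Q[i] += mu * (eps_xi * P[x] - 2 * lam2 * Q[i])  # update q_i
            P[x] += mu * (eps_xi * Q[i] - 2 * lam1 * P[x])  # update p_x
    return P, Q

# Toy example: 3 users, 2 items, a handful of known ratings (made up).
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 1, 5.0), (2, 0, 1.0)]
P, Q = latent_factor_sgd(ratings, n_users=3, n_items=2)
print(Q[0] @ P[1])   # predicted rating for user 1 on item 0
```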

49 Convergence of GD vs. SGD. [Plot: value of the objective function vs. iteration/step.] GD improves the value of the objective function at every step; SGD also improves the value, but in a noisy way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster! J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

50 Koren, Bell, Volinsky, IEEE Computer, 2009. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

51

52 r̂_xi = μ + b_x + b_i + q_i · p_x: user bias, movie bias, and user-movie interaction. Baseline predictor: separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition. User-movie interaction: characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations. μ = overall mean rating, b_x = bias of user x, b_i = bias of movie i. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

53 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i: the rating scale of user x; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie i; selection bias, related to the number of ratings the user gave on the same day ("frequency").

54 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. r̂_xi = μ + b_x + b_i + q_i · p_x, where μ = overall mean rating, b_x = bias for user x, b_i = bias for movie i, and q_i · p_x = user-movie interaction. Example: mean rating μ = 3.7. You are a critical reviewer: your ratings are 1 star lower than the mean, so b_x = -1. Star Wars gets a mean rating 0.5 higher than the average movie, so b_i = +0.5. Predicted baseline rating for you on Star Wars: 3.7 - 1 + 0.5 = 3.2.
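
The same arithmetic as a tiny helper (my sketch, not part of the slides); the interaction term is optional so the function can also serve as a pure baseline predictor.

```python
def predict_with_biases(mu, b_x, b_i, q_i=None, p_x=None):
    """r_hat_xi = mu + b_x + b_i + q_i . p_x (interaction term optional)."""
    interaction = sum(q * p for q, p in zip(q_i, p_x)) if q_i is not None else 0.0
    return mu + b_x + b_i + interaction

# Star Wars example from the slide: 3.7 - 1 + 0.5 = 3.2 (baseline only).
print(predict_with_biases(mu=3.7, b_x=-1.0, b_i=0.5))
```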

55 Solve: min_{Q,P,b} Σ_{(x,i) in R} (r_xi - (μ + b_x + b_i + q_i p_x))^2 + λ1 Σ_i ||q_i||^2 + λ2 Σ_x ||p_x||^2 + λ3 Σ_x b_x^2 + λ4 Σ_i b_i^2, i.e. goodness of fit plus regularization; λ is selected via grid search on a validation set. Use stochastic gradient descent to find the parameters. Note: both the biases b_x, b_i and the interactions q_i, p_x are treated as parameters (we estimate them). J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.
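
Extending the earlier SGD sketch to also learn the biases (again my own sketch; the single regularization constant, learning rate, and epoch count are arbitrary assumptions, and in practice λ would be picked by grid search on a validation set as the slide notes).

```python
import numpy as np

def biased_sgd(ratings, n_users, n_items, k=2, lr=0.01, lam=0.1, epochs=500, seed=0):
    """Learn mu, b_x, b_i, P, Q for r_hat_xi = mu + b_x + b_i + q_i . p_x by SGD.
    ratings: list of (x, i, r_xi) triples with known ratings."""
    rng = np.random.default_rng(seed)
    mu = np.mean([r for _, _, r in ratings])      # overall mean rating
    b_user = np.zeros(n_users)
    b_item = np.zeros(n_items)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))

    for _ in range(epochs):
        for x, i, r_xi in ratings:
            err = r_xi - (mu + b_user[x] + b_item[i] + Q[i] @ P[x])
            b_user[x] += lr * (err - lam * b_user[x])   # update user bias
            b_item[i] += lr * (err - lam * b_item[i])   # update movie bias
            # Update the factor vectors using the pre-update values of each other.
            Q[i], P[x] = (Q[i] + lr * (err * P[x] - lam * Q[i]),
                          P[x] + lr * (err * Q[i] - lam * P[x]))
    return mu, b_user, b_item, P, Q

# Toy usage (made-up ratings): predict user 1's rating of item 1.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 1, 5.0)]
mu, bu, bi, P, Q = biased_sgd(ratings, n_users=3, n_items=2)
print(mu + bu[1] + bi[1] + Q[1] @ P[1])
```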

56 [Plot: RMSE vs. millions of parameters for CF (no time bias), Basic Latent Factors, and Latent Factors w/ Biases.] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

57 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+biases: 0.89; Grand Prize: 0.8563.

58

59 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Sudden rise in the average movie rating (early 2004): improvements in the Netflix GUI; the meaning of a rating changed. Movie age: users prefer new movies without any particular reason; older movies are just inherently better than newer ones. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09.

60 Original model: r̂_xi = μ + b_x + b_i + q_i · p_x. Add time dependence to the biases: r̂_xi = μ + b_x(t) + b_i(t) + q_i · p_x. Make the parameters b_x and b_i depend on time: (1) parameterize the time dependence by linear trends; (2) each bin corresponds to 10 consecutive weeks. Add temporal dependence to the factors: p_x(t) = user preference vector on day t. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09. J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

61 [Plot: RMSE vs. millions of parameters for CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF.] J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

62 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+biases: 0.89; Latent factors+biases+time: 0.876. Still no prize! Getting desperate. Try a kitchen-sink approach! Grand Prize: 0.8563.

63 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, 6

64 The June 26th submission triggers the 30-day "last call". J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

65 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. The Ensemble team is formed: a group of other teams on the leaderboard forms a new team that relies on combining their models, and quickly also gets a qualifying score of over 10%. BellKor: continue to get small improvements in their scores and realize that they are in direct competition with the Ensemble. Strategy: both teams carefully monitor the leaderboard; the only sure way to check for improvement is to submit a set of predictions, but this alerts the other team of your latest score.

66 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Submissions were limited to 1 a day, and only 1 final submission could be made in the last 24 hours before the deadline. A BellKor team member in Austria notices (by chance) that the Ensemble has posted a score slightly better than BellKor's. Frantic last hours for both teams: much computer time spent on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 mins before the deadline; the Ensemble submits their final entry 20 mins later. ...and everyone waits...

67 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, 67

68 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, 68

69 J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets. Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth. Further reading: Y. Koren, Collaborative filtering with temporal dynamics, KDD '09.
