CS246: Mining Massive Datasets. Jure Leskovec, Stanford University. http://cs246.stanford.edu


Course overview:
- High dim. data: Locality sensitive hashing, Clustering, Dimensionality reduction
- Graph data: PageRank, SimRank, Community Detection, Spam Detection
- Infinite data: Filtering data streams, Web advertising, Queries on streams
- Machine learning: SVM, Decision Trees, Perceptron, kNN
- Apps: Recommender systems, Association Rules, Duplicate document detection

Given some data, learn a function that maps from the input to the output. Given: training examples (x_i, y_i = f(x_i)) for some unknown function f. Find: a good approximation to f.

Supervised: given labeled data {(x, y)}, learn f(x) = y. Unsupervised: given only unlabeled data {x}, learn f(x). Semi-supervised: given some labeled and some unlabeled data. Active learning: whenever we predict f(x) = y, we then receive the true y*. Transfer learning: learn f(x) so that it works well on a new domain, f(z).

We would like to do prediction: estimate a function f(x) so that y = f(x). Here y can be: a real number (regression), categorical (classification), or a complex object (a ranking of items, a parse tree, etc.). The data is labeled: we have many pairs {(x, y)}, where x is a vector of binary, categorical, or real-valued features, and y is a class label in {+1, -1} or a real number.

Task: given data (X, Y), build a model f() to predict Y based on X. Strategy: estimate y = f(x) on the training data (X, Y) and hope that the same f(x) also works to predict the unknown Y' of the test data. This hope is called generalization. Overfitting: f(x) predicts the training Y well but is unable to predict the unseen test Y'. We want to build a model that generalizes well to unseen data.

1) Training data is drawn independently at random according to an unknown probability distribution P(x, y). 2) The learning algorithm analyzes the examples and produces a classifier f. Given new data (x, y) drawn from P, the classifier is given x and predicts ŷ = f(x). The loss L(ŷ, y) is then measured. Goal of the learning algorithm: find f that minimizes the expected loss E[L].

[Figure: the distribution P(x, y) generates both the training set and future test data (x, y); the learning algorithm takes the training set and outputs the classifier f; the loss L(ŷ, y) is measured on test points.] Why is it hard? We estimate f on the training data but want it to work well on unseen future (i.e., test) data.

Goal: minimize the expected loss min_f E_P[L]. But we don't have access to P, only to the training sample D: min_f E_D[L]. So we minimize the average loss on the training data: min_f J(f) = (1/N) Σ_{i=1}^{N} L(f(x_i), y_i). Problem: just memorizing the training data gives us a perfect model (with zero loss).

Given: a set of N training examples {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} and a loss function L. Choose: the model f_w(x) = w · x. Find: the weight vector w that minimizes the loss on the training data: J(w) = (1/N) Σ_{i=1}^{N} L(f_w(x_i), y_i).

Problem: the loss is a stepwise-constant function of f_w(x). [Figure: plot of the loss against f_w(x), a flat step function.] Its derivative is either 0 or does not exist, so the objective cannot be minimized by gradient methods directly.

Solution: approximate the expected loss by a smooth function, i.e., replace the original objective by a surrogate loss function. E.g., the hinge loss: J(w) = (1/N) Σ_{i=1}^{N} max(0, 1 - y_i · f(x_i)). [Figure: the hinge loss plotted against f(x_i) for the case y_i = 1.]
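
To make the surrogate concrete, here is a minimal sketch (assuming NumPy and a linear model f(x) = w · x + b; the data values are illustrative, not from the lecture) of computing the average hinge loss above:

import numpy as np

def average_hinge_loss(w, b, X, y):
    # (1/N) * sum_i max(0, 1 - y_i * f(x_i)) with a linear model f(x) = w.x + b
    margins = y * (X @ w + b)
    return np.mean(np.maximum(0.0, 1.0 - margins))

# Toy data: two positive and two negative points in 2D
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(average_hinge_loss(np.array([0.5, 0.5]), 0.0, X, y))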

Example: spam filtering. Instance space: x ∈ X (|X| = n data points); x is a binary or real-valued feature vector of word occurrences, with d features (words + other things, d ~ 100,000). Class: y ∈ Y, with y = Spam (+1) or Ham (-1).

P(x, y): the distribution of email messages x and their true labels y ("spam" or "ham"). Training sample: a set of email messages that have been labeled by the user. Learning algorithm: what we study! f: the classifier output by the learning algorithm. Test point: a new email x (with its true, but hidden, label y). Loss function L(ŷ, y):

                              true y = spam    true y = ham
  predicted ŷ = spam                0               10
  predicted ŷ = not spam            1                0

We will talk about the following methods: Support Vector Machines and decision trees. Main question: how do we efficiently train (build a model / find the model parameters)?

Want to separate the '+' examples from the '-' examples using a line. Data: training examples (x_1, y_1), ..., (x_n, y_n). Each example i: x_i = (x_i^(1), ..., x_i^(d)), where each x_i^(j) is real-valued and y_i ∈ {-1, +1}. Inner product: w · x = Σ_{j=1}^{d} w^(j) x^(j). Which is the best linear separator (defined by w, b)?

[Figure: points A, B, and C at decreasing distances from the separating line.] The distance from the separating hyperplane corresponds to the confidence of the prediction. Example: we are more sure about the class of A and B than of C.

Margin γ: the distance of the closest example from the decision line/hyperplane. The reason we define the margin this way is theoretical convenience and the existence of generalization error bounds that depend on the value of the margin.

Remember the dot product: A · B = ||A|| ||B|| cos θ, where ||A|| = sqrt(Σ_{j=1}^{d} (A^(j))²).

Dot product: A · B = ||A|| ||B|| cos θ. What are w · x_1 + b and w · x_2 + b? [Figure: three placements of the points x_1, x_2 relative to the line w · x + b = 0; the farther a point lies from the line, the larger its projection onto w.] So w · x + b roughly corresponds to the margin. Bottom line: the bigger w · x + b, the bigger the separation.

Distance from a point to a line. [Figure: point A, its projection H onto the line L, and the normal vector w.] Let the line L be w · x + b = w^(1) x^(1) + w^(2) x^(2) + b = 0, with w = (w^(1), w^(2)), and let the point be A = (x_A^(1), x_A^(2)). Note: we assume ||w|| = 1. Let M = (x_M^(1), x_M^(2)) be a point on the line. Then d(A, L) = AH = (A - M) · w = (x_A^(1) - x_M^(1)) w^(1) + (x_A^(2) - x_M^(2)) w^(2) = x_A^(1) w^(1) + x_A^(2) w^(2) + b = w · A + b. Remember: x_M^(1) w^(1) + x_M^(2) w^(2) = -b, since M belongs to the line L.
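
As a quick sanity check on this derivation, a small sketch (NumPy; the specific w, b, and A are illustrative) of the distance computation, using the general form |w · A + b| / ||w|| so it also covers a non-normalized w:

import numpy as np

def distance_to_line(w, b, A):
    # Distance of point A from the hyperplane w.x + b = 0.
    # Equals |w.A + b| when ||w|| = 1, and |w.A + b| / ||w|| in general.
    return abs(w @ A + b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])          # ||w|| = 5
b = -5.0
A = np.array([2.0, 1.0])
print(distance_to_line(w, b, A))  # |3*2 + 4*1 - 5| / 5 = 1.0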

[Figure: separating line w · x + b = 0.] Prediction = sign(w · x + b). Confidence = (w · x + b) y. For the i-th data point: γ_i = (w · x_i + b) y_i. We want to solve: max_w min_i γ_i. This can be rewritten as: max_{w,γ} γ s.t. ∀i, y_i (w · x_i + b) ≥ γ.

Maximize the margin: good according to intuition, theory (cf. "VC dimension"), and practice. max_{w,γ} γ s.t. ∀i, y_i (w · x_i + b) ≥ γ. [Figure: hyperplane w · x + b = 0 with margin γ on both sides.] γ is the margin: the distance from the separating hyperplane; we are maximizing the margin.

The separating hyperplane is defined by the support vectors: the points on the +/- margin planes in the solution. If you knew these points, you could ignore the rest. Generally there are d+1 support vectors (for d-dimensional data).

Problem: if w · x + b = γ, then 2w · x + 2b = 2γ; scaling w increases the margin! Solution: work with a normalized w: γ = (w · x + b) / ||w||, where ||w|| = sqrt(Σ_{j=1}^{d} (w^(j))²). [Figure: planes w · x + b = -1, 0, +1.] Let us also require the support vectors x_j to lie on the planes defined by w · x_j + b = ±1.

We want to maximize the margin γ. What is the relation between x_1 and x_2? x_1 = x_2 + 2γ (w / ||w||). We also know: w · x_1 + b = +1 and w · x_2 + b = -1. So: w · x_1 + b = w · (x_2 + 2γ w/||w||) + b = (w · x_2 + b) + 2γ (w · w)/||w|| = -1 + 2γ ||w|| = +1, which gives γ = 1/||w||. (Note: w · w = ||w||².) [Figure: x_1 on the plane w · x + b = +1, x_2 on w · x + b = -1, separated by 2γ along w.]

We started with: max_{w,γ} γ s.t. ∀i, y_i (w · x_i + b) ≥ γ. But ||w|| can be arbitrarily large! After normalizing (γ = 1/||w||) we get: arg max γ = arg max 1/||w|| = arg min ||w|| = arg min (1/2)||w||². Then the problem becomes: min_w (1/2)||w||² s.t. ∀i, y_i (w · x_i + b) ≥ 1. This is called SVM with hard constraints.

If the data is not separable, introduce a penalty: min_w (1/2)||w||² + C · (#number of mistakes), s.t. ∀i, y_i (w · x_i + b) ≥ 1. Minimize ||w||² plus the number of training mistakes; set C using cross-validation. How should we penalize mistakes? All mistakes are not equally bad!

Introduce slack variables ξ_i: min_{w,b,ξ_i≥0} (1/2)||w||² + C Σ_{i=1}^{n} ξ_i, s.t. ∀i, y_i (w · x_i + b) ≥ 1 - ξ_i. If a point x_i is on the wrong side of the margin, it gets a penalty ξ_i. [Figure: points with slacks ξ_i, ξ_j on the wrong side of the margin around w · x + b = 0.] For each data point: if the margin is ≥ 1, we don't care; if the margin is < 1, we pay a linear penalty.

min_w (1/2)||w||² + C · (#number of mistakes), s.t. ∀i, y_i (w · x_i + b) ≥ 1. What is the role of the slack penalty C? C = ∞: we only want w, b that separate the data. C = 0: we can set ξ_i to anything, so w = 0 (basically ignores the data). [Figure: decision boundaries for small C, big C, and a good C.]

SVM in the "natural" form: arg min_{w,b} (1/2) w · w + C Σ_{i=1}^{n} max{0, 1 - y_i (w · x_i + b)}. The first term is the margin (regularization) term with regularization parameter C; the sum is the empirical loss L (how well we fit the training data). SVM uses the "hinge loss" max{0, 1 - z}, where z = y_i (x_i · w + b). Equivalently: min_{w,b} (1/2)||w||² + C Σ_{i=1}^{n} ξ_i, s.t. ∀i, y_i (w · x_i + b) ≥ 1 - ξ_i. [Figure: hinge loss max{0, 1 - z} plotted against z; zero for z ≥ 1, growing linearly for z < 1.]
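
Writing the "natural form" objective directly as code, a minimal sketch (NumPy, illustrative data) of J(w, b) = (1/2) w · w + C Σ_i max{0, 1 - y_i (w · x_i + b)}:

import numpy as np

def svm_objective(w, b, X, y, C):
    # 0.5 * ||w||^2  +  C * sum of hinge losses over the training points
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return 0.5 * (w @ w) + C * np.sum(hinge)

X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.5, -1.0], [-0.5, -2.0]])
y = np.array([1, 1, -1, -1])
print(svm_objective(np.array([0.3, 0.4]), -0.1, X, y, C=1.0))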

min_{w,b} (1/2) w · w + C Σ_{i=1}^{n} ξ_i, s.t. ∀i, y_i (x_i · w + b) ≥ 1 - ξ_i. We want to estimate w and b. The standard way: use a solver! A solver is software for finding solutions to "common" optimization problems; here, a quadratic solver: minimize a quadratic function subject to linear constraints. Problem: solvers are inefficient for big data!
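
As a sketch of the "hand it to a solver" route (this assumes the cvxpy package is available; the toy data is illustrative), the slack formulation above can be written as a small quadratic program, which works fine for small n but, as noted, does not scale to big data:

import numpy as np
import cvxpy as cp

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape
C = 1.0

w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n)   # xi are the slacks
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]
cp.Problem(objective, constraints).solve()
print(w.value, b.value)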

We want to minimize J(w, b) = (1/2) Σ_{j=1}^{d} (w^(j))² + C Σ_{i=1}^{n} max{0, 1 - y_i (Σ_{j=1}^{d} w^(j) x_i^(j) + b)}, where the second term is the empirical loss L(x_i, y_i). Compute the gradient ∇J^(j) with respect to w^(j): ∇J^(j) = ∂J(w, b)/∂w^(j) = w^(j) + C Σ_{i=1}^{n} ∂L(x_i, y_i)/∂w^(j), where ∂L(x_i, y_i)/∂w^(j) = 0 if y_i (w · x_i + b) ≥ 1, and = -y_i x_i^(j) otherwise.

Gradient descent: iterate until convergence: for j = 1 ... d, evaluate ∇J^(j) = ∂J(w, b)/∂w^(j) = w^(j) + C Σ_{i=1}^{n} ∂L(x_i, y_i)/∂w^(j), and update w^(j) ← w^(j) - η ∇J^(j). Here η is the learning rate parameter and C is the regularization parameter. Problem: computing ∇J^(j) takes O(n) time, where n is the size of the training dataset!
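
A minimal batch-gradient-descent sketch of this loop (NumPy; the learning rate, C, and iteration count are arbitrary illustrative choices). The gradient sums over all n examples, which is exactly the O(n)-per-iteration cost pointed out above:

import numpy as np

def batch_gd_svm(X, y, C=1.0, eta=0.01, iters=1000):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        violators = y * (X @ w + b) < 1          # points with margin < 1
        # dJ/dw = w + C * sum_i dL(x_i, y_i)/dw, with dL/dw = -y_i x_i on violators
        grad_w = w - C * (y[violators] @ X[violators])
        grad_b = -C * np.sum(y[violators])
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b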

Stochastic Gradient Descent: instead of evaluating the gradient over all examples, evaluate it for each individual training example: ∇J^(j)(x_i) = w^(j) + C · ∂L(x_i, y_i)/∂w^(j). (We just had ∇J^(j) = w^(j) + C Σ_{i=1}^{n} ∂L(x_i, y_i)/∂w^(j); notice there is no summation over i anymore.) Stochastic gradient descent: iterate until convergence: for i = 1 ... n, for j = 1 ... d: compute ∇J^(j)(x_i) and update w^(j) ← w^(j) - η ∇J^(j)(x_i).
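
And the stochastic counterpart, a sketch that follows the pseudocode above (NumPy; the hyperparameters are illustrative): each update uses a single example, so its cost is independent of n:

import numpy as np

def sgd_svm(X, y, C=1.0, eta=0.01, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):             # one example at a time
            if y[i] * (X[i] @ w + b) < 1:        # margin violated
                w -= eta * (w - C * y[i] * X[i])
                b -= eta * (-C * y[i])
            else:
                w -= eta * w                     # only the regularization gradient
    return w, b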

Example by Leon Bottou: Reuters RCV1 document corpus. Predict the category of a document (one vs. the rest classification). n = 781,000 training examples (documents), 23,000 test examples, d = 50,000 features: one feature per word, with stop words and low-frequency words removed.

Questions: (1) Is SGD successful at minimizing J(w, b)? (2) How quickly does SGD find the minimum of J(w, b)? (3) What is the error on a test set? [Table: training time, value of J(w, b), and test error for a standard SVM, a fast SVM solver, and SGD-SVM.] Findings: (1) SGD-SVM is successful at minimizing the value of J(w, b); (2) SGD-SVM is super fast; (3) SGD-SVM's test-set error is comparable.

[Figure: optimization quality J(w, b) - J(w_opt, b_opt) versus training time for SGD-SVM and a conventional SVM.] For optimizing J(w, b) to within reasonable quality, SGD-SVM is super fast.

We need to choose the learning rate η and t_0: w_{t+1} ← w_t - (η / (t + t_0)) · (w_t + C ∂L(x_t, y_t)/∂w). Leon suggests: choose t_0 so that the expected initial updates are comparable with the expected size of the weights. Choose η: select a small subsample, try various rates η (e.g., 10, 1, 0.1, 0.01, ...), pick the one that most reduces the cost, and use that η for the next 100k iterations on the full dataset.
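
A sketch of this update with the decaying rate (the constants η and t_0 here are illustrative; in practice they would be tuned on a subsample as described above):

import numpy as np

def sgd_svm_decaying(X, y, C=1.0, eta=0.1, t0=10.0, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            rate = eta / (t + t0)                # learning rate decays over time
            if y[i] * (X[i] @ w + b) < 1:
                w -= rate * (w - C * y[i] * X[i])
                b -= rate * (-C * y[i])
            else:
                w -= rate * w
            t += 1
    return w, b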

Sparse linear SVM: the feature vector x_i is sparse (contains many zeros). Do not represent x_i as a dense vector [0,0,0,1,0,0,0,0,5,0,0,0,0,0,0,...]; instead represent x_i as a sparse vector [(4,1), (9,5), ...]. Can we do the SGD update w ← w - η(w + C ∂L(x_i, y_i)/∂w) more efficiently? It can be approximated in 2 steps: (1) w ← w - η C ∂L(x_i, y_i)/∂w, which is cheap: x_i is sparse, so only a few coordinates j of w are updated; (2) w ← w(1 - η), which is expensive: w is not sparse, so all coordinates need to be updated.

Solution 1: represent the vector w as the product of a scalar s and a vector v, w = s · v. The two-step update procedure then becomes: (1) v ← v - (η C / s) ∂L(x_i, y_i)/∂w; (2) s ← s(1 - η). Solution 2: perform only step (1) for each training example, and perform step (2) with lower frequency and a higher η.
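
A sketch of Solution 1 (illustrative; x is assumed to be given as a list of (index, value) pairs, matching the sparse representation above, and the example is assumed to violate the margin): with w kept as s · v, the expensive shrink w ← w(1 - η) becomes a single scalar multiplication, while step (1) touches only the nonzero coordinates of x:

def sparse_sgd_step(s, v, x_sparse, y, eta, C):
    # One SGD step on a margin violator, maintaining w = s * v.
    # x_sparse: list of (index, value) pairs for the nonzero features of x.
    # Step (1): w <- w + eta * C * y * x, i.e. v[j] += eta * C * y * x_j / s
    for j, xj in x_sparse:
        v[j] += eta * C * y * xj / s
    # Step (2): w <- w * (1 - eta), i.e. just rescale the scalar s
    s *= (1.0 - eta)
    return s, v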

Stopping criteria: how many iterations of SGD? Early stopping with cross-validation: create a validation set, monitor the cost function on the validation set, and stop when the loss stops decreasing. Early stopping (alternative): extract two (very) small subsets A and B of the training data; train on A, stopping by validating on B; the number of training epochs on A is an estimate of k; then train for k epochs on the full dataset.
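
A minimal sketch of the first stopping strategy (the helper callables and the patience threshold are hypothetical, just to show the control flow):

def train_with_early_stopping(run_one_epoch, validation_loss, max_epochs=100, patience=3):
    # Stop once the cost on the validation set has not improved for `patience` epochs.
    best, bad = float("inf"), 0
    for epoch in range(max_epochs):
        run_one_epoch()                  # one SGD pass over the training set
        loss = validation_loss()         # monitor the cost on the validation set
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:          # loss stopped decreasing
                break
    return epoch + 1                     # epochs actually run (the estimate of k)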

Idea 1: one against all. Learn 3 classifiers: '+' vs. {o, -}, '-' vs. {o, +}, and 'o' vs. {+, -}, obtaining (w_+, b_+), (w_-, b_-), (w_o, b_o). How do we classify? Return the class c with the largest score: arg max_c (w_c · x + b_c).
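
A sketch of Idea 1 in code (assuming a binary trainer such as the sgd_svm sketch above; the class labels are arbitrary): train one classifier per class against the rest, then return the class whose separator gives the largest score w_c · x + b_c:

import numpy as np

def train_one_vs_rest(X, y, classes, train_binary):
    # train_binary(X, y_pm) -> (w, b) for labels y_pm in {-1, +1}
    models = {}
    for c in classes:
        y_pm = np.where(y == c, 1.0, -1.0)     # class c vs. everything else
        models[c] = train_binary(X, y_pm)
    return models

def predict_one_vs_rest(models, x):
    # Return the class c maximizing the confidence w_c . x + b_c
    return max(models, key=lambda c: models[c][0] @ x + models[c][1])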

Idea 2: learn the 3 sets of weights simultaneously! For each class c estimate w_c, b_c. We want the correct class y_i to have the highest margin: w_{y_i} · x_i + b_{y_i} ≥ 1 + w_c · x_i + b_c, for all c ≠ y_i and for all (x_i, y_i).

Optimization problem: min_{w,b} (1/2) Σ_c ||w_c||² + C Σ_{i=1}^{n} ξ_i, s.t. w_{y_i} · x_i + b_{y_i} ≥ w_c · x_i + b_c + 1 - ξ_i and ξ_i ≥ 0, ∀i, ∀c ≠ y_i. To obtain the parameters w_c, b_c (for each class c) we can use similar techniques as for the 2-class SVM. SVM is widely perceived as a very powerful learning algorithm.

New setting: online learning. This allows for modeling problems where we have a continuous stream of data, and we want an algorithm to learn from it and slowly adapt to the changes in the data. Idea: do slow updates to the model. SGD-SVM makes an update if it misclassifies a data point. So: first train the classifier on training data; then, for every example from the stream, if we misclassify it, update the model (using a small learning rate).

Protocol: a user comes and tells us the origin and destination; we offer to ship the package for some money ($10 - $50); based on the price we offer, sometimes the user uses our service (y = 1), sometimes they don't (y = -1). Task: build an algorithm to optimize what price we offer to the users. The features x capture information about the user, and the origin and destination. Problem: will the user accept the price?

Model whether the user will accept our price: y = f(x; w). Accept: y = +1, not accept: y = -1. We build this model with, say, a Perceptron or SVM. The website runs continuously, so an online learning algorithm would do something like this: a user comes and is represented as an (x, y) pair, where x is a feature vector including the price we offer, the origin, and the destination, and y indicates whether they chose to use our service or not. The algorithm updates w using just this (x, y) pair. Basically, we update the parameters w every time we get some new data.
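
A sketch of that online loop (the stream source and feature extraction are hypothetical; the update itself is the same single-example SGD-SVM step as before, applied with a small learning rate to each newly observed (x, y) pair):

import numpy as np

def online_update(w, b, x, y, eta=0.001, C=1.0):
    # One small SGD-SVM step on a freshly observed (x, y) pair.
    if y * (x @ w + b) < 1:          # wrong side of the margin: full update
        w = w - eta * (w - C * y * x)
        b = b + eta * C * y
    else:                            # correct side: only the regularization term
        w = w - eta * w
    return w, b

# Hypothetical usage with a stream of (features, accepted_or_not) pairs:
# for x, y in user_stream():
#     w, b = online_update(w, b, x, y)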

We discard the idea of a fixed data set; instead we have a continuous stream of data. Further comments: for a major website with a massive stream of data, this kind of algorithm is pretty reasonable, since there is no need to deal with all the training data at once. If you had a small number of users, you could save their data and then run a normal algorithm on the full dataset, doing multiple passes over the data.

An online algorithm can adapt to changing user preferences. For example, over time users may become more price sensitive; the algorithm adapts and learns this, so the system is dynamic.