CS246: Mining Massive Datasets Jure Leskovec, Stanford University


1 CS246: Mining Massive Datasets Jure Leskovec, Stanford University

2 Course outline: High dim. data: Locality sensitive hashing, Clustering, Dimensionality reduction. Graph data: PageRank, SimRank, Community Detection, Spam Detection. Infinite data: Filtering data streams, Web advertising, Queries on streams. Machine learning: SVM, Decision Trees, Perceptron, kNN. Apps: Recommender systems, Association Rules, Duplicate document detection.

3 Given some data: learn a function to map from the input to the output. Given: training examples $(x_i, y_i = f(x_i))$ for some unknown function $f$. Find: a good approximation to $f$.

4 Supervised: given labeled data {x, y}, learn f(x) = y. Unsupervised: given only unlabeled data {x}, learn f(x). Semi-supervised: given some labeled and some unlabeled data. Active learning: whenever we predict f(x) = y, we then receive the true y*. Transfer learning: learn f(x) so that it works well on a new domain f(z).

5 Would like to do prediction: estimate a function f(x) so that y = f(x). Where y can be: a real number (Regression); categorical (Classification); a complex object (ranking of items, parse tree, etc.). Data is labeled: have many pairs {(x, y)}, where x is a vector of binary, categorical, or real-valued features, and y is a class ({+1, -1}) or a real number.

6 Task: given data (X, Y), build a model f() to predict Y' based on X'. Strategy: estimate $y = f(x)$ on (X, Y) and hope that the same f(x) also works to predict the unknown Y'. The hope is called generalization. Overfitting: f(x) predicts Y well but is unable to predict Y'. We want to build a model that generalizes well to unseen data.

7 1) Training data is drawn independently at random according to an unknown probability distribution $P(x, y)$. 2) The learning algorithm analyzes the examples and produces a classifier $f$. Given new data $(x, y)$ drawn from $P$, the classifier is given $x$ and predicts $\hat{y} = f(x)$. The loss $L(\hat{y}, y)$ is then measured. Goal of the learning algorithm: find $f$ that minimizes the expected loss $E_P[L]$.

8 (Pipeline diagram: $P(x, y)$ generates training data $(x, y)$, which forms the training set $D$; the learning algorithm produces $f$; test data $(x', y')$ is also drawn from $P$, and the loss $L(\hat{y}', y')$ is measured.) Why is it hard? We estimate $f$ on the training data but want it to work well on unseen future (i.e., test) data.

9 Goal: minimize the expected loss $\min_f E_P[L]$. But we don't have access to $P$, only to the training sample $D$: $\min_f E_D[L]$. So we minimize the average loss on the training data: $\min_f J(f) = \frac{1}{N}\sum_{i=1}^{N} L(f(x_i), y_i)$. Problem: just memorizing the training data gives us a perfect model (with zero loss).

10 Given: a set of N training examples $\{(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)\}$ and a loss function $L$. Choose a model $f_w(x)$ parameterized by a weight vector $w$. Find: the $w$ that minimizes the expected loss on the training data: $J(w) = \frac{1}{N}\sum_{i=1}^{N} L(f_w(x_i), y_i)$.

11 Problem: step-wise constant loss function. (Plot: 0/1 loss $L$ as a function of $f_w(x)$.) The derivative is either 0 or undefined.

12 Approximate the expected loss by a smooth function: replace the original objective by a surrogate loss function. E.g., hinge loss: $J(w) = \frac{1}{N}\sum_{i=1}^{N} \max\{0,\; 1 - y_i\, w \cdot x_i\}$. (Plot of the hinge loss when $y = 1$.)
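
A minimal NumPy sketch of this surrogate loss; the function name, the optional bias, and the toy data are illustrative, not from the lecture:

```python
import numpy as np

def hinge_loss(w, X, y, b=0.0):
    """Average surrogate loss (1/N) sum_i max(0, 1 - y_i (w.x_i + b))."""
    margins = y * (X @ w + b)
    return np.mean(np.maximum(0.0, 1.0 - margins))

# Two toy points, one on each side of the line w.x = 0
X = np.array([[2.0, 1.0], [-1.0, -1.5]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 1.0])
print(hinge_loss(w, X, y))   # 0.0: both points have margin >= 1
```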

13 Example: spam filtering. Instance space $x \in X$ ($|X| = n$ data points). Binary or real-valued feature vector $x$ of word occurrences; $d$ features (words + other things, $d \approx 100{,}000$). Class $y \in Y$: spam (+1), ham (-1).

14 $P(x, y)$: distribution of email messages $x$ and their true labels $y$ ("spam" or "ham"). Training sample: a set of email messages that have been labeled by the user. Learning algorithm: what we study! $f$: the classifier output by the learning algorithm. Test point: a new email $x$ (with its true, but hidden, label $y$). Loss function $L(\hat{y}, y)$, with predicted label first and true label second: L(spam, spam) = 0; L(spam, ham) = 10; L(not spam, spam) = 1; L(not spam, ham) = 0.
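
The asymmetric costs above can be encoded directly as a lookup table; a tiny sketch with the labels as in the slide:

```python
# L(predicted, true) from the table: flagging ham as spam is 10x worse
# than letting a spam message through.
LOSS = {("spam", "spam"): 0, ("spam", "ham"): 10,
        ("not spam", "spam"): 1, ("not spam", "ham"): 0}
print(LOSS[("spam", "ham")])   # 10
```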

15 We will talk about the following methods: Support Vector Machines and decision trees. Main question: how to efficiently train (build a model / find model parameters)?

16

17 Want to separate "+" points from "-" points using a line. Data: training examples $(x_1, y_1), \dots, (x_n, y_n)$. Each example $i$: $x_i = (x_i^{(1)}, \dots, x_i^{(d)})$, where each $x_i^{(j)}$ is real-valued and $y_i \in \{-1, +1\}$. Inner product: $w \cdot x = \sum_{j=1}^{d} w^{(j)} x^{(j)}$. Which is the best linear separator (defined by $w$, $b$)?

18 (Figure: points A, B, C at varying distances from the separating line.) Distance from the separating hyperplane corresponds to the confidence of the prediction. Example: we are more sure about the class of A and B than of C.

19 Margin $\gamma$: distance of the closest example from the decision line/hyperplane. We define margin this way because of theoretical convenience and the existence of generalization error bounds that depend on the value of the margin.

20 Remember the dot product: $A \cdot B = \|A\|\,\|B\|\cos\theta$, where $\|A\| = \sqrt{\sum_{j=1}^{d} (A^{(j)})^2}$.

21 Dot product $A \cdot B = \|A\|\,\|B\|\cos\theta$. What are $w \cdot x_1$ and $w \cdot x_2$? (Figure: three panels show points $x_1$, $x_2$ at increasing distances from the line $w \cdot x + b = 0$; the projection onto $w$, and hence the estimated $\gamma$, grows with the separation.) So $\gamma$ roughly corresponds to the margin. Bottom line: the bigger $\gamma$, the bigger the separation.

22 Distance from a point to a line. Let: line $L$: $w \cdot x + b = w^{(1)} x^{(1)} + w^{(2)} x^{(2)} + b = 0$, with $w = (w^{(1)}, w^{(2)})$; point $A = (x_A^{(1)}, x_A^{(2)})$; point $M$ on the line $= (x_M^{(1)}, x_M^{(2)})$. Note we assume $\|w\| = 1$. Then $d(A, L) = |AH| = |(A - M) \cdot w| = |(x_A^{(1)} - x_M^{(1)}) w^{(1)} + (x_A^{(2)} - x_M^{(2)}) w^{(2)}| = |x_A^{(1)} w^{(1)} + x_A^{(2)} w^{(2)} + b| = |w \cdot A + b|$. Remember $x_M^{(1)} w^{(1)} + x_M^{(2)} w^{(2)} = -b$ since $M$ belongs to line $L$.
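
A quick numeric check of this formula, with made-up $w$, $b$, $A$ and the slide's assumption $\|w\| = 1$:

```python
import numpy as np

w = np.array([0.6, 0.8])      # unit-length normal of the line w.x + b = 0
b = -1.0
A = np.array([3.0, 2.0])      # query point

print(np.linalg.norm(w))      # 1.0, the slide's assumption ||w|| = 1
print(abs(w @ A + b))         # d(A, L) = |w.A + b| = 2.4
```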

23 (Figure: separating line $w \cdot x + b = 0$.) Prediction $= \operatorname{sign}(w \cdot x + b)$. Confidence $= (w \cdot x + b)\,y$. For the $i$-th datapoint: $\gamma_i = (w \cdot x_i + b)\,y_i$. Want to solve: $\max_w \min_i \gamma_i$. Can rewrite as: $\max_{w,\gamma} \gamma$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge \gamma$.

24 Maximize the margin: good according to intuition, theory (c.f. "VC dimension"), and practice. $\max_{w,\gamma} \gamma$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge \gamma$. (Figure: $\gamma$ is the margin, the distance from the separating hyperplane $w \cdot x + b = 0$; we are maximizing the margin.)

25

26 The separating hyperplane is defined by the support vectors: the points on the +1/-1 planes of the solution. If you knew these points, you could ignore the rest. Generally there are d+1 support vectors (for d-dim. data).

27 Problem: let $w \cdot x + b = \gamma$; then $2w \cdot x + 2b = 2\gamma$. Scaling $w$ increases the margin! Solution: work with normalized $w$: $\gamma = \frac{w}{\|w\|} \cdot x + b$, where $\|w\| = \sqrt{\sum_{j=1}^{d} (w^{(j)})^2}$. Let's also require the support vectors $x_j$ to be on the planes defined by $w \cdot x_j + b = \pm 1$. (Figure: planes $w \cdot x + b = -1, 0, +1$.)

28 Want to maximize the margin $\gamma$! What is the relation between $x_1$ and $x_2$? $x_1 = x_2 + 2\gamma \frac{w}{\|w\|}$. We also know: $w \cdot x_1 + b = +1$ and $w \cdot x_2 + b = -1$. So: $+1 = w \cdot x_1 + b = w \cdot (x_2 + 2\gamma \frac{w}{\|w\|}) + b = w \cdot x_2 + b + 2\gamma \frac{w \cdot w}{\|w\|} = -1 + 2\gamma\|w\|$, hence $\gamma = \frac{1}{\|w\|}$. Note: $w \cdot w = \|w\|^2$.

29 We started with: $\max_{w,\gamma} \gamma$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge \gamma$. But $w$ can be arbitrarily large! We normalized and got: $\arg\max \gamma = \arg\max \frac{1}{\|w\|} = \arg\min \|w\| = \arg\min \frac{1}{2}\|w\|^2$. Then: $\min_w \frac{1}{2}\|w\|^2$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge 1$. This is called SVM with hard constraints.

30 If the data is not separable, introduce a penalty: $\min_w \frac{1}{2}\|w\|^2 + C \cdot (\#\text{number of mistakes})$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge 1$. Minimize $\|w\|^2$ plus the number of training mistakes. Set $C$ using cross-validation. How to penalize mistakes? All mistakes are not equally bad!

31 Introduce slack variables $\xi_i$: $\min_{w,b,\xi_i \ge 0} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n} \xi_i$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge 1 - \xi_i$. If point $x_i$ is on the wrong side of the margin, it gets penalty $\xi_i$. For each data point: if margin $\ge 1$, don't care; if margin $< 1$, pay a linear penalty.

32 $\min_w \frac{1}{2}\|w\|^2 + C \cdot (\#\text{number of mistakes})$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge 1$. What is the role of the slack penalty $C$? $C = \infty$: only want $w, b$ that separate the data. $C = 0$: can set $\xi_i$ to anything, then $w = 0$ (basically ignores the data). (Figure: decision boundaries for small $C$, big $C$, and a good $C$.)

33 SVM in the "natural" form: $\arg\min_{w,b} \frac{1}{2}\, w \cdot w + C\sum_{i=1}^{n} \max\{0,\; 1 - y_i(w \cdot x_i + b)\}$, where the first term is the regularization, $C$ is the regularization parameter, and the sum is the empirical loss $L$ (how well we fit the training data). This is equivalent to $\min_{w,b} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n} \xi_i$ s.t. $\forall i,\; y_i(w \cdot x_i + b) \ge 1 - \xi_i$. SVM uses "hinge loss": $\max\{0, 1 - z\}$ with $z = y_i(w \cdot x_i + b)$. (Plot: 0/1 loss vs. hinge loss; the margin point is at $z = 1$.)
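
A sketch of evaluating this objective with NumPy; the function name and signature are illustrative, not from the lecture:

```python
import numpy as np

def svm_objective(w, b, X, y, C):
    """J(w,b) = 1/2 w.w + C * sum_i max(0, 1 - y_i (w.x_i + b))."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return 0.5 * (w @ w) + C * hinge.sum()
```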

34

35 $\min_{w,b} \frac{1}{2}\, w \cdot w + C\sum_{i=1}^{n} \xi_i$ s.t. $\forall i,\; y_i(x_i \cdot w + b) \ge 1 - \xi_i$. Want to estimate $w$ and $b$! Standard way: use a solver! Solver: software for finding solutions to "common" optimization problems. Use a quadratic solver: minimize a quadratic function subject to linear constraints. Problem: solvers are inefficient for big data!

36 Want to minimize $J(w,b) = \frac{1}{2}\sum_{j=1}^{d} (w^{(j)})^2 + C\sum_{i=1}^{n} \max\{0,\; 1 - y_i(\sum_{j=1}^{d} w^{(j)} x_i^{(j)} + b)\}$; the second term is the empirical loss $L(x_i, y_i)$. Compute the gradient $\nabla J^{(j)}$ w.r.t. $w^{(j)}$: $\nabla J^{(j)} = \frac{\partial J(w,b)}{\partial w^{(j)}} = w^{(j)} + C\sum_{i=1}^{n} \frac{\partial L(x_i, y_i)}{\partial w^{(j)}}$, where $\frac{\partial L(x_i, y_i)}{\partial w^{(j)}} = 0$ if $y_i(w \cdot x_i + b) \ge 1$, and $-y_i x_i^{(j)}$ otherwise.

37 Gradient descent: iterate until convergence: for $j = 1 \dots d$: evaluate $\nabla J^{(j)} = \frac{\partial J(w,b)}{\partial w^{(j)}} = w^{(j)} + C\sum_{i=1}^{n} \frac{\partial L(x_i, y_i)}{\partial w^{(j)}}$, then update $w^{(j)} \leftarrow w^{(j)} - \eta \nabla J^{(j)}$ ($\eta$: learning rate parameter; $C$: regularization parameter). Problem: computing $\nabla J^{(j)}$ takes $O(n)$ time, where $n$ is the size of the training dataset!
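
A minimal batch gradient-descent sketch of this loop; the bias update and the default hyperparameters are my assumptions, not spelled out on the slide:

```python
import numpy as np

def svm_gradient(w, b, X, y, C):
    """Gradient of J(w,b): dL/dw^(j) is 0 if y_i (w.x_i + b) >= 1, else -y_i x_i^(j)."""
    violated = y * (X @ w + b) < 1.0                     # examples inside the margin
    grad_w = w - C * (y[violated][:, None] * X[violated]).sum(axis=0)
    grad_b = -C * y[violated].sum()                      # bias handled analogously (assumption)
    return grad_w, grad_b

def train_gd(X, y, C=1.0, eta=0.01, iters=1000):
    """Batch gradient descent: every step scans all n examples, i.e. O(n d)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        grad_w, grad_b = svm_gradient(w, b, X, y, C)
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b
```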

38 Stochastic gradient descent: instead of evaluating the gradient over all examples, evaluate it for each individual training example: $\nabla J^{(j)}(x_i) = w^{(j)} + C\,\frac{\partial L(x_i, y_i)}{\partial w^{(j)}}$. SGD: iterate until convergence: for $i = 1 \dots n$: for $j = 1 \dots d$: compute $\nabla J^{(j)}(x_i)$ and update $w^{(j)} \leftarrow w^{(j)} - \eta \nabla J^{(j)}(x_i)$. We just had: $\nabla J^{(j)} = w^{(j)} + C\sum_{i=1}^{n} \frac{\partial L(x_i, y_i)}{\partial w^{(j)}}$. Notice: no summation over $i$ anymore.
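
The same model trained with the per-example SGD update; shuffling each epoch is an added assumption:

```python
import numpy as np

def train_sgd(X, y, C=1.0, eta=0.01, epochs=10):
    """SGD: update from one (x_i, y_i) at a time; note there is no sum over i."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):     # shuffling each epoch is an assumption
            if y[i] * (X[i] @ w + b) < 1.0:    # hinge active: dL/dw = -y_i x_i
                w -= eta * (w - C * y[i] * X[i])
                b += eta * C * y[i]
            else:                              # hinge inactive: only the regularizer
                w -= eta * w
    return w, b
```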

39

40 Example by Leon Bottou: Reuters RCV1 document corpus. Predict the category of a document; one vs. the rest classification. $n$ = 781,000 training examples (documents), 23,000 test examples, $d$ = 50,000 features: one feature per word; remove stop-words; remove low-frequency words.

41 Questions: (1) Is SGD successful at minimizing $J(w,b)$? (2) How quickly does SGD find the minimum of $J(w,b)$? (3) What is the error on a test set? (Table in the slide compares a standard SVM, a fast SVM, and SGD-SVM on training time, value of $J(w,b)$, and test error.) Findings: (1) SGD-SVM is successful at minimizing the value of $J(w,b)$; (2) SGD-SVM is super fast; (3) SGD-SVM test set error is comparable.

42 (Plot: SGD SVM vs. conventional SVM.) Optimization quality: $J(w,b) - J(w_{opt}, b_{opt})$. For optimizing $J(w,b)$ to within reasonable quality, SGD-SVM is super fast.

43 Need to choose the learning rate $\eta$ and $t_0$: $w_{t+1} \leftarrow w_t - \frac{\eta_0}{t + t_0}\left(w_t + C\,\frac{\partial L(x_t, y_t)}{\partial w}\right)$. Leon suggests: choose $t_0$ so that the expected initial updates are comparable with the expected size of the weights. Choose $\eta_0$: select a small subsample; try various rates (e.g., 10, 1, 0.1, 0.01, ...); pick the one that most reduces the cost; use it for the next 100k iterations on the full dataset.
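
A sketch of the decaying rate, assuming the schedule $\eta_t = \eta_0/(t + t_0)$ with illustrative constants:

```python
# Decaying learning rate eta_t = eta0 / (t + t0) for the update above.
# eta0 = 0.1 and t0 = 100 are illustrative; t0 is tuned so the first updates
# are comparable in size to the expected weights.
eta0, t0 = 0.1, 100.0
for t in range(3):
    print(t, eta0 / (t + t0))   # 0.001, ~0.00099, ~0.00098
```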

44 Sparse linear SVM: the feature vector $x_i$ is sparse (contains many zeros). Do not store $x_i = [0,0,0,1,0,0,0,0,5,0,0,0,0,0,0,\dots]$; represent $x_i$ as a sparse vector $x_i = [(4,1), (9,5), \dots]$. Can we do the SGD update $w \leftarrow w - \eta\left(w + C\,\frac{\partial L(x_i, y_i)}{\partial w}\right)$ more efficiently? Approximate it in 2 steps: (1) $w \leftarrow w - \eta C\,\frac{\partial L(x_i, y_i)}{\partial w}$ (cheap: $x_i$ is sparse, so few coordinates $j$ of $w$ will be updated); (2) $w \leftarrow w(1 - \eta)$ (expensive: $w$ is not sparse, all coordinates need to be updated).

45 Solution 1: $w = s \cdot v$. Represent the vector $w$ as the product of a scalar $s$ and a vector $v$. Then the two-step update procedure (1) $w \leftarrow w - \eta C\,\frac{\partial L(x_i, y_i)}{\partial w}$, (2) $w \leftarrow w(1 - \eta)$ becomes: (1) $v \leftarrow v - \frac{\eta C}{s}\,\frac{\partial L(x_i, y_i)}{\partial w}$; (2) $s \leftarrow s(1 - \eta)$. Solution 2: perform only step (1) for each training example; perform step (2) with lower frequency and higher $\eta$.
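
A sketch of Solution 1, representing $w$ as $s \cdot v$; the dict-based sparse vector and the omitted bias are simplifications of mine:

```python
def sparse_sgd_step(s, v, x_sparse, yi, eta, C):
    """One SGD step with w represented as s * v (bias omitted for brevity).

    x_sparse: dict {coordinate j: value x_i^(j)} for one sparse example.
    Step (1) touches only the nonzero coordinates of x_i; step (2) rescales
    all of w in O(1) via the scalar s. Real implementations renormalize s, v
    occasionally so s does not underflow.
    """
    margin = yi * s * sum(v.get(j, 0.0) * xj for j, xj in x_sparse.items())
    if margin < 1.0:                       # (1) v <- v + (eta C / s) * y_i x_i
        for j, xj in x_sparse.items():
            v[j] = v.get(j, 0.0) + (eta * C / s) * yi * xj
    s *= (1.0 - eta)                       # (2) s <- s (1 - eta)
    return s, v
```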

46 Stopping criteria: how many iterations of SGD? Early stopping with cross-validation: create a validation set; monitor the cost function on the validation set; stop when the loss stops decreasing. Early stopping: extract two (very) small subsamples of training data, A and B; train on A, stop by validating on B; the number of training epochs on A is an estimate of $k$; train for $k$ epochs on the full dataset.
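
A sketch of early stopping on a validation set (no patience window; hyperparameters are illustrative):

```python
import numpy as np

def train_early_stop(X_tr, y_tr, X_val, y_val, C=1.0, eta=0.01, max_epochs=100):
    """Run SGD epochs; stop as soon as the validation hinge loss stops decreasing."""
    w, b = np.zeros(X_tr.shape[1]), 0.0
    best_val = float("inf")
    for epoch in range(max_epochs):
        for i in np.random.permutation(len(y_tr)):        # one SGD epoch
            if y_tr[i] * (X_tr[i] @ w + b) < 1.0:
                w -= eta * (w - C * y_tr[i] * X_tr[i])
                b += eta * C * y_tr[i]
            else:
                w -= eta * w
        val_loss = np.maximum(0.0, 1.0 - y_val * (X_val @ w + b)).mean()
        if val_loss >= best_val:                          # stopped improving
            break
        best_val = val_loss
    return w, b
```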

47 Idea 1: one against all. Learn 3 classifiers: + vs. {o, -}; - vs. {o, +}; o vs. {+, -}. Obtain: $w_+, b_+$; $w_-, b_-$; $w_o, b_o$. How to classify? Return the class $c$ with $\arg\max_c\; w_c \cdot x + b_c$.
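
A sketch of the one-against-all decision rule; the three toy classifiers are made up:

```python
import numpy as np

def classify_one_vs_all(x, classifiers):
    """classifiers: {class label c: (w_c, b_c)}. Return arg max_c w_c . x + b_c."""
    return max(classifiers, key=lambda c: classifiers[c][0] @ x + classifiers[c][1])

# Three made-up classifiers for the classes "+", "-", "o"
clf = {"+": (np.array([1.0, 0.0]), 0.0),
       "-": (np.array([-1.0, 0.0]), 0.0),
       "o": (np.array([0.0, 1.0]), 0.0)}
print(classify_one_vs_all(np.array([2.0, 0.5]), clf))  # -> "+"
```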

48 Idea 2: learn 3 sets of weights simultaneously! For each class $c$, estimate $w_c$, $b_c$. Want the correct class $y_i$ to have the highest margin: $w_{y_i} \cdot x_i + b_{y_i} \ge 1 + w_c \cdot x_i + b_c \quad \forall c \ne y_i,\; \forall (x_i, y_i)$.

49 Optimization problem: $\min_{w,b} \frac{1}{2}\sum_c \|w_c\|^2 + C\sum_{i=1}^{n} \xi_i$ s.t. $w_{y_i} \cdot x_i + b_{y_i} \ge w_c \cdot x_i + b_c + 1 - \xi_i$ and $\xi_i \ge 0$, $\forall c \ne y_i,\; \forall i$. To obtain the parameters $w_c, b_c$ (for each class $c$) we can use similar techniques as for the 2-class SVM. SVM is widely perceived as a very powerful learning algorithm.

50

51 New setting: online learning. Allows modeling problems where we have a continuous stream of data and we want an algorithm to learn from it and slowly adapt to changes in the data. Idea: do slow updates to the model. SGD-SVM makes updates if it misclassifies a datapoint. So: first train the classifier on training data; then, for every example from the stream, if we misclassify, update the model (using a small learning rate).

52 Protocol: a user comes and tells us the origin and destination. We offer to ship the package for some amount of money ($10-$50). Based on the price we offer, sometimes the user uses our service (y = 1), sometimes they don't (y = -1). Task: build an algorithm to optimize what price we offer to the users. Features $x$ capture: information about the user; origin and destination. Problem: will the user accept the price?

53 Model whether the user will accept our price: $y = f(x; w)$. Accept: $y = +1$; not accept: $y = -1$. Build this model with, say, Perceptron or SVM. The website runs continuously; an online learning algorithm would do something like this: a user comes and is represented as an $(x, y)$ pair, where $x$ is the feature vector (including the price we offer, origin, destination) and $y$ indicates whether they chose to use our service or not. The algorithm updates $w$ using just that $(x, y)$ pair. Basically, we update the $w$ parameters every time we get some new data.
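
A sketch of such an online loop: pre-train, then adjust $w$ on each streamed pair with a small learning rate (the function names and the stream are hypothetical):

```python
import numpy as np

def online_update(w, b, x, y, eta=0.001, C=1.0):
    """Update only when the streamed example is misclassified / inside the margin."""
    if y * (x @ w + b) < 1.0:
        w = w - eta * (w - C * y * x)   # small eta: slow adaptation to drift
        b = b + eta * C * y
    return w, b

# Usage (hypothetical): first train on historical data, then adapt on the stream.
# w, b = train_sgd(X_train, y_train)
# for x, y in stream:
#     w, b = online_update(w, b, x, y)
```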

54 We discard the idea of a fixed dataset; instead we have a continuous stream of data. Further comments: for a major website with a massive stream of data, this kind of algorithm is pretty reasonable, since there is no need to deal with all the training data. If you had a small number of users, you could save their data and then run a normal algorithm on the full dataset, doing multiple passes over the data.

55 An online algorithm can adapt to changing user preferences. For example, over time users may become more price sensitive. The algorithm adapts and learns this, so the system is dynamic.
