LECTURE 2: Linear and quadratic classifiers


1 LECTURE 2: Linear and quadratic classifiers
Part 1: Bayesian Decision Theory
  The Likelihood Ratio Test
  Maximum A Posteriori and Maximum Likelihood
  Discriminant functions
Part 2: Quadratic classifiers
  Bayes classifiers for normally distributed classes
  Euclidean and Mahalanobis distance classifiers
  Numerical example
Part 3: Linear classifiers
  Gradient descent
  The perceptron rule
  The pseudo-inverse solution
  Least mean squares
Introduction to Pattern Analysis, Ricardo Gutierrez-Osuna, Texas A&M University

2 Part 1: Bayesian Decision Theory

3 The Likelihood Ratio Test (1)
Assume we are to classify an object based on the evidence provided by a measurement (or feature vector) x.
Would you agree that a reasonable decision rule would be the following? "Choose the class that is most probable given the observed feature vector x."
More formally: evaluate the posterior probability of each class P(ωi|x) and choose the class with the largest P(ωi|x).

4 The Likelihood Ratio Test (2)
Let us examine this decision rule for a two-class problem. In this case the decision rule becomes
if P(ω1|x) > P(ω2|x) choose ω1, else choose ω2
Or, in a more compact form (choose ω1 if the left-hand side is larger, ω2 otherwise):
P(ω1|x) ≷ P(ω2|x)
Applying Bayes theorem:
P(x|ω1)P(ω1)/P(x) ≷ P(x|ω2)P(ω2)/P(x)

5 The Likelihood Ratio Test (3)
P(x) does not affect the decision rule so it can be eliminated. Rearranging the previous expression:
Λ(x) = P(x|ω1)/P(x|ω2) ≷ P(ω2)/P(ω1)
The term Λ(x) is called the likelihood ratio, and the decision rule is known as the likelihood ratio test.

6 Likelihood Ratio Test: an example (1)
Given a classification problem with the following class-conditional densities:
P(x|ω1) = (1/√(2π)) exp(−(x−4)²/2)
P(x|ω2) = (1/√(2π)) exp(−(x−10)²/2)
(Figure: the two class-conditional densities, peaked at x = 4 and x = 10.)
Derive a classification rule based on the Likelihood Ratio Test (assume equal priors).

7 Likelihood Ratio Test: an example (2)
Solution: substituting the given likelihoods and priors into the LRT expression:
Λ(x) = exp(−(x−4)²/2) / exp(−(x−10)²/2) ≷ 1
Simplifying, changing signs and taking logs: choose ω1 when (x−10)² > (x−4)², which yields the rule
x < 7: say ω1;  x > 7: say ω2
This LRT result makes intuitive sense since the likelihoods are identical and differ only in their mean value.
(Figure: decision regions R1: say ω1 for x < 7 and R2: say ω2 for x > 7, overlaid on the two densities.)
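As a quick numerical check of this rule, here is a minimal sketch in Python/NumPy (the helper names gaussian and lrt_decision are illustrative, not part of the lecture); it evaluates the likelihood ratio for the two unit-variance Gaussians centered at 4 and 10 and shows that the decision flips at x = 7.

import numpy as np

def gaussian(x, mean, var=1.0):
    # class-conditional density N(mean, var) from the slide
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def lrt_decision(x, prior1=0.5, prior2=0.5):
    # likelihood ratio test: choose omega_1 if Lambda(x) > P(omega_2)/P(omega_1)
    lam = gaussian(x, 4.0) / gaussian(x, 10.0)
    return 1 if lam > prior2 / prior1 else 2

for x in [5.0, 6.9, 7.1, 9.0]:
    print(x, "-> omega", lrt_decision(x))   # decisions flip at x = 7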

8 The probability of error
The probability of error is the probability of assigning x to the wrong class. For a two-class problem, P[error|x] is simply
P(error|x) = P(ω1|x) if we decide ω2, and P(ω2|x) if we decide ω1
It makes sense that the classification rule be designed to minimize the average probability of error P[error] across all possible values of x:
P(error) = ∫ P(error, x) dx = ∫ P(error|x) P(x) dx
To minimize P(error) we minimize the integrand P(error|x) at each x: choose the class with the maximum posterior P(ωi|x).
This is called the MAXIMUM A POSTERIORI (MAP) RULE.

9 Minimizing probability of error
We prove the optimality of the MAP rule graphically:
The right plot shows the posterior for each of the two classes; the bottom plots show the P(error) for the MAP rule and for an alternative decision rule. Which one has the lower P(error) (color-filled area)?
(Figure: P(ω|x) versus x; two panels labeled THE MAP RULE and THE OTHER RULE, each partitioned into regions Choose RED / Choose BLUE / Choose RED with the corresponding error area shaded.)

10 The Bayes Risk (1)
So far we have assumed that the penalty of misclassifying ω1 as ω2 is the same as that of the converse. In general, this is not the case:
Misclassifications in the fish-sorting example lead to different costs.
Medical diagnostics errors are very asymmetric.
We capture this concept with a cost function Cij: Cij represents the cost of choosing class ωi when class ωj is the true class.
And we define the Bayes Risk as the expected value of the cost:
R = E[C] = Σi Σj Cij P[choose ωi and x ∈ ωj] = Σi Σj Cij P[x ∈ Ri | ωj] P[ωj]

11 The Bayes Risk (2)
What is the decision rule that minimizes the Bayes Risk?
It can be shown* that the minimum risk is achieved by using the following decision rule:
P(x|ω1)/P(x|ω2) ≷ (C12 − C22) P[ω2] / ((C21 − C11) P[ω1])
*For an intuitive proof visit my lecture notes at TAMU.
Notice any similarities with the LRT?

12 The Bayes Risk: an example (1)
Consider a classification problem with two classes defined by the following likelihood functions:
P(x|ω1) = (1/√(2π·3)) exp(−x²/(2·3))
P(x|ω2) = (1/√(2π)) exp(−(x−2)²/2)
(Figure: the two likelihoods plotted versus x.)
What is the decision rule that minimizes the Bayes Risk? Assume P[ω1] = P[ω2] = 0.5, C11 = C22 = 0, C12 = 1 and C21 = 3^{1/2}.

13 The Bayes Risk: an example (2)
Λ(x) = [(1/√(2π·3)) exp(−x²/6)] / [(1/√(2π)) exp(−(x−2)²/2)] ≷ (C12 − C22)P[ω2] / ((C21 − C11)P[ω1]) = 1/√3
⟺ (1/√3) exp(−x²/6 + (x−2)²/2) ≷ 1/√3
⟺ (x−2)²/2 − x²/6 ≷ 0
⟺ 2x² − 12x + 12 ≷ 0, with boundaries at x = 3 ± √3 = 4.73, 1.27
Decision regions: R1 = {x < 1.27} ∪ {x > 4.73} (say ω1), R2 = {1.27 < x < 4.73} (say ω2).
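A minimal numerical check of this example, assuming the values reconstructed above (P(x|ω1) = N(0, 3), P(x|ω2) = N(2, 1), equal priors, C11 = C22 = 0, C12 = 1, C21 = √3); the sketch scans x, applies the Bayes criterion, and reports the points where the decision changes, which should fall near 3 ± √3.

import numpy as np

def gaussian(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# priors and costs as reconstructed above (assumed values)
P1 = P2 = 0.5
C11 = C22 = 0.0
C12, C21 = 1.0, np.sqrt(3.0)
threshold = (C12 - C22) * P2 / ((C21 - C11) * P1)   # = 1/sqrt(3)

def bayes_decision(x):
    lam = gaussian(x, 0.0, 3.0) / gaussian(x, 2.0, 1.0)   # likelihood ratio
    return 1 if lam > threshold else 2

xs = np.linspace(-2.0, 7.0, 901)
labels = np.array([bayes_decision(x) for x in xs])
print(xs[np.where(np.diff(labels) != 0)])   # boundaries near 1.27 and 4.73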

14 Variations of the LRT
The LRT that minimizes the Bayes Risk is called the Bayes Criterion:
Λ(x) = P(x|ω1)/P(x|ω2) ≷ (C12 − C22)P[ω2] / ((C21 − C11)P[ω1])
Maximum A Posteriori Criterion: sometimes we will be interested in minimizing P[error], which is a special case of the Bayes Criterion if we use a zero-one cost function:
Cij = 0 if i = j, 1 if i ≠ j
Λ(x) = P(x|ω1)/P(x|ω2) ≷ P(ω2)/P(ω1), or equivalently P(ω1|x) ≷ P(ω2|x)

15 Variations of the LRT
Maximum Likelihood Criterion: finally, the simplest form of the LRT is obtained for the case of equal priors P[ωi] = 1/2 and a zero-one cost function:
Λ(x) = P(x|ω1)/P(x|ω2) ≷ 1
When would you want to use an ML criterion?

16 Multi-class problems
The previous decision rules were derived for two-class problems, but they generalize gracefully to multiple classes:
To minimize P[error], choose the class with the highest posterior P[ωi|x]: ωi = argmax_{i=1..C} P(ωi|x)
To minimize the Bayes risk, choose the class with the lowest conditional risk R[ωi|x]: ωi = argmin_{i=1..C} R(ωi|x) = argmin_{i=1..C} Σ_{j=1..C} Cij P(ωj|x)

17 Discriminant functions (1)
All these decision rules have the same structure: at each point x in feature space, choose the class ωi which maximizes (or minimizes) some measure gi(x).
This structure can be formalized with a set of discriminant functions gi(x), i = 1..C, and the following decision rule: "assign x to class ωi if gi(x) > gj(x) for all j ≠ i".
We can then express the three basic decision rules (Bayes, MAP and ML) in terms of discriminant functions:
Criterion   Discriminant function
Bayes       gi(x) = -R(αi|x)
MAP         gi(x) = P(ωi|x)
ML          gi(x) = P(x|ωi)
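This argmax structure maps directly onto code. Here is a minimal sketch; the two discriminant functions below are illustrative placeholders, not the lecture's.

import numpy as np

# one discriminant function per class; two toy functions stand in for g_i(x)
discriminants = [
    lambda x: -np.sum((x - 0.0) ** 2),   # placeholder g_1
    lambda x: -np.sum((x - 1.0) ** 2),   # placeholder g_2
]

def classify(x):
    # "assign x to class omega_i if g_i(x) > g_j(x) for all j != i"
    scores = [g(x) for g in discriminants]
    return int(np.argmax(scores)) + 1    # classes numbered from 1

print(classify(np.array([0.2, 0.1])))    # -> 1
print(classify(np.array([0.9, 0.8])))    # -> 2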

18 Discriminant functions (2)
Therefore, we can visualize the decision rule as a network that computes C discriminant functions and selects the class corresponding to the largest discriminant.
(Figure: a network with features x1, x2, x3, ..., xd at the bottom, discriminant functions g1(x), g2(x), ..., gC(x) above them, optional costs, a "select max" stage, and the class assignment at the output.)

19 Recapping
The LRT is a theoretical result that can only be applied if we have complete knowledge of the likelihoods P[x|ωi]:
P[x|ωi] is generally unknown, but can be estimated from data.
If the form of the likelihood is known (e.g., Gaussian), the problem is simplified because we only need to estimate the parameters of the model (e.g., mean and covariance). This leads to a classifier known as QUADRATIC, which we cover next.
If the form of the likelihood is unknown, the problem becomes much harder and requires a technique known as non-parametric density estimation. This technique is covered in Lecture 3.

20 Part 2: Quadratic classifiers

21 Bayes classifier for Gaussian classes (1)
For normally distributed classes, the discriminant functions reduce to very simple expressions. The (multivariate) Gaussian density can be defined as
p(x) = 1/((2π)^{n/2} |Σ|^{1/2}) · exp(−½ (x − µ)ᵀ Σ⁻¹ (x − µ))
Using Bayes rule, the MAP discriminant function can be written as
gi(x) = P(ωi|x) = P(x|ωi)P(ωi)/P(x) = [1/((2π)^{n/2} |Σi|^{1/2})] · exp(−½ (x − µi)ᵀ Σi⁻¹ (x − µi)) · P(ωi)/P(x)

22 Bayes classifier for Gaussian classes (2)
Eliminating constant terms:
gi(x) = |Σi|^{−1/2} exp(−½ (x − µi)ᵀ Σi⁻¹ (x − µi)) P(ωi)
Taking logs:
gi(x) = −½ (x − µi)ᵀ Σi⁻¹ (x − µi) − ½ log|Σi| + log P(ωi)
This is known as a QUADRATIC discriminant function (because it is a function of the square of x).
In the next few slides we will analyze what happens to this expression under different assumptions for the covariance.
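A minimal NumPy sketch of this quadratic discriminant; the means, covariances and priors below are illustrative values, not taken from the lecture.

import numpy as np

def quadratic_discriminant(x, mean, cov, prior):
    # g_i(x) = -1/2 (x-mu_i)^T Sigma_i^-1 (x-mu_i) - 1/2 log|Sigma_i| + log P(omega_i)
    diff = x - mean
    return (-0.5 * diff @ np.linalg.solve(cov, diff)
            - 0.5 * np.log(np.linalg.det(cov))
            + np.log(prior))

def classify(x, means, covs, priors):
    scores = [quadratic_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

# illustrative two-class problem
means  = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs   = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]
priors = [0.5, 0.5]
print(classify(np.array([0.5, 0.2]), means, covs, priors))   # -> 0
print(classify(np.array([2.8, 3.1]), means, covs, priors))   # -> 1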

23 Case 1: Σi = σ²I (1)
This situation occurs when the features are statistically independent and have the same variance for all classes. In this case, the quadratic discriminant function becomes
gi(x) = −½ (x − µi)ᵀ (σ²I)⁻¹ (x − µi) − ½ log|σ²I| + log P(ωi) = −(1/(2σ²)) (x − µi)ᵀ(x − µi) + log P(ωi) + constant
Assuming equal priors and dropping constant terms:
gi(x) = −(x − µi)ᵀ(x − µi) = −Σ_{k=1..DIM} (x_k − µ_{i,k})²
This is called a Euclidean-distance or nearest-mean classifier.
From [Schalkoff, 1992]

24 Case 1: Σi = σ²I (2)
This is probably the simplest statistical classifier that you can build: assign an unknown example to the class whose center µi is the closest using the Euclidean distance.
(Figure: block diagram with input x, one Euclidean-distance block per class mean µ1, µ2, ..., µC, and a minimum selector that outputs the class.)
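A minimal sketch of this nearest-mean (minimum Euclidean distance) classifier; the class centers are illustrative.

import numpy as np

def nearest_mean(x, means):
    # assign x to the class whose center is closest in Euclidean distance
    dists = [np.linalg.norm(x - m) for m in means]
    return int(np.argmin(dists))

means = [np.array([3.0, 3.0]), np.array([7.0, 4.0]), np.array([2.0, 5.0])]
print(nearest_mean(np.array([6.5, 4.2]), means))   # -> 1 (closest to [7, 4])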

25 Case 1: Σi = σ²I, example
(Figure: scatter of three Gaussian classes with identical diagonal covariance matrices Σ1 = Σ2 = Σ3 ∝ I and their means, together with the linear decision boundaries produced by the Euclidean-distance classifier.)

26 Case 2: Σi = Σ (Σ non-diagonal)
All the classes have the same covariance matrix, but the matrix is not diagonal. In this case, the quadratic discriminant becomes
gi(x) = −½ (x − µi)ᵀ Σ⁻¹ (x − µi) − ½ log|Σ| + log P(ωi)
Assuming equal priors and eliminating constant terms:
gi(x) = −(x − µi)ᵀ Σ⁻¹ (x − µi)
This is known as a Mahalanobis-distance classifier.
(Figure: block diagram with input x, one Mahalanobis-distance block per class mean µ1, µ2, ..., µC, all sharing Σ, and a minimum selector that outputs the class.)

27 The Mahalanobis distance
The quadratic term is called the Mahalanobis distance, a very important metric in SPR (statistical pattern recognition).
The Mahalanobis metric is a vector distance that uses a Σ⁻¹ norm; Σ⁻¹ can be thought of as a stretching factor on the space.
Note that for an identity covariance matrix (Σ = I), the Mahalanobis distance becomes the familiar Euclidean distance.
(Figure: the locus of points at a constant Mahalanobis distance ‖x − µ‖_{Σ⁻¹} = K around the mean µ.)
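A minimal sketch of the Mahalanobis distance and the resulting minimum-distance classifier, assuming a shared covariance matrix; the numbers are illustrative.

import numpy as np

def mahalanobis_sq(x, mean, cov):
    # (x - mu)^T Sigma^-1 (x - mu): the squared distance in the "stretched" space
    diff = x - mean
    return float(diff @ np.linalg.solve(cov, diff))

cov = np.array([[2.0, 1.0], [1.0, 2.0]])             # shared, non-diagonal covariance
means = [np.array([0.0, 0.0]), np.array([4.0, 2.0])]

x = np.array([1.0, 2.0])
dists = [mahalanobis_sq(x, m, cov) for m in means]
print(dists, "-> class", int(np.argmin(dists)))      # smaller distance wins
print(mahalanobis_sq(x, means[0], np.eye(2)))        # with Sigma = I this is 5.0, the squared Euclidean distance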

28 Case 2: Σi = Σ (Σ non-diagonal), example
(Figure: scatter of three Gaussian classes sharing a non-diagonal covariance matrix and their means, together with the linear decision boundaries produced by the Mahalanobis-distance classifier.)

29 Case 3: Σi ≠ Σj, general case, example
(Figure: scatter of three Gaussian classes with different covariance matrices; the decision boundaries are now quadratic. A zoomed-out view of the same boundaries is also shown.)

30 Numerical example (1)
Derive a linear discriminant function for the two-class, three-dimensional classification problem defined by
µ1 = [0 0 0]ᵀ;  µ2 = [1 1 1]ᵀ;  Σ1 = Σ2 = (1/4)·I;  and the given class priors P(ω1), P(ω2)
Would anybody dare to sketch the likelihood densities and the decision boundary for this problem?

31 Numerical example (2)
Solution:
gi(x) = −½ (x − µi)ᵀ Σi⁻¹ (x − µi) − ½ log|Σi| + log P(ωi)
With Σ1 = Σ2 = (1/4)·I we have Σi⁻¹ = 4·I, and the −½ log|Σi| terms are equal for both classes, so
g1(x) = −2 (x² + y² + z²) + log P(ω1)
g2(x) = −2 [(x − 1)² + (y − 1)² + (z − 1)²] + log P(ω2)

32 Numerical example (3)
Solution (continued):
g1(x) > g2(x)
⟺ −2 (x² + y² + z²) + log P(ω1) > −2 [(x − 1)² + (y − 1)² + (z − 1)²] + log P(ω2)
⟺ 4 (x + y + z) < 6 + log(P(ω1)/P(ω2))
⟺ x + y + z < 3/2 + ¼ log(P(ω1)/P(ω2))
so we decide ω1 below this threshold and ω2 above it: the decision boundary is a plane.
Classify the test example xu by comparing the sum of its components with this threshold.
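A small numerical check of this result, assuming equal priors for concreteness (an assumption; the priors above are left symbolic): the sketch evaluates both Gaussian discriminants for µ1 = [0 0 0]ᵀ, µ2 = [1 1 1]ᵀ, Σ = (1/4)·I and confirms that the decision flips at x + y + z = 3/2.

import numpy as np

mu1, mu2 = np.zeros(3), np.ones(3)
cov = 0.25 * np.eye(3)
prior1 = prior2 = 0.5            # assumed equal priors for this check

def g(x, mu, prior):
    # quadratic discriminant; the log|Sigma| term is the same for both classes
    diff = x - mu
    return (-0.5 * diff @ np.linalg.solve(cov, diff)
            - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior))

for x in [np.array([0.4, 0.5, 0.5]), np.array([0.6, 0.5, 0.5])]:
    decide = 1 if g(x, mu1, prior1) > g(x, mu2, prior2) else 2
    print(x, "sum =", x.sum(), "-> omega", decide)   # boundary at sum = 1.5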

33 Conclusions
The Euclidean-distance classifier is Bayes-optimal* for Gaussian classes with equal covariance matrices proportional to the identity matrix and equal priors.
The Mahalanobis-distance classifier is Bayes-optimal for Gaussian classes with equal covariance matrices and equal priors.
*Bayes-optimal means that the classifier yields the minimum P[error], which is the best ANY classifier can achieve.

34 Part 3: Linear classifiers

35 Linear Discriminant Functions (1)
The objective of this section is to present methods for learning linear discriminant functions of the form
g(x) = wᵀx + w0, with g(x) > 0 ⟹ x ∈ ω1 and g(x) < 0 ⟹ x ∈ ω2
where w is the weight vector and w0 is the threshold or bias.
(Figure: the hyperplane wᵀx + w0 = 0 separates the region wᵀx + w0 > 0 from wᵀx + w0 < 0 in feature space.)
Similar discriminant functions were derived in the previous section as a special case of the quadratic classifier.
In this chapter, the discriminant functions will be derived in a non-parametric fashion, that is, no assumptions will be made about the underlying densities.

36 Linear Discriminant Functions (2)
For convenience, we will focus on binary classification. Extension to the multi-category case can be easily achieved by
Using ωi / not-ωi dichotomies
Using ωi / ωj dichotomies

37 Gradient descent (1)
Gradient descent is a general method for function minimization.
From basic calculus, we know that the minimum of a function J(x) is defined by the zeros of the gradient: ∇x J(x) = 0 ⟹ x* = argmin J(x).
Only in very special cases does this minimization problem have a closed-form solution.
In some other cases, a closed-form solution may exist, but is numerically ill-posed or impractical (e.g., memory requirements).

38 Gradient descent (2)
Gradient descent finds the minimum in an iterative fashion by moving in the direction of steepest descent:
1. Start with an arbitrary solution x(0).
2. Compute the gradient ∇x J(x(k)).
3. Move in the direction of steepest descent: x(k+1) = x(k) − η ∇x J(x(k)), where η is the learning rate.
4. Go to 2 (until convergence).
(Figure: J(w) versus w, showing how the sign of the gradient drives the updates toward a minimum, and an initial guess x0 that converges to a local minimum rather than the global minimum.)
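A minimal sketch of this four-step procedure, applied to a simple quadratic J(x) whose minimum is known in closed form; the objective and learning rate are illustrative.

import numpy as np

def gradient_descent(grad, x0, eta=0.1, n_steps=100):
    # x(k+1) = x(k) - eta * grad J(x(k))
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - eta * grad(x)
    return x

# J(x) = (x1 - 3)^2 + (x2 + 1)^2, whose gradient is 2 (x - [3, -1])
grad_J = lambda x: 2.0 * (x - np.array([3.0, -1.0]))
print(gradient_descent(grad_J, [0.0, 0.0]))   # converges to [3, -1]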

39 Perceptron learning (1)
Let's now consider the problem of solving a binary classification problem with a linear discriminant.
As usual, assume we have a dataset X = {x(1), x(2), ..., x(N)} containing examples from the two classes.
For convenience, we will absorb the intercept w0 by augmenting the feature vector x with an additional constant dimension:
wᵀx + w0 = [w0 wᵀ] [1; x] = aᵀy
From [Duda, Hart and Stork, 2001]

40 Perceptron learning (2)
Keep in mind that our objective is to find a vector a such that g(x) = aᵀy > 0 for x ∈ ω1 and aᵀy < 0 for x ∈ ω2.
To simplify the derivation, we will "normalize" the training set by replacing all examples from class ω2 by their negative: y ← −y for all y ∈ ω2.
This allows us to ignore class labels and look for a weight vector such that aᵀy > 0 for all the examples.
From [Duda, Hart and Stork, 2001]

41 Perceptron learning (3)
To find this solution we must first define an objective function J(a). A good choice is what is known as the Perceptron criterion:
J_P(a) = Σ_{y ∈ Y_M} (−aᵀy)
where Y_M is the set of examples misclassified by a.
Note that J_P(a) is non-negative since aᵀy < 0 for misclassified samples.

42 Perceptron learning (4)
To find the minimum of J_P(a), we use gradient descent. The gradient is defined by
∇a J_P(a) = Σ_{y ∈ Y_M} (−y)
and the gradient descent update rule becomes
a(k+1) = a(k) + η Σ_{y ∈ Y_M} y
This is known as the perceptron batch update rule.
The weight vector may also be updated in an on-line fashion, that is, after the presentation of each individual example:
a(k+1) = a(k) + η y(i)    (Perceptron rule)
where y(i) is an example that has been misclassified by a(k).
From [Duda, Hart and Stork, 2001]
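A minimal sketch of the on-line perceptron rule, using the normalization trick of the previous slides (class-ω2 samples negated); the small dataset matches the exercise slide further below (as reconstructed there), and the zero initialization and unit learning rate are assumptions.

import numpy as np

def perceptron(X1, X2, eta=1.0, a0=None, n_epochs=100):
    # augment with a leading 1, then negate class-2 samples so we only need a^T y > 0
    Y = np.vstack([np.hstack([np.ones((len(X1), 1)), X1]),
                   -np.hstack([np.ones((len(X2), 1)), X2])])
    a = np.zeros(Y.shape[1]) if a0 is None else np.asarray(a0, dtype=float)
    for _ in range(n_epochs):
        updated = False
        for y in Y:                      # on-line: one example at a time
            if a @ y <= 0:               # misclassified sample
                a = a + eta * y          # a(k+1) = a(k) + eta * y
                updated = True
        if not updated:                  # all samples correct: converged
            break
    return a

X1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # class 1
X2 = np.array([[1.0, 0.0], [1.0, 1.0]])  # class 2
print(perceptron(X1, X2))                # a separating line a0 + a1*x1 + a2*x2 = 0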

43 Perceptron learning (5)
If the classes are linearly separable, the perceptron rule is guaranteed to converge to a valid solution.
However, if the two classes are not linearly separable, the perceptron rule will not converge: since no weight vector a can correctly classify every sample in a non-separable dataset, the corrections in the perceptron rule will never cease.
One ad-hoc solution to this problem is to enforce convergence by using variable learning rates η(k) that approach zero as k approaches infinity.

44 Perceptron learning example
Consider the following classification problem:
class ω1, defined by the feature vectors x = {[0, 0]ᵀ, [0, 1]ᵀ}
class ω2, defined by the feature vectors x = {[1, 0]ᵀ, [1, 1]ᵀ}
Apply the perceptron algorithm to build a vector a that separates both classes:
Use learning rate η = 1 and a(0) = [1, −1, −1]ᵀ.
Update the vector a on a per-example basis.
Present the examples in the order in which they were given above.
Draw a scatterplot of the data and the separating line you found with the perceptron rule.

45 Minimum Squared Error solution (1)
The classical Minimum Squared Error (MSE) criterion provides an alternative to the perceptron rule:
The perceptron rule seeks a weight vector a that satisfies the inequality aᵀy(i) > 0; it only considers misclassified samples, since these are the only ones that violate this inequality.
Instead, the MSE criterion looks for a solution to the equality aᵀy(i) = b(i), where the b(i) are some pre-specified target values (e.g., class labels).
As a result, the MSE solution uses ALL of the samples in the training set.
From [Duda, Hart and Stork, 2001]

46 Minimum Squared Error solution (2)
The system of equations solved by MSE is Ya = b, where a is the weight vector, each row in Y is a training example, and each row in b is the corresponding class label:
Y = [ y0(1) y1(1) ... yD(1)
      y0(2) y1(2) ... yD(2)
       ...
      y0(N) y1(N) ... yD(N) ],   a = [a0 a1 ... aD]ᵀ,   b = [b(1) b(2) ... b(N)]ᵀ
For consistency, we will continue assuming that examples from class ω2 have been replaced by their negative vector, although this is not a requirement for the MSE solution.
From [Duda, Hart and Stork, 2001]

47 Minimum Squared Error solution (3)
An exact solution to Ya = b can sometimes be found:
If the number of (independent) equations (N) is equal to the number of unknowns (D+1), the exact solution is defined by a = Y⁻¹b.
In practice, however, Y will be singular so its inverse Y⁻¹ does not exist: Y will commonly have more rows (examples) than columns (unknowns), which yields an over-determined system for which an exact solution cannot be found.

48 Minimum Squared Error solution (4)
The solution in this case is to find a weight vector that minimizes some function of the error between the model (aᵀy) and the desired output (b).
In particular, MSE seeks to Minimize the sum of the Squares of these Errors:
J_MSE(a) = Σ_{i=1..N} (aᵀy(i) − b(i))² = ‖Ya − b‖²
which, as usual, can be found by setting its gradient to zero.

49 The pseudo-inverse solution (1)
The gradient of the objective function is
∇a J_MSE(a) = Σ_{i=1..N} 2 (aᵀy(i) − b(i)) y(i) = 2Yᵀ(Ya − b) = 0
with zeros defined by YᵀYa = Yᵀb. Notice that YᵀY is now a square matrix!
If YᵀY is nonsingular, the MSE solution becomes
a = (YᵀY)⁻¹Yᵀb = Y†b    (pseudo-inverse solution)
where the matrix Y† = (YᵀY)⁻¹Yᵀ is known as the pseudo-inverse of Y (Y†Y = I).
Note that, in general, YY† ≠ I.
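A minimal sketch of the pseudo-inverse solution on a toy normalized dataset (illustrative values); np.linalg.lstsq is shown alongside as the numerically safer way to obtain the same minimizer.

import numpy as np

# toy training set, already augmented with a leading 1 and with class-2 rows negated
Y = np.array([[ 1.0,  0.0,  0.0],
              [ 1.0,  0.0,  1.0],
              [-1.0, -1.0,  0.0],
              [-1.0, -1.0, -1.0]])
b = np.ones(len(Y))                              # target margins b(i)

a_pinv = np.linalg.inv(Y.T @ Y) @ Y.T @ b        # a = (Y^T Y)^-1 Y^T b
a_lstsq, *_ = np.linalg.lstsq(Y, b, rcond=None)  # same minimizer of ||Ya - b||^2
print(a_pinv, a_lstsq)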

50 Least-mean-squares solution (1)
The objective function J_MSE(a) = ‖Ya − b‖² can also be minimized using a gradient descent procedure:
This avoids the problems that arise when YᵀY is singular.
In addition, it also avoids the need for working with large matrices.
Looking at the expression of the gradient, the obvious update rule is
a(k+1) = a(k) + η(k) Yᵀ(b − Ya(k))
It can be shown that if η(k) = η(1)/k, where η(1) is any positive constant, this rule generates a sequence of vectors that converges to a solution of Yᵀ(Ya − b) = 0.
From [Duda, Hart and Stork, 2001]

51 Least-mean-squares solution (2)
The storage requirements of this algorithm can be reduced by considering each sample sequentially:
a(k+1) = a(k) + η(k) (b(i) − y(i)ᵀ a(k)) y(i)    (LMS rule)
This is known as the Widrow-Hoff, least-mean-squares (LMS) or delta rule [Mitchell, 1997]. Compare it with the batch update rule a(k+1) = a(k) + η(k) Yᵀ(b − Ya(k)) of the previous slide.
From [Duda, Hart and Stork, 2001]
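A minimal sketch of the LMS rule with the decaying learning rate η(k) = η(1)/k; the data and constants are illustrative, and after enough passes the weight vector should approach the MSE solution discussed on the previous slides.

import numpy as np

def lms(Y, b, eta1=0.5, n_passes=200):
    a = np.zeros(Y.shape[1])
    k = 1
    for _ in range(n_passes):
        for y, target in zip(Y, b):
            # a(k+1) = a(k) + eta(k) * (b(i) - y(i)^T a(k)) * y(i)
            a = a + (eta1 / k) * (target - y @ a) * y
            k += 1
    return a

Y = np.array([[ 1.0,  0.0,  0.0],
              [ 1.0,  0.0,  1.0],
              [-1.0, -1.0,  0.0],
              [-1.0, -1.0, -1.0]])
b = np.ones(len(Y))
print(lms(Y, b))    # approaches the pseudo-inverse solution of the previous slide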

More information