Dimensionality Reduction


Dimensionality Reduction
Sanjiv Kumar, Google Research, NY
EECS-6898, Columbia University - Fall, 2010

Sanjiv Kumar 11/16/2010 EECS6898 Large Scale Machine Learning


Curse of Dimensionality

Many learning techniques scale poorly with data dimensionality (d):
- Density estimation: for example, Gaussian Mixture Models (GMM) need to estimate covariance matrices, O(d²)
- Nearest neighbor search: O(d); also, the performance of trees and hashes suffers with high dimensionality
- Optimization techniques: first-order methods scale as O(d) while second-order methods scale as O(d²)
- Clustering, classification, regression, ...
- Data visualization: hard to do in high-dimensional spaces

Dimensionality Reduction Key Idea: Data dimensions in the input space may be statistically dependent, so it is possible to retain most of the information in the input space in a lower-dimensional space.

Dimensionality Reduction

50 x 50 pixel face images vs. 50 x 50 pixel random images: the space of face images is significantly smaller than the full 2500-dimensional pixel space. Want to recover the underlying low-dimensional space!

Dimensionality Reduction

Linear techniques: PCA, metric MDS, randomized projections
- Assume data lies in a subspace
- Work well in practice in many cases
- Can be a poor approximation for some data

Nonlinear techniques: manifold learning methods such as Kernel PCA, LLE, ISOMAP, ...
- Assume local linearity of data
- Need densely sampled data as input

Other approaches: autoencoders (multi-layer neural networks), ...
- Computationally more demanding than linear methods


Principal Component Analysis (PCA)

Two views but same solution:
1. Want to find the best linear reconstruction of the data that minimizes mean squared reconstruction error.
2. Want to find the best subspace that maximizes the projected data variance.

Suppose the input data {x_i}, i = 1,...,n, x_i ∈ R^d, is centered, i.e., Σ_i x_i = 0 (zero data mean). Goal: to find a k-dim linear embedding y_i such that k < d.

Reconstruction view:
argmin_{B, y} Σ_i ||x_i − B y_i||²,  B ∈ R^{d×k}, y_i ∈ R^k,  s.t. B^T B = I

In matrix form: B̂ = argmin_B ||X − B B^T X||_F² s.t. B^T B = I, and Ŷ = B̂^T X, where X is the d×n data matrix.

Solution: take the top k left singular vectors of X as B and project the data onto them.
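The reconstruction view can be sketched numerically. A minimal sketch, assuming nothing beyond the slide's formulation; the toy data and all variable names below are illustrative, not from the lecture:

```python
import numpy as np

# Toy data: n points in R^d lying close to a k-dim subspace (illustrative only).
rng = np.random.default_rng(0)
n, d, k = 200, 10, 3
X = rng.standard_normal((d, k)) @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((d, n))
X = X - X.mean(axis=1, keepdims=True)      # center: sum_i x_i = 0

# B = top-k left singular vectors of X; embedding Y = B^T X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
B = U[:, :k]
Y = B.T @ X

# Relative reconstruction error ||X - B B^T X||_F / ||X||_F is small here,
# because X was constructed to be nearly rank k.
err = np.linalg.norm(X - B @ Y) / np.linalg.norm(X)
```

Note that B^T B = I holds automatically because the columns of U are orthonormal, so the constraint from the slide is satisfied by construction.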


Principal Component Analysis (PCA): Max-Variance View

Want to find a k-dim linear projection y = B^T x such that
B̂ = argmax_B Tr(B^T X X^T B)  s.t. B^T B = I  (assuming data is centered)

Solution: take the top k eigenvectors of XX^T (O(nd² + d³)) and project the data (O(nkd)). The left singular vectors of X equal the eigenvectors of XX^T.

Statistical assumption: data is normally distributed. More general versions (allowing noise in the data): Factor Analysis and Probabilistic PCA. Can be extended to a nonlinear version using kernels.
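The claimed equivalence — left singular vectors of X equal the eigenvectors of XX^T — is easy to check numerically. A small sketch on a toy matrix (all names mine):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 40))
X = X - X.mean(axis=1, keepdims=True)      # centered data

# Left singular vectors of X ...
U, s, _ = np.linalg.svd(X, full_matrices=False)

# ... versus eigenvectors of XX^T (eigenvalues = squared singular values).
evals, evecs = np.linalg.eigh(X @ X.T)
order = np.argsort(evals)[::-1]            # eigh returns ascending order
evals, evecs = evals[order], evecs[:, order]

match_vals = np.allclose(evals, s**2)
# Eigenvectors agree only up to sign, so compare |U^T evecs| with I.
match_vecs = np.allclose(np.abs(U.T @ evecs), np.eye(5), atol=1e-8)
```

The sign ambiguity is why the comparison goes through absolute values: each principal direction is defined only up to ±1.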


Multidimensional Scaling (MDS)

Metric MDS: given pairwise (Euclidean) distances among points, find a low-dim embedding that preserves the original distances:
Ŷ = argmin_Y Σ_{i,j} (||y_i − y_j|| − d_ij)²,  x_i ∈ R^d, y_i ∈ R^k, d_ij = ||x_i − x_j||

First, the n×n squared-distance matrix D (D_ij = d_ij²) is converted to a similarity matrix K:
K = −(1/2) H D H,  H = I − (1/n) 1 1^T,  1 = [1,...,1]^T (n entries)

Solution: the best k-dim (k < d) linear embedding Y is given by
Y^T Y = K ≈ U_k Σ_k U_k^T,  Y = Σ_k^{1/2} U_k^T

The embedding is identical to that from PCA on X.
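The double-centering recipe above can be sketched end-to-end. A toy example (names and data mine): with k = d, classical MDS recovers the pairwise distances exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 50, 4, 4
X = rng.standard_normal((d, n))            # columns are points

# Squared-distance matrix D_ij = ||x_i - x_j||^2 via the Gram matrix.
G = X.T @ X
sq = np.diag(G)
D = sq[:, None] + sq[None, :] - 2 * G

# Double centering: K = -1/2 H D H with H = I - (1/n) 11^T.
H = np.eye(n) - np.ones((n, n)) / n
K = -0.5 * H @ D @ H

# Embedding from the top-k eigenpairs: Y = Sigma_k^{1/2} U_k^T.
evals, evecs = np.linalg.eigh(K)
idx = np.argsort(evals)[::-1][:k]
Yemb = np.sqrt(np.maximum(evals[idx], 0.0))[:, None] * evecs[:, idx].T

# Pairwise squared distances of the embedding match the originals.
GY = Yemb.T @ Yemb
sqY = np.diag(GY)
DY = sqY[:, None] + sqY[None, :] - 2 * GY
ok = np.allclose(D, DY, atol=1e-8)
```

With k < d the recovery is only approximate, in the least-squares sense captured by the eigenvalue truncation.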


Kernel (Nonlinear) PCA

Key Idea: instead of finding linear projections in the original input space, do it in the (implicit) feature space induced by a Mercer kernel: k(x, z) = Φ(x)^T Φ(z).

Let's focus on a 1-dim projection. Assumption: centered data.

Linear PCA: Σ_i x_i = X1 = 0; data covariance C = XX^T; the best direction b solves Cb = λb.

Kernel PCA: Σ_i Φ(x_i) = Φ(X)1 = 0; C = Φ(X)Φ(X)^T; Cb = λb becomes
Σ_i Φ(x_i)(Φ(x_i)^T b) = λb  ⇒  b = Σ_i α_i Φ(x_i) = Φ(X)α,  α_i = Φ(x_i)^T b / λ,  λ ≠ 0

so b lies in the span of the mapped input points!

Premultiplying Φ(X)Φ(X)^T Φ(X)α = λΦ(X)α by Φ(X)^T and replacing Φ(X)^T Φ(X) = K gives
K²α = λKα  ⇒  Kα = λα
(assuming K is positive-definite; otherwise, the other solutions are not of interest). Scholkopf et al. [5]


Kernel (Nonlinear) PCA

Main computation: find the top k eigenvectors of the kernel matrix, Kα = λα: O(n²k)!

Final solution: b = Φ(X)α, but b needs to have unit length:
b^T b = 1  ⇒  α^T Kα = 1  ⇒  λ α^T α = 1  ⇒  ||α|| = 1/√λ

Projection of a point x_i: y_i = Φ(x_i)^T b = (1/√λ) K_i^T α, where K_i is the i-th column of K.

What if we want to find a projection for a new point not seen during training? This is known as the out-of-sample extension, and it is not as straightforward as for linear PCA. It can be thought of as adding another row and column to the kernel matrix. To avoid recomputing the eigendecomposition of the extended kernel matrix, use the Nystrom method to approximate the new embedding (recall matrix approximations).
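A minimal sketch of this computation with an RBF kernel, assuming toy data of my own choosing and omitting the feature-space centering step (covered on the "Centering in Feature Space" slide) for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
X = rng.standard_normal((2, n))            # columns are input points

# RBF (Mercer) kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2).
sq = (X**2).sum(axis=0)
D2 = sq[:, None] + sq[None, :] - 2 * X.T @ X
K = np.exp(-D2 / 2.0)

# Top eigenpair of K: K alpha = lambda alpha.
evals, evecs = np.linalg.eigh(K)           # ascending order
lam, alpha = evals[-1], evecs[:, -1]

# Rescale so that b = Phi(X) alpha has unit length: alpha^T K alpha = 1.
alpha = alpha / np.sqrt(lam)
unit = np.isclose(alpha @ K @ alpha, 1.0)

# Projection of training point x_i onto b is just (K alpha)_i.
y = K @ alpha
```

The projections of training points come for free as `K @ alpha`; a new point would instead need its kernel values against all training points, which is exactly the out-of-sample issue the slide raises.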


Centering in Feature Space

We assumed that data was centered in feature space. This is easy to do with input features {x_i}, but how to do it in the mapped feature space {Φ(x_i)}, where the explicit mapping may be unknown?

We want: Φ̄ = (1/n) Σ_i Φ(x_i), and Φ̃(x_i) = Φ(x_i) − Φ̄.

But we need the data only through the kernel matrix, so compute the centered kernel matrix directly:
K̃ = K − 1_n K − K 1_n + 1_n K 1_n,  (1_n)_ij = 1/n, i, j = 1,...,n

Interpretation: K̃_ij = K_ij − m_i − m_j + m, where m_i is the mean of the i-th row, m_j the mean of the j-th column, and m the mean of all entries of K.
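The centering formula can be sanity-checked with a linear kernel, where Φ(x) = x and explicit centering is possible. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 3
X = rng.standard_normal((d, n))            # uncentered data, columns are points

# Centered kernel via the formula: K~ = K - 1n K - K 1n + 1n K 1n.
K = X.T @ X                                # linear kernel, so Phi(x) = x
one_n = np.full((n, n), 1.0 / n)           # (1n)_ij = 1/n
K_tilde = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# For the linear kernel we can center explicitly and compare.
Xc = X - X.mean(axis=1, keepdims=True)
same = np.allclose(K_tilde, Xc.T @ Xc)
```

The point of the formula, of course, is that it needs only K, so it works identically for kernels whose feature map is implicit or infinite-dimensional.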


Locally Linear Embedding (LLE)

Key Idea: given sufficient samples, each data point and its neighbors are assumed to lie close to a locally linear patch. Try to reconstruct each data point from its t neighbors:
x_i ≈ Σ_{j ∈ N(i)} w_ij x_j,  where N(i) indicates the neighbors of i

Learn the weights by solving
argmin_w Σ_i ||x_i − Σ_j w_ij x_j||²  s.t. Σ_j w_ij = 1   — O(ndt³)

Assumption: the same weights reconstruct the low-dim embedding also. Construct a sparse n×n matrix M = (I − W)^T (I − W) and solve
argmin_Y Σ_i ||y_i − Σ_j w_ij y_j||²  s.t. Σ_i y_i = 0,  (1/n) Σ_i y_i y_i^T = I

Get the bottom k eigenvectors of M, ignoring the last one: O(n²k).
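The local weight-fitting step has a standard closed form: solve C w = 1 with the local covariance C of the shifted neighbors, then normalize so the weights sum to one. A sketch for a single point whose neighbors are hypothetical toy vectors of my own:

```python
import numpy as np

rng = np.random.default_rng(5)
d, t = 3, 4
nbrs = rng.standard_normal((d, t))          # the t neighbors of x_i (columns)
w_true = np.array([0.4, 0.3, 0.2, 0.1])     # affine weights summing to 1
x = nbrs @ w_true                           # x_i lies in the neighbors' affine span

# min ||x - sum_j w_j nbr_j||^2  s.t.  sum_j w_j = 1:
# solve C w = 1 for the local covariance C, then normalize.
Z = nbrs - x[:, None]                       # neighbors shifted so x is the origin
C = Z.T @ Z
C += 1e-9 * np.trace(C) * np.eye(t)         # regularize: C is singular when t > d
w = np.linalg.solve(C, np.ones(t))
w = w / w.sum()

recon_err = np.linalg.norm(x - nbrs @ w)
```

Since x was placed exactly in the neighbors' affine span, the recovered weights match `w_true` and the reconstruction error is essentially zero; the tiny ridge term is the usual fix for the rank-deficient case t > d.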

PCA vs LLE

A face image translated in 2-D space against a random background: n = 961, d = 3009, t = 4, k = 2. Roweis & Saul [6]


ISOMAP

Find the low-dimensional representation that best preserves geodesic distances between points, i.e., MDS with geodesic distances:
Ŷ = argmin_Y Σ_{i,j} (||y_i − y_j|| − Δ_ij)²,  Δ_ij: geodesic distance

Recovers the true (convex) manifold asymptotically!

ISOMAP

Given n input points:
1. Find the t nearest neighbors for each point: O(n²)
2. Find the shortest-path distance Δ_ij for every pair (i, j): O(n² log n)
3. Construct the matrix K with entries given by the centered Δ_ij²; K is a dense matrix
4. Optimal k reduced dims: Σ_k^{1/2} U_k^T from the top k eigenvalues and eigenvectors: O(n²k)!
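The four steps can be sketched end-to-end on a toy 1-D manifold (points on a semicircular arc). Floyd-Warshall stands in for the faster shortest-path algorithms the complexity estimates assume; everything here is illustrative:

```python
import numpy as np

# 40 points on a semicircular arc; the underlying manifold is 1-D (arc length).
n, t = 40, 3
theta = np.linspace(0.0, np.pi, n)
X = np.stack([np.cos(theta), np.sin(theta)])     # shape (2, n)

# Step 1: t-nearest-neighbor graph with Euclidean edge lengths.
sq = (X**2).sum(axis=0)
E = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X.T @ X, 0.0))
G = np.full((n, n), np.inf)
for i in range(n):
    for j in np.argsort(E[i])[1:t + 1]:          # skip self at position 0
        G[i, j] = G[j, i] = E[i, j]
np.fill_diagonal(G, 0.0)

# Step 2: geodesic distances Delta_ij via Floyd-Warshall.
for m in range(n):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# Steps 3-4: MDS on the centered squared geodesics, top eigenpair for k = 1.
H = np.eye(n) - np.ones((n, n)) / n
K = -0.5 * H @ (G**2) @ H
evals, evecs = np.linalg.eigh(K)
y = np.sqrt(evals[-1]) * evecs[:, -1]            # 1-D embedding

# The 1-D embedding orders the points by arc length (up to a global sign).
diffs = np.diff(y)
monotone = bool(np.all(diffs > 0) or np.all(diffs < 0))
```

The monotonicity check is the point: Euclidean distances between far-apart arc points are shortcuts through the ambient space, but the graph geodesics approximate arc length, so MDS on them unrolls the curve.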

ISOMAP Experiment

Face images taken with two pose variations (left-right and up-down) and a 1-D illumination direction; d = 4096, n = 698. Tenenbaum et al. [7]

Issue: quite sensitive to false edges in the graph ("short-circuits"). One wrong edge may cause the shortest paths to change drastically. Better to use the expected commute time between two nodes → Laplacian Eigenmaps.

Laplacian Eigenmaps

Minimize weighted distances between neighbors:
Ŷ = argmin_Y Σ_{i,j} W_ij ||y_i − y_j||²,  D_ii = Σ_j W_ij

Another formulation:
Ŷ = argmin_Y Tr[Y^T L Y]  s.t. Y^T D Y = I,  L = D − W

Laplacian Eigenmaps

Minimize weighted distances between neighbors:
1. Find the t nearest neighbors for each point: O(n²)
2. Compute the weight matrix W: W_ij = exp(−||x_i − x_j||²/σ²) if j ∈ N(i), 0 otherwise
3. Compute the normalized Laplacian K = I − D^{−1/2} W D^{−1/2}, where D_ii = Σ_j W_ij
4. Optimal k reduced dims: U_k, the bottom eigenvectors of K ignoring the last one: O(n²k), but this can be done much faster using the Arnoldi/Lanczos method since the matrix is sparse
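Steps 2-3 can be sketched directly, along with the reason the very bottom eigenvector is ignored: D^{1/2}1 is always a null vector of the normalized Laplacian. A toy sketch (full graph instead of t-NN, for brevity; data and names mine):

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma = 25, 1.0
X = rng.standard_normal((2, n))

# Step 2: heat-kernel weights between all pairs (a t-NN graph would zero most).
sq = (X**2).sum(axis=0)
D2 = sq[:, None] + sq[None, :] - 2 * X.T @ X
W = np.exp(-D2 / sigma**2)
np.fill_diagonal(W, 0.0)

# Step 3: normalized Laplacian K = I - D^{-1/2} W D^{-1/2}.
deg = W.sum(axis=1)                        # D_ii = sum_j W_ij
Dm = np.diag(deg**-0.5)
K = np.eye(n) - Dm @ W @ Dm

# The ignored bottom eigenvector is D^{1/2} 1, with eigenvalue 0;
# the next k eigenvectors give the embedding.
evals, _ = np.linalg.eigh(K)
v0 = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))
is_null = np.allclose(K @ v0, 0.0, atol=1e-8)
```

Checking the algebra: K D^{1/2}1 = D^{1/2}1 − D^{−1/2}W1 = D^{1/2}1 − D^{−1/2}deg = 0, so the smallest eigenvalue is exactly zero and carries no embedding information.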


Maximum Variance Unfolding (MVU)

Key Idea: find the embedding with maximum variance that preserves angles and lengths for edges between nearest neighbors.

Angle/distance preservation constraint: ||y_i − y_j||² = ||x_i − x_j||² if there is an edge (i, j) in the graph formed by pairwise connecting all t nearest neighbors.

Centering constraint (for translational invariance): Σ_i y_i = 0.

Optimization criterion: maximize squared pairwise distances between embeddings,
argmax_Y Σ_{i,j} ||y_i − y_j||²  s.t. the above constraints

This is the same as maximizing the variance of the outputs! Weinberger and Saul [12]


Maximum Variance Unfolding (MVU)

Reformulation: use a kernel K such that K_ij = y_i^T y_j.

Angle/distance preservation: K_ii − 2K_ij + K_jj = d_ij² = ||x_i − x_j||²
Centering constraint: Σ_{i,j} K_ij = 0  (⇐ Σ_i y_i = 0)
Symmetric positive-semidefinite constraint: K ⪰ 0
Max-variance objective function: Tr(K)

This is a semidefinite program! O(n³ + c³), where c is the number of constraints.

Final solution: Y = Σ_k^{1/2} U_k^T from the top k eigenvalues and eigenvectors of K. The hard constraints can be relaxed via slack variables!
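Solving the SDP itself needs a dedicated solver, but the identities that make the reformulation work can be verified directly. A sketch with toy embedding coordinates of my own:

```python
import numpy as np

rng = np.random.default_rng(8)
k, n = 2, 10
Y = rng.standard_normal((k, n))
Y = Y - Y.mean(axis=1, keepdims=True)      # centering: sum_i y_i = 0

K = Y.T @ Y                                 # K_ij = y_i^T y_j

# Distance constraint as a linear function of K:
# K_ii - 2 K_ij + K_jj = ||y_i - y_j||^2 for every pair (i, j).
d_from_K = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
d_direct = ((Y[:, :, None] - Y[:, None, :])**2).sum(axis=0)

# Centering of Y translates into sum_{ij} K_ij = ||sum_i y_i||^2 = 0.
center_ok = np.isclose(K.sum(), 0.0, atol=1e-10)
```

This is why MVU can optimize over K instead of Y: both the distance constraints and the centering constraint are linear in K, and Tr(K) equals the total variance of the (centered) outputs.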

PCA vs MVU

Trefoil knot: n = 1617, d = 3, t = 5, k = 2. A teapot viewed rotated 180 degrees in a plane: n = 200, d = 23028, t = 4, k = 1. Weinberger and Saul [12]

Large-Scale Face Manifold Learning

Constructing a Web dataset: extracted 18M faces from 2.5B internet images, ~15 hours on 500 machines. Faces normalized to zero mean and unit variance.

Graph construction: approximate nearest neighbors via spill trees, 5 NN, ~2 days. Can be done much faster using appropriate hashes! Talwalkar, Kumar, Rowley [13]

Neighborhood Graph Construction

Connect each node (face) with its t neighbors. Is the graph connected? Depth-first search to find the largest connected component: 10 minutes on a single machine. The size of the largest component depends on the number of NN (t). Talwalkar, Kumar, Rowley [13]

Samples from connected components: from the largest component and from smaller components. Talwalkar, Kumar, Rowley [13]

Graph Manipulation

Approximating geodesics: shortest paths between pairs of face images. Computing them for all pairs is infeasible — O(n² log n)!

Key Idea: only a few columns of K are needed for a sampling-based spectral decomposition, so shortest paths are required only between a few (l) nodes and all other nodes: 1 hour on 500 machines (l = 10K).

Computing embeddings (k = 100): Nystrom: 1.5 hours, 500 machines; column-sampling: 6 hours, 500 machines; projections: 15 mins, 500 machines. Talwalkar, Kumar, Rowley [13]

CMU-PIE Dataset

68 people, 13 poses, 43 illuminations, 4 expressions; 35,247 faces detected by a face detector. Classification and clustering on poses.

Optimal 2D embeddings. Talwalkar, Kumar, Rowley [13]

Clustering

K-means clustering after transformation (k = 100), with K fixed to be the same as the number of classes. Two metrics: purity — points within a cluster come from the same class; accuracy — points from a class form a single cluster.

The matrix K is not guaranteed to be positive semi-definite in Isomap! Nystrom: EVD of W (can ignore negative eigenvalues); column-sampling: SVD of C (signs are lost)! Talwalkar, Kumar, Rowley [13]

Experiments - Classification

K-nearest-neighbor classification after embedding (%): classification error over 10 random splits. Talwalkar, Kumar, Rowley [13]

18M-Manifold in 2D: Nystrom Isomap. Talwalkar, Kumar, Rowley [13]

Shortest Paths on Manifold: 18M samples are not enough! Talwalkar, Kumar, Rowley [13]

People Hopper: interface, Orkut gadget.

Manifold Learning: Open Questions

- Does a manifold really exist for a given dataset? Is it really connected or convex?
- Instead of lying on a manifold, maybe the data lives in small clusters in different subspaces?
- Any practical benefits of nonlinear dimensionality reduction (manifold learning) in clustering/classification? Most of the results are on toy data, with no real practical utility so far. In practice, PCA is enough to give most of the benefits (if any).
- Instead of looking for yet another manifold learning method, it is better to focus on deciding whether a manifold exists and how to quantify that.

References

1. K. Pearson, "On Lines and Planes of Closest Fit to Systems of Points in Space", Philosophical Magazine 2 (6): 559-572, 1901.
2. C. Spearman, "General Intelligence, Objectively Determined and Measured", American Journal of Psychology, 1904. (factor analysis)
3. I. T. Jolliffe, Principal Component Analysis, Springer-Verlag, pp. 487, 1986.
4. T. Cox and M. Cox, Multidimensional Scaling, Chapman & Hall, 1994.
5. B. Schölkopf, A. Smola, K.-R. Müller, "Kernel Principal Component Analysis", in: B. Schölkopf, C. J. C. Burges, A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, MIT Press, Cambridge, MA, USA, 1999.
6. S. T. Roweis and L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding", Science, December 2000.
7. J. B. Tenenbaum, V. de Silva and J. C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction", Science 290 (5500): 2319-2323, 2000.
8. M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering", Advances in Neural Information Processing Systems 14, 2001.
9. D. Donoho and C. Grimes, "Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data", Proc. Natl. Acad. Sci. USA, May 13, 2003; 100(10).
10. Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, M. Ouimet, "Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and spectral clustering", NIPS, 2003.
11. G. E. Hinton and R. R. Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks", Science, 2006, Vol. 313, no. 5786, pp. 504-507.
12. K. Q. Weinberger and L. K. Saul, "Unsupervised Learning of Image Manifolds by Semidefinite Programming", International Journal of Computer Vision (IJCV), 70(1), 2006.
13. A. Talwalkar, S. Kumar and H. Rowley, "Large Scale Manifold Learning", CVPR, 2008.
14. B. Shaw and T. Jebara, "Structure Preserving Embedding", ICML, 2009.


More information

6.867 Machine Learning

6.867 Machine Learning 6.867 Mache Learg Problem set Due Frday, September 9, rectato Please address all questos ad commets about ths problem set to 6.867-staff@a.mt.edu. You do ot eed to use MATLAB for ths problem set though

More information

CS 2750 Machine Learning. Lecture 8. Linear regression. CS 2750 Machine Learning. Linear regression. is a linear combination of input components x

CS 2750 Machine Learning. Lecture 8. Linear regression. CS 2750 Machine Learning. Linear regression. is a linear combination of input components x CS 75 Mache Learg Lecture 8 Lear regresso Mlos Hauskrecht mlos@cs.ptt.edu 539 Seott Square CS 75 Mache Learg Lear regresso Fucto f : X Y s a lear combato of put compoets f + + + K d d K k - parameters

More information

Lecture 9: Tolerant Testing

Lecture 9: Tolerant Testing Lecture 9: Tolerat Testg Dael Kae Scrbe: Sakeerth Rao Aprl 4, 07 Abstract I ths lecture we prove a quas lear lower boud o the umber of samples eeded to do tolerat testg for L dstace. Tolerat Testg We have

More information

Lecture 8: Linear Regression

Lecture 8: Linear Regression Lecture 8: Lear egresso May 4, GENOME 56, Sprg Goals Develop basc cocepts of lear regresso from a probablstc framework Estmatg parameters ad hypothess testg wth lear models Lear regresso Su I Lee, CSE

More information

Special Instructions / Useful Data

Special Instructions / Useful Data JAM 6 Set of all real umbers P A..d. B, p Posso Specal Istructos / Useful Data x,, :,,, x x Probablty of a evet A Idepedetly ad detcally dstrbuted Bomal dstrbuto wth parameters ad p Posso dstrbuto wth

More information

New Schedule. Dec. 8 same same same Oct. 21. ^2 weeks ^1 week ^1 week. Pattern Recognition for Vision

New Schedule. Dec. 8 same same same Oct. 21. ^2 weeks ^1 week ^1 week. Pattern Recognition for Vision ew Schedule Dec. 8 same same same Oct. ^ weeks ^ week ^ week Fall 004 Patter Recogto for Vso 9.93 Patter Recogto for Vso Classfcato Berd Hesele Fall 004 Overvew Itroducto Lear Dscrmat Aalyss Support Vector

More information

{ }{ ( )} (, ) = ( ) ( ) ( ) Chapter 14 Exercises in Sampling Theory. Exercise 1 (Simple random sampling): Solution:

{ }{ ( )} (, ) = ( ) ( ) ( ) Chapter 14 Exercises in Sampling Theory. Exercise 1 (Simple random sampling): Solution: Chapter 4 Exercses Samplg Theory Exercse (Smple radom samplg: Let there be two correlated radom varables X ad A sample of sze s draw from a populato by smple radom samplg wthout replacemet The observed

More information

Singular Value Decomposition. Linear Algebra (3) Singular Value Decomposition. SVD and Eigenvectors. Solving LEs with SVD

Singular Value Decomposition. Linear Algebra (3) Singular Value Decomposition. SVD and Eigenvectors. Solving LEs with SVD Sgular Value Decomosto Lear Algera (3) m Cootes Ay m x matrx wth m ca e decomosed as follows Dagoal matrx A UWV m x x Orthogoal colums U U I w1 0 0 w W M M 0 0 x Orthoormal (Pure rotato) VV V V L 0 L 0

More information

Lecture Notes Types of economic variables

Lecture Notes Types of economic variables Lecture Notes 3 1. Types of ecoomc varables () Cotuous varable takes o a cotuum the sample space, such as all pots o a le or all real umbers Example: GDP, Polluto cocetrato, etc. () Dscrete varables fte

More information

Generative classification models

Generative classification models CS 75 Mache Learg Lecture Geeratve classfcato models Mlos Hauskrecht mlos@cs.ptt.edu 539 Seott Square Data: D { d, d,.., d} d, Classfcato represets a dscrete class value Goal: lear f : X Y Bar classfcato

More information

Block-Based Compact Thermal Modeling of Semiconductor Integrated Circuits

Block-Based Compact Thermal Modeling of Semiconductor Integrated Circuits Block-Based Compact hermal Modelg of Semcoductor Itegrated Crcuts Master s hess Defese Caddate: Jg Ba Commttee Members: Dr. Mg-Cheg Cheg Dr. Daqg Hou Dr. Robert Schllg July 27, 2009 Outle Itroducto Backgroud

More information

CLASS NOTES. for. PBAF 528: Quantitative Methods II SPRING Instructor: Jean Swanson. Daniel J. Evans School of Public Affairs

CLASS NOTES. for. PBAF 528: Quantitative Methods II SPRING Instructor: Jean Swanson. Daniel J. Evans School of Public Affairs CLASS NOTES for PBAF 58: Quattatve Methods II SPRING 005 Istructor: Jea Swaso Dael J. Evas School of Publc Affars Uversty of Washgto Ackowledgemet: The structor wshes to thak Rachel Klet, Assstat Professor,

More information

A conic cutting surface method for linear-quadraticsemidefinite

A conic cutting surface method for linear-quadraticsemidefinite A coc cuttg surface method for lear-quadratcsemdefte programmg Mohammad R. Osoorouch Calfora State Uversty Sa Marcos Sa Marcos, CA Jot wor wth Joh E. Mtchell RPI July 3, 2008 Outle: Secod-order coe: defto

More information

Research on SVM Prediction Model Based on Chaos Theory

Research on SVM Prediction Model Based on Chaos Theory Advaced Scece ad Techology Letters Vol.3 (SoftTech 06, pp.59-63 http://dx.do.org/0.457/astl.06.3.3 Research o SVM Predcto Model Based o Chaos Theory Sog Lagog, Wu Hux, Zhag Zezhog 3, College of Iformato

More information

8.1 Hashing Algorithms

8.1 Hashing Algorithms CS787: Advaced Algorthms Scrbe: Mayak Maheshwar, Chrs Hrchs Lecturer: Shuch Chawla Topc: Hashg ad NP-Completeess Date: September 21 2007 Prevously we looked at applcatos of radomzed algorthms, ad bega

More information

Simple Linear Regression

Simple Linear Regression Correlato ad Smple Lear Regresso Berl Che Departmet of Computer Scece & Iformato Egeerg Natoal Tawa Normal Uversty Referece:. W. Navd. Statstcs for Egeerg ad Scetsts. Chapter 7 (7.-7.3) & Teachg Materal

More information

G S Power Flow Solution

G S Power Flow Solution G S Power Flow Soluto P Q I y y * 0 1, Y y Y 0 y Y Y 1, P Q ( k) ( k) * ( k 1) 1, Y Y PQ buses * 1 P Q Y ( k1) *( k) ( k) Q Im[ Y ] 1 P buses & Slack bus ( k 1) *( k) ( k) Y 1 P Re[ ] Slack bus 17 Calculato

More information

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. x, where. = y - ˆ " 1

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. x, where. = y - ˆ  1 STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS Recall Assumpto E(Y x) η 0 + η x (lear codtoal mea fucto) Data (x, y ), (x 2, y 2 ),, (x, y ) Least squares estmator ˆ E (Y x) ˆ " 0 + ˆ " x, where ˆ

More information

Dr. Shalabh. Indian Institute of Technology Kanpur

Dr. Shalabh. Indian Institute of Technology Kanpur Aalyss of Varace ad Desg of Expermets-I MODULE -I LECTURE - SOME RESULTS ON LINEAR ALGEBRA, MATRIX THEORY AND DISTRIBUTIONS Dr. Shalabh Departmet t of Mathematcs t ad Statstcs t t Ida Isttute of Techology

More information

CSE 5526: Introduction to Neural Networks Linear Regression

CSE 5526: Introduction to Neural Networks Linear Regression CSE 556: Itroducto to Neural Netorks Lear Regresso Part II 1 Problem statemet Part II Problem statemet Part II 3 Lear regresso th oe varable Gve a set of N pars of data , appromate d by a lear fucto

More information

Feature Selection: Part 2. 1 Greedy Algorithms (continued from the last lecture)

Feature Selection: Part 2. 1 Greedy Algorithms (continued from the last lecture) CSE 546: Mache Learg Lecture 6 Feature Selecto: Part 2 Istructor: Sham Kakade Greedy Algorthms (cotued from the last lecture) There are varety of greedy algorthms ad umerous amg covetos for these algorthms.

More information

Comparing Different Estimators of three Parameters for Transmuted Weibull Distribution

Comparing Different Estimators of three Parameters for Transmuted Weibull Distribution Global Joural of Pure ad Appled Mathematcs. ISSN 0973-768 Volume 3, Number 9 (207), pp. 55-528 Research Ida Publcatos http://www.rpublcato.com Comparg Dfferet Estmators of three Parameters for Trasmuted

More information

ECON 5360 Class Notes GMM

ECON 5360 Class Notes GMM ECON 560 Class Notes GMM Geeralzed Method of Momets (GMM) I beg by outlg the classcal method of momets techque (Fsher, 95) ad the proceed to geeralzed method of momets (Hase, 98).. radtoal Method of Momets

More information

b. There appears to be a positive relationship between X and Y; that is, as X increases, so does Y.

b. There appears to be a positive relationship between X and Y; that is, as X increases, so does Y. .46. a. The frst varable (X) s the frst umber the par ad s plotted o the horzotal axs, whle the secod varable (Y) s the secod umber the par ad s plotted o the vertcal axs. The scatterplot s show the fgure

More information

COV. Violation of constant variance of ε i s but they are still independent. The error term (ε) is said to be heteroscedastic.

COV. Violation of constant variance of ε i s but they are still independent. The error term (ε) is said to be heteroscedastic. c Pogsa Porchawseskul, Faculty of Ecoomcs, Chulalogkor Uversty olato of costat varace of s but they are stll depedet. C,, he error term s sad to be heteroscedastc. c Pogsa Porchawseskul, Faculty of Ecoomcs,

More information

Linear Regression with One Regressor

Linear Regression with One Regressor Lear Regresso wth Oe Regressor AIM QA.7. Expla how regresso aalyss ecoometrcs measures the relatoshp betwee depedet ad depedet varables. A regresso aalyss has the goal of measurg how chages oe varable,

More information

Nonparametric Techniques

Nonparametric Techniques Noparametrc Techques Noparametrc Techques w/o assumg ay partcular dstrbuto the uderlyg fucto may ot be kow e.g. mult-modal destes too may parameters Estmatg desty dstrbuto drectly Trasform to a lower-dmesoal

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Marquette Uverst Maxmum Lkelhood Estmato Dael B. Rowe, Ph.D. Professor Departmet of Mathematcs, Statstcs, ad Computer Scece Coprght 08 b Marquette Uverst Maxmum Lkelhood Estmato We have bee sag that ~

More information

Application of Global Sensitivity Indices for measuring the effectiveness of Quasi-Monte Carlo methods and parameter estimation. parameter.

Application of Global Sensitivity Indices for measuring the effectiveness of Quasi-Monte Carlo methods and parameter estimation. parameter. Applcato of Global estvty Idces for measurg the effectveess of Quas-Mote Carlo methods ad parameter estmato parameter estmato Kuchereko Emal: skuchereko@cacuk Outle Advatages ad dsadvatages of Mote Carlo

More information

PROJECTION PROBLEM FOR REGULAR POLYGONS

PROJECTION PROBLEM FOR REGULAR POLYGONS Joural of Mathematcal Sceces: Advaces ad Applcatos Volume, Number, 008, Pages 95-50 PROJECTION PROBLEM FOR REGULAR POLYGONS College of Scece Bejg Forestry Uversty Bejg 0008 P. R. Cha e-mal: sl@bjfu.edu.c

More information

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS Exam: ECON430 Statstcs Date of exam: Frday, December 8, 07 Grades are gve: Jauary 4, 08 Tme for exam: 0900 am 00 oo The problem set covers 5 pages Resources allowed:

More information

Qualifying Exam Statistical Theory Problem Solutions August 2005

Qualifying Exam Statistical Theory Problem Solutions August 2005 Qualfyg Exam Statstcal Theory Problem Solutos August 5. Let X, X,..., X be d uform U(,),

More information

Probability and. Lecture 13: and Correlation

Probability and. Lecture 13: and Correlation 933 Probablty ad Statstcs for Software ad Kowledge Egeers Lecture 3: Smple Lear Regresso ad Correlato Mocha Soptkamo, Ph.D. Outle The Smple Lear Regresso Model (.) Fttg the Regresso Le (.) The Aalyss of

More information

L5 Polynomial / Spline Curves

L5 Polynomial / Spline Curves L5 Polyomal / Sple Curves Cotets Coc sectos Polyomal Curves Hermte Curves Bezer Curves B-Sples No-Uform Ratoal B-Sples (NURBS) Mapulato ad Represetato of Curves Types of Curve Equatos Implct: Descrbe a

More information

Functions of Random Variables

Functions of Random Variables Fuctos of Radom Varables Chapter Fve Fuctos of Radom Varables 5. Itroducto A geeral egeerg aalyss model s show Fg. 5.. The model output (respose) cotas the performaces of a system or product, such as weght,

More information

LAPLACIAN MATRIX IN ALGEBRAIC GRAPH THEORY

LAPLACIAN MATRIX IN ALGEBRAIC GRAPH THEORY Aryabhatta Joural of Mathematcs & Iformatcs Vol 6, No, Ja-July, 04 ISSN : 0975-739 Joural Impact Factor (03) : 0489 LAPLACIAN MARIX IN ALGEBRAIC GRAPH HEORY ARameshkumar *, RPalakumar ** ad SDeepa ***

More information

Mean is only appropriate for interval or ratio scales, not ordinal or nominal.

Mean is only appropriate for interval or ratio scales, not ordinal or nominal. Mea Same as ordary average Sum all the data values ad dvde by the sample sze. x = ( x + x +... + x Usg summato otato, we wrte ths as x = x = x = = ) x Mea s oly approprate for terval or rato scales, ot

More information

Analysis of System Performance IN2072 Chapter 5 Analysis of Non Markov Systems

Analysis of System Performance IN2072 Chapter 5 Analysis of Non Markov Systems Char for Network Archtectures ad Servces Prof. Carle Departmet of Computer Scece U Müche Aalyss of System Performace IN2072 Chapter 5 Aalyss of No Markov Systems Dr. Alexader Kle Prof. Dr.-Ig. Georg Carle

More information

Applications of Multiple Biological Signals

Applications of Multiple Biological Signals Applcatos of Multple Bologcal Sgals I the Hosptal of Natoal Tawa Uversty, curatve gastrectomy could be performed o patets of gastrc cacers who are udergoe the curatve resecto to acqure sgal resposes from

More information

Line Fitting and Regression

Line Fitting and Regression Marquette Uverst MSCS6 Le Fttg ad Regresso Dael B. Rowe, Ph.D. Professor Departmet of Mathematcs, Statstcs, ad Computer Scece Coprght 8 b Marquette Uverst Least Squares Regresso MSCS6 For LSR we have pots

More information

Newton s Power Flow algorithm

Newton s Power Flow algorithm Power Egeerg - Egll Beedt Hresso ewto s Power Flow algorthm Power Egeerg - Egll Beedt Hresso The ewto s Method of Power Flow 2 Calculatos. For the referece bus #, we set : V = p.u. ad δ = 0 For all other

More information

Discrete Mathematics and Probability Theory Fall 2016 Seshia and Walrand DIS 10b

Discrete Mathematics and Probability Theory Fall 2016 Seshia and Walrand DIS 10b CS 70 Dscrete Mathematcs ad Probablty Theory Fall 206 Sesha ad Walrad DIS 0b. Wll I Get My Package? Seaky delvery guy of some compay s out delverg packages to customers. Not oly does he had a radom package

More information

ECE 559: Wireless Communication Project Report Diversity Multiplexing Tradeoff in MIMO Channels with partial CSIT. Hoa Pham

ECE 559: Wireless Communication Project Report Diversity Multiplexing Tradeoff in MIMO Channels with partial CSIT. Hoa Pham ECE 559: Wreless Commucato Project Report Dversty Multplexg Tradeoff MIMO Chaels wth partal CSIT Hoa Pham. Summary I ths project, I have studed the performace ga of MIMO systems. There are two types of

More information

ESS Line Fitting

ESS Line Fitting ESS 5 014 17. Le Fttg A very commo problem data aalyss s lookg for relatoshpetwee dfferet parameters ad fttg les or surfaces to data. The smplest example s fttg a straght le ad we wll dscuss that here

More information

LINEAR REGRESSION ANALYSIS

LINEAR REGRESSION ANALYSIS LINEAR REGRESSION ANALYSIS MODULE V Lecture - Correctg Model Iadequaces Through Trasformato ad Weghtg Dr. Shalabh Departmet of Mathematcs ad Statstcs Ida Isttute of Techology Kapur Aalytcal methods for

More information

Naïve Bayes MIT Course Notes Cynthia Rudin

Naïve Bayes MIT Course Notes Cynthia Rudin Thaks to Şeyda Ertek Credt: Ng, Mtchell Naïve Bayes MIT 5.097 Course Notes Cytha Rud The Naïve Bayes algorthm comes from a geeratve model. There s a mportat dstcto betwee geeratve ad dscrmatve models.

More information

X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then

X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then Secto 5 Vectors of Radom Varables Whe workg wth several radom varables,,..., to arrage them vector form x, t s ofte coveet We ca the make use of matrx algebra to help us orgaze ad mapulate large umbers

More information

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations Lecture 7 3. Parametrc ad No-Parametrc Ucertates, Radal Bass Fuctos ad Neural Network Approxmatos he parameter estmato algorthms descrbed prevous sectos were based o the assumpto that the system ucertates

More information

Chapter 8. Inferences about More Than Two Population Central Values

Chapter 8. Inferences about More Than Two Population Central Values Chapter 8. Ifereces about More Tha Two Populato Cetral Values Case tudy: Effect of Tmg of the Treatmet of Port-We tas wth Lasers ) To vestgate whether treatmet at a youg age would yeld better results tha

More information

ECONOMETRIC THEORY. MODULE VIII Lecture - 26 Heteroskedasticity

ECONOMETRIC THEORY. MODULE VIII Lecture - 26 Heteroskedasticity ECONOMETRIC THEORY MODULE VIII Lecture - 6 Heteroskedastcty Dr. Shalabh Departmet of Mathematcs ad Statstcs Ida Isttute of Techology Kapur . Breusch Paga test Ths test ca be appled whe the replcated data

More information

PTAS for Bin-Packing

PTAS for Bin-Packing CS 663: Patter Matchg Algorthms Scrbe: Che Jag /9/00. Itroducto PTAS for B-Packg The B-Packg problem s NP-hard. If we use approxmato algorthms, the B-Packg problem could be solved polyomal tme. For example,

More information

ECON 482 / WH Hong The Simple Regression Model 1. Definition of the Simple Regression Model

ECON 482 / WH Hong The Simple Regression Model 1. Definition of the Simple Regression Model ECON 48 / WH Hog The Smple Regresso Model. Defto of the Smple Regresso Model Smple Regresso Model Expla varable y terms of varable x y = β + β x+ u y : depedet varable, explaed varable, respose varable,

More information

Assignment 5/MATH 247/Winter Due: Friday, February 19 in class (!) (answers will be posted right after class)

Assignment 5/MATH 247/Winter Due: Friday, February 19 in class (!) (answers will be posted right after class) Assgmet 5/MATH 7/Wter 00 Due: Frday, February 9 class (!) (aswers wll be posted rght after class) As usual, there are peces of text, before the questos [], [], themselves. Recall: For the quadratc form

More information

Median as a Weighted Arithmetic Mean of All Sample Observations

Median as a Weighted Arithmetic Mean of All Sample Observations Meda as a Weghted Arthmetc Mea of All Sample Observatos SK Mshra Dept. of Ecoomcs NEHU, Shllog (Ida). Itroducto: Iumerably may textbooks Statstcs explctly meto that oe of the weakesses (or propertes) of

More information

Multiple Regression. More than 2 variables! Grade on Final. Multiple Regression 11/21/2012. Exam 2 Grades. Exam 2 Re-grades

Multiple Regression. More than 2 variables! Grade on Final. Multiple Regression 11/21/2012. Exam 2 Grades. Exam 2 Re-grades STAT 101 Dr. Kar Lock Morga 11/20/12 Exam 2 Grades Multple Regresso SECTIONS 9.2, 10.1, 10.2 Multple explaatory varables (10.1) Parttog varablty R 2, ANOVA (9.2) Codtos resdual plot (10.2) Trasformatos

More information

Outline. Point Pattern Analysis Part I. Revisit IRP/CSR

Outline. Point Pattern Analysis Part I. Revisit IRP/CSR Pot Patter Aalyss Part I Outle Revst IRP/CSR, frst- ad secod order effects What s pot patter aalyss (PPA)? Desty-based pot patter measures Dstace-based pot patter measures Revst IRP/CSR Equal probablty:

More information