Dimensionality Reduction Algorithms!! Lecture 15!
- Cuthbert Jerome Anthony
1 Dimensionality Reduction Algorithms! Lecture 15!
2 High-dimensional Spaces! Counter-intuitive properties!
Volume of a d-dimensional sphere of radius r:
$V_d(r) = \frac{2\pi^{d/2} r^d}{d\,\Gamma(d/2)}$
where Γ(·) is the gamma function.
3 High-dimensional Spaces! Counter-intuitive properties!
Volume of a d-dimensional sphere of radius r:
$V_d(r) = \frac{2\pi^{d/2} r^d}{d\,\Gamma(d/2)}$
where Γ(·) is the gamma function.
Consider the outermost shell of thickness ε. What is the fraction of volume contained in this shell?
4 High-dimensional Spaces! Counter-intuitive properties!
Consider the outermost shell of thickness ε. The fraction of volume contained in this shell is:
$f_d = \frac{V_d(r) - V_d(r-\varepsilon)}{V_d(r)} = \frac{r^d - (r-\varepsilon)^d}{r^d} = 1 - \left(1 - \frac{\varepsilon}{r}\right)^d$
5 High-dimensional Spaces! Counter-intuitive properties!
$f_d = \frac{V_d(r) - V_d(r-\varepsilon)}{V_d(r)} = 1 - \left(1 - \frac{\varepsilon}{r}\right)^d$
[Plot: fraction of volume in the outermost shell vs. number of dimensions, for ε/r = 0.02, 0.05, 0.10, 0.20]
6 High-dimensional Spaces! Counter-intuitive properties!
Consider the outermost shell of thickness ε:
$f_d = \frac{V_d(r) - V_d(r-\varepsilon)}{V_d(r)} = 1 - \left(1 - \frac{\varepsilon}{r}\right)^d$
This means that in high-dimensional spaces most of the volume is concentrated around the surface (within ε) of the hypersphere, and the center is essentially void!
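The shell-fraction formula above is easy to check numerically; this short sketch (not from the slides) evaluates $f_d = 1 - (1 - \varepsilon/r)^d$ for a few dimensionalities:

```python
# Sketch: fraction of a d-dimensional sphere's volume lying within
# distance eps of its surface, f_d = 1 - (1 - eps/r)**d.
def shell_fraction(d, eps, r=1.0):
    """Fraction of the sphere's volume in the outermost shell of thickness eps."""
    return 1.0 - (1.0 - eps / r) ** d

for d in (1, 10, 100, 1000):
    print(d, shell_fraction(d, eps=0.05))
```

Even for a thin shell (ε/r = 0.05), the fraction approaches 1 as d grows, which is the concentration effect the slide describes.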
7 Curse of Dimensionality!
The curse of dimensionality!
A term coined by Bellman in 1961. It refers to the problems associated with multivariate data analysis as the dimensionality increases. We will illustrate these problems with a simple example.
Consider a 3-class pattern recognition problem!
A simple approach would be to divide the feature space into uniform bins, compute the ratio of examples for each class in each bin, and, for a new example, find its bin and choose the predominant class in that bin.
In our toy problem we decide to start with one single feature and divide the real line into 3 segments. After doing this, we notice that there exists too much overlap among the classes, so we decide to incorporate a second feature to try and improve separability.
Notes borrowed from Gutierrez-Osuna
8 Curse of Dimensionality!
We decide to preserve the granularity of each axis, which raises the number of bins from 3 (in 1D) to 3² = 9 (in 2D)!
At this point we need to make a decision: do we maintain the density of examples per bin, or do we keep the number of examples we had for the one-dimensional case?
Choosing to maintain the density increases the number of examples from 9 (in 1D) to 27 (in 2D).
Choosing to maintain the number of examples results in a 2D scatter plot that is very sparse.
Moving to three features makes the problem worse!
The number of bins grows to 3³ = 27. For the same density of examples, the number of needed examples becomes 81. For the same number of examples, well, the 3D scatter plot is almost empty.
Notes borrowed from Gutierrez-Osuna
9 Curse of Dimensionality!
Obviously, our approach of dividing the sample space into equally spaced bins was quite inefficient!
There are other approaches that are much less susceptible to the curse of dimensionality, but the problem still exists.
How do we beat the curse of dimensionality?!
By incorporating prior knowledge. By providing increasing smoothness of the target function. By reducing the dimensionality.
In practice, the curse of dimensionality means that, for a given sample size, there is a maximum number of features above which the performance of our classifier will degrade rather than improve!
In most cases, the information that is lost by discarding some features is (more than) compensated by a more accurate mapping in the lower-dimensional space.
Notes borrowed from Gutierrez-Osuna
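The bin-counting argument from the previous slides can be made concrete; this sketch (illustrative, matching the toy numbers in the slides: 3 bins per axis, 3 examples per bin) shows the exponential growth:

```python
# Sketch of the toy binning argument: with 3 bins per axis, the number of
# bins is 3**d, and keeping 3 examples per bin requires 3 * 3**d examples.
def n_bins(d, bins_per_axis=3):
    return bins_per_axis ** d

def examples_needed(d, per_bin=3, bins_per_axis=3):
    return per_bin * n_bins(d, bins_per_axis)

for d in (1, 2, 3):
    print(d, n_bins(d), examples_needed(d))
```

This reproduces the slide's counts: 3 bins and 9 examples in 1D, 9 bins and 27 examples in 2D, 27 bins and 81 examples in 3D.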
10 Dimensionality Reduction!
Two approaches are available to perform dimensionality reduction!
Feature selection: choosing a subset of all the features (the most informative ones):
$[x_1\; x_2\; \dots\; x_N]^T \xrightarrow{\text{feature selection}} [x_{i_1}\; x_{i_2}\; \dots\; x_{i_M}]^T$
Feature extraction: creating a subset of new features by combinations of the existing features:
$[x_1\; x_2\; \dots\; x_N]^T \xrightarrow{\text{feature extraction}} [y_1\; y_2\; \dots\; y_M]^T = f([x_1\; x_2\; \dots\; x_N]^T)$
The problem of feature extraction can be stated as!
Given a feature space $x_i \in R^N$, find a mapping $y = f(x): R^N \rightarrow R^M$ with $M < N$ such that the transformed feature vector $y_i \in R^M$ preserves (most of) the information or structure in $R^N$.
Notes borrowed from Gutierrez-Osuna
11 Dimensionality Reduction!
In general, the optimal mapping y = f(x) will be a non-linear function!
However, there is no systematic way to generate non-linear transforms; the selection of a particular subset of transforms is problem dependent.
For this reason, feature extraction is commonly limited to linear transforms: y = Wx. That is, y is a linear projection of x:
$\begin{bmatrix} y_1 \\ \vdots \\ y_M \end{bmatrix} = \begin{bmatrix} w_{11} & \dots & w_{1N} \\ \vdots & & \vdots \\ w_{M1} & \dots & w_{MN} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix}$
NOTE: When the mapping is a non-linear function, the reduced space is called a manifold.
We will focus on linear feature extraction for now, and revisit non-linear techniques if we have time!
Notes borrowed from Gutierrez-Osuna
12 Dimensionality Reduction!
The selection of the feature extraction mapping y = f(x) is guided by an objective function that we seek to maximize (or minimize)!
Depending on the criteria used by the objective function, feature extraction techniques are grouped into two categories!
Signal representation: the goal of the feature extraction mapping is to represent the samples accurately in a lower-dimensional space.
Classification: the goal of the feature extraction mapping is to enhance the class-discriminatory information in the lower-dimensional space.
Within the realm of linear feature extraction, two techniques are commonly used!
Principal Components Analysis (PCA) uses a signal representation criterion. Linear Discriminant Analysis (LDA) uses a classification criterion.
Notes borrowed from Gutierrez-Osuna
13 PCA: Let's build some intuition!
$\mu = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Sigma = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix}$
Eigenvectors of the covariance matrix!
$v_1 \approx \begin{bmatrix} 0.6154 \\ 0.7882 \end{bmatrix} \qquad v_2 \approx \begin{bmatrix} -0.7882 \\ 0.6154 \end{bmatrix}$
[Scatter plot of samples drawn from this Gaussian, with the two eigenvectors overlaid; axes x1, x2]
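The eigenvectors on this slide can be recomputed directly; this sketch (not from the slides) diagonalizes the example covariance matrix:

```python
import numpy as np

# Eigendecomposition of the slide's example covariance Sigma = [[2, 2], [2, 3]].
Sigma = np.array([[2.0, 2.0], [2.0, 3.0]])
vals, vecs = np.linalg.eigh(Sigma)   # ascending eigenvalues, orthonormal columns
v1 = vecs[:, -1]                     # eigenvector of the largest eigenvalue
print(vals)                          # eigenvalues (5 +/- sqrt(17)) / 2
print(v1)                            # +/- [0.6154, 0.7882], as on the slide
```

The principal eigenvector points along the direction of largest variance of the Gaussian; the sign returned by the solver is arbitrary.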
14 PCA Recap!
The objective of PCA is to perform dimensionality reduction while preserving as much of the variance in the high-dimensional space as possible!
Let x be a D-dimensional random vector, represented as a linear combination of orthonormal basis vectors [φ₁ φ₂ ... φ_D].
To optimally represent x with minimum sum-square error, we choose the eigenvectors φ_i of the covariance matrix corresponding to the largest eigenvalues λ_i:
$\Sigma_x \phi_i = \lambda_i \phi_i$
where Σ_x is the covariance matrix.
15 PCA Trick: Snapshot PCA!
In some applications (e.g. image processing) the number of dimensions (# of pixels) is far greater than the number of data points (# of images)!
A D-dimensional space and N data points, where N < D, define a linear subspace whose dimensionality is at most N-1.
This implies there is no point applying PCA for values of M > N-1: D-N+1 of the eigenvalues will be zero.
Regular PCA!
Dataset X is a D-by-N matrix (i.e. data points as column vectors):
$\frac{1}{N-1} X X^T \phi_i = \lambda_i \phi_i$
The computational cost of finding the D×D eigenvectors is on the order of O(D³).
16 PCA Trick: Snapshot PCA!
Trick: Snapshot PCA!
With the dataset X a D-by-N matrix, solve the small N×N problem instead:
$\frac{1}{N-1} X^T X v_i = \lambda_i v_i$
Pre-multiply both sides by X:
$\frac{1}{N-1} X X^T (X v_i) = \lambda_i (X v_i)$
which is the original eigenvalue problem.
17 PCA Trick: Snapshot PCA!
$\frac{1}{N-1} X X^T \phi_i = \lambda_i \phi_i$, with $\phi_i = X v_i$
Note: $X^T X$ is an N×N matrix, so the computational cost is O(N³) < O(D³).
18 PCA Trick: Snapshot PCA!
Equivalently, with the dataset X stored as an N-by-D matrix, solve the N×N problem $\frac{1}{N} X X^T v_i = \lambda_i v_i$ and pre-multiply both sides by $X^T$:
$\frac{1}{N} X^T X (X^T v_i) = \lambda_i (X^T v_i)$
$X^T v_i$ is an eigenvector of the original D×D covariance matrix (rescale to obtain unit vectors).
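The snapshot trick above can be verified numerically; this sketch (illustrative sizes and random data, not from the slides) uses the D-by-N column-vector convention, solves the small N×N problem, and checks that the mapped vectors are eigenvectors of the full D×D covariance:

```python
import numpy as np

# Snapshot PCA sketch: solve (1/(N-1)) X^T X v = lam v (N x N), then map
# v -> X v to recover eigenvectors of the D x D covariance (1/(N-1)) X X^T.
rng = np.random.default_rng(0)
D, N = 50, 5
X = rng.standard_normal((D, N))
X = X - X.mean(axis=1, keepdims=True)                # center (columns are data points)

lam_small, V = np.linalg.eigh(X.T @ X / (N - 1))     # small N x N problem, O(N^3)
Phi = X @ V                                          # candidate eigenvectors, D x N
Phi = Phi / np.linalg.norm(Phi, axis=0).clip(1e-12)  # rescale to unit vectors

C = X @ X.T / (N - 1)                                # full D x D covariance
i = int(np.argmax(lam_small))                        # largest eigenvalue
resid = np.linalg.norm(C @ Phi[:, i] - lam_small[i] * Phi[:, i])
print(resid)
```

The residual is at round-off level, confirming that $C (X v_i) = \lambda_i (X v_i)$; as the slides note, centering leaves at most N-1 non-zero eigenvalues.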
19 Singular Value Decomposition!
Any matrix A can be written as!
$A_{m\times n} = U_{m\times m} W_{m\times n} V^T_{n\times n}$
with $U U^T = I_{m\times m}$ and $V V^T = I_{n\times n}$, and W diagonal:
$W = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_r, 0, \dots, 0)$
20 Singular Value Decomposition!
$A A^T = (U W V^T)(U W V^T)^T = U W V^T V W^T U^T = U W W^T U^T = U W^2_{m\times m} U^T$
Post-multiply both sides by U:
$A A^T U = U W^2 U^T U = U W^2$
21 Singular Value Decomposition!
$A^T A = (U W V^T)^T (U W V^T) = V W^T U^T U W V^T = V W^T W V^T = V W^2_{n\times n} V^T$
Post-multiply both sides by V:
$A^T A V = V W^2 V^T V = V W^2$
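Both identities just derived can be checked numerically; this sketch (random matrix, illustrative sizes) verifies them with numpy's SVD:

```python
import numpy as np

# Check A A^T U = U W^2 and A^T A V = V W^2 for a random 4 x 3 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))             # m = 4, n = 3
U, s, Vt = np.linalg.svd(A)                 # U: 4x4, s: singular values, Vt: 3x3
V = Vt.T

# Eigenvalues of A A^T are s**2 padded with a zero for the extra column of U.
w2_m = np.concatenate([s**2, np.zeros(1)])  # diagonal of W W^T (4x4)
lhs1, rhs1 = A @ A.T @ U, U * w2_m          # U * w2_m == U @ diag(w2_m)
lhs2, rhs2 = A.T @ A @ V, V * s**2
print(np.allclose(lhs1, rhs1), np.allclose(lhs2, rhs2))
```

So the columns of U are eigenvectors of $AA^T$ and the columns of V are eigenvectors of $A^TA$, with eigenvalues $\sigma_i^2$.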
22 Singular Value Decomposition!
Gilbert Strang diagram representing the four fundamental subspaces!
[Figure not transcribed]
26 Singular Value Decomposition!
Intuition via the Gilbert Strang diagram!
[Figure not transcribed]
28 SVD DEMO!
MATLAB commands!
load mandrill
[U, W, V] = svd(X);
figure; imagesc(X); colormap(map);
Xcap = U(:,1:100) * W(1:100,1:100) * V(:,1:100)';
figure; imagesc(Xcap);
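A Python equivalent of this rank-100 reconstruction can be sketched with numpy; since the mandrill image is not available here, a random matrix stands in, and the Eckart-Young theorem gives the exact reconstruction error to check against:

```python
import numpy as np

# Rank-k approximation via truncated SVD, mirroring the MATLAB demo
# Xcap = U(:,1:k) * W(1:k,1:k) * V(:,1:k)'. A random matrix stands in
# for the mandrill image.
rng = np.random.default_rng(2)
X = rng.standard_normal((64, 64))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 10
Xhat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By Eckart-Young, the Frobenius error of the best rank-k approximation is
# the root sum of squares of the discarded singular values.
err = np.linalg.norm(X - Xhat, "fro")
print(err, np.sqrt(np.sum(s[k:] ** 2)))
```

The two printed numbers agree, which is why truncating the SVD is the standard way to compress an image while keeping most of its energy.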
29 Linear Discriminant Analysis!
The objective of LDA is to perform dimensionality reduction while preserving as much of the class-discriminatory information as possible!
Assume we have a set of D-dimensional samples {x⁽¹⁾, x⁽²⁾, ..., x⁽ᴺ⁾}, N₁ of which belong to class C₁ and N₂ to class C₂.
We seek to obtain a scalar y by projecting the samples x onto a line.
Of all the possible lines we would like to select the one that maximizes the separability of the scalars!
This is illustrated for the two-dimensional case in the following figures.
Notes borrowed from Gutierrez-Osuna
30 LDA: Some intuition first!
Samples from 2 classes, along with the histogram resulting from projection onto the line joining the class means!
What's the problem here?
Bishop, PRML
31 LDA: Some intuition first!
Samples from 2 classes, along with the histogram resulting from the LDA projection!
What's the difference?
Bishop, PRML
32 Linear Discriminant Analysis!
In order to find a good projection vector, we need to define a measure of separation between the projections!
The mean vector of each class in x-space and y-space is:
$\mu_i = \frac{1}{N_i}\sum_{x\in C_i} x \qquad \tilde\mu_i = \frac{1}{N_i}\sum_{y\in C_i} y = \frac{1}{N_i}\sum_{x\in C_i} w^T x = w^T \mu_i$
We could then choose the distance between the projected means as our objective function:
$J(w) = |\tilde\mu_1 - \tilde\mu_2| = |w^T(\mu_1 - \mu_2)|$
However, the distance between the projected means is not a very good measure since it does not take into account the standard deviation within the classes.
Notes borrowed from Gutierrez-Osuna
33 LDA!
The solution proposed by Fisher is to maximize a function that represents the difference between the means, normalized by a measure of the within-class scatter!
For each class we define the scatter, an equivalent of the variance, as:
$\tilde s_i^2 = \sum_{y\in C_i} (y - \tilde\mu_i)^2$
The quantity $\tilde s_1^2 + \tilde s_2^2$ is called the within-class scatter of the projected examples.
The Fisher linear discriminant is defined as the linear function $w^T x$ that maximizes the criterion function:
$J(w) = \frac{(\tilde\mu_1 - \tilde\mu_2)^2}{\tilde s_1^2 + \tilde s_2^2}$
Therefore, we will be looking for a projection where examples from the same class are projected very close to each other and, at the same time, the projected means are as far apart as possible.
Notes borrowed from Gutierrez-Osuna
34 LDA!
In order to find the optimum projection w*, we need to express J(w) as an explicit function of w!
We define a measure of the scatter in the multivariate feature space x, the scatter matrices:
$S_i = \sum_{x\in C_i} (x - \mu_i)(x - \mu_i)^T \qquad S_W = S_1 + S_2$
where S_W is called the within-class scatter matrix.
The scatter of the projection y can then be expressed as a function of the scatter matrix in feature space x:
$\tilde s_i^2 = \sum_{y\in C_i} (y - \tilde\mu_i)^2 = \sum_{x\in C_i} (w^T x - w^T \mu_i)^2 = \sum_{x\in C_i} w^T (x - \mu_i)(x - \mu_i)^T w = w^T S_i w$
$\tilde s_1^2 + \tilde s_2^2 = w^T S_W w$
Similarly, the difference between the projected means can be expressed in terms of the means in the original feature space:
$(\tilde\mu_1 - \tilde\mu_2)^2 = (w^T\mu_1 - w^T\mu_2)^2 = w^T(\mu_1-\mu_2)(\mu_1-\mu_2)^T w = w^T S_B w$
The matrix S_B is called the between-class scatter. Note that, since S_B is the outer product of two vectors, its rank is at most one.
We can finally express the Fisher criterion in terms of S_W and S_B as:
$J(w) = \frac{w^T S_B w}{w^T S_W w}$
Notes borrowed from Gutierrez-Osuna
35 LDA!
To find the maximum of J(w), take the derivative w.r.t. w and set the numerator to zero:
$\frac{dJ(w)}{dw} = \frac{(w^T S_W w)\,2 S_B w - (w^T S_B w)\,2 S_W w}{(w^T S_W w)^2} = 0 \;\Rightarrow\; (w^T S_W w)\, S_B w = (w^T S_B w)\, S_W w$
Divide both sides by $w^T S_W w$:
$S_B w = \frac{w^T S_B w}{w^T S_W w}\, S_W w = J(w)\, S_W w \;\Rightarrow\; S_W^{-1} S_B w = J(w)\, w$
an eigenvalue problem. Solving the generalized eigenvalue problem ($S_W^{-1} S_B w = J w$) yields:
$w^* = \arg\max_w \frac{w^T S_B w}{w^T S_W w} = S_W^{-1}(\mu_1 - \mu_2)$
This is known as Fisher's Linear Discriminant (1936), although it is not a discriminant but rather a specific choice of direction for the projection of the data down to one dimension.
Notes borrowed from Gutierrez-Osuna
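The closed-form solution $w^* = S_W^{-1}(\mu_1 - \mu_2)$ can be exercised directly; this sketch uses synthetic two-class Gaussian data (illustrative, not from the slides) and checks that the Fisher direction scores at least as well as the naive mean-difference direction under the criterion J:

```python
import numpy as np

# Fisher's two-class discriminant: w* = S_W^{-1} (mu1 - mu2).
rng = np.random.default_rng(3)
X1 = rng.standard_normal((100, 2))
X2 = rng.standard_normal((100, 2)) + np.array([3.0, 1.0])

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S1 = (X1 - mu1).T @ (X1 - mu1)
S2 = (X2 - mu2).T @ (X2 - mu2)
Sw = S1 + S2
w = np.linalg.solve(Sw, mu1 - mu2)        # Fisher direction (defined up to scale)

def J(direction):
    """Fisher criterion: squared distance between projected means over the
    within-class scatter of the projections."""
    y1, y2 = X1 @ direction, X2 @ direction
    s2 = ((y1 - y1.mean()) ** 2).sum() + ((y2 - y2.mean()) ** 2).sum()
    return (y1.mean() - y2.mean()) ** 2 / s2

print(J(w), J(mu1 - mu2))   # J(w) is at least as large as J along mu1 - mu2
```

Note that J is scale-invariant in the direction, which is why w* is only determined up to a scalar multiple.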
36 LDA: Some intuition first!
Samples from 2 classes, along with the histogram resulting from the LDA projection!
What's the difference?
Bishop, PRML
37 Multi-class LDA!
Fisher's LDA generalizes very gracefully for C-class problems!
Instead of one projection y, we now seek (C-1) projections [y₁, y₂, ..., y_{C-1}] by means of (C-1) projection vectors w_i, which can be arranged by columns into a projection matrix W = [w₁ w₂ ... w_{C-1}]:
$y_i = w_i^T x \qquad y = W^T x$
The generalization of the within-class scatter is:
$S_W = \sum_{i=1}^{C} S_i \qquad S_i = \sum_{x\in C_i}(x-\mu_i)(x-\mu_i)^T \qquad \mu_i = \frac{1}{N_i}\sum_{x\in C_i} x$
The generalization for the between-class scatter is:
$S_B = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T \qquad \mu = \frac{1}{N}\sum_{x} x = \frac{1}{N}\sum_{i=1}^{C} N_i \mu_i$
where $S_T = S_B + S_W$ is called the total scatter matrix.
Notes borrowed from Gutierrez-Osuna
38 Multi-class LDA!
Similarly, we define the mean vectors and scatter matrices for the projected samples as:
$\tilde\mu_i = \frac{1}{N_i}\sum_{y\in C_i} y \qquad \tilde\mu = \frac{1}{N}\sum_{y} y$
$\tilde S_W = \sum_{i=1}^{C}\sum_{y\in C_i}(y-\tilde\mu_i)(y-\tilde\mu_i)^T \qquad \tilde S_B = \sum_{i=1}^{C} N_i(\tilde\mu_i-\tilde\mu)(\tilde\mu_i-\tilde\mu)^T$
From our derivation for the two-class problem, we can write:
$\tilde S_B = W^T S_B W \qquad \tilde S_W = W^T S_W W$
Recall that we are looking for a projection that maximizes the ratio of between-class to within-class scatter. Since the projection is no longer a scalar (it has C-1 dimensions), we use the determinants of the scatter matrices to obtain a scalar objective function:
$J(W) = \frac{|W^T S_B W|}{|W^T S_W W|}$
And we will seek the projection matrix W* that maximizes this ratio!
39 Multi-class LDA!
It can be shown that the optimal projection matrix W* is the one whose columns are the eigenvectors corresponding to the largest eigenvalues of the following generalized eigenvalue problem!
$W^* = [w_1^*\; w_2^*\; \dots\; w_{C-1}^*] = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} \;\Rightarrow\; (S_B - \lambda_i S_W)\, w_i^* = 0$
NOTE!
S_B is the sum of C matrices of rank one or less, and the mean vectors are constrained by $\mu = \frac{1}{C}\sum_{i=1}^{C}\mu_i$.
Therefore, S_B will be of rank (C-1) or less. This means that only (C-1) of the eigenvalues λ_i will be non-zero.
The projections with maximum class-separability information are the eigenvectors corresponding to the largest eigenvalues of $S_W^{-1} S_B$.
Notes borrowed from Gutierrez-Osuna
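The rank argument above predicts at most C-1 non-zero eigenvalues; this sketch (synthetic data with illustrative sizes, not from the slides) builds $S_W$ and $S_B$ for C = 3 classes in 4 dimensions and checks the spectrum of $S_W^{-1} S_B$:

```python
import numpy as np

# Multi-class LDA sketch: with C = 3 classes in D = 4 dimensions,
# S_W^{-1} S_B has at most C - 1 = 2 non-zero eigenvalues.
rng = np.random.default_rng(4)
C, D, n = 3, 4, 50
class_means = 3 * rng.standard_normal((C, D))
Xs = [rng.standard_normal((n, D)) + m for m in class_means]

mu = np.concatenate(Xs).mean(axis=0)            # overall mean
Sw = sum((X - X.mean(axis=0)).T @ (X - X.mean(axis=0)) for X in Xs)
Sb = sum(n * np.outer(X.mean(axis=0) - mu, X.mean(axis=0) - mu) for X in Xs)

evals = np.linalg.eigvals(np.linalg.solve(Sw, Sb))
evals = np.sort(np.abs(evals))[::-1]
print(evals)    # only the first C - 1 = 2 are non-zero (up to round-off)
```

The third and fourth eigenvalues vanish to round-off because $S_B$ has rank at most C-1; the corresponding eigenvectors would be the columns of W*.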
40 LDA Multiclass Example!
[Scatter plots of the data projected onto eigenvectors 1 and 2 of $S_W^{-1} S_B$]
41 LDA limitations!
LDA produces at most C-1 feature projections!
If the classification error estimates establish that more features are needed, some other method must be employed to provide those additional features.
LDA is a parametric method since it assumes unimodal Gaussian likelihoods!
If the distributions are significantly non-Gaussian, the LDA projections will not be able to preserve any complex structure of the data, which may be needed for classification.
LDA will fail when the discriminatory information is not in the mean but rather in the variance of the data!
Notes borrowed from Gutierrez-Osuna
42 Trial-by-Trial Variability!
How does the odor representation change over trials?
10 trials. Extracellular recording.
[Figure not transcribed]
43 Dataset!
Tensor! Unfolding into a 2-d matrix!
[Figure not transcribed]
44 PCA vs LDA!
[Figure not transcribed]
45 Canonical Correlation Analysis!
Suppose we have two datasets X (N × D₁) and Y (N × D₂), and we would like to find basis vectors w_x and w_y such that the correlation between the projections of the variables onto these basis vectors is mutually maximized!
$x = w_x^T \mathbf{x} \qquad y = w_y^T \mathbf{y}$
$\rho = \frac{E[xy]}{\sqrt{E[x^2]\,E[y^2]}} = \frac{E[w_x^T X Y^T w_y]}{\sqrt{E[w_x^T X X^T w_x]\,E[w_y^T Y Y^T w_y]}} = \frac{w_x^T C_{xy} w_y}{\sqrt{w_x^T C_{xx} w_x \;\, w_y^T C_{yy} w_y}}$
46 Canonical Correlation Analysis!
CCA!
$\arg\max_{w_x, w_y}\; w_x^T C_{xy} w_y$
subject to the constraints:
$w_x^T C_{xx} w_x = 1 \qquad w_y^T C_{yy} w_y = 1$
47 Canonical Correlation Analysis!
Take partial derivatives of the Lagrangian w.r.t. w_x and w_y!
$J(w_x, w_y) = w_x^T C_{xy} w_y + \frac{\lambda_x}{2}\left(1 - w_x^T C_{xx} w_x\right) + \frac{\lambda_y}{2}\left(1 - w_y^T C_{yy} w_y\right)$
$\frac{\partial J}{\partial w_x} = C_{xy} w_y - \lambda_x C_{xx} w_x = 0 \;\Rightarrow\; C_{xy} w_y = \lambda_x C_{xx} w_x$
Pre-multiply by $w_x^T$: $w_x^T C_{xy} w_y = \lambda_x w_x^T C_{xx} w_x = \lambda_x$
$\frac{\partial J}{\partial w_y} = C_{yx} w_x - \lambda_y C_{yy} w_y = 0 \;\Rightarrow\; C_{yx} w_x = \lambda_y C_{yy} w_y$ (note $C_{xy}^T = C_{yx}$)
Pre-multiply by $w_y^T$: $w_y^T C_{yx} w_x = \lambda_y w_y^T C_{yy} w_y = \lambda_y$
48 Canonical Correlation Analysis!
$w_x^T C_{xy} w_y = \lambda_x \qquad w_y^T C_{yx} w_x = \lambda_y$
Note: $w_x^T C_{xy} w_y = (w_y^T C_{yx} w_x)^T \;\Rightarrow\; \lambda_x = \lambda_y = \lambda$
Also: $C_{yx} w_x = \lambda C_{yy} w_y \;\Rightarrow\; w_y = \frac{1}{\lambda} C_{yy}^{-1} C_{yx} w_x$
Substituting w_y back to get w_x:
$C_{xy} w_y = \lambda C_{xx} w_x \;\Rightarrow\; \frac{1}{\lambda} C_{xy} C_{yy}^{-1} C_{yx} w_x = \lambda C_{xx} w_x \;\Rightarrow\; C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx} w_x = \lambda^2 w_x$
an eigenvalue problem.
49 Canonical Correlation Analysis!
The canonical correlations between x and y can be found by solving the eigenvalue equations!
$C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx} w_x = \lambda^2 w_x \qquad C_{yy}^{-1} C_{yx} C_{xx}^{-1} C_{xy} w_y = \lambda^2 w_y$
When CCA is performed on a training set X and a dummy matrix Y representing group membership, the CCA directions are just the LDA directions!
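The eigenvalue equations above can be solved directly; this sketch uses synthetic data with one shared latent signal (sizes and noise level are illustrative, not from the slides), and checks that the sample correlation of the two projections equals the square root of the top eigenvalue:

```python
import numpy as np

# CCA sketch via Cxx^{-1} Cxy Cyy^{-1} Cyx w_x = lambda^2 w_x.
rng = np.random.default_rng(5)
N = 2000
z = rng.standard_normal(N)                   # shared latent signal
X = np.column_stack([z + 0.1 * rng.standard_normal(N), rng.standard_normal(N)])
Y = np.column_stack([rng.standard_normal(N), z + 0.1 * rng.standard_normal(N)])
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)

Cxx, Cyy, Cxy = X.T @ X / N, Y.T @ Y / N, X.T @ Y / N

M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
evals, evecs = np.linalg.eig(M)
i = int(np.argmax(evals.real))
wx = evecs[:, i].real
wy = np.linalg.solve(Cyy, Cxy.T @ wx)        # w_y proportional to Cyy^{-1} Cyx w_x

rho = np.corrcoef(X @ wx, Y @ wy)[0, 1]
print(rho, np.sqrt(evals.real[i]))           # both equal the first canonical correlation
```

Here the canonical correlation is close to 1 because both views carry the same latent z; in general the eigenvalues $\lambda^2$ are the squared canonical correlations.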
50 Canonical Correlation Analysis!
[Scatter plot of data projected onto eigenvectors 1 and 2 of $C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx}$]
51 Relation to other methods!
Instead of the two eigenvalue problems, we can formulate the problem as one single generalized eigenvalue equation (Borga's formulation):
$\begin{bmatrix} 0 & C_{xy} \\ C_{yx} & 0 \end{bmatrix}\begin{bmatrix} w_x \\ w_y \end{bmatrix} = \lambda \begin{bmatrix} C_{xx} & 0 \\ 0 & C_{yy} \end{bmatrix}\begin{bmatrix} w_x \\ w_y \end{bmatrix}$
Borga, 1998
52 Relation to other methods!
[Figure not transcribed]
Borga, 1998
More information6. Kalman filter implementation for linear algebraic equations. Karhunen-Loeve decomposition
6. Kalma filter implemetatio for liear algebraic equatios. Karhue-Loeve decompositio 6.1. Solvable liear algebraic systems. Probabilistic iterpretatio. Let A be a quadratic matrix (ot obligatory osigular.
More informationLecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting
Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would
More informationLecture 7: Properties of Random Samples
Lecture 7: Properties of Radom Samples 1 Cotiued From Last Class Theorem 1.1. Let X 1, X,...X be a radom sample from a populatio with mea µ ad variace σ
More informationMachine Learning Regression I Hamid R. Rabiee [Slides are based on Bishop Book] Spring
Machie Learig Regressio I Hamid R. Rabiee [Slides are based o Bishop Book] Sprig 015 http://ce.sharif.edu/courses/93-94//ce717-1 Liear Regressio Liear regressio: ivolves a respose variable ad a sigle predictor
More informationPH 425 Quantum Measurement and Spin Winter SPINS Lab 1
PH 425 Quatum Measuremet ad Spi Witer 23 SPIS Lab Measure the spi projectio S z alog the z-axis This is the experimet that is ready to go whe you start the program, as show below Each atom is measured
More informationLecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting
Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would
More informationTHE KALMAN FILTER RAUL ROJAS
THE KALMAN FILTER RAUL ROJAS Abstract. This paper provides a getle itroductio to the Kalma filter, a umerical method that ca be used for sesor fusio or for calculatio of trajectories. First, we cosider
More information18.S096: Homework Problem Set 1 (revised)
8.S096: Homework Problem Set (revised) Topics i Mathematics of Data Sciece (Fall 05) Afoso S. Badeira Due o October 6, 05 Exteded to: October 8, 05 This homework problem set is due o October 6, at the
More informationProperties and Hypothesis Testing
Chapter 3 Properties ad Hypothesis Testig 3.1 Types of data The regressio techiques developed i previous chapters ca be applied to three differet kids of data. 1. Cross-sectioal data. 2. Time series data.
More informationRandom Variables, Sampling and Estimation
Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig
More informationLet us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f.
Lecture 5 Let us give oe more example of MLE. Example 3. The uiform distributio U[0, ] o the iterval [0, ] has p.d.f. { 1 f(x =, 0 x, 0, otherwise The likelihood fuctio ϕ( = f(x i = 1 I(X 1,..., X [0,
More informationChapter 2 The Monte Carlo Method
Chapter 2 The Mote Carlo Method The Mote Carlo Method stads for a broad class of computatioal algorithms that rely o radom sampligs. It is ofte used i physical ad mathematical problems ad is most useful
More informationCHAPTER 10 INFINITE SEQUENCES AND SERIES
CHAPTER 10 INFINITE SEQUENCES AND SERIES 10.1 Sequeces 10.2 Ifiite Series 10.3 The Itegral Tests 10.4 Compariso Tests 10.5 The Ratio ad Root Tests 10.6 Alteratig Series: Absolute ad Coditioal Covergece
More informationAn Introduction to Randomized Algorithms
A Itroductio to Radomized Algorithms The focus of this lecture is to study a radomized algorithm for quick sort, aalyze it usig probabilistic recurrece relatios, ad also provide more geeral tools for aalysis
More informationThe axial dispersion model for tubular reactors at steady state can be described by the following equations: dc dz R n cn = 0 (1) (2) 1 d 2 c.
5.4 Applicatio of Perturbatio Methods to the Dispersio Model for Tubular Reactors The axial dispersio model for tubular reactors at steady state ca be described by the followig equatios: d c Pe dz z =
More informationMatrix Representation of Data in Experiment
Matrix Represetatio of Data i Experimet Cosider a very simple model for resposes y ij : y ij i ij, i 1,; j 1,,..., (ote that for simplicity we are assumig the two () groups are of equal sample size ) Y
More informationMachine Learning Theory Tübingen University, WS 2016/2017 Lecture 12
Machie Learig Theory Tübige Uiversity, WS 06/07 Lecture Tolstikhi Ilya Abstract I this lecture we derive risk bouds for kerel methods. We will start by showig that Soft Margi kerel SVM correspods to miimizig
More informationFilter banks. Separately, the lowpass and highpass filters are not invertible. removes the highest frequency 1/ 2and
Filter bas Separately, the lowpass ad highpass filters are ot ivertible T removes the highest frequecy / ad removes the lowest frequecy Together these filters separate the sigal ito low-frequecy ad high-frequecy
More informationTHE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS
R775 Philips Res. Repts 26,414-423, 1971' THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS by H. W. HANNEMAN Abstract Usig the law of propagatio of errors, approximated
More informationStatistical and Mathematical Methods DS-GA 1002 December 8, Sample Final Problems Solutions
Statistical ad Mathematical Methods DS-GA 00 December 8, 05. Short questios Sample Fial Problems Solutios a. Ax b has a solutio if b is i the rage of A. The dimesio of the rage of A is because A has liearly-idepedet
More informationWe are mainly going to be concerned with power series in x, such as. (x)} converges - that is, lims N n
Review of Power Series, Power Series Solutios A power series i x - a is a ifiite series of the form c (x a) =c +c (x a)+(x a) +... We also call this a power series cetered at a. Ex. (x+) is cetered at
More informationREGRESSION (Physics 1210 Notes, Partial Modified Appendix A)
REGRESSION (Physics 0 Notes, Partial Modified Appedix A) HOW TO PERFORM A LINEAR REGRESSION Cosider the followig data poits ad their graph (Table I ad Figure ): X Y 0 3 5 3 7 4 9 5 Table : Example Data
More informationStatistical Fundamentals and Control Charts
Statistical Fudametals ad Cotrol Charts 1. Statistical Process Cotrol Basics Chace causes of variatio uavoidable causes of variatios Assigable causes of variatio large variatios related to machies, materials,
More information1 Approximating Integrals using Taylor Polynomials
Seughee Ye Ma 8: Week 7 Nov Week 7 Summary This week, we will lear how we ca approximate itegrals usig Taylor series ad umerical methods. Topics Page Approximatig Itegrals usig Taylor Polyomials. Defiitios................................................
More informationReview Problems 1. ICME and MS&E Refresher Course September 19, 2011 B = C = AB = A = A 2 = A 3... C 2 = C 3 = =
Review Problems ICME ad MS&E Refresher Course September 9, 0 Warm-up problems. For the followig matrices A = 0 B = C = AB = 0 fid all powers A,A 3,(which is A times A),... ad B,B 3,... ad C,C 3,... Solutio:
More informationU8L1: Sec Equations of Lines in R 2
MCVU U8L: Sec. 8.9. Equatios of Lies i R Review of Equatios of a Straight Lie (-D) Cosider the lie passig through A (-,) with slope, as show i the diagram below. I poit slope form, the equatio of the lie
More informationLecture 7: Density Estimation: k-nearest Neighbor and Basis Approach
STAT 425: Itroductio to Noparametric Statistics Witer 28 Lecture 7: Desity Estimatio: k-nearest Neighbor ad Basis Approach Istructor: Ye-Chi Che Referece: Sectio 8.4 of All of Noparametric Statistics.
More informationSection 14. Simple linear regression.
Sectio 14 Simple liear regressio. Let us look at the cigarette dataset from [1] (available to dowload from joural s website) ad []. The cigarette dataset cotais measuremets of tar, icotie, weight ad carbo
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science. BACKGROUND EXAM September 30, 2004.
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Departmet of Electrical Egieerig ad Computer Sciece 6.34 Discrete Time Sigal Processig Fall 24 BACKGROUND EXAM September 3, 24. Full Name: Note: This exam is closed
More information1 Review of Probability & Statistics
1 Review of Probability & Statistics a. I a group of 000 people, it has bee reported that there are: 61 smokers 670 over 5 960 people who imbibe (drik alcohol) 86 smokers who imbibe 90 imbibers over 5
More informationAssumptions. Motivation. Linear Transforms. Standard measures. Correlation. Cofactor. γ k
Outlie Pricipal Compoet Aalysis Yaju Ya Itroductio of PCA Mathematical basis Calculatio of PCA Applicatios //04 ELE79, Sprig 004 What is PCA? Pricipal Compoets Pricipal Compoet Aalysis, origially developed
More informationTAMS24: Notations and Formulas
TAMS4: Notatios ad Formulas Basic otatios ad defiitios X: radom variable stokastiska variabel Mea Vätevärde: µ = X = by Xiagfeg Yag kpx k, if X is discrete, xf Xxdx, if X is cotiuous Variace Varias: =
More informationLecture 15: Learning Theory: Concentration Inequalities
STAT 425: Itroductio to Noparametric Statistics Witer 208 Lecture 5: Learig Theory: Cocetratio Iequalities Istructor: Ye-Chi Che 5. Itroductio Recall that i the lecture o classificatio, we have see that
More informationBHW #13 1/ Cooper. ENGR 323 Probabilistic Analysis Beautiful Homework # 13
BHW # /5 ENGR Probabilistic Aalysis Beautiful Homework # Three differet roads feed ito a particular freeway etrace. Suppose that durig a fixed time period, the umber of cars comig from each road oto the
More informationNumber of fatalities X Sunday 4 Monday 6 Tuesday 2 Wednesday 0 Thursday 3 Friday 5 Saturday 8 Total 28. Day
LECTURE # 8 Mea Deviatio, Stadard Deviatio ad Variace & Coefficiet of variatio Mea Deviatio Stadard Deviatio ad Variace Coefficiet of variatio First, we will discuss it for the case of raw data, ad the
More informationQuestions and answers, kernel part
Questios ad aswers, kerel part October 8, 205 Questios. Questio : properties of kerels, PCA, represeter theorem. [2 poits] Let F be a RK defied o some domai X, with feature map φ(x) x X ad reproducig kerel
More informationPAPER : IIT-JAM 2010
MATHEMATICS-MA (CODE A) Q.-Q.5: Oly oe optio is correct for each questio. Each questio carries (+6) marks for correct aswer ad ( ) marks for icorrect aswer.. Which of the followig coditios does NOT esure
More informationStat 139 Homework 7 Solutions, Fall 2015
Stat 139 Homework 7 Solutios, Fall 2015 Problem 1. I class we leared that the classical simple liear regressio model assumes the followig distributio of resposes: Y i = β 0 + β 1 X i + ɛ i, i = 1,...,,
More informationECON 3150/4150, Spring term Lecture 3
Itroductio Fidig the best fit by regressio Residuals ad R-sq Regressio ad causality Summary ad ext step ECON 3150/4150, Sprig term 2014. Lecture 3 Ragar Nymoe Uiversity of Oslo 21 Jauary 2014 1 / 30 Itroductio
More informationChapter 6 Principles of Data Reduction
Chapter 6 for BST 695: Special Topics i Statistical Theory. Kui Zhag, 0 Chapter 6 Priciples of Data Reductio Sectio 6. Itroductio Goal: To summarize or reduce the data X, X,, X to get iformatio about a
More informationChapter 7. Support Vector Machine
Chapter 7 Support Vector Machie able of Cotet Margi ad support vectors SVM formulatio Slack variables ad hige loss SVM for multiple class SVM ith Kerels Relevace Vector Machie Support Vector Machie (SVM)
More informationLecture 4 Conformal Mapping and Green s Theorem. 1. Let s try to solve the following problem by separation of variables
Lecture 4 Coformal Mappig ad Gree s Theorem Today s topics. Solvig electrostatic problems cotiued. Why separatio of variables does t always work 3. Coformal mappig 4. Gree s theorem The failure of separatio
More informationACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER / Statistics
ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER 1 018/019 DR. ANTHONY BROWN 8. Statistics 8.1. Measures of Cetre: Mea, Media ad Mode. If we have a series of umbers the
More informationFinally, we show how to determine the moments of an impulse response based on the example of the dispersion model.
5.3 Determiatio of Momets Fially, we show how to determie the momets of a impulse respose based o the example of the dispersio model. For the dispersio model we have that E θ (θ ) curve is give by eq (4).
More informationClassification with linear models
Lecture 8 Classificatio with liear models Milos Hauskrecht milos@cs.pitt.edu 539 Seott Square Geerative approach to classificatio Idea:. Represet ad lear the distributio, ). Use it to defie probabilistic
More informationSTATS 306B: Unsupervised Learning Spring Lecture 8 April 23
STATS 306B: Usupervised Learig Sprig 2014 Lecture 8 April 23 Lecturer: Lester Mackey Scribe: Kexi Nie, Na Bi 8.1 Pricipal Compoet Aalysis Last time we itroduced the mathematical framework uderlyig Pricipal
More informationIntroduction to Artificial Intelligence CAP 4601 Summer 2013 Midterm Exam
Itroductio to Artificial Itelligece CAP 601 Summer 013 Midterm Exam 1. Termiology (7 Poits). Give the followig task eviromets, eter their properties/characteristics. The properties/characteristics of the
More information