CSCE 478/878 Lecture 5: Artificial Neural Networks and Support Vector Machines
Stephen Scott (sscott@cse.unl.edu)


Introduction

(Adapted from Ethem Alpaydin and Tom Mitchell)

Consider humans:
- Total number of neurons: ~10^10
- Neuron switching time: ~10^-3 second (vs ~10^-10 for silicon)
- Connections per neuron: ~10^4 to 10^5
- Scene recognition time: ~0.1 second, so only about 100 inference steps; that doesn't seem like enough, which implies much parallel computation

Properties of artificial neural nets (ANNs):
- Many neuron-like threshold switching units
- Many weighted interconnections among units
- Highly parallel, distributed process
- Emphasis on tuning weights automatically

Note the strong differences between ANNs for machine learning and ANNs for biological modeling.

When to Consider ANNs
- Input is high-dimensional discrete- or real-valued (e.g., raw sensor input)
- Output is discrete- or real-valued
- Output is a vector of values
- Possibly noisy data
- Form of the target function is unknown
- Human readability of the result is unimportant
- Long training times are acceptable

Outline
- Linear threshold units and the Perceptron algorithm
- Gradient descent
- Multilayer networks
- Backpropagation

Linear Threshold Units and the Decision Surface

A perceptron computes

    y = o(x_1, ..., x_n) = +1 if w_0 + w_1 x_1 + ... + w_n x_n > 0, and -1 otherwise

(sometimes 0 is used instead of -1). Folding the bias w_0 into the weight vector (with x_0 = 1), we will sometimes use the simpler vector notation

    y = o(x) = +1 if w . x > 0, and -1 otherwise.

[Figure: two decision surfaces, (a) a linearly separable set of examples and (b) a non-separable one; omitted.]

A single unit represents some useful functions. For example, what weights represent g(x_1, x_2) = AND(x_1, x_2)? But some functions are not representable, i.e., those that are not linearly separable. Therefore, we'll want networks of neurons.
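To make the threshold-unit definition concrete, here is a minimal sketch in Python (NumPy assumed; the function name `ltu` and the AND weights are choices made here, answering the slide's question, not values given in the lecture):

```python
import numpy as np

def ltu(w, b, x):
    """Linear threshold unit: +1 if w . x + b > 0, else -1."""
    return 1 if np.dot(w, x) + b > 0 else -1

# AND(x1, x2) over {0, 1} inputs: weights (1, 1) with bias -1.5 work,
# since only x = (1, 1) gives w . x + b = 0.5 > 0.
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, ltu(w, b, np.array(x)))
```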

Perceptron Training

The perceptron training rule: w_i <- w_i + Delta w_i, where

    Delta w_i = eta (r_t - y_t) x_{t,i}

- r_t is the label of training instance t
- y_t is the perceptron output on training instance t
- eta is a small constant (e.g., 0.1) called the learning rate

I.e., if (r_t - y_t) > 0 then increase w_i w.r.t. x_{t,i}, else decrease it. One can prove the rule will converge if the training data are linearly separable and eta is sufficiently small.

Where Does the Training Rule Come From?

Consider a simpler linear unit, with output y_t = w_0 + w_1 x_{t,1} + ... + w_n x_{t,n} (i.e., no threshold). For each example, we want to compromise between correctiveness and conservativeness:
- Correctiveness: the tendency to improve on x_t (reduce the error)
- Conservativeness: the tendency to keep w^{t+1} close to w^t (minimize the distance between them)

Use a cost function that measures both, distance from the old weights plus error on the current example:

    U(w^{t+1}) = ||w^{t+1} - w^t||^2  (conservative term)
               + eta (r_t - w^{t+1} . x_t)^2  (corrective term, with coefficient eta)

Gradient Descent

[Figure: the error surface E(w) over weight space, with gradient-descent steps moving downhill toward the minimum; omitted.]

Gradient descent repeatedly updates w <- w - eta grad E(w), where grad E(w) = [dE/dw_0, ..., dE/dw_n].

Taking the gradient of U(w^{t+1}) with respect to w^{t+1} and setting it to 0 gives

    w_i^{t+1} = w_i^t + eta (r_t - w^{t+1} . x_t) x_{t,i}.

Approximating w^{t+1} by w^t on the right-hand side yields

    w_i^{t+1} = w_i^t + eta (r_t - y_t) x_{t,i},

which is exactly the training rule above.
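A minimal sketch of the training rule in Python, assuming labels in {-1, +1} and NumPy; `train_perceptron` and its defaults are names and values chosen here for illustration:

```python
import numpy as np

def train_perceptron(X, r, eta=0.1, epochs=50):
    """Perceptron training rule: w_i += eta * (r_t - y_t) * x_ti.

    X: (N, n) array of instances; r: (N,) labels in {-1, +1}.
    Converges (mistakes reach 0) if the data are linearly separable.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x_t, r_t in zip(X, r):
            y_t = 1 if np.dot(w, x_t) + b > 0 else -1
            if y_t != r_t:                    # update only on mistakes
                w += eta * (r_t - y_t) * x_t  # (r_t - y_t) is +/-2 here
                b += eta * (r_t - y_t)
                mistakes += 1
        if mistakes == 0:
            break
    return w, b
```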

Implementation

Can use the rules on the previous slides on an example-by-example basis; this is sometimes called incremental, stochastic, or on-line gradient descent. It has a tendency to jump around more in its search, which helps avoid getting trapped in local minima.

Alternatively, can use standard or batch gradient descent, in which the classifier is evaluated over all training examples, summing the error, and then the updates are made. I.e., sum up Delta w_i for all examples, but don't update w_i until the summation is complete. This is an inherent averaging process and tends to give a better estimate of the gradient.

Handling the XOR Problem

XOR is not linearly separable: with A = (0,0), B = (0,1), C = (1,0), D = (1,1), the positive examples are B and C and the negatives are A and D. Represent the target with the intersection of two linear separators:

    g_1(x) = x_1 + x_2 - 1/2
    g_2(x) = x_1 + x_2 - 3/2

Then x is in R_1 (positive) iff g_1(x) > 0 AND g_2(x) < 0, and x is in R_2 (negative) iff g_1(x), g_2(x) < 0 OR g_1(x), g_2(x) > 0.

Let z_i = 1 if g_i(x) > 0, and 0 otherwise:

    Class  (x_1, x_2)   g_1(x)   z_1   g_2(x)   z_2
    B:     (0, 1)        1/2      1    -1/2      0
    C:     (1, 0)        1/2      1    -1/2      0
    A:     (0, 0)       -1/2      0    -3/2      0
    D:     (1, 1)        3/2      1     1/2      1

Now feed z_1, z_2 into g(z) = z_1 - z_2 - 1/2. In the new space A maps to (0,0), D to (1,1), and B and C both map to (1,0), so g(z) separates the classes. In other words, we remapped all vectors x to z such that the classes are linearly separable in the new vector space.

[Figure: a two-layer network computing XOR; reconstructed from the formulas above: input layer x_1, x_2; hidden units 3 and 4 with weights w_31 = w_32 = w_41 = w_42 = 1 and thresholds 1/2 and 3/2; output unit 5 with weights w_53 = 1, w_54 = -1 and threshold 1/2.]

This is a two-layer perceptron, or two-layer feedforward neural network. Each neuron outputs 1 if its weighted sum exceeds its threshold, and 0 otherwise. A sketch of this hand-built network appears below.

Handling the General Problem

By adding up to two hidden layers of perceptrons, we can represent any union of intersections of halfspaces: the first hidden layer defines halfspaces, the second hidden layer takes intersections (AND), and the output layer takes the union (OR).

The Sigmoid Unit

Instead of a hard threshold, a sigmoid unit computes net = sum_i w_i x_i and outputs

    o = sigma(net) = 1 / (1 + e^{-net}),

where sigma is the logistic function. It squashes net into the [0, 1] range and has the nice property

    d sigma(x) / dx = sigma(x) (1 - sigma(x)).

It is a continuous, differentiable approximation to a threshold.
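A sketch of the hand-built two-layer XOR network just described, with the thresholds taken from g_1, g_2, and g above (NumPy assumed; function names are chosen here):

```python
import numpy as np

def step(v):
    """Hard threshold: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1.0 if v > 0 else 0.0

def xor_net(x1, x2):
    x = np.array([x1, x2])
    z3 = step(np.dot([1.0, 1.0], x) - 0.5)  # hidden unit 3: g1(x) = x1 + x2 - 1/2
    z4 = step(np.dot([1.0, 1.0], x) - 1.5)  # hidden unit 4: g2(x) = x1 + x2 - 3/2
    return step(z3 - z4 - 0.5)              # output unit 5: g(z) = z3 - z4 - 1/2

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, int(xor_net(*p)))              # prints 0, 1, 1, 0
```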

Gradient Descent for the Sigmoid Unit

Again, use squared error for correctiveness:

    E(w) = (1/2) (r_t - y_t)^2

(folding the 1/2 of correctiveness into the error). Then

    dE/dw_i = -(r_t - y_t) dy_t/dw_i.

Since y_t is a function of net_t = w . x_t,

    dy_t/dw_i = (dy_t/dnet_t)(dnet_t/dw_i) = y_t (1 - y_t) x_{t,i},

so the update is

    w_i^{t+1} = w_i^t + eta (r_t - y_t) y_t (1 - y_t) x_{t,i}.

Multilayer Networks

[Figure: a feedforward network with input layer x_1, ..., x_n; hidden units n+1 and n+2 with weights w_{n+1,i}, w_{n+2,i}; and output units n+3 and n+4 producing y_{n+3}, y_{n+4}. Notation: x_{ji} = input from unit i to unit j, w_{ji} = weight from unit i to unit j.]

Use sigmoid units, since they are continuous and differentiable. The error sums over all output units:

    E = sum_t E(w^t) = (1/2) sum_t sum_{k in outputs} (r_{t,k} - y_{t,k})^2.

Training Output Units

Adjust each weight w_{ji} according to dE/dw_{ji} as before. When j is an output unit, this is easy, since the contribution of w_{ji} is the same as for a single neuron:

    Delta w_{ji} = eta (r_j - y_j) y_j (1 - y_j) x_{ji} = eta delta_j x_{ji},

where delta_j is the error term of unit j. This is because, with respect to w_{ji}, all other outputs are constants.

Training Hidden Units

How can we compute the error term for hidden layers, when there is no target output r for these layers? Instead, propagate error values back from the output layer toward the input layers, scaling with the weights. Scaling with the weights characterizes how much of the error term each hidden unit is responsible for. The impact that w_{ji} has on E is only through net_j and the units immediately downstream of j:

    delta_j = y_j (1 - y_j) sum_{k in downstream(j)} delta_k w_{kj},   Delta w_{ji} = eta delta_j x_{ji}.

This works for an arbitrary number of hidden layers.
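A minimal sketch of one stochastic gradient step for a single sigmoid unit, directly transcribing the update rule above (NumPy assumed; `sgd_step` is a name chosen here):

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def sgd_step(w, x_t, r_t, eta=0.5):
    """One step of w_i += eta * (r_t - y_t) * y_t * (1 - y_t) * x_ti."""
    y_t = sigmoid(np.dot(w, x_t))
    return w + eta * (r_t - y_t) * y_t * (1.0 - y_t) * x_t
```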

The Backpropagation Algorithm

Initialize all weights to small random numbers. Until the termination condition is satisfied, do:

    For each training example (r_t, x_t):
    1. Input x_t to the network and compute the outputs y_t.
    2. For each output unit k: delta_k <- y_k (1 - y_k)(r_k - y_k).
    3. For each hidden unit h: delta_h <- y_h (1 - y_h) sum_{k in downstream(h)} w_{kh} delta_k.
    4. Update each network weight: w_{ji} <- w_{ji} + Delta w_{ji}, where Delta w_{ji} = eta delta_j x_{ji}.

[Worked numeric example omitted: the original table of intermediate values (eta, trial inputs a and b, weights, weighted sums, outputs y_c and y_d) did not survive extraction.]

Backpropagation Remarks

- When to stop training? When the weights don't change much, the error rate is sufficiently low, etc. (Be aware of overfitting: use a validation set.)
- Cannot ensure convergence to a global minimum due to myriad local minima, but it tends to work well in practice (can re-run with new random weights).
- Generally training is very slow (thousands of iterations), but use is very fast.
- Setting eta: small values slow convergence, large values might overshoot the minimum; eta can be adapted over time.

[Figure: two plots of error versus number of weight updates, each showing training-set error falling steadily while validation-set error behaves differently; in the second example the validation error rises and later falls again, illustrating the danger of stopping too soon!]

- Alternative error function, cross entropy:

      E = - sum_t sum_{k in outputs} [ r_{t,k} ln y_{t,k} + (1 - r_{t,k}) ln (1 - y_{t,k}) ]

  This blows up if r_{t,k} is near 1 and y_{t,k} near 0, or vice versa (vs squared error, which is always in [0, 1] per term).
- Regularization: penalize large weights to make the space more linear and reduce the risk of overfitting:

      E = (1/2) sum_t sum_{k in outputs} (r_{t,k} - y_{t,k})^2 + gamma sum_{j,i} w_{ji}^2

- Representational power:
  - Any boolean function can be represented with 2 layers.
  - Any bounded, continuous function can be represented with arbitrarily small error with 2 layers.
  - Any function can be represented with arbitrarily small error with 3 layers.
  The number of required units may be large, and we may not be able to find the right weights.
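A compact sketch of the algorithm for one hidden layer of sigmoid units, trained on XOR (NumPy assumed; the function name, layer sizes, and hyperparameters are illustrative choices, and, as the remarks note, a bad random seed can land in a local minimum and warrant a re-run):

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def train_backprop(X, R, n_hidden=2, eta=0.5, epochs=10000, seed=0):
    """Stochastic backpropagation, one hidden layer of sigmoid units.
    X: (N, n) inputs; R: (N,) targets in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, n)); b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.5, 0.5, n_hidden);      b2 = 0.0
    for _ in range(epochs):
        for x, r in zip(X, R):
            h = sigmoid(W1 @ x + b1)                 # step 1: forward pass
            y = sigmoid(W2 @ h + b2)
            delta_out = y * (1 - y) * (r - y)        # step 2: output error term
            delta_h = h * (1 - h) * W2 * delta_out   # step 3: hidden error terms
            W2 += eta * delta_out * h; b2 += eta * delta_out   # step 4: updates
            W1 += eta * np.outer(delta_h, x); b1 += eta * delta_h
    return W1, b1, W2, b2

# XOR: not linearly separable, but learnable with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
R = np.array([0, 1, 1, 0], dtype=float)
W1, b1, W2, b2 = train_backprop(X, R)
print([round(float(sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)), 2) for x in X])
```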

Recurrent Neural Networks

Recurrent networks (RNNs) are used to handle time-series data, where the label of the current example depends on past ones.

[Figure: (a) a feedforward network mapping x(t) to y(t+1); (b) a recurrent network with a context unit c(t) feeding the hidden layer; (c) the recurrent network unfolded in time over x(t), x(t-1), x(t-2), ...]

Training recurrent NNs: unroll the recurrence through time and run backprop, training as one large network using sequences of examples; then average the weights together.

Support Vector Machines

Hypothesis space: for ANNs, the hypothesis space H is the set of all weight vectors (continuous, versus the discrete hypothesis space of decision trees). Search via gradient descent is possible because the error function and output functions are continuous and differentiable. Inductive bias: (roughly) smooth interpolation between data points.

The SVM is similar to ANNs, polynomial classifiers, and RBF networks in that it remaps the inputs and then finds a hyperplane; the main difference is how it works. Features of SVMs:
- Maximization of the margin
- Use of kernels
- Use of problem convexity to find the classifier (often without local minima)

Margin

A hyperplane's margin gamma is the shortest distance from it to any training vector. The support vectors (those with minimum margin) uniquely define the hyperplane; the other points are not needed. Intuition: a larger margin implies higher confidence in the classifier's ability to generalize, and there is a guaranteed generalization error bound in terms of 1/gamma^2 (under appropriate assumptions). This definition assumes linear separability (more general definitions exist that do not).

The Perceptron Algorithm Revisited

Given N training vectors x_i with labels r_i in {-1, +1}:

    w_0 <- 0; b_0 <- 0; m <- 0
    While mistakes are made on the training set:
        For i = 1 to N:
            If r_i (w_m . x_i + b_m) <= 0:
                w_{m+1} <- w_m + eta r_i x_i
                b_{m+1} <- b_m + eta r_i
                m <- m + 1
    Final predictor: h(x) = sign(w_m . x + b_m)
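A minimal sketch of the unrolling idea, forward pass only (NumPy assumed; the architecture, names, and dimensions here are illustrative, not from the lecture):

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def unrolled_forward(xs, W_in, W_ctx, W_out):
    """Run a recurrent net over a sequence by unrolling it in time: the
    context c(t) is the previous hidden state, so the unrolled net is one
    large feedforward net sharing the same weights at every time step."""
    c = np.zeros(W_ctx.shape[0])           # context starts at zero
    ys = []
    for x in xs:                           # one "copy" of the net per step
        c = sigmoid(W_in @ x + W_ctx @ c)  # hidden/context state
        ys.append(sigmoid(W_out @ c))      # y(t + 1)
    return ys  # backprop then runs over the whole unrolled graph (BPTT)
```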

The Perceptron Algorithm Revisited (partial example, eta = 1)

[Figure: a worked run of the perceptron on six training vectors; the specific coordinates and intermediate weight values did not survive extraction.] At any point in the run, the weight vector can be computed directly from the mistake counts: if alpha_i is the number of prediction mistakes made on x_i, then each component is w_j = eta sum_{i=1}^N alpha_i r_i x_{i,j}, i.e.,

    w = eta sum_{i=1}^N alpha_i r_i x_i.

The Dual Form

This gives another way of representing the predictor:

    h(x) = sign(w . x + b)
         = sign( ( sum_{i=1}^N alpha_i r_i x_i ) . x + b )
         = sign( sum_{i=1}^N alpha_i r_i (x_i . x) + b )

(alpha_i = number of prediction mistakes on x_i). So the perceptron algorithm has an equivalent dual form:

    alpha <- 0; b <- 0
    While mistakes are made in the For loop:
        For i = 1 to N:
            If r_i ( sum_{j=1}^N alpha_j r_j (x_j . x_i) + b ) <= 0:
                alpha_i <- alpha_i + 1
                b <- b + eta r_i

We have replaced the weight vector with the data in dot products. So what?

XOR Revisited

Using inputs in {-1, +1}: A = (-1, -1), B = (-1, +1), C = (+1, -1), D = (+1, +1), with B and C positive and A and D negative. Remap each x to a new space with

    Phi(x_1, x_2) = (x_1^2, x_2^2, sqrt(2) x_1 x_2, sqrt(2) x_1, sqrt(2) x_2, 1).

Now consider only the third and fourth dimensions of the remapped vector (rescaling the sqrt(2) factors): A maps to (+1, -1), D to (+1, +1), B to (-1, -1), and C to (-1, +1). The x_1 x_2 coordinate alone separates the classes, so they are linearly separable in the remapped space.
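A minimal sketch of the dual-form algorithm (NumPy assumed; `dual_perceptron` and its defaults are names chosen here). The key design point, used on the next page, is that the data enter only through dot products, so a kernel K(x_j, x_i) can be substituted for the Gram matrix:

```python
import numpy as np

def dual_perceptron(X, r, eta=1.0, max_epochs=100):
    """Dual-form perceptron: alpha[i] counts the mistakes made on x_i."""
    N = len(X)
    alpha, b = np.zeros(N), 0.0
    G = X @ X.T                       # Gram matrix of dot products x_j . x_i
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(N):
            if r[i] * (np.sum(alpha * r * G[:, i]) + b) <= 0:
                alpha[i] += 1
                b += eta * r[i]
                mistakes += 1
        if mistakes == 0:             # converged on the training set
            break
    return alpha, b
```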

Kernels

We can easily compute the dot product Phi(x) . Phi(z) (where x = [x_1, x_2]) without first computing Phi:

    K(x, z) = (x . z + 1)^2
            = (x_1 z_1 + x_2 z_2 + 1)^2
            = (x_1 z_1)^2 + (x_2 z_2)^2 + 2 x_1 z_1 x_2 z_2 + 2 x_1 z_1 + 2 x_2 z_2 + 1
            = (x_1^2, x_2^2, sqrt(2) x_1 x_2, sqrt(2) x_1, sqrt(2) x_2, 1)
              . (z_1^2, z_2^2, sqrt(2) z_1 z_2, sqrt(2) z_1, sqrt(2) z_2, 1)
            = Phi(x) . Phi(z).

A kernel is a function K such that for all x, z, K(x, z) = Phi(x) . Phi(z); e.g., the previous slide's quadratic kernel. In general, for the degree-q polynomial kernel, computing (x . z + 1)^q takes ell multiplications plus one exponentiation for x, z in R^ell; in contrast, we would need over C(ell + q, q) multiplications if we computed Phi first. I.e., since we use only dot products in the new Perceptron algorithm, we can implicitly work in the remapped space via K.

Kernels (cont'd)

Typically we start with a kernel and take the feature mapping that it yields. E.g., let ell = 1, x = x_1, z = z_1, and K(x, z) = sin(x - z). By Fourier expansion,

    sin(x - z) = a_0 + sum_{n=1}^inf a_n sin(nx) sin(nz) + sum_{n=1}^inf a_n cos(nx) cos(nz)

for Fourier coefficients a_0, a_1, .... This is the dot product of two infinite sequences of nonlinear functions:

    Phi(x) = [1, sin(x), cos(x), sin(2x), cos(2x), ...].

I.e., there are an infinite number of features in this remapped space!

Common Kernels

- Polynomial: K(x_t, x) = (x_t . x + 1)^q. [Figure: decision boundaries for several degrees q; omitted.]
- Gaussian: K(x_t, x) = exp( -||x_t - x||^2 / (2 s^2) ). [Figure: four panels (a)-(d) showing decision boundaries for different widths s; omitted.]
- Hyperbolic tangent (not a true kernel): K(x_t, x) = tanh(2 x_t . x + 1).

There are also kernels for structured data, e.g., graphs, trees, sequences, and sets of points. In addition, the sum of two kernels is a kernel, and the product of two kernels is a kernel. Finally, note that a kernel is a similarity measure, useful in clustering, nearest neighbor, etc.
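A quick numerical check of the quadratic-kernel identity above (NumPy assumed; the test points are arbitrary):

```python
import numpy as np

def phi(x):
    """Explicit quadratic feature map for x in R^2."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([x1**2, x2**2, s*x1*x2, s*x1, s*x2, 1.0])

def K(x, z):
    """Quadratic kernel: computes phi(x) . phi(z) implicitly."""
    return (np.dot(x, z) + 1.0) ** 2

x, z = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(K(x, z), np.dot(phi(x), phi(z)))  # both print 0.25
```

The kernel side costs a handful of multiplications regardless of how large the implicit feature space is, which is exactly the point made above about the degree-q case.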

Finding a Maximum-Margin Hyperplane

One can show that if the data are linearly separable in the remapped space, then we get the maximum-margin classifier by minimizing ||w||^2 / 2 subject to r_t (w . x_t + b) >= 1 for all t. This can be reformulated in dual form as a convex quadratic program that can be solved optimally, i.e., we won't encounter local optima:

    maximize    sum_{t=1}^N alpha_t - (1/2) sum_s sum_t alpha_s alpha_t r_s r_t K(x_s, x_t)
    subject to  alpha_t >= 0, t = 1, ..., N, and sum_{t=1}^N alpha_t r_t = 0.

After optimization, label new vectors with the decision function

    f(x) = sign( sum_{t=1}^N alpha_t r_t K(x_t, x) + b ).

(Note we only need to use the x_t with alpha_t > 0, i.e., the support vectors.) We can always find a kernel that will make the training set linearly separable, but beware of choosing a kernel that is too powerful (overfitting).

Soft Margins

If the kernel doesn't separate the data, we can soften the margin with slack variables xi_t:

    minimize (over w, b, xi)   ||w||^2 / 2 + C sum_{t=1}^N xi_t
    subject to  r_t ( w . Phi(x_t) + b ) >= 1 - xi_t  and  xi_t >= 0,  t = 1, ..., N.

The dual is similar to that for the hard margin:

    maximize    sum_t alpha_t - (1/2) sum_s sum_t alpha_s alpha_t r_s r_t K(x_s, x_t)
    subject to  sum_{t=1}^N alpha_t r_t = 0  and  0 <= alpha_t <= C,  t = 1, ..., N.

We can still solve this optimally.

If the number of training vectors is very large, we may opt to solve these problems approximately to save time and space, using e.g. gradient ascent or sequential minimal optimization (SMO). When done, we can throw out the non-support-vectors.
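A minimal sketch of this pipeline using scikit-learn's SVC, which solves the soft-margin dual with an SMO-style algorithm (assumes scikit-learn is installed; the XOR data and the hyperparameters are illustrative choices, with gamma=1.0 and coef0=1.0 making the polynomial kernel exactly (x . z + 1)^2):

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: not linearly separable in the input space, but
# separable under the quadratic kernel from the earlier slides.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
r = np.array([-1, 1, 1, -1])

clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=10.0)
clf.fit(X, r)
print(clf.predict(X))        # should recover the labels
print(clf.support_vectors_)  # only these points define the classifier
```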
