Motion Perception Under Uncertainty. Hongjing Lu, Department of Psychology, University of Hong Kong


1 Motion Perception Under Uncertainty. Hongjing Lu, Department of Psychology, University of Hong Kong

2 Outline
- Uncertainty in the motion stimulus
  - The correspondence problem
- Qualitative fitting using ideal observer models
  - Based on signal detection theory
  - Based on the Bayesian framework
- Quantitative fitting using Bayesian models with priors
  - Bayesian model with generic priors
  - Expectation-Maximization (EM) implementation

3 Perceive Motion from Two Frames. Frame 1

4 Perceive Motion from Two Frames. Frame 2

5 Correspondence Problem. Dot positions in the first frame; dot positions in the second frame. Stimulus generation: 4 dots move coherently. Correspondence uncertainty.

6 Random Dot Kinematograms (RDK). Common stimuli used in psychophysics and physiology; see Barlow et al. (1979, 1997); Britten et al. (1992, 1995); Newsome et al. (1989, 1990), etc.

7 Motion Detection Task. Trial 1, Frame 1

8 Motion Detection Task. Trial 1, Frame 2. Do you perceive coherent motion?

9 Motion Detection Task. Trial 2, Frame 1

10 Motion Detection Task. Trial 2, Frame 2. Do you perceive coherent motion?

11 Motion Detection Task. Trial 1; Trial 2. Do you perceive coherent motion?

12 Random Dot Kinematograms (RDK). Common stimuli used in psychophysics and physiology; see Barlow et al. (1979, 1997); Britten et al. (1992, 1995); Newsome et al. (1989, 1990), etc. Examples: N=20, C=1.0; N=20, C=0.5; N=20, C=0.1. N: number of dots. C: coherence ratio, the proportion of dots that move with the same velocity.

13 Psychophysical Experiments. Manipulate the visual stimuli by changing: dot number N; dot displacement between the two frames; number of frames. Human performance measures: accuracy (high accuracy indicates good performance); coherence threshold, the coherence ratio needed to achieve 75% accuracy in a specific task (a low coherence threshold indicates good performance).
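As an aside (not from the original slides), here is a minimal sketch of how a 75%-accuracy coherence threshold can be read off measured accuracies by linear interpolation of the psychometric function; the function name and data points below are hypothetical:

```python
import numpy as np

def coherence_threshold(coherences, accuracies, criterion=0.75):
    """Coherence ratio at which accuracy crosses `criterion`, by linear
    interpolation of the measured psychometric function."""
    c = np.asarray(coherences, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    order = np.argsort(acc)  # np.interp needs increasing x-values
    return float(np.interp(criterion, acc[order], c[order]))

# Hypothetical accuracies at four coherence levels; threshold is ~0.34.
print(coherence_threshold([0.1, 0.2, 0.4, 0.8], [0.55, 0.62, 0.80, 0.95]))
```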

14 Correspondence Pairs. Two dots: N=2, Ψ=4. Three dots: N=3, Ψ=9. Number of possible correspondence pairs: $\Psi = N^2$.

15 Coherence Threshold. Since the number of possible correspondence pairs is $\Psi = N^2$, the difficulty of coherent motion perception would be expected to increase with the number of dots in the RDK, i.e. the coherence threshold would be expected to increase with dot number. [Figure: Ψ as a function of dot number]

16 Human Coherence Threshold. Intriguing psychophysical finding (Barlow & Tripathy, 1997): the human coherence threshold is almost constant with the number of dots in the RDK.

17 Outline
- Uncertainty in the motion stimulus
  - The correspondence problem
- Qualitative fitting using ideal observer models
  - Based on signal detection theory
  - Based on the Bayesian framework

18 Ideal Observer for Detecting Coherent Motion. An ideal observer model provides optimal performance as a benchmark for evaluating how well humans process motion information in a specific task. Ideal observer using signal detection theory (SDT) (Barlow & Tripathy, 1997). The ideal observer model uses the experimenter's prior, i.e. the proportion of dots that move coherently, C, and the velocity of coherent motion, T.

19 Ideal Observer Using SDT. The detection decision is based on the number of matches, ψ, i.e. the number of dots in the first frame that have a corresponding dot displaced by T in the second frame.

20 Ideal Observer Using SDT. Observation: some number of matches. Two possibilities: random motion, in which the matches are due to chance alignment of randomly moving dots; or coherent motion, in which the matches come from the coherent motion of the signal dots.

21 Ideal Observer Using SDT.
Random motion stimulus: with N dots in the display, the probability that a dot in the first frame has a dot at the corresponding position in the second frame is N/Q, where Q is the total number of possible positions in the display, so
$\psi_0 \sim \mathrm{Binomial}(N,\, N/Q)$
Motion stimulus with CN dots moving coherently:
$\psi_C \sim CN + \mathrm{Binomial}\big((1-C)N,\, N/Q\big)$
$\langle \psi_0 \rangle = \frac{N^2}{Q}, \qquad \mathrm{Var}(\psi_0) = \frac{N^2}{Q}\Big(1-\frac{N}{Q}\Big)$
$\langle \psi_C \rangle = CN + (1-C)\frac{N^2}{Q}$; assume $\mathrm{Var}(\psi_C) \approx \mathrm{Var}(\psi_0)$

22 Ideal Observer Using SDT (Barlow & Tripathy, 1997). Compute sensitivity and coherence threshold using SDT.
Sensitivity:
$d' = \frac{\langle \psi_C \rangle - \langle \psi_0 \rangle}{\sqrt{\mathrm{Var}(\psi_0)}} = C\sqrt{Q}\,\sqrt{1 - N/Q} \approx C\sqrt{Q} \quad (Q \gg N)$
Coherence threshold (the coherence at which $d' = 1$):
$C_{\mathrm{th}} = \frac{1}{\sqrt{Q}\,\sqrt{1 - N/Q}} \approx \frac{1}{\sqrt{Q}} \quad (Q \gg N)$
The ideal observer thus predicts that the coherence threshold is constant with the number of dots in the RDK.
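A quick numerical check (not part of the slides) of this SDT prediction: simulate ψ₀ and ψ_C under the binomial model of slide 21 and verify that at C = 1/√Q the sensitivity stays near d′ = 1 regardless of N; the values of N and Q are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_matches(N, Q, C, trials=100_000):
    """Sample the number of matches for random and coherent stimuli."""
    psi_0 = rng.binomial(N, N / Q, size=trials)               # random motion
    signal = int(round(C * N))                                # CN signal dots
    psi_C = signal + rng.binomial(N - signal, N / Q, size=trials)
    return psi_0, psi_C

Q = 10_000
C = 1 / np.sqrt(Q)  # predicted threshold coherence (1%), independent of N
for N in (100, 300, 1000):
    psi_0, psi_C = simulate_matches(N, Q, C)
    d_prime = (psi_C.mean() - psi_0.mean()) / psi_0.std()
    print(f"N={N:5d}  d' = {d_prime:.2f}")  # stays near 1 for every N
```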

23 Bayesian Ideal Observer (Lu & Yuille, 2005).
Define correspondence variables $V_{ia}$:
$V_{ia} = 1$ if $y_a = x_i + T$; $V_{ia} = 0$ otherwise.
Generative model for the RDK stimulus.
Random motion stimulus:
$P(\{y_a\}, \{x_i\} \mid \mathrm{rand}) = P(\{y_a\})\, P(\{x_i\})$
Motion stimulus with CN dots moving coherently:
$P(\{y_a\}, \{x_i\} \mid \mathrm{coh}, T) = \sum_{\{V_{ia}\}} P(\{y_a\} \mid \{x_i\}, \{V_{ia}\}, T)\, P(\{V_{ia}\})\, P(\{x_i\})$
with the constraint $\sum_{ia} V_{ia} = CN$.

24 Bayesian Ideal Observer. Generative model for the RDK stimulus, $P(\{y_a\}, \{x_i\} \mid \mathrm{coh}, T)$.
The prior distribution over dot positions makes all configurations of the dots equally likely:
$P(\{y_a\}) = P(\{x_i\}) = \frac{(Q-N)!}{Q!} \approx \frac{1}{Q^N}$ if $Q \gg N$
The prior distribution over the correspondence variables is uniform over the choices of signal dots:
$P(\{V_{ia}\}) = \frac{(CN)!\,(N-CN)!}{N!}$
Conditional distribution given the dot positions in the first frame:
$P(\{y_a\} \mid \{x_i\}, \{V_{ia}\}, T) = \frac{(Q-N)!}{(Q-CN)!} \prod_{ia} \big(\delta(y_a,\, x_i + T)\big)^{V_{ia}}$

25 Bayesian Ideal Observer. Substituting these distributions into the generative model and summing over all correspondence configurations $\{V_{ia}\}$ consistent with the data (there are $\binom{\psi}{CN}$ ways to assign the CN signal dots to the ψ matched dots) yields a likelihood ratio that depends only on the number of matches ψ:
$\frac{P(\mathrm{Data} \mid \mathrm{coh}, T)}{P(\mathrm{Data} \mid \mathrm{rand})} = \frac{\psi!\,(N-CN)!\,Q!}{(\psi-CN)!\,N!\,(Q-CN)!}$
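A small sketch (again not from the slides) of the resulting decision rule, computed with log-factorials for numerical stability; it assumes the reconstruction of the likelihood ratio given above:

```python
from math import lgamma

def log_factorial(n: int) -> float:
    return lgamma(n + 1)

def log_likelihood_ratio(psi: int, N: int, C: float, Q: int) -> float:
    """log P(Data | coh, T) - log P(Data | rand), a function of the
    number of matches psi only."""
    CN = int(round(C * N))
    if psi < CN:
        return float("-inf")  # fewer matches than signal dots: impossible
    return (log_factorial(psi) - log_factorial(psi - CN)
            + log_factorial(N - CN) - log_factorial(N)
            + log_factorial(Q) - log_factorial(Q - CN))

# Report "coherent" when the log likelihood ratio exceeds 0.
print(log_likelihood_ratio(psi=12, N=100, C=0.1, Q=10_000) > 0)  # True
```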

26 Ideal Observer. The ideal observer based on signal detection theory is an approximation to the Bayesian ideal observer.

27 Human Observer vs. Bayesian Ideal Observer Model. The ideal observer model predicts the qualitative trend of human performance, but human efficiency is remarkably low, 0.3~1%. Human coherence threshold: 30%~50%. Ideal coherence threshold: 0.5%~1%.

28 Why Low Efficiency? The ideal observer model uses a prior specific to the experiment: the ideal observer knows the proportion of dots that move coherently, C, and the velocity of coherent motion, T.
$P(v \mid \{x_i\}, \{y_a\}) = \frac{1}{Z} P(\{y_a\} \mid \{x_i\}, v)\, P(v)$
The ideal observer model uses specific prior knowledge about the RDK stimulus used in the experiment.

29 Why Low Efficiency? The ideal observer model uses a prior specific to the experiment. Human observers might instead use some generic prior consistent with motion statistics in natural scenes.
$P(v \mid \{x_i\}, \{y_a\}) = \frac{1}{Z} P(\{y_a\} \mid \{x_i\}, v)\, P(v)$
Ideal observer model: specific prior. Human observers: possibly a generic prior.

30 Outline
- Uncertainty in the motion stimulus
  - The correspondence problem
- Qualitative fitting using ideal observer models
  - Based on signal detection theory
  - Based on the Bayesian framework
- Quantitative fitting using Bayesian models with priors
  - Bayesian model with generic priors
  - Expectation-Maximization (EM) implementation

31 Generic Prior: Slow & Smooth. Prefer slow and spatially smooth motion. Slowness: most objects are static. Smoothness: the features of an object tend to move together. Natural statistics of motion flow (Roth & Black, 2005). [Figure: log-histograms of horizontal velocity u and vertical velocity v]

32 Generic Prior: Slow & Smooth. Prefer slow and spatially smooth motion (Yuille & Grzywacz, 1988):
$P(v) \propto e^{-\lambda \|Lv\|^2}, \qquad \|Lv\|^2 = \int \sum_{m=0}^{\infty} \frac{\sigma^{2m}}{m!\,2^m}\, \big(D^m v(x)\big)^2\, dx$
Slowness: $v = 0$. First-order smoothness: $\partial v / \partial x = 0$. Second-order smoothness: $\partial^2 v / \partial x^2 = 0$. [Figure: examples of preferred vs. less preferred velocity fields]
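To make the preference concrete, here is a toy sketch (not from the slides) that evaluates a truncated version of this energy for sampled 1-D velocity fields: a static field scores lowest, a slow uniform translation scores low, and a rapidly varying field scores high. The finite-difference discretization and parameter values are assumptions:

```python
import numpy as np
from math import factorial

def prior_energy(v, dx=0.1, sigma=1.0, orders=3):
    """Truncated Slow & Smooth energy: sum_m sigma^(2m)/(m! 2^m) (D^m v)^2,
    approximated with finite differences on a sampled 1-D field."""
    total, d = 0.0, np.asarray(v, dtype=float)
    for m in range(orders):
        weight = sigma ** (2 * m) / (factorial(m) * 2 ** m)
        total += weight * np.sum(d ** 2) * dx
        d = np.diff(d) / dx  # next-order derivative by finite differences
    return total

x = np.linspace(0, 10, 101)
print(prior_energy(np.zeros_like(x)))       # static field: energy 0
print(prior_energy(0.5 * np.ones_like(x)))  # slow translation: small
print(prior_energy(np.sin(3 * x)))          # fast-varying field: much larger
```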

33 Bayesian Model with Priors. The ideal observer model uses the experimenter's prior, i.e. the proportion of dots that move coherently, C, and the velocity of coherent motion, T. The Slow & Smooth model estimates the velocity field without requiring any knowledge about the stimulus parameters.

34 Notation in the Slow & Smooth Model.
Binary correspondence variable $M_{ia}$: $M_{ia} = 1$ if $x_i$ corresponds to $y_a$; $M_{ia} = 0$ otherwise.
Binary outlier variable $M_{i0}$: $M_{i0} = 1$ if the i-th dot in the first frame is an outlier without a corresponding dot in the second frame; $M_{i0} = 0$ otherwise.
The constraint $\sum_{a=0}^{N} M_{ia} = 1$ ensures that each dot i in the first frame is either unmatched as an outlier or matched to exactly one dot a in the second frame.
Probabilistic correspondences: $m_{ia} = P(M_{ia} = 1)$ and $m_{i0} = P(M_{i0} = 1)$, with the constraint $\sum_{a=0}^{N} m_{ia} = 1$.

35 Bayesian Model with Slow & Smooth Prior.
Goal: maximize
$P(v, \{M_{ia}\} \mid \{x_i\}, \{y_a\}) = \frac{1}{Z} \exp\Big(-\Big[\sum_{i=1}^{N} \sum_{a=1}^{N} M_{ia}\,(y_a - x_i - v(x_i))^2 + \lambda \|Lv\|^2 + \xi \sum_{i=1}^{N} M_{i0}\Big] \big/\, T\Big)$
with the constraint $\sum_{a=0}^{N} M_{ia} = 1$.
Applying Bayes' rule:
$P(v, \{M_{ia}\} \mid \{x_i\}, \{y_a\}) = \frac{1}{Z} P(\{y_a\} \mid \{x_i\}, v, \{M_{ia}\})\, P(v)\, P(\{M_{ia}\})$
Likelihood: $e^{-\sum_{ia} M_{ia}(y_a - x_i - v(x_i))^2 / T}$. Slow & Smooth generic prior on the velocity field: $e^{-\lambda \|Lv\|^2 / T}$. Bias against mismatch: $e^{-\xi \sum_i M_{i0} / T}$.

36 Expectation-Maximization (EM) (Dempster, Laird, & Rubin, 1977). The EM algorithm is a general method for estimating the parameters of an underlying distribution from a given data set when the data are incomplete or have missing values. Two main applications of EM: when the data indeed have missing values, due to problems with or limitations of the observation process; and when maximizing the likelihood or posterior probability directly is analytically intractable, but becomes tractable once hidden variables are introduced to simplify the distribution.

37 Expectation-Maximization (EM).
E-step: replace the hidden variables by their expectations conditional on the estimated parameters. In the current example, the E-step evaluates
$m_{ia}^{(t)} = P\big(M_{ia} = 1 \mid \{x_i\}, \{y_a\}, v^{(t-1)}\big)$,
the expectation of the hidden variable $M_{ia}$ conditional on the observed data and the velocity field from the last M-step.
M-step: estimate the parameters by maximizing the expected log posterior distribution as if the hidden variables were observed:
$v^{(t)} = \arg\max_v\, E_{m^{(t)}}\big[\log P(v, \{M_{ia}\} \mid \{x_i\}, \{y_a\})\big]$

38 EM Implementation.
Goal: maximize
$P(v, \{M_{ia}\} \mid \{x_i\}, \{y_a\}) = \frac{1}{Z} \exp\Big(-\Big[\sum_{i=1}^{N} \sum_{a=1}^{N} M_{ia}(y_a - x_i - v(x_i))^2 + \lambda \|Lv\|^2 + \xi \sum_{i=1}^{N} M_{i0}\Big] \big/\, T\Big)$, with the constraint $\sum_{a=0}^{N} M_{ia} = 1$.
The classical EM algorithm is equivalent to minimizing the free energy function (Frey & Hinton, 1996):
$F[m, v] = \sum_{i=1}^{N} \sum_{a=1}^{N} m_{ia}(y_a - x_i - v(x_i))^2 + \lambda \|Lv\|^2 + \xi \sum_{i=1}^{N} m_{i0} + T \sum_{i=1}^{N} \sum_{a=0}^{N} m_{ia} \log m_{ia}$
with the constraint $\sum_{a=0}^{N} m_{ia} = 1$.

39 EM Implementation: Minimizing Free Energy.
$F[m, v] = \sum_{i=1}^{N} \sum_{a=1}^{N} m_{ia}(y_a - x_i - v(x_i))^2 + \lambda \|Lv\|^2 + \xi \sum_{i=1}^{N} m_{i0} + T \sum_{i=1}^{N} \sum_{a=0}^{N} m_{ia} \log m_{ia}$, with the constraint $\sum_{a=0}^{N} m_{ia} = 1$.
E-step: minimize this energy function with respect to $m_{ia}$ while holding $v$ constant.
M-step: minimize this energy function with respect to $v$ while holding $m_{ia}$ constant.

40 EM Implementation.
E-step: setting $\partial F / \partial m_{ia} = 0$ and $\partial F / \partial m_{i0} = 0$ (subject to the row constraint) gives the correspondence updates
$m_{ia} = \frac{e^{-(y_a - x_i - v(x_i))^2 / T}}{e^{-\xi/T} + \sum_{b=1}^{N} e^{-(y_b - x_i - v(x_i))^2 / T}}, \qquad m_{i0} = \frac{e^{-\xi/T}}{e^{-\xi/T} + \sum_{b=1}^{N} e^{-(y_b - x_i - v(x_i))^2 / T}}$

41 EM Algorithm.
M-step: setting $\partial E / \partial v = 0$ updates the velocity field, which takes the form of a kernel expansion
$v(x) = \sum_{j=1}^{N} \beta_j\, G(x, x_j), \qquad G(x_i, x_j) = \frac{1}{2\pi\sigma^2} \exp\Big(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\Big)$
where the coefficients $\beta_j$ solve the linear system
$\sum_{j=1}^{N} \big(m_i\, G(x_i, x_j) + \lambda\, \delta_{ij}\big)\, \beta_j = \sum_{a=1}^{N} m_{ia}\,(y_a - x_i), \qquad m_i = \sum_{a=1}^{N} m_{ia}$
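Putting the E-step and M-step together, the following is a compact numerical sketch of the whole loop (an illustration, not the authors' code); the parameter values T, ξ, λ, σ and the toy data are assumptions, and the linear system follows the reconstruction above:

```python
import numpy as np

def slow_smooth_em(x, y, T=0.1, xi=1.0, lam=0.01, sigma=0.5, iters=30):
    """EM for the Slow & Smooth model.

    x, y: (N, 2) arrays of dot positions in frame 1 and frame 2.
    Returns soft correspondences m (N, N+1; column 0 is the outlier) and
    the velocity field v evaluated at the frame-1 dots (N, 2).
    """
    N = len(x)
    # Gaussian kernel between frame-1 dots.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    v = np.zeros_like(x)
    for _ in range(iters):
        # E-step: soft correspondences given the current velocity field.
        r2 = (((x + v)[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        logits = np.concatenate([np.full((N, 1), -xi / T), -r2 / T], axis=1)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        m = np.exp(logits)
        m /= m.sum(axis=1, keepdims=True)            # each row sums to 1
        # M-step: solve (m_i G + lam I) beta = sum_a m_ia (y_a - x_i).
        m_i = m[:, 1:].sum(axis=1)
        rhs = m[:, 1:] @ y - m_i[:, None] * x
        beta = np.linalg.solve(m_i[:, None] * G + lam * np.eye(N), rhs)
        v = G @ beta                                  # velocity at each x_i
    return m, v

# Toy example: 5 dots, all translating by (1, 0) between frames.
x = np.array([[0.0, 0.0], [0.0, 1.5], [0.0, 3.0], [0.0, 4.5], [0.0, 6.0]])
m, v = slow_smooth_em(x, x + np.array([1.0, 0.0]))
print(np.round(v, 2))  # approximately (1, 0) at every dot
```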

42 EM Iteration Steps. N=50, C=0.5.

43 Exp 1: Coherence Threshold in Detecting Coherent Motion

44 Model Prediction in a Trial. Stimulus: N=50, C=0.4. Model prediction.

45 Interesting Model Prediction. Stimulus: N=50, C=0.1. Model prediction. What would humans perceive?

46 Exp 2: Motion Perception in a Single Trial. Stimulus: N=50, C=0.4. Human-perceived global motion direction: 2°. Model-predicted global motion direction: 1°.

47 Exp 2: Motion Perception in a Single Trial. Stimulus: N=50, C=0.1. Human-perceived global motion direction: 298°. Model-predicted global motion direction: 294°.

48 Experimental Test of Generic Priors. Conjecture: perception will be largely dominated by the priors when the data are extremely noisy, e.g. random motion.

49 Exp 2: Motion Perception in a Single Trial. Stimulus: N=50, C=0. Human-perceived global motion direction: 57°. Model-predicted global motion direction: 64°.

50 Trial-by-Trial Correlation Between Human Perception and Model Prediction. Subject BR: r = 0.83, r = 0.83.

51 Trial-by-Trial Correlation Between Human Perception and Model Prediction. Subject BR: r = 0.77, r = 0.74.

52 Can humans perceive any structure in random motion? Can a Bayesian model with the Slow & Smooth prior predict human perception?

53 Trial-by-Trial Correlation Between Human Perception and Model Prediction. Subject BR: r = 0.73, r = 0.76.

54 Trial-by-Trial Correlation, N=50

55 Trial-by-Trial Correlation, N=500

56 Conclusions. Synergy between perceptual experiments and the Bayesian framework. An ideal observer with the experiment-specific prior predicts human performance qualitatively. A Bayesian model with the Slow & Smooth generic prior predicts human performance both qualitatively and quantitatively. The model predicts new phenomena, including perceiving structure in random motion.

More information