Inferring Dynamic Dependency with Applications to Link Analysis


Michael R. Siracusa, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139
John W. Fisher III, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139

Abstract

Statistical approaches to modeling dynamics and clustering data are well studied research areas. This paper considers a special class of such problems in which one is presented with multiple data streams and wishes to infer their interaction as it evolves over time. This problem is viewed as one of inference on a class of models in which interaction is described by changing dependency structures, i.e. the presence or absence of edges in a graphical model, but for which the full set of parameters is not available. The application domain of dynamic link analysis as applied to tracked object behavior is explored. An approximate inference method is presented along with empirical results demonstrating its performance.

I. INTRODUCTION

Consider a scene in which multiple objects such as people or vehicles are moving around in an environment. Given noisy measurements of their position over time, a question one may ask is: which, if any, of these objects are interacting at each point in time? The answer to this type of question is useful for social interaction analysis, anomalous event detection and automatic scene summarization. For example, Heider and Simmel [1] showed that when describing a cartoon of simple shapes moving on a screen, one tends to describe their behavior with terms like chasing, following or being independent of one another. These semantic labels describe the dependency between objects and allow humans to provide a compact description of what they are observing. We view this problem as a specific example of a general class of problems which we refer to as dynamic dependency tests. A dynamic dependency test answers the following question: given multiple data streams, how does their interaction evolve over time?
Here, interaction is defined in terms of changing graphical structures, i.e., the presence or absence of edges in a graphical model. For example, in the object interaction case, "X is following Y" implies a causal dependency between Y and X. We cast a dynamic dependency test as a problem of inference on a special class of probabilistic models in which a latent state variable indexes a discrete set of possible dependency structures on measurements. We refer to this class of models as dynamic dependence models and introduce a specific implementation via a hidden factorization Markov model (HFacMM). This model allows us to take advantage of both structural and parametric changes associated with changes in the state of interaction of a set of objects. We further show how inference and learning on an HFacMM can be used in an approximate inference procedure on a more expressive switching linear dynamic system (SLDS) model. The approach presented in this paper fits into the general category of data clustering and dynamic modeling. Two classic examples in this category are fitting mixture models and training hidden Markov models (HMMs) using the EM algorithm [2]. Typically these models assume a fixed dependency structure for the observed data. The study of models whose graphical structure is contingent upon the values/context of the nodes in the graph can be traced back to Heckerman and Geiger's similarity networks and multinets [3]. This class of models has been further explored and formalized by Boutilier et al.'s Context-Specific Independence [4] and more recently Milch et al.'s Contingent Bayesian Networks [5]. An HFacMM fits into this class of models and is closely related to Bilmes's Dynamic Bayesian Multinets [6]. The focus of [6] was to show how learning state-indexed structure using labeled training data can yield better models for classification tasks. In contrast, here the dependency structures are defined by the problem and no labeled data is required. HFacMMs are also closely related to SLDS models used by the tracking community [7].
SLDS models are combinations of discrete Markov models and linear state-space dynamical systems. The hidden discrete state chooses between a predefined number of state-space models to describe the data at each point in time. SLDSs are primarily used to help improve tracking and track interpretation by allowing changes to the state-space model parameters. Exact inference and learning for such models is difficult. However, in this paper we are only interested in changes to the dependency structure of the observations and consider the latent dynamic state in the state-space model a nuisance variable. An HFacMM explicitly models varying dependence structure over time and allows for efficient learning and inference. This paper explores the tradeoff between using this simple model and incorporating a latent dynamic state using an SLDS model when performing a dynamic dependency test. We begin by describing HFacMMs and later show how inference on an HFacMM can be used as a subroutine in an approximate inference algorithm for SLDS models.

II. HIDDEN FACTORIZATION MARKOV MODEL

Let O_t = {o_t^1, o_t^2, ..., o_t^N} be an observation of N random variables at time t with o_t^i ∈ R^{d_i}. Let O_{1:T} represent O_t from time 1 to T. Given O_{1:T}, the goal is to label the sequence according to the dependency among the N random variables at each time t. To this end, we propose a p-th order hidden factorization Markov model, HFacMM(p), in which we assume that the observation O_t depends on the past p observations and a hidden state S_t. The states S_{1:T} are Markov, yielding

p(O_{1:T}, S_{1:T}; Θ) = p(S_{1:T}; Θ) ∏_{t=1}^T p(O_t | O_{(t-p):(t-1)}, S_t; Θ),    (1)

where Θ are the parameters. This model is an HMM when p = 0 and a Markov switching auto-regressive (MSAR) model for p > 0. However, it has the special property that the value k ∈ [1...K] of the hidden state variable S_t indicates one of K possible factorizations F^k and parameterizations Θ^k such that

p(O_t | O_{(t-p):(t-1)}, S_t = k; Θ) = ∏_{i=1}^{C_k} p(F_{i,t}^k | F_{i,(t-p):(t-1)}^k; Θ^k) ≜ p_{Θ^k,p}(F^k),    (2)

where F^k specifies a partitioning of the full set of N random variables into C_k subsets such that

∪_{i=1}^{C_k} F_i^k = {o^1, ..., o^N} and F_i^k ∩ F_j^k = ∅    (3)

for i, j ∈ [1...C_k] when i ≠ j. For example, given three objects, p = 1 and the factorization for state k, F^k = {{o^1, o^2}, {o^3}}, yields

p_{Θ^k,1}(F^k) = p(o_t^1, o_t^2 | o_{t-1}^1, o_{t-1}^2; Θ^k) p(o_t^3 | o_{t-1}^3; Θ^k)

as the state conditional distribution of the observation at time t. Figure 1 shows an example first-order HFacMM with two possible interactions (K = 2) in which F^1 = {{o^1, o^2}, {o^3}} and F^2 = {{o^2, o^3}, {o^1}}. Note that the value of the state S_t determines the probabilistic structure of the observations at time t.
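As a concrete (hypothetical) illustration of how a factorization and its state-conditional distribution can be represented, the sketch below uses N = 3 scalar streams, K = 2 states, order p = 0 and zero-mean Gaussian factors; all names and numeric values are illustrative, not the paper's:

```python
import numpy as np

# Hypothetical illustration: N = 3 scalar streams, K = 2 states.
# A factorization F^k is a partition of the stream indices {0, 1, 2}.
F = {
    1: [[0, 1], [2]],   # state 1: streams 0 and 1 form one factor, 2 is alone
    2: [[1, 2], [0]],   # state 2: streams 1 and 2 form one factor, 0 is alone
}

def gauss_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian (numpy only)."""
    d = len(x)
    diff = x - mean
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.solve(cov, diff))

def state_cond_loglik(o, k, params):
    """log p(O_t | S_t = k): sum of log-likelihoods over the factors of F^k."""
    return sum(gauss_logpdf(o[f], *params[k][tuple(f)]) for f in F[k])

# Zero-mean Gaussian parameters per factor; joint factors get correlation 0.5.
params = {k: {} for k in F}
for k in F:
    for f in F[k]:
        d = len(f)
        cov = np.full((d, d), 0.5) + 0.5 * np.eye(d) if d > 1 else np.eye(d)
        params[k][tuple(f)] = (np.zeros(d), cov)

o_t = np.array([0.3, -0.1, 1.2])
print(state_cond_loglik(o_t, 1, params), state_cond_loglik(o_t, 2, params))
```

The two states assign the same observation different likelihoods purely because they group the streams differently, which is exactly the structural signal the model exploits.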
We consider situations in which the model parameters are not known a priori. This necessitates both a learning and an inference step. The Baum-Welch/EM algorithm can be used with a slight modification for learning the parameters of an HFacMM; subsequently, Viterbi decoding can be used for exact inference [2]. We construct and utilize an HFacMM model in the following way:

Fig. 1. Example HFacMM(1). Conditionally labeled edges are only present when the condition is true; this graph notation follows that of CBNs [5].

1) Define the K possible dependency structures and parameterization of the HFacMM for your task.
2) Learning: estimate Θ̂ = arg max_Θ p(O_{1:T}; Θ) (via EM).
3) Inference: find ŝ_{1:T} = arg max_{s_{1:T}} p(s_{1:T} | O_{1:T}; Θ̂).

This approach does not use any training data and performs both learning and inference on the data being analyzed. We make the usual assumption that all K states are visited at least once (and typically multiple times) during the observed sequence.

A. Learning

Let Θ = {π, Z, Θ^1, ..., Θ^K} be the parameter set for the model, where π_k = p(S_1 = k) are the prior state probabilities, Z is a K x K matrix with Z_{ij} = p(S_{t+1} = i | S_t = j), and Θ^k are the parameters for factorization F^k (i.e., parameters for p_{Θ^k,p}(F^k)). As with typical HMMs and MSAR models, the EM algorithm can be applied to models with this structure in order to find the parameters, Θ̂, that maximize the likelihood function [2]. While the E-step is unchanged, the HFacMM requires a minor change to the M-step of EM. Since the state conditional model p_{Θ^k,p}(F^k) breaks up into the C_k factors of F^k, the structure of the M-step updates simplifies accordingly, yielding a more structured learning procedure with savings in storage and computation.

B. Inference

Having learned the parameters, Θ̂, the data sequence is labeled by finding the most likely state sequence

ŝ_{1:T} = arg max_{s_{1:T}} p(s_{1:T} | O_{1:T}; Θ̂).    (4)

This can be done efficiently with the Viterbi algorithm [2].
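The Viterbi recursion used for the inference step is standard; a minimal log-domain sketch with hypothetical toy parameters (a sticky two-state chain with well-separated Gaussian emissions) follows:

```python
import numpy as np

def viterbi(log_pi, log_Z, log_B):
    """Most likely state sequence for an HMM.

    log_pi: (K,) log prior; log_Z[i, j] = log p(S_{t+1}=i | S_t=j);
    log_B: (T, K) state-conditional observation log-likelihoods."""
    T, K = log_B.shape
    delta = log_pi + log_B[0]            # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        cand = delta[None, :] + log_Z    # cand[i, j]: come from j, land in i
        back[t] = cand.argmax(axis=1)
        delta = cand.max(axis=1) + log_B[t]
    s = np.empty(T, dtype=int)
    s[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):       # trace back the best path
        s[t] = back[t + 1, s[t + 1]]
    return s

# Toy example: sticky 2-state chain, well-separated emissions.
log_pi = np.log([0.5, 0.5])
log_Z = np.log([[0.95, 0.05], [0.05, 0.95]])
obs = np.array([0.1, 0.2, 0.0, 3.9, 4.1, 4.0])
means = np.array([0.0, 4.0])
log_B = -0.5 * (obs[:, None] - means[None, :]) ** 2  # unit-variance Gaussians, constants dropped
print(viterbi(log_pi, log_Z, log_B))  # → [0 0 0 1 1 1]
```

Note that Z is indexed here exactly as in the paper, Z_{ij} = p(S_{t+1} = i | S_t = j), so columns (not rows) sum to one.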
Viterbi decoding implicitly performs an M-ary hypothesis test comparing all M = K^T possible state sequences. This alternative view exposes how state sequences distinguish themselves via both structural and general statistical model differences between the learned state conditional distributions. Consider a binary hypothesis test between two different state sequences S_{1:T}^{H_1} and S_{1:T}^{H_2} given the parameters Θ̂. The test has the form

L̂^{H_1,H_2} = log [ p(O_{1:T} | S_{1:T}^{H_1}; Θ̂) / p(O_{1:T} | S_{1:T}^{H_2}; Θ̂) ]  ≷_{H_2}^{H_1}  log [ p(S_{1:T}^{H_2}; Θ̂) / p(S_{1:T}^{H_1}; Θ̂) ].    (5)
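For concreteness, the test statistic and threshold above can be evaluated directly for two candidate state sequences. A toy sketch with unit-variance Gaussian state-conditional models and entirely hypothetical numbers:

```python
import numpy as np

def seq_loglik(obs, s, means, log_pi, log_Z):
    """Return log p(O_{1:T} | S_{1:T}=s) and log p(S_{1:T}=s) for an HMM
    with unit-variance Gaussian emissions (additive constants dropped)."""
    ll_obs = -0.5 * np.sum((obs - means[s]) ** 2)
    ll_seq = log_pi[s[0]] + sum(log_Z[s[t + 1], s[t]] for t in range(len(s) - 1))
    return ll_obs, ll_seq

# Hypothetical setup: two states with means 0 and 4, sticky transitions.
obs = np.array([0.1, 3.8, 4.2])
means = np.array([0.0, 4.0])
log_pi = np.log([0.5, 0.5])
log_Z = np.log([[0.9, 0.1], [0.1, 0.9]])

s1 = np.array([0, 1, 1])   # hypothesis H1
s2 = np.array([0, 0, 0])   # hypothesis H2
l1, p1 = seq_loglik(obs, s1, means, log_pi, log_Z)
l2, p2 = seq_loglik(obs, s2, means, log_pi, log_Z)

llr, thresh = l1 - l2, p2 - p1   # decide H1 when the ratio exceeds the threshold
print(llr >= thresh)  # → True
```

Here the observation term strongly favors H1, easily overcoming the prior cost of the state switch; the threshold plays the role of the sequence-prior ratio on the right-hand side of the test.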

Taking the expected value of the log-likelihood ratio when H_1 is true yields

E_{H_1}[L̂^{H_1,H_2}] = ∑_{t s.t. S_t^{H_1} ≠ S_t^{H_2}} D( p_{Θ̂^{S_t^{H_1}},p}(F^{S_t^{H_1}}) || p_{Θ̂^{S_t^{H_1}},p}(F̄) )
                     + ∑_{t s.t. S_t^{H_1} ≠ S_t^{H_2}} D( p_{Θ̂^{S_t^{H_1}},p}(F̄) || p_{Θ̂^{S_t^{H_2}},p}(F^{S_t^{H_2}}) ),    (6)

where F̄ is the common dependency structure (set of edges) shared by all F^k (e.g., if {o^1, o^2} is common to all K structures then it will also appear in F̄). Note that any parameter set Θ^k can be applied to the F̄ factorization by marginalizing over p_{Θ^k}(F^k) appropriately. An equivalent breakdown occurs under H_2 (with a negation). Notice that the expected log-likelihood ratio decomposes into two terms. The first term concerns purely structural differences. It compares the true structure with the common structure, under the true parameters. The second term contains both structural and parameter differences. It compares the common structure with the true parameters to the incorrect hypothesis. Typically HMMs and MSAR models assume a fixed dependency structure and thus only the second term is non-zero and depends only on parametric differences. The other extreme occurs when performing windowed dependency tests, which only take advantage of the first term [8].

III. INCORPORATING A STATE-SPACE MODEL

An HFacMM with order p > 0 explicitly models the dynamics of the observation sequence. If we assume a p-th order Gaussian auto-regressive model, the state conditional distribution will have the form

p(O_t | O_{(t-p):(t-1)}, S_t) = N(O_t - A^{S_t} O_{(t-p):(t-1)}; 0, Σ),    (7)

i.e., the observation at time t will be a linear combination of the previous p with the addition of Gaussian noise. A^{S_t} determines the linear combination when the state is S_t. This is a simple model in which the dynamic state of the system is being directly observed. However, it is common to assume there is a latent dynamic state that is only partially observed through a noisy measurement process. Such systems are often assumed to be linear with Gaussian noise, following a state-space model of the form

X_t = A X_{t-1} + V_t
O_t = Y_t = C X_t + W_t,    (8)

where X_t is a hidden dynamic state variable at time t and V_t and W_t are independent Gaussian noise sources.
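Such a linear-Gaussian state-space model is straightforward to simulate; a minimal sketch with illustrative dimensions and noise levels (a constant-velocity-style A and a position-only C, both hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, dx, do = 200, 2, 1
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # e.g. position-velocity dynamics
C = np.array([[1.0, 0.0]])        # observe position only
q, r = 0.01, 0.25                 # process / observation noise variances

X = np.zeros((T, dx))
O = np.zeros((T, do))
x = np.zeros(dx)
for t in range(T):
    x = A @ x + rng.normal(0, np.sqrt(q), dx)      # X_t = A X_{t-1} + V_t
    O[t] = C @ x + rng.normal(0, np.sqrt(r), do)   # O_t = C X_t + W_t
    X[t] = x

print(X.shape, O.shape)  # → (200, 2) (200, 1)
```

The observations O inherit their temporal structure entirely from the latent trajectory X, which is why a purely observation-level model (the HFacMM) must absorb those dynamics into a higher autoregressive order.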
An SLDS assumes there is an additional hidden discrete state S_t that determines A, C, and the covariance of the noise sources. The high-level structure of such models is shown in Figure 2(a). For the problem of object interaction analysis, S_t indexes how the distribution of the dynamic state X_t factorizes via the structure of A and the noise source V_t.

Fig. 2. High-level SLDS structure (a) and detailed structure of the model used in the experiments (b).

Note that the dynamics on the observations are induced via the dynamics on the latent dynamic state X_t. Learning and exact inference on an SLDS model is difficult. For example, if S_t can take K discrete values which index K different Gaussian models for X_t, the distribution p(X_t | O_{1:t}) is a mixture of K^t Gaussians, each one associated with a possible state sequence S_{1:t}. This exponential growth can be dealt with in many ways, from collapsing the K^t mixture into a smaller mixture at each t [9], [10], to using a variational approach which fits a simpler model by maximizing a lower bound on the likelihood of the data [7]. As an alternative we take an iterative coordinate ascent approach. If one were given the state sequence S_{1:T}, EM can be used to learn the parameters of the model, and inference on the dynamic state X_{1:T} can be performed using Rauch-Tung-Striebel smoothing with the appropriate model parameters (which are specified at each time t by S_t). Similarly, if one were given the dynamic state X_{1:T}, learning and inference for S_{1:T} is simple. That is, conditioned on X_{1:T}, O_t can be ignored and one is left with an HFacMM of order 1 which treats X_{1:T} as its observation. Thus we begin by first initializing to a random state sequence Ŝ_{1:T}, then perform learning and inference on X_{1:T} to produce an estimate X̂_{1:T}. We next condition on this estimate to update Ŝ_{1:T} and repeat. It is important to remember that for the application of interest, S_t indexes a particular structure via factorization F^{S_t}. For this state-space model this factorization describes how the dynamic state X_{1:T} evolves.
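The dynamic-state half of each coordinate ascent iteration is a fixed-interval smoothing pass. A compact Kalman filter / Rauch-Tung-Striebel sketch for a single parameter setting (stationary parameters and a scalar toy run; all dimensions and values are hypothetical):

```python
import numpy as np

def rts_smooth(O, A, C, Q, R, mu0, P0):
    """Kalman filter forward pass plus Rauch-Tung-Striebel backward pass.
    Returns smoothed means and covariances of X_{1:T} given O_{1:T}."""
    T, dx = len(O), len(mu0)
    mf = np.zeros((T, dx)); Pf = np.zeros((T, dx, dx))   # filtered
    mp = np.zeros((T, dx)); Pp = np.zeros((T, dx, dx))   # one-step predicted
    m, P = mu0, P0
    for t in range(T):
        mp[t] = A @ m
        Pp[t] = A @ P @ A.T + Q                          # predict
        S = C @ Pp[t] @ C.T + R
        K = Pp[t] @ C.T @ np.linalg.inv(S)               # Kalman gain
        m = mp[t] + K @ (O[t] - C @ mp[t])               # measurement update
        P = Pp[t] - K @ C @ Pp[t]
        mf[t], Pf[t] = m, P
    ms, Ps = mf.copy(), Pf.copy()
    for t in range(T - 2, -1, -1):                       # RTS backward pass
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        ms[t] = mf[t] + J @ (ms[t + 1] - mp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return ms, Ps

# Toy run: smooth noisy observations of a near-constant scalar state.
rng = np.random.default_rng(1)
A = np.eye(1); C = np.eye(1); Q = np.eye(1) * 1e-4; R = np.eye(1)
O = 2.0 + rng.normal(0.0, 1.0, (50, 1))
ms, Ps = rts_smooth(O, A, C, Q, R, np.zeros(1), np.eye(1))
print(ms[25, 0])   # smoothed estimates hover near the true value 2.0
```

In the full coordinate ascent loop the parameters would be switched per time step according to the current Ŝ_{1:T}; conditioned on the smoothed X̂_{1:T}, the discrete-state half then reduces to HFacMM learning and inference as described in Section II.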
We will give a specific example in the next section.

IV. EXPERIMENTS

Here we consider the N = 2 case in which we have noisy observations of position for two objects. We assume there are two states of interaction specified by the hidden discrete state S_t. If S_t = 0 we assume they are moving independently, and if S_t = 1 they are interacting. We begin with a constant velocity model where a latent dynamic state evolves via

[x_t^1; x_t^2] = [[A^1, 0], [0, A^2]] [x_{t-1}^1; x_{t-1}^2] + [v_t^1; v_t^2],    (9)

where the dynamic state is position and velocity in 2D, x_t^i = [x_t^i, y_t^i, ẋ_t^i, ẏ_t^i]^T, and

A^1 = A^2 = [[I, I], [0, I]],    (10)

where I and 0 are 2x2 identity and zero matrices. Noisy observations of position are obtained via

[o_t^1; o_t^2] = [[C^1, 0], [0, C^2]] [x_t^1; x_t^2] + [w_t^1; w_t^2],    (11)

where C^1 = C^2 = [I 0] and the observation noise is independent of the state S_t, i.e.,

p(w^1, w^2 | S_t) = p(w^1, w^2) = N(w^1; 0, σ_w I) N(w^2; 0, σ_w I).    (12)

Furthermore, in this simple model we assume that all information about the interactive state is captured via the driving noise. That is, when S_t = 0 the driving noise is independent and factorizes as

p(v^1, v^2 | S_t = 0) = N(v^1; 0, Σ_v) N(v^2; 0, Σ_v),    (13)

and when S_t = 1 the noise is fully dependent,

p(v^1, v^2 | S_t = 1) = N([v^1; v^2]; 0, D [[Σ_v, ρΣ_v], [ρΣ_v, Σ_v]]),    (14)

with correlation ρ and a factor D more variance. Here ρ describes the amount of dependency when they are interacting. It is the only parameter that affects the first term in Equation 6. The parameter D changes only the second term in Equation 6. The structure of this model is shown in Figure 2(b). This gives us a simple model with parameters that are easy to interpret. It is important to note that the HFacMM and SLDS models used in this paper are not limited to this form and can capture other types of interaction (e.g., via changing the A matrix).

A. Illustrative Example

For the purpose of testing the approach under known conditions, we draw multiple sample paths from the model described above with various settings of ρ, D and observation noise variance σ_w. We do this with a fixed dynamic on the state S_t with π_0 = π_1 = .5, Z_{11} = Z_{22} = .95 and Z_{12} = Z_{21} = .05. For each setting of ρ, D, and σ_w we draw T samples (of both X_t and O_t) from the model.
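The two driving-noise regimes above (independent versus fully dependent with correlation ρ and a factor D more variance) amount to sampling from block-diagonal versus correlated joint covariances. A small sketch with hypothetical σ_v, ρ and D values:

```python
import numpy as np

def driving_noise_cov(s, sigma_v, rho, D, d=2):
    """Joint covariance of (v^1_t, v^2_t): block diagonal when S_t = 0,
    fully dependent with correlation rho and D-times variance when S_t = 1."""
    Sv = sigma_v * np.eye(d)
    if s == 0:
        return np.block([[Sv, np.zeros((d, d))],
                         [np.zeros((d, d)), Sv]])
    return D * np.block([[Sv, rho * Sv],
                         [rho * Sv, Sv]])

# Sample the dependent regime and check the empirical cross-correlation.
rng = np.random.default_rng(2)
cov = driving_noise_cov(1, sigma_v=0.1, rho=0.9, D=2)
v = rng.multivariate_normal(np.zeros(4), cov, size=5000)
emp_corr = np.corrcoef(v[:, 0], v[:, 2])[0, 1]   # v^1_x versus v^2_x
print(round(emp_corr, 2))   # close to rho = 0.9
```

Note that the factor D scales both the diagonal and off-diagonal blocks, so it changes the marginal variances but leaves the cross-correlation at ρ, consistent with ρ alone controlling the structural term of the test.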
This is performed 5 times for each setting and the average performance of a dynamic dependency test using 3 different approaches is recorded. As a baseline, the first approach is given the true dynamic state X_{1:T} as its observations and uses an HFacMM(1) model to do learning and inference. The full set of parameters used for this baseline model are Θ = {π, Z, Θ^0, Θ^1}, where the independent (S_t = 0) parameters are Θ^0 = {A^1, A^2, Σ_v^1, Σ_v^2}, and the interacting/dependent (S_t = 1) parameters are Θ^1 = {A^{12}, Σ_v^{12}}. The independent model parameters match Equation 9, while a more generic model is used for the dependent case with X_t = A^{12} X_{t-1} + V_t and V_t drawn from a zero-mean Gaussian with full covariance Σ_v^{12}. Given X_{1:T}, the baseline model learns the most likely set of parameters Θ̂ and then outputs the most likely state sequence. The second approach is given only the observation sequence O_{1:T} and performs coordinate ascent on the full SLDS model. It has the same parameter set as the baseline approach with the addition of the observation matrix C and observation noise variance σ_w. For simplicity we assume these observation process parameters are known. We set a maximum number of coordinate ascent iterations but found that it usually converges in fewer than 5 iterations in our experiments. The third approach is given only the observation sequence and fits a higher-order HFacMM(3). While this model yields efficient inference and learning, it does not model the latent dynamic state X_t. It models the effects of any latent dynamics through a higher-order autoregressive model on the observations. Again, note that for the problems of interest in this paper, X_t is a nuisance variable. Our primary focus is on accurately estimating the interactive state S_t and not on producing accurate estimates of the dynamic state X_t. A question we wish to explore is whether or not this simple model is useful even when there is a true underlying latent dynamic state. Figure 3(a) shows, for fixed D and σ_w, the average probability of error for these three techniques as a function of ρ. As expected, all three approaches improve dramatically as ρ increases.
A similar trend occurs for a fixed ρ and σ_w and an increasing D in Figure 3(b). Figure 3(c) shows how observation noise degrades the performance of the approaches that do not have the benefit of directly observing the dynamic state. In all of the figures it is clear that even though the coordinate ascent technique comes with more computational complexity and approximate inference, it outperforms the simple HFacMM(3) by incorporating a latent dynamic state. However, this performance gap disappears when the state conditional models are easily distinguishable (i.e., high ρ or D).

B. Interacting People

Next we analyzed the interaction of two objects moving in a real environment. These two objects are individuals wearing hats designed to be easy to visually track.

Fig. 3. Avg. probability of error for synthetic data as a function of (a) ρ, (b) D, (c) σ_w, comparing coordinate ascent and HFacMM(3). (d) Sample frame from interacting people data.

TABLE I
RESULTS FOR INTERACTING PEOPLE DATA. CV = CONSTANT VELOCITY. B = BROWNIAN MOTION.
Probability of Error: Sequence | Coord. Asc. CV | Coord. Asc. B | HFacMM(1)

The two individuals move around an enclosed environment and are randomly instructed to interact for a limited period of time. When interacting they are either following or trying to mirror each other's actions. Each sequence was over 2.5 minutes long and contained between 8 and 12 transitions between interacting and moving independently. In the first sequence the individuals move faster when interacting. This is not the case for the second sequence, in which individuals maintain the same speed when interacting and moving independently. Both sequences were recorded using a camcorder at 20 fps. Simple background subtraction and color filtering with smoothing provided a simple blob tracker that was further labeled/tracked by hand to provide (x, y) positions for each individual for each frame. Ground truth of the interaction state sequence was also established by hand labeling. Figure 3(d) shows a sample frame from one of the sequences. We ran dynamic dependency tests on both of these sequences using the coordinate ascent approach assuming a constant velocity model (4-dimensional dynamic state initialized with an A matrix with constant velocity structure). The results are shown in the second column of Table I. After analyzing the learned parameters for each sequence we found that the dynamic state transition matrix A was close to diagonal, indicating that a Brownian motion model may be more appropriate. Rerunning the dependency test assuming a simple Brownian motion model (the state is position only) we obtained the improved results in column three of Table I.
This indicates that a constant velocity model may not be correct for this data, in which individuals make many turns in a confined space. Lastly, if we apply a simple first-order HFacMM to the data we do just as well. This indicates that this data has strong dependency information when the individuals are interacting (i.e., like the high ρ case above).

V. CONCLUSION

In this paper we looked at the problem of inferring latent dynamic dependency structure to determine the interaction among multiple moving objects. We have shown that by modeling dependency as a dynamic process one can exploit both structural and parameter differences to distinguish between hypothesized states of dependency/interaction. We discussed the use of an HFacMM for such tasks and showed how it can be used to perform approximate inference on an SLDS model which incorporates a state-space description of object dynamics. Experiments were performed on both controlled synthetic and recorded data of people/objects interacting. Empirical performance demonstrated the utility of approximate inference based on coordinate ascent for determining interaction states.

REFERENCES

[1] F. Heider and M. Simmel, "An experimental study of apparent behavior," American Journal of Psychology, 1944, vol. 57, pp. 243-259.
[2] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, 1989, vol. 77, no. 2, pp. 257-286.
[3] D. Geiger and D. Heckerman, "Knowledge representation and inference in similarity networks and Bayesian multinets," Artificial Intelligence, 1996, vol. 82.
[4] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller, "Context-specific independence in Bayesian networks," in UAI, 1996.
[5] B. Milch, B. Marthi, D. Sontag, S. Russell, D. L. Ong, and A. Kolobov, "Approximate inference for infinite contingent Bayesian networks," in 10th Intl. Workshop on Artificial Intelligence and Statistics, 2005.
[6] J. A. Bilmes, "Dynamic Bayesian multinets," in Proc. of 16th Conf. on Uncertainty in Artificial Intelligence, 2000.
[7] Z. Ghahramani and G. E. Hinton, "Variational learning for switching state-space models," Neural Computation, 2000, vol. 12.
[8] A. T. Ihler, J. W. Fisher, and A. S. Willsky, "Nonparametric hypothesis tests for statistical dependency," IEEE Trans. on Signal Processing, special issue on machine learning, 2004.
[9] Y. Bar-Shalom and X. Li, Estimation and Tracking: Principles, Techniques and Software, Artech House, 1993.
[10] C.-J. Kim, "Dynamic linear models with Markov-switching," Journal of Econometrics, 1994, vol. 60, pp. 1-22.

This material is based upon work supported by the AFOSR under Award No. FA. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Air Force.


More information

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006 2.160 Sysem Idenificaion, Esimaion, and Learning Lecure Noes No. 8 March 6, 2006 4.9 Eended Kalman Filer In many pracical problems, he process dynamics are nonlinear. w Process Dynamics v y u Model (Linearized)

More information

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin ACE 56 Fall 005 Lecure 4: Simple Linear Regression Model: Specificaion and Esimaion by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Simple Regression: Economic and Saisical Model

More information

How to Deal with Structural Breaks in Practical Cointegration Analysis

How to Deal with Structural Breaks in Practical Cointegration Analysis How o Deal wih Srucural Breaks in Pracical Coinegraion Analysis Roselyne Joyeux * School of Economic and Financial Sudies Macquarie Universiy December 00 ABSTRACT In his noe we consider he reamen of srucural

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

Comparing Means: t-tests for One Sample & Two Related Samples

Comparing Means: t-tests for One Sample & Two Related Samples Comparing Means: -Tess for One Sample & Two Relaed Samples Using he z-tes: Assumpions -Tess for One Sample & Two Relaed Samples The z-es (of a sample mean agains a populaion mean) is based on he assumpion

More information

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19 Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible

More information

20. Applications of the Genetic-Drift Model

20. Applications of the Genetic-Drift Model 0. Applicaions of he Geneic-Drif Model 1) Deermining he probabiliy of forming any paricular combinaion of genoypes in he nex generaion: Example: If he parenal allele frequencies are p 0 = 0.35 and q 0

More information

Announcements. Recap: Filtering. Recap: Reasoning Over Time. Example: State Representations for Robot Localization. Particle Filtering

Announcements. Recap: Filtering. Recap: Reasoning Over Time. Example: State Representations for Robot Localization. Particle Filtering Inroducion o Arificial Inelligence V22.0472-001 Fall 2009 Lecure 18: aricle & Kalman Filering Announcemens Final exam will be a 7pm on Wednesday December 14 h Dae of las class 1.5 hrs long I won ask anyhing

More information

Lab #2: Kinematics in 1-Dimension

Lab #2: Kinematics in 1-Dimension Reading Assignmen: Chaper 2, Secions 2-1 hrough 2-8 Lab #2: Kinemaics in 1-Dimension Inroducion: The sudy of moion is broken ino wo main areas of sudy kinemaics and dynamics. Kinemaics is he descripion

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION DOI: 0.038/NCLIMATE893 Temporal resoluion and DICE * Supplemenal Informaion Alex L. Maren and Sephen C. Newbold Naional Cener for Environmenal Economics, US Environmenal Proecion

More information

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

More information

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits DOI: 0.545/mjis.07.5009 Exponenial Weighed Moving Average (EWMA) Char Under The Assumpion of Moderaeness And Is 3 Conrol Limis KALPESH S TAILOR Assisan Professor, Deparmen of Saisics, M. K. Bhavnagar Universiy,

More information

Robust estimation based on the first- and third-moment restrictions of the power transformation model

Robust estimation based on the first- and third-moment restrictions of the power transformation model h Inernaional Congress on Modelling and Simulaion, Adelaide, Ausralia, 6 December 3 www.mssanz.org.au/modsim3 Robus esimaion based on he firs- and hird-momen resricions of he power ransformaion Nawaa,

More information

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t...

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t... Mah 228- Fri Mar 24 5.6 Marix exponenials and linear sysems: The analogy beween firs order sysems of linear differenial equaions (Chaper 5) and scalar linear differenial equaions (Chaper ) is much sronger

More information

10. State Space Methods

10. State Space Methods . Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he

More information

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

More information

Temporal probability models. Chapter 15, Sections 1 5 1

Temporal probability models. Chapter 15, Sections 1 5 1 Temporal probabiliy models Chaper 15, Secions 1 5 Chaper 15, Secions 1 5 1 Ouline Time and uncerainy Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic Bayesian

More information

OBJECTIVES OF TIME SERIES ANALYSIS

OBJECTIVES OF TIME SERIES ANALYSIS OBJECTIVES OF TIME SERIES ANALYSIS Undersanding he dynamic or imedependen srucure of he observaions of a single series (univariae analysis) Forecasing of fuure observaions Asceraining he leading, lagging

More information

Overview. COMP14112: Artificial Intelligence Fundamentals. Lecture 0 Very Brief Overview. Structure of this course

Overview. COMP14112: Artificial Intelligence Fundamentals. Lecture 0 Very Brief Overview. Structure of this course OMP: Arificial Inelligence Fundamenals Lecure 0 Very Brief Overview Lecurer: Email: Xiao-Jun Zeng x.zeng@mancheser.ac.uk Overview This course will focus mainly on probabilisic mehods in AI We shall presen

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

Vectorautoregressive Model and Cointegration Analysis. Time Series Analysis Dr. Sevtap Kestel 1

Vectorautoregressive Model and Cointegration Analysis. Time Series Analysis Dr. Sevtap Kestel 1 Vecorauoregressive Model and Coinegraion Analysis Par V Time Series Analysis Dr. Sevap Kesel 1 Vecorauoregression Vecor auoregression (VAR) is an economeric model used o capure he evoluion and he inerdependencies

More information

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB Elecronic Companion EC.1. Proofs of Technical Lemmas and Theorems LEMMA 1. Le C(RB) be he oal cos incurred by he RB policy. Then we have, T L E[C(RB)] 3 E[Z RB ]. (EC.1) Proof of Lemma 1. Using he marginal

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon 3..3 INRODUCION O DYNAMIC OPIMIZAION: DISCREE IME PROBLEMS A. he Hamilonian and Firs-Order Condiions in a Finie ime Horizon Define a new funcion, he Hamilonian funcion, H. H he change in he oal value of

More information

Temporal probability models

Temporal probability models Temporal probabiliy models CS194-10 Fall 2011 Lecure 25 CS194-10 Fall 2011 Lecure 25 1 Ouline Hidden variables Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic

More information

Licenciatura de ADE y Licenciatura conjunta Derecho y ADE. Hoja de ejercicios 2 PARTE A

Licenciatura de ADE y Licenciatura conjunta Derecho y ADE. Hoja de ejercicios 2 PARTE A Licenciaura de ADE y Licenciaura conjuna Derecho y ADE Hoja de ejercicios PARTE A 1. Consider he following models Δy = 0.8 + ε (1 + 0.8L) Δ 1 y = ε where ε and ε are independen whie noise processes. In

More information

Block Diagram of a DCS in 411

Block Diagram of a DCS in 411 Informaion source Forma A/D From oher sources Pulse modu. Muliplex Bandpass modu. X M h: channel impulse response m i g i s i Digial inpu Digial oupu iming and synchronizaion Digial baseband/ bandpass

More information

References are appeared in the last slide. Last update: (1393/08/19)

References are appeared in the last slide. Last update: (1393/08/19) SYSEM IDEIFICAIO Ali Karimpour Associae Professor Ferdowsi Universi of Mashhad References are appeared in he las slide. Las updae: 0..204 393/08/9 Lecure 5 lecure 5 Parameer Esimaion Mehods opics o be

More information

Chapter 11. Heteroskedasticity The Nature of Heteroskedasticity. In Chapter 3 we introduced the linear model (11.1.1)

Chapter 11. Heteroskedasticity The Nature of Heteroskedasticity. In Chapter 3 we introduced the linear model (11.1.1) Chaper 11 Heeroskedasiciy 11.1 The Naure of Heeroskedasiciy In Chaper 3 we inroduced he linear model y = β+β x (11.1.1) 1 o explain household expendiure on food (y) as a funcion of household income (x).

More information

Math Week 14 April 16-20: sections first order systems of linear differential equations; 7.4 mass-spring systems.

Math Week 14 April 16-20: sections first order systems of linear differential equations; 7.4 mass-spring systems. Mah 2250-004 Week 4 April 6-20 secions 7.-7.3 firs order sysems of linear differenial equaions; 7.4 mass-spring sysems. Mon Apr 6 7.-7.2 Sysems of differenial equaions (7.), and he vecor Calculus we need

More information

1 Review of Zero-Sum Games

1 Review of Zero-Sum Games COS 5: heoreical Machine Learning Lecurer: Rob Schapire Lecure #23 Scribe: Eugene Brevdo April 30, 2008 Review of Zero-Sum Games Las ime we inroduced a mahemaical model for wo player zero-sum games. Any

More information

) were both constant and we brought them from under the integral.

) were both constant and we brought them from under the integral. YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha

More information

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models Journal of Saisical and Economeric Mehods, vol.1, no.2, 2012, 65-70 ISSN: 2241-0384 (prin), 2241-0376 (online) Scienpress Ld, 2012 A Specificaion Tes for Linear Dynamic Sochasic General Equilibrium Models

More information

Outline. lse-logo. Outline. Outline. 1 Wald Test. 2 The Likelihood Ratio Test. 3 Lagrange Multiplier Tests

Outline. lse-logo. Outline. Outline. 1 Wald Test. 2 The Likelihood Ratio Test. 3 Lagrange Multiplier Tests Ouline Ouline Hypohesis Tes wihin he Maximum Likelihood Framework There are hree main frequenis approaches o inference wihin he Maximum Likelihood framework: he Wald es, he Likelihood Raio es and he Lagrange

More information

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check

More information

2016 Possible Examination Questions. Robotics CSCE 574

2016 Possible Examination Questions. Robotics CSCE 574 206 Possible Examinaion Quesions Roboics CSCE 574 ) Wha are he differences beween Hydraulic drive and Shape Memory Alloy drive? Name one applicaion in which each one of hem is appropriae. 2) Wha are he

More information

Testing for a Single Factor Model in the Multivariate State Space Framework

Testing for a Single Factor Model in the Multivariate State Space Framework esing for a Single Facor Model in he Mulivariae Sae Space Framework Chen C.-Y. M. Chiba and M. Kobayashi Inernaional Graduae School of Social Sciences Yokohama Naional Universiy Japan Faculy of Economics

More information

A Bayesian Approach to Spectral Analysis

A Bayesian Approach to Spectral Analysis Chirped Signals A Bayesian Approach o Specral Analysis Chirped signals are oscillaing signals wih ime variable frequencies, usually wih a linear variaion of frequency wih ime. E.g. f() = A cos(ω + α 2

More information

ACE 562 Fall Lecture 8: The Simple Linear Regression Model: R 2, Reporting the Results and Prediction. by Professor Scott H.

ACE 562 Fall Lecture 8: The Simple Linear Regression Model: R 2, Reporting the Results and Prediction. by Professor Scott H. ACE 56 Fall 5 Lecure 8: The Simple Linear Regression Model: R, Reporing he Resuls and Predicion by Professor Sco H. Irwin Required Readings: Griffihs, Hill and Judge. "Explaining Variaion in he Dependen

More information

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Kriging Models Predicing Arazine Concenraions in Surface Waer Draining Agriculural Waersheds Paul L. Mosquin, Jeremy Aldworh, Wenlin Chen Supplemenal Maerial Number

More information

DEPARTMENT OF STATISTICS

DEPARTMENT OF STATISTICS A Tes for Mulivariae ARCH Effecs R. Sco Hacker and Abdulnasser Haemi-J 004: DEPARTMENT OF STATISTICS S-0 07 LUND SWEDEN A Tes for Mulivariae ARCH Effecs R. Sco Hacker Jönköping Inernaional Business School

More information

Probabilistic Robotics

Probabilistic Robotics Probabilisic Roboics Bayes Filer Implemenaions Gaussian filers Bayes Filer Reminder Predicion bel p u bel d Correcion bel η p z bel Gaussians : ~ π e p N p - Univariae / / : ~ μ μ μ e p Ν p d π Mulivariae

More information

5. Stochastic processes (1)

5. Stochastic processes (1) Lec05.pp S-38.45 - Inroducion o Teleraffic Theory Spring 2005 Conens Basic conceps Poisson process 2 Sochasic processes () Consider some quaniy in a eleraffic (or any) sysem I ypically evolves in ime randomly

More information

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé Bias in Condiional and Uncondiional Fixed Effecs Logi Esimaion: a Correcion * Tom Coupé Economics Educaion and Research Consorium, Naional Universiy of Kyiv Mohyla Academy Address: Vul Voloska 10, 04070

More information

KINEMATICS IN ONE DIMENSION

KINEMATICS IN ONE DIMENSION KINEMATICS IN ONE DIMENSION PREVIEW Kinemaics is he sudy of how hings move how far (disance and displacemen), how fas (speed and velociy), and how fas ha how fas changes (acceleraion). We say ha an objec

More information

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates Biol. 356 Lab 8. Moraliy, Recruimen, and Migraion Raes (modified from Cox, 00, General Ecology Lab Manual, McGraw Hill) Las week we esimaed populaion size hrough several mehods. One assumpion of all hese

More information

Chapter 3 Boundary Value Problem

Chapter 3 Boundary Value Problem Chaper 3 Boundary Value Problem A boundary value problem (BVP) is a problem, ypically an ODE or a PDE, which has values assigned on he physical boundary of he domain in which he problem is specified. Le

More information

Two Coupled Oscillators / Normal Modes

Two Coupled Oscillators / Normal Modes Lecure 3 Phys 3750 Two Coupled Oscillaors / Normal Modes Overview and Moivaion: Today we ake a small, bu significan, sep owards wave moion. We will no ye observe waves, bu his sep is imporan in is own

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LDA, logisic

More information

INTRODUCTION TO MACHINE LEARNING 3RD EDITION

INTRODUCTION TO MACHINE LEARNING 3RD EDITION ETHEM ALPAYDIN The MIT Press, 2014 Lecure Slides for INTRODUCTION TO MACHINE LEARNING 3RD EDITION alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/~ehem/i2ml3e CHAPTER 2: SUPERVISED LEARNING Learning a Class

More information

SPH3U: Projectiles. Recorder: Manager: Speaker:

SPH3U: Projectiles. Recorder: Manager: Speaker: SPH3U: Projeciles Now i s ime o use our new skills o analyze he moion of a golf ball ha was ossed hrough he air. Le s find ou wha is special abou he moion of a projecile. Recorder: Manager: Speaker: 0

More information

Anno accademico 2006/2007. Davide Migliore

Anno accademico 2006/2007. Davide Migliore Roboica Anno accademico 2006/2007 Davide Migliore migliore@ele.polimi.i Today Eercise session: An Off-side roblem Robo Vision Task Measuring NBA layers erformance robabilisic Roboics Inroducion The Bayesian

More information

Random Walk with Anti-Correlated Steps

Random Walk with Anti-Correlated Steps Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and

More information

MCMC-Based Particle Filtering for Tracking a Variable Number of Interacting Targets. Zia Khan, Tucker Balch, and Frank Dellaert

MCMC-Based Particle Filtering for Tracking a Variable Number of Interacting Targets. Zia Khan, Tucker Balch, and Frank Dellaert 1 MCMC-Based Paricle Filering for Tracking a Variable Number of Ineracing Targes Zia Khan, Tucker Balch, and Frank Dellaer 2 Absrac We describe a paricle filer ha effecively deals wih ineracing arges -

More information

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes Some common engineering funcions 2.7 Inroducion This secion provides a caalogue of some common funcions ofen used in Science and Engineering. These include polynomials, raional funcions, he modulus funcion

More information

Generalized Least Squares

Generalized Least Squares Generalized Leas Squares Augus 006 1 Modified Model Original assumpions: 1 Specificaion: y = Xβ + ε (1) Eε =0 3 EX 0 ε =0 4 Eεε 0 = σ I In his secion, we consider relaxing assumpion (4) Insead, assume

More information

2.3 SCHRÖDINGER AND HEISENBERG REPRESENTATIONS

2.3 SCHRÖDINGER AND HEISENBERG REPRESENTATIONS Andrei Tokmakoff, MIT Deparmen of Chemisry, 2/22/2007 2-17 2.3 SCHRÖDINGER AND HEISENBERG REPRESENTATIONS The mahemaical formulaion of he dynamics of a quanum sysem is no unique. So far we have described

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

Technical Report Doc ID: TR March-2013 (Last revision: 23-February-2016) On formulating quadratic functions in optimization models.

Technical Report Doc ID: TR March-2013 (Last revision: 23-February-2016) On formulating quadratic functions in optimization models. Technical Repor Doc ID: TR--203 06-March-203 (Las revision: 23-Februar-206) On formulaing quadraic funcions in opimizaion models. Auhor: Erling D. Andersen Convex quadraic consrains quie frequenl appear

More information

Financial Econometrics Jeffrey R. Russell Midterm Winter 2009 SOLUTIONS

Financial Econometrics Jeffrey R. Russell Midterm Winter 2009 SOLUTIONS Name SOLUTIONS Financial Economerics Jeffrey R. Russell Miderm Winer 009 SOLUTIONS You have 80 minues o complee he exam. Use can use a calculaor and noes. Try o fi all your work in he space provided. If

More information

Math 10B: Mock Mid II. April 13, 2016

Math 10B: Mock Mid II. April 13, 2016 Name: Soluions Mah 10B: Mock Mid II April 13, 016 1. ( poins) Sae, wih jusificaion, wheher he following saemens are rue or false. (a) If a 3 3 marix A saisfies A 3 A = 0, hen i canno be inverible. True.

More information

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality Marix Versions of Some Refinemens of he Arihmeic-Geomeric Mean Inequaliy Bao Qi Feng and Andrew Tonge Absrac. We esablish marix versions of refinemens due o Alzer ], Carwrigh and Field 4], and Mercer 5]

More information

An recursive analytical technique to estimate time dependent physical parameters in the presence of noise processes

An recursive analytical technique to estimate time dependent physical parameters in the presence of noise processes WHAT IS A KALMAN FILTER An recursive analyical echnique o esimae ime dependen physical parameers in he presence of noise processes Example of a ime and frequency applicaion: Offse beween wo clocks PREDICTORS,

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LTU, decision

More information

Optimal Paired Choice Block Designs. Supplementary Material

Optimal Paired Choice Block Designs. Supplementary Material Saisica Sinica: Supplemen Opimal Paired Choice Block Designs Rakhi Singh 1, Ashish Das 2 and Feng-Shun Chai 3 1 IITB-Monash Research Academy, Mumbai, India 2 Indian Insiue of Technology Bombay, Mumbai,

More information