IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. XX, NO. XX, MMMMM 20YY 1


Time Varying Dynamic Bayesian Network for Non-Stationary Events Modeling and Online Inference

Zhaowen Wang*, Ercan E. Kuruoğlu, Senior Member, IEEE, Xiaokang Yang, Senior Member, IEEE, Yi Xu, and Thomas S. Huang, Life Fellow, IEEE

Abstract—This paper presents a novel Time Varying Dynamic Bayesian Network (TVDBN) model for the analysis of non-stationary sequences which are of interest in many fields. The changing network structure and parameter in TVDBN are treated as random processes whose values at each time epoch determine a stationary DBN model; this DBN model is then used to specify the distribution of the data sequence at that time epoch. Under such a hierarchical formulation, the changing state of the network can be incorporated into the Bayesian framework straightforwardly. The network state is assumed to transit smoothly in the joint space of numerical parameter and graphical topology, so that we can achieve robust online network learning even without abundant observations. Particle filtering is employed to dynamically update the current network state as well as infer hidden data values. We implement our time varying model for data sequences of multinomial and Gaussian distributions, while the general model framework can be used for any other distribution. Simulations on synthetic data and evaluations on video sequences both demonstrate that the proposed TVDBN is effective in modeling non-stationary sequences. Comprehensive comparisons have been made against existing non-stationary models, and our proposed model is shown to be the top performer.

Index Terms—Bayesian networks, time varying, particle filters, event recognition.

I. INTRODUCTION

MODELING the evolution of temporal sequences is of great interest in many areas such as signal processing, automation, finance, computational biology, etc. Among the numerous tools designed for the analysis of temporal sequences, Dynamic Bayesian Networks (DBNs) [1] have been the most successful ones.
A DBN is the extension of a Bayesian Network (BN) to the temporal domain, in which conditional dependencies are modeled between random variables both within and across time slots. The conditional distributions are assumed to be homogeneous in a DBN; that is, the structure and parameter of the DBN are fixed throughout time. Under this assumption, a DBN is effectively constructed by unrolling a BN along the time axis, and the model learning procedure can be greatly simplified. However, this bold assumption limits the power of the DBN in modeling many non-stationary sequences, where the intrinsic relationships among variables change from time to time. Such non-stationary sequences may arise in all aspects of our life. Some examples include: the steering pattern of a vehicle under different road conditions; the appearance of an object across multiple cameras; the gene interactions in different stages of a life cycle; and the stock prices in different economic periods. A fixed statistical model is obviously inadequate to model these data sequences at all time instances. Incorporating the temporal variation of network structure and parameter into the DBN is a natural way to handle non-stationary sequences. However, learning and inferring such a time varying network is a non-trivial job.

Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. Z. Wang and T. S. Huang are with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, 61801 USA (e-mail: wang38@illinois.edu; t-huang1@illinois.edu). E. E. Kuruoğlu is with the Istituto di Scienza e Tecnologie dell'Informazione (ISTI), CNR, Pisa, Italy (e-mail: ercan.kuruoglu@isti.cnr.it). X. Yang and Y. Xu are with the Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai, 200240 China (e-mail: xkyang@sjtu.edu.cn; xuyi@sjtu.edu.cn).
The naive approach of learning a DBN independently for each time instance is not feasible, since in most applications too few observations can be obtained from just one time instance. One way to ameliorate this data scarcity problem is to pre-segment temporal sequences into stationary epochs, in each of which data are generated from the same probability distribution. But the segmentation itself is hard, due to the lack of knowledge of the models in each epoch and the large solution space growing exponentially with the length of the sequence. On the other hand, our observation data are often corrupted with noise, and there will inevitably be some perturbations associated with the statistics derived from them. The noise perturbation can be indistinguishable from the true variation of the underlying data distribution, particularly when the distribution is changing gradually. Furthermore, real time processing is a crucial constraint in some applications such as stock price prediction and video surveillance. The demand for adapting the DBN model to new data online poses even greater challenges.

A. Previous Work

Faced with all the difficulties mentioned above, researchers have been trying to extend DBNs to non-stationary scenarios by imposing various conditions on the form of the network and the way it can change. Earlier works have mainly focused on non-stationary models with fixed structure. Among them the most studied one is the time varying autoregression (TVAR) model [2], which describes non-stationary linear dynamic systems with continuously changing linear coefficients and

noise variances. The normalized least squares algorithm can be used to estimate the regression parameters recursively, and the estimation error is shown to be bounded when the parameters change smoothly [3]. Due to its well-established theory, the TVAR model has found wide application in research on the equity market [4], gene expression [5], and electroencephalogram (EEG) traces [6]. Extensions of TVAR have also been made for other time varying processes such as non-Gaussian autoregression [7] and the Poisson counting process [8]. Another class of non-stationary model that has been extensively studied is the switching linear dynamic system (SLDS), in which a latent Markov chain is employed to describe the piecewise change of a linear system. Time varying observation matrices are formulated in [9]; and time varying dynamic transition matrices are formulated in [10], [11]. An SLDS with arbitrary state duration distribution is proposed in [12]. An SLDS with an unknown number of stationary modes is proposed in [13] by using a hierarchical Dirichlet process prior. There exist many learning algorithms for SLDS; besides the commonly used EM method, variational approximation [9], the Grassmann manifold [14], and the auxiliary particle filter [15] have also been applied for efficient and/or online learning. The TVAR and SLDS models only consider non-stationary parameter change in dynamic systems. Recently, graphical models that change dynamically in both parameter and structure have received more attention. With the assumption that the data sequence is piecewise stationary in time, non-stationary models are constructed as a cascade of stationary models, each of which is learnt from a pre-segmented stationary sub-interval. A number of methods have been proposed to find these sub-intervals. In [16], a Gaussian graphical model is used to represent the dependency among variables, and then the exact posterior distribution of the switching times between stationary intervals is evaluated with the method of [17].
A curve manifold is employed to express time varying sequences in [18], and segmentation is carried out according to the geometric structure of the manifold. Markov chain Monte Carlo (MCMC) sampling methods are used in [19], [20], which search for MAP stationary sub-intervals through iterative local movements on the network configuration. When the dimension of the model parameter is not fixed, reversible jump MCMC can be utilized [21]. A change detection technique is used to monitor the fitness between incoming data and the current network [22], and local adaptation is applied to the network when a large discrepancy is detected. In [23], network changes are represented by a hidden controller, whose optimal value is searched for with a random hill climbing algorithm. Since piecewise stationary models are still not general enough for all applications, several more sophisticated non-stationary models have been developed that take continuously changing network parameters into account. In [24], the structure of a binary network is treated as a hidden state, and its dynamic transition follows an exponential random graph process. Constraints of temporal smoothness and structural sparsity are imposed on time varying linear regression networks [25]–[27], in which the linear coefficients are jointly optimized by minimizing lasso objective functions. In [28], the authors propose to model the dynamics of a mixed membership vector using a logistic normal distribution whose hyper-parameters can evolve over time according to a linear Gaussian model. Unfortunately, all these algorithms can only be used for off-line network learning, and cannot adapt to new data sequences on the fly.

B. Proposed Method

In this paper, we propose a novel Time Varying Dynamic Bayesian Network (TVDBN) model for online inference of the underlying distribution of non-stationary sequences. We extend the basic DBN model so that both the structure and parameter of the network become random variables that can change through time.
These random variables are treated as additional hidden nodes in our graph model, and a smooth transition prior is imposed on their temporal variation to ease the problem of data scarcity. This novel representation of the changing network allows a unified modeling of both the data and the network itself under the same dynamic Bayesian framework. In contrast to most off-line learning methods reported in the literature, we employ a particle filter to dynamically infer the hidden states of the network as well as missing data, if there are any. This key feature enables the application of our model in situations with real time constraints. The framework of our time varying model is general enough for data sequences of any distribution type; in this work, multinomial and Gaussian distributions are studied in particular. The effectiveness of the proposed TVDBN model is validated on both simulated non-stationary data and video sequences, with preliminary applications to tracking and event recognition. It is worth noting the differences between our model and those closely related to it. A hidden variable is used to represent the change of network in [23], but it only serves as an auxiliary variable to facilitate implementation. In our model, the structure and parameter nodes are indispensable components of the whole graph model, and they represent the statistical attributes of the current network. The online adaptation methods in [15], [22] can only model piecewise constant variation of the network, whereas our method deals with both continuous change in parameter and discrete switches in structure. Smooth change of the network is ensured in [25] with a kernel window applied to the data sequences; we achieve a similar goal via a smooth transition model, which is more favorable from a Bayesian perspective. In the Gaussian graphical model [16], the network parameter is marginalized out and the network structure is the only thing to investigate. Under our setting, the states of both structure and parameter are inferred to give a full description of the current network. The remainder of this article is organized as follows.
First, we introduce the overall framework of the TVDBN model in Section II. The transition distribution of the network is detailed in Section III, in the context of two popular data distributions: multinomial and Gaussian. Section IV shows the procedure of inferring hidden variables in TVDBN using the particle filter algorithm. A set of experimental results are reported in Section V. Finally, in Section VI, we conclude the paper and discuss the potential extension to large scale networks.

II. TIME VARYING DYNAMIC BAYESIAN NETWORK

A. Dynamic Bayesian Network

A Bayesian Network (BN) [29] is a graph model describing the statistical relationships among a group of n random variables X = {X_i}_{i=1...n}. A BN is determined by its graph structure G and distribution parameter Θ. G is a directed acyclic graph with n nodes corresponding to the random variables X. An edge in G directed from node i to j encodes X_j's conditional dependence on X_i, and X_i is called a parent of X_j. A variable X_i is independent of its non-descendants given all its parents Pa(X_i) in G. Therefore, the joint probability distribution over X can be decomposed by the chain rule:

p(X) = \prod_{i=1}^{n} p(X_i | Pa(X_i))    (1)

The parameter set Θ = {Θ_i}_{i=1...n} specifies the parameters of each conditional distribution p(X_i | Pa(X_i)) in Eq. (1). The meaning of Θ_i is interpreted according to the specific form of the distribution. When the distribution is multinomial, Θ_i is simply a conditional probability table; when the distribution is Gaussian, Θ_i may contain the values of the mean and variance. A Dynamic Bayesian Network (DBN) [1], [30] is the extension of a BN to model temporal processes. In a DBN, a set of random processes are represented by X = {X_i}_{i=1...n}, and X_i[t] is the random variable of process X_i at discrete time t. The network structure G now defines the dependency among variables over a period of time as well as those within the same time epoch. Usually, the processes X are assumed to be Markovian and causal, so a node in graph G is only allowed to be linked from other nodes in the same or the previous epoch, i.e., Pa(X_i[t]) ⊆ {X[t−1], X[t]}. Also, all the conditional distributions are assumed to be stationary. Thus we have:

p(X[t+1] | X[0:t]) = p(X[t+1] | X[t]),  t = 0, 1, 2, ...    (2)

The model parameter Θ is defined similarly as in a BN. Θ and G are kept constant over time under the stationarity assumption.

B.
Time Varying Network Representation

The stationarity assumption has been favored for its simplicity in most applications of DBNs so far. However, this assumption is not always valid in real life. For example, a driver will change his driving style according to different traffic conditions, and a constant velocity dynamic model is apparently incapable of describing the movement of the vehicle. Such time varying characteristics are crucial to a better understanding of temporal events. To this end, we introduce a Time Varying Dynamic Bayesian Network (TVDBN) model whose network structure and parameter can vary as the underlying distribution of the temporal sequence changes. In our time varying formulation, the network structure and parameter are modeled as random processes whose values at time t are denoted as G[t] and Θ[t], respectively. These random variables, regarded as structure nodes and parameter nodes in our model, are used to construct a graph together with the data nodes X[t]. Since there is no way to observe the structure nodes and the parameter nodes directly, these nodes are hidden in nature. We can only infer them from the observations available on the data nodes. At each time epoch, we have n parameter nodes: Θ[t] = {Θ_i[t]}_{i=1...n}. Each Θ_i[t] is linked to node X_i[t] and determines its conditional distribution jointly with the other data nodes linked to it. Therefore, in TVDBN, the parents of a data node X_i[t] are defined as:

Pa(X_i[t]) = { \widetilde{Pa}(X_i[t]), Θ_i[t] }    (3)

where \widetilde{Pa}(X), called X's non-parameter parents, is the set of data nodes that are linked to X. The linkage between data nodes, or equivalently their probabilistic dependency, is determined by the network structure G[t]. Here G[t] is represented as a set of directed edges:

G[t] ⊆ { e_{ji}, e'_{ji} | i, j = 1...n }    (4)

where e_{ji} stands for an edge pointing from X_j[t] to X_i[t], and e'_{ji} stands for an edge pointing from X_j[t−1] to X_i[t]. We only consider the nodes at time t−1 and t as X_i[t]'s potential parents, as required by the Markovian property^1.
To ensure G[t] is an acyclic graph, it is also required that there is no sequence i_1, ..., i_k such that e_{i_j i_{j+1}} ∈ G[t] for 1 ≤ j < k and e_{i_k i_1} ∈ G[t]. Given G[t], we can retrieve X_i[t]'s non-parameter parents as:

\widetilde{Pa}(X_i[t]) = { X_j[t] | e_{ji} ∈ G[t] } ∪ { X_j[t−1] | e'_{ji} ∈ G[t] }    (5)

It should be noted that the dependency of X_i[t] on Θ_i[t] is fixed and will not be eliminated over time. To model the changing processes of Θ[t] and G[t] in TVDBN, we adopt a first order Markov model p(Θ[t+1], G[t+1] | Θ[t], G[t]) as their joint transition distribution. Assuming that, given the structure G[t+1], each parameter Θ_i[t+1] transits independently, we have

p(Θ[t+1], G[t+1] | Θ[t], G[t]) = p(G[t+1] | G[t]) p(Θ[t+1] | Θ[t], G[t+1]) = p(G[t+1] | G[t]) \prod_{i=1}^{n} p(Θ_i[t+1] | Θ_i[t], G[t+1])    (6)

The dependency of Θ[t+1] on G[t+1] is due to the fact that the dimension of the distribution parameter is affected by the network structure. An example of a TVDBN with changing parameter and structure is illustrated graphically in Fig. 1. All the black edges represent the probabilistic dependency among nodes, which play the same role as the edges in a BN or DBN. The green and blue edges in Fig. 1 have essentially different meanings. A green edge links a structure node G[t] to the slice of network enclosed in a corresponding green rectangle, which contains all the edges connected towards X[t] from other data nodes of time t−1 and t. The presence and absence of any of these edges, or equivalently all the \widetilde{Pa}(X_i[t])'s, are

^1 The Markovian restriction is imposed here for convenience in discussion. In a more general situation, any node at a time earlier than t can be a parent of X_j[t].
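The acyclicity condition above can be checked mechanically. The sketch below is our own illustration (not from the paper; function and variable names are made up): it runs Kahn's topological-sort algorithm on the intra-slice edges e_{ji}, which are the only part of G[t] that can form a cycle, since inter-slice edges e'_{ji} always point forward in time.

```python
# Acyclicity check for a candidate structure G[t] restricted to its
# intra-slice edges; inter-slice edges cannot create cycles.
from collections import defaultdict, deque

def is_acyclic(n, intra_edges):
    """intra_edges: iterable of (j, i) meaning X_j[t] -> X_i[t], nodes 0..n-1."""
    indeg = [0] * n
    succ = defaultdict(list)
    for j, i in intra_edges:
        succ[j].append(i)
        indeg[i] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # every node was removable iff no directed cycle exists
    return seen == n

print(is_acyclic(2, [(0, 1)]))          # True
print(is_acyclic(2, [(0, 1), (1, 0)]))  # False: X_1[t] <-> X_2[t] is a cycle
```

In a sampler over structures, such a check would reject candidate graphs that violate the constraint before they enter the state space.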

Fig. 1. Graphical representation of a TVDBN over time t = 1, 2, 3, 4. At each time epoch, there are two data nodes (above the dashed line), two parameter nodes, and one structure node (below the dashed line). The black edges indicate the probabilistic dependency among nodes. The green edges indicate that the network topologies inside the green rectangles are determined by the structure nodes. The blue edges indicate that the dimensions of the parameter nodes are determined by the structure nodes.

determined by the status of node G[t]. A blue edge links a structure node G[t] to a parameter node Θ_i[t] of the same time epoch, and indicates that the dimension of the parameter Θ_i[t] is determined by the status of G[t]. The dependency encoded in a green or blue edge is deterministic, as opposed to the probabilistic dependency encoded in a black edge. Therefore, our TVDBN model is different from the traditional DBN model, and a new method should be designed for its online inference. Before addressing that in Section IV, we will first discuss the transition models for the structure and parameter in the time varying network.

III. NETWORK TRANSITION DISTRIBUTION

The transition distribution in Eq. (6) plays a crucial role in our time varying model. The first term on the right side of Eq. (6), p(G[t+1] | G[t]), characterizes the dynamic transition of the network structure. Dynamic graph processes have been studied extensively in random graph theory. Some rules have been formulated to model the typical evolution of graphs over time, such as preferential attachment [31], copying [32], and preferential deletion [33]. However, most dynamic graph processes are devoted to modeling social and web-like networks, which usually feature different behaviors than Bayesian networks. Therefore, the specialized rules for random graphs are not applicable to our TVDBN model.
Transition kernels for Bayesian networks are proposed in [34] that make local modifications to the network structure in an iterative manner. Such a transition kernel is sufficient for off-line learning of a stationary network, but is not flexible enough to model a dynamically changing network. Here we propose to model the sequence of network structures {G[t]} as a Markov chain, whose state space 𝒢 = {G_i} consists of all valid directed acyclic graphs that connect data nodes X[t] across adjacent time epochs or within the same time epoch. When the number of data nodes n is fixed and moderately large, using a Markov chain model is a reasonable approach because the finite set 𝒢 can be enumerated. Given the current network structure G_i, the probability of transiting to structure G_j in the next epoch is set to be:

p(G[t+1] = G_j | G[t] = G_i) ∝ exp( −λ_1 |G_j| − λ_2 |G_j − G_i| )    (7)

where |G_j| denotes the number of edges in graph G_j, and |G_j − G_i| denotes the number of edges that have been changed (added or deleted) from G_i to G_j. λ_1 and λ_2 are parameters controlling the relative importance of the two terms. The transition model of G[t] is designed in such a way as to favor sparse network structure and smooth network transition. Sparseness requires the time varying model to adapt to the data distribution with a network structure as simple as possible. In this way, the problem of model over-fitting can be avoided. This idea is inspired by the Bayesian Information Criterion (BIC) score [35], which penalizes complex models according to the number of model parameters. The smoothness restriction is introduced to deal with data scarcity. Typically, the data observed in one time epoch alone are far from enough for us to learn the unknown network structure. Good prior knowledge of how the network changes will enable us to estimate its structure using both current and previous data observations. Therefore, we impose the restriction that the network can only change smoothly over time; i.e., the total number of edges changed from one time epoch to the next is expected to be small. This smoothness assumption is valid for many data sequences in real life.
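As a concrete illustration of Eq. (7), the following sketch (ours, not from the paper; the λ values and edge labels are arbitrary) stores each structure as a frozenset of edge labels, so that |G_j| is the set size and |G_j − G_i| is the size of the symmetric difference, and normalizes over a candidate set.

```python
import math

def structure_transition_probs(candidates, G_cur, lam1=0.5, lam2=1.0):
    """Normalized Eq. (7): p(G[t+1]=G_j | G[t]=G_i)
    ∝ exp(-λ1·|G_j| - λ2·|G_j Δ G_i|), edges stored as frozensets."""
    weights = [math.exp(-lam1 * len(Gj) - lam2 * len(Gj ^ G_cur))
               for Gj in candidates]
    Z = sum(weights)
    return [w / Z for w in weights]

# Hypothetical structures over edge labels such as "e12" (X_1[t] -> X_2[t]).
G_cur = frozenset({"e12"})
candidates = [frozenset(),                  # delete the edge
              frozenset({"e12"}),           # keep the structure unchanged
              frozenset({"e12", "e21'"})]   # add an inter-slice edge
probs = structure_transition_probs(candidates, G_cur)
print(probs)  # staying at G_cur gets the largest probability
```

With these λ values the unchanged structure dominates, reflecting the sparseness and smoothness preferences described above.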
For example, in video applications we have a much higher signal sampling rate (frame rate) than event occurrence rate (human behavior), so the underlying data distribution can be thought of as changing slowly and smoothly. Fig. 2 illustrates the structure transition model of a TVDBN with two data nodes. To save space, we have only plotted part of all possible network topologies. The arrows indicate the most likely transitions between different structures, which are realized by adding or removing a single edge in the network. The second term on the right side of Eq. (6) is the product of the transition probabilities of all network parameters. Under the same smoothness assumption as above, we require that the parameter transition distribution p(Θ_i[t+1] | Θ_i[t], G[t+1]) should have a large mass in the neighborhood around Θ_i[t]. In this way, we can learn the parameters of the network robustly even in the absence of abundant observation data. On the other hand, the transition of Θ_i[t] also depends on how it parameterizes the distribution of data node X_i[t]. We should customize the form of p(Θ_i[t+1] | Θ_i[t], G[t+1]) for different data distributions. In the following, we shall discuss this point in the context of two particular cases where the conditional distribution of X_i[t] is multinomial/Gaussian. Data distributions of other types can also be incorporated in the framework of our TVDBN model as long as suitable parameter transition models are devised.

A. Time Varying Multinomial DBN

When the variables X_i[t] are discrete, the simplest way to represent the conditional dependency among them is using the multinomial distribution. The parameter Θ_i[t] of the multinomial distribution is a conditional probability table containing a collection of probability vectors {Θ_ij[t]}, where Θ_ij[t] corresponds to the j-th configuration of X_i[t]'s non-parameter

Fig. 2. Some possible structures of a TVDBN with two data nodes. Indices are shown for each structure. The most likely transition paths between them are indicated by grey arrows.

parents \widetilde{Pa}^j(X_i[t]). The vector Θ_ij[t] contains the probabilities for all possible values of X_i[t] given that parent configuration:

p(X_i^k[t] | \widetilde{Pa}^j(X_i[t]), Θ_i[t]) = θ_ijk[t]    (8)

where X_i^k[t] is the k-th possible value of X_i[t], and θ_ijk[t] is the k-th element in Θ_ij[t]. The probability vectors are assumed to propagate independently, so that the transition distribution of Θ_i[t] can be decomposed as:

p(Θ_i[t+1] | Θ_i[t], G[t+1]) = \prod_{j=1}^{n_i[t+1]} p(Θ_ij[t+1] | Θ_i[t], G[t+1])    (9)

where n_i[t+1] is the number of possible configurations of \widetilde{Pa}(X_i[t+1]) given the current network structure G[t+1]. The temporal evolution of each probability vector is further modeled by a hierarchical Dirichlet distribution [36]:

p(Θ_ij[t+1] | Θ_i[t], G[t+1]) = Dir(Θ_ij[t+1]; α Θ̄_ij) ∝ \prod_k θ_ijk[t+1]^{α θ̄_ijk − 1}    (10)

where Dir(·;·) denotes the Dirichlet distribution, Θ̄_ij is the distribution center to be derived from Θ_i[t], and α is a smoothing coefficient. The Dirichlet distribution is well suited for modeling the density of probability vectors, since its pdf is supported on the probability simplex {Θ_ij[t+1] | θ_ijk[t+1] ≥ 0, Σ_k θ_ijk[t+1] = 1}. Moreover, the hierarchical structure can restrict the variation of Θ_ij[t+1] to the space around Θ̄_ij, a mechanism that keeps the network parameter changing smoothly over time. The smoothness can be easily adjusted via the coefficient α. When the new network structure G[t+1] does not change from its previous value G[t], or, more precisely, X_i[t+1] shares the same set of parent nodes as X_i[t] (with a shift in t), the Θ̄_ij in Eq. (10) is simply chosen to be the previous probability vector that corresponds to the same parent configuration: Θ̄_ij = Θ_ij[t].
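Sampling from the transition in Eq. (10) needs only a standard Dirichlet generator. The sketch below is our own illustration (the α values and probability vector are arbitrary) of how the smoothing coefficient controls how far Θ_ij[t+1] can drift from the previous vector.

```python
import numpy as np

def step_prob_vector(theta_prev, alpha, rng):
    """Draw Θ_ij[t+1] ~ Dir(α·Θ̄_ij) with center Θ̄_ij = Θ_ij[t], as in Eq. (10)."""
    return rng.dirichlet(alpha * np.asarray(theta_prev))

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.3, 0.2])
smooth = step_prob_vector(theta, alpha=500.0, rng=rng)  # concentrated near theta
rough = step_prob_vector(theta, alpha=5.0, rng=rng)     # allowed to wander
print(smooth, rough)
```

A large α keeps consecutive parameter vectors close (smooth non-stationarity), while a small α permits abrupt moves; either way each draw stays on the probability simplex by construction.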
However, if G[t+1] changes from G[t] with some edges added and/or deleted, further elaboration is required in the design of Θ̄_ij. Consider the node X_i[t] and its non-parameter parents \widetilde{Pa}(X_i[t]). Suppose the network structure G[t+1] changes from G[t] such that \widetilde{Pa}(X_i[t+1]) becomes different from \widetilde{Pa}(X_i[t]), with an addition of nodes Y and a deletion of nodes Z. More formally, let us denote

\widetilde{Pa}(X_i[t]) = \widetilde{Pa}(X_i[t, t+1]) ∪ Z[t]    (11)
\widetilde{Pa}(X_i[t+1]) = \widetilde{Pa}(X_i[t+1, t]) ∪ Y[t+1]    (12)

where \widetilde{Pa}(X_i[t, t+1]) and \widetilde{Pa}(X_i[t+1, t]) are the non-parameter parents shared by X_i[t] and X_i[t+1] (with a shift in time) at t and t+1, respectively. Under the assumption of smooth model change, the conditional distribution of X_i[t+1] can be approximated as follows:

p(X_i[t+1] | \widetilde{Pa}(X_i[t+1])) ≈ p(X_i[t+1] | \widetilde{Pa}(X_i[t+1, t])) ≈ p(X_i[t] | \widetilde{Pa}(X_i[t, t+1])) = Σ_{Z[t]} p(X_i[t] | \widetilde{Pa}(X_i[t, t+1]), Z[t]) p(Z[t] | \widetilde{Pa}(X_i[t, t+1])) ≈ Σ_{Z[t]} p(X_i[t] | \widetilde{Pa}(X_i[t])) p(Z[t])    (13)

We have assumed weak dependency of X_i[t+1] on the newly added parents Y[t+1]; and the conditional probability p(Z[t] | \widetilde{Pa}(X_i[t, t+1])) is substituted by p(Z[t]) for evaluation convenience. The error of these approximations can be tolerated if an appropriate value is chosen for α in Eq. (10). Thus, a good choice of Θ̄_ij can be:

Θ̄_ij = Σ_k p(Z^k[t]) Θ_{i l(j,k)}[t],  s.t. \widetilde{Pa}^{l(j,k)}(X_i[t]) = [ \widetilde{Pa}^j(X_i[t+1, t]), Z^k[t] ]    (14)

where \widetilde{Pa}^j(X_i[t+1, t]) is equal to the sub-vector of \widetilde{Pa}^j(X_i[t+1]) with the elements of Y[t+1] truncated.

B. Time Varying Gaussian DBN

When the data nodes X[t] represent continuous variables, a linear Gaussian system is often used to describe their joint dynamic transition:

X[t] = F[t] X[t−1] + v[t]    (15)

where F[t] is an n × n matrix with elements {f_ij[t]}, and v[t] is Gaussian noise sampled from N(v[t]; 0, Σ[t]). Since X[t] are jointly Gaussian, they can be equivalently represented by a

Gaussian belief network [37], and the conditional distribution of a single node X_i[t] can be expressed as:

p(X_i[t] | \widetilde{Pa}(X_i[t]), Θ_i[t]) = N( X_i[t]; m_i[t] + Σ_{X_j[t] ∈ \widetilde{Pa}(X_i[t])} b_ij[t] (X_j[t] − m_j[t]), v_i[t] )    (16)

where m_i[t] = Σ_j f_ij[t] X_j[t−1] is the unconditional mean of X_i[t]; v_i[t] is the conditional variance of X_i[t] given the values of the parent nodes \widetilde{Pa}(X_i[t]). The \widetilde{Pa}(X_i[t]) are specified by the network structure G[t] for all i; and b_ij[t] is a linear coefficient measuring the strength of the dependency of X_i[t] on X_j[t] ∈ \widetilde{Pa}(X_i[t]). The parameter set Θ_i[t] for node X_i[t] in this time varying Gaussian DBN is defined as:

Θ_i[t] = { {f_ij[t]}, v_i[t], {b_ij[t]} }    (17)

Obviously, we have v_i[t] > 0 for all i, and b_ij[t] = 0 for X_j[t] ∉ \widetilde{Pa}(X_i[t]). As will be seen below in the definition of the parameter transition distribution, it is convenient to adopt the form of the parameters as in Eq. (17). The transformation from { {v_i[t]}, {b_ij[t]} } to Σ[t] can be accomplished using the procedure described in [38]. We assume that all the parameters in Θ_i[t] propagate independently:

p(Θ_i[t+1] | Θ_i[t], G[t+1]) = p(v_i[t+1] | v_i[t]) \prod_j p(f_ij[t+1] | f_ij[t], G[t+1]) p(b_ij[t+1] | b_ij[t], G[t+1])    (18)

Also, Eq. (16) can be interpreted as a linear regression of X_i[t] with coefficients f_ij[t], b_ij[t], and variance v_i[t]. The time varying autoregression model [2] offers a convenient way to model the transition of these time varying parameters. According to [2], the variation of the linear coefficients can be modeled by random walk processes:

p(f_ij[t+1] | f_ij[t], G[t+1]) = N( f_ij[t+1]; f_ij[t], σ_f^2 )    (19)
p(b_ij[t+1] | b_ij[t], G[t+1]) = N( b_ij[t+1]; b_ij[t], σ_b^2 ), if e_{ji} ∈ G[t+1]    (20)

where σ_f and σ_b are standard deviations. The coefficients are set to zero when the corresponding edges are missing in G[t+1].
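The coefficient updates in Eqs. (19) and (20) amount to adding independent Gaussian increments and zeroing entries whose edges are absent; a minimal sketch of ours (matrix shapes, the edge mask, and σ_b are illustrative):

```python
import numpy as np

def step_coefficients(b_prev, edge_mask, sigma_b, rng):
    """Random-walk update of the coefficients {b_ij[t]} per Eq. (20):
    Gaussian perturbation for entries whose edge is present in G[t+1],
    forced to zero otherwise."""
    b_next = b_prev + rng.normal(0.0, sigma_b, size=b_prev.shape)
    b_next[~edge_mask] = 0.0  # edges missing in G[t+1] imply b_ij[t+1] = 0
    return b_next

rng = np.random.default_rng(1)
b = np.array([[0.0, 0.8],
              [0.0, 0.0]])
mask = np.array([[False, True],
                 [False, False]])  # a single edge present (illustrative)
print(step_coefficients(b, mask, sigma_b=0.05, rng=rng))
```

The same pattern applies to the {f_ij[t]} of Eq. (19); only the mask and standard deviation differ.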
To model the change of the non-negative variance v_i[t], we employ a multiplicative random walk process as in [2]:

p(v_i[t+1] | v_i[t]) = Beta( (v_i[t] / v_i[t+1]) d; a, b )    (21)

where a and b are the parameters of the Beta distribution, and d is a discount factor.

IV. SEQUENTIAL MONTE CARLO INFERENCE OF TVDBN

After constructing the time varying dynamic Bayesian network model and its transition distribution, we are ready to do online inference of the unknown variables in the network. The unknown variables of interest at each time t include unobserved data^2, network structure, and network parameter. We combine them as a hidden state s_t:

s_t = [X[t], G[t], Θ[t]]    (22)

The dynamic transition distribution of the state s_t can be decomposed into the product of the transition distributions defined in Section III:

p(s_t | s_{t−1}) = p(G[t] | G[t−1]) p(Θ[t] | Θ[t−1], G[t]) p(X[t] | X[t−1], Θ[t], G[t])    (23)

At each time epoch, we acquire an observation o_t, which measures part or all of the data nodes. The relationship between observation and state is governed by an observation distribution p(o_t | s_t). Given the initial state s_0, we are going to recursively estimate the current state posterior p(s_t | o_{1:t}) using the observations up to now. The dynamic evolution of s_t is very complicated, because s_t may contain both discrete and continuous variables, and even its dimensionality is not fixed (due to the varying dimension of Θ[t]). Therefore, it is generally impossible to get a closed-form solution for the posterior distribution of s_t. We can only resort to some numerical method such as particle filtering to find an approximate solution. Particle filtering (or sequential Monte Carlo) [39], [40] is employed here to estimate the state posterior because of its capability to handle arbitrary system and observation models. In a particle filter, the posterior distribution is approximated by a finite set of N_s state samples (particles) {s_t^i} and the associated weights {w_t^i}:

p(s_t | o_{1:t}) ≈ Σ_{i=1}^{N_s} w_t^i δ(s_t − s_t^i)    (24)

When N_s approaches infinity, the approximation can be arbitrarily close to the true distribution.
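The sequential Monte Carlo recursion behind Eq. (24) can be sketched generically. The code below is our own toy illustration, not the paper's implementation: it uses the transition distribution as the proposal, a scalar random-walk state instead of the full [X, G, Θ] state, and effective-sample-size-triggered multinomial resampling.

```python
import numpy as np

def pf_step(particles, weights, transition, likelihood, rng, ess_frac=0.5):
    """One predict-weight-resample cycle of a particle filter: propagate
    each particle through the transition proposal, reweight by the
    observation likelihood, and resample when the effective sample size
    drops below ess_frac * N_s."""
    particles = np.array([transition(p, rng) for p in particles])
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    ess = 1.0 / np.sum(weights ** 2)
    if ess < ess_frac * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy model: random-walk state observed in unit-variance Gaussian noise.
rng = np.random.default_rng(0)
N = 200
parts = rng.normal(0.0, 1.0, N)
wts = np.full(N, 1.0 / N)
obs = 1.5
parts, wts = pf_step(
    parts, wts,
    transition=lambda x, r: x + r.normal(0.0, 0.3),
    likelihood=lambda x: np.exp(-0.5 * (obs - x) ** 2),
    rng=rng,
)
print(float(np.sum(wts * parts)))  # posterior mean pulled toward the observation
```

In the TVDBN setting, each particle would instead carry a full hypothesis (data values, structure, and parameters), and the transition would be the product form of Eq. (23).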
At each time epoch, we start the filtering with the sample set {s_{t−1}^i, w_{t−1}^i}_{i=1...N_s} of the previous time. New samples are first drawn from a proposal distribution:

s_t^i ∼ q(s_t | s_{t−1}^i, o_t)    (25)

The proposal distribution q(·) has its support over the whole state space of s_t. In this way, different network structures and network parameters of different dimensions can be explored within the same framework as hidden data are inferred. This trans-dimensional proposal density is similar in spirit to the transition kernel of RJMCMC [34]; the difference is that we have used the idea to learn a distribution in a dynamic system. After obtaining the new samples {s_t^i}, we measure the likelihood of each of them with the observation model, and then update their weights according to

w_t^i ∝ w_{t−1}^i p(o_t | s_t^i) p(s_t^i | s_{t−1}^i) / q(s_t^i | s_{t−1}^i, o_t)    (26)

Finally, a resampling step will be taken if the variance of the particle weights is too large [40]. The resultant sample set

^2 Here we denote unobserved data using X[t] abusively, even if some data nodes are observable.

TABLE I
NETWORK STRUCTURE AND PARAMETERS OF THE SYNTHETIC TIME VARYING DBN.
t ∈ [1, 50]: G[t] = 6;  t ∈ [51, 100]: G[t] = 4;  t ∈ [101, 150]: G[t] = 15. (The probability-table entries for Θ_1[t] and Θ_2[t] are not legible in this copy.)

{s_t^i, w_t^i}_{i=1...N_s} can give posterior estimates of the current data and network. Choosing a good proposal density q(s_t | s_{t−1}, o_t) is critical to the performance of the particle filter. The state transition distribution p(s_t | s_{t−1}) is a popular choice for the proposal density, and it reduces the weight updating procedure in Eq. (26) to a simple accumulation of likelihoods. We will use this distribution as the proposal density in most of our experiments. However, when the dimension of the state space is high, most of the samples drawn from the transition distribution will have very small weights, a problem known as degeneracy. In such a situation, we should integrate the knowledge of o_t into the proposal density in order to increase the sampling efficiency. An example of how to do this for the multinomial data distribution will be shown in Section V-A.

V. EXPERIMENTAL RESULTS

In this section, several experiments are conducted to validate the effectiveness of the proposed TVDBN model on multinomial and Gaussian data distributions. Our test data come from both simulation and video sequences. In all the experiments we have achieved online model adaptation, which is a key advantage over most existing algorithms. To ensure fair comparison, we apply all the off-line models in our experiments ([16], [23], [25]) in a pseudo online manner; i.e., they are re-estimated at each epoch from scratch using the observations up to the current time.

A. Simulation

We first consider a two-node TVDBN model as shown in Fig. 2. Both nodes are discrete and observable, and their conditional distributions are multinomial. X_1[t] has 2 states; X_2[t] has 3 states. The parameters Θ_1[t] and Θ_2[t] are defined as the conditional probability tables for the two nodes, which are represented by matrices with 2 and 3 columns, respectively. The network structure G[t] is indexed by a positive integer.
There are a total of 21 structural variations, some of which are plotted in Fig. 2. Our test data are generated by a synthetic time varying DBN over 150 time epochs. The network shows three different structures during the whole period, and the corresponding parameters either change linearly or stay piecewise constant. A full description of this time varying network is given in Table I. We draw sample sequences of X[t] from the network and use them as observations. We use 500 particles to estimate the posterior distribution of the network structure and parameters. The initial state of the network is assumed to be known. Since all the data nodes are observable, the hidden state in this experiment is just s_t = [G[t], Θ[t]]. The dimension of the multinomial parameter Θ_i[t] is usually high, so we try to improve the sampling efficiency of the particle filter with a sub-optimal proposal density:

  q(s_t | s_{t-1}, o_t) = p(G[t] | G[t-1]) q(Θ[t] | o_t, Θ[t-1], G[t])
                        = p(G[t] | G[t-1]) ∏_{i=1}^{n} ∏_{j=1}^{n_i[t]} q(Θ_ij[t] | o_t, Θ_i[t-1], G[t])    (27)

where

  q(Θ_ij[t] | o_t, Θ_i[t-1], G[t]) = Dir(Θ_ij[t]; α Θ_ij[t-1] + ss_ij(o_t))    (28)

The only difference between this proposal density and the state transition distribution is the introduction of the sufficient statistics ss_ij(o_t), which count the instances of X_i[t] | Pa_j(X_i[t]) observed from o_t. With this additional term, the proposal density becomes closer to its optimal form p(s_t | s_{t-1}, o_t) [39], and at the same time remains simple to manipulate. The MAP estimate of the particle filtering result at each time step is taken as the estimated network state Ĝ[t] and Θ̂[t].

1) Continuous Non-stationary Change: We first examine the performance of TVDBN in estimating parameter Θ_1[t], which changes smoothly over time. It is shown in Fig. 3(a) that the estimated Θ̂_1[t] follows the ground truth values closely. To get a principled measurement of how well the estimated network parameter matches the true underlying conditional distribution of node X_1[t], the Kullback-Leibler (KL) divergence between them is evaluated at each time instance and plotted in Fig. 4(a).
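As a concrete illustration of the data-informed proposal in Eqs. (27)-(28), the sketch below samples one row of a conditional probability table from a Dirichlet centered on the previous parameters plus the observed counts. The function name, the concentration value alpha, and the way counts are supplied are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_cpt_row(prev_row, counts, alpha=10.0):
    """Sample Theta_ij[t] ~ Dir(alpha * Theta_ij[t-1] + ss_ij(o_t)), cf. Eq. (28).

    prev_row : previous CPT row Theta_ij[t-1] (probabilities summing to 1)
    counts   : sufficient statistics ss_ij(o_t), i.e. how many times each state
               of X_i[t] was observed under parent configuration j
    alpha    : concentration controlling how strongly the proposal sticks to
               the previous parameters (hypothetical value)
    """
    concentration = alpha * np.asarray(prev_row, dtype=float) \
        + np.asarray(counts, dtype=float)
    return rng.dirichlet(concentration)

# Example: a 3-state row, with state 2 observed most often at time t.
row = propose_cpt_row([0.3, 0.4, 0.3], counts=[1, 8, 1])
```

Adding the counts pulls the proposed parameters toward regions supported by o_t, which is what moves the proposal closer to the optimal form p(s_t | s_{t-1}, o_t) while keeping it a standard Dirichlet draw.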
The resultant KL divergence of our method is compared with those of several other algorithms, including the fully connected dynamic Bayesian network (DBN) [1], the kernel re-weighted TVDBN (K-TVDBN) [25], the adaptive

Bayesian network (aBN) [22], the switching linear dynamic system (SLDS) [11], and the hidden controller hill climb algorithm (HCHC) [23]. A separate data set of sequences is used to train the last two models before online inference.

Fig. 3. Part of the estimated network parameters and ground truth: (a) Θ_11[t] and Θ_12[t]; (b) Θ_21[t].

Fig. 4. KL divergences between ground truth and node distributions estimated with different methods: PF-TVDBN (our method); DBN [1]; K-TVDBN [25]; aBN [22]; SLDS [11]; HCHC [23]. (a) divergence of p(X_1[t]|·); (b) divergence of p(X_2[t]|·).

It is seen from Fig. 4(a) that our method achieves the lowest divergence from the true distribution most of the time, except for the time period [51, 100]. The HCHC and aBN methods perform slightly better during that period because their piecewise stationarity assumption is consistent with the unchanging network state at that time. Although the TVDBN model is more susceptible to observation noise in such a situation, its adaptability is heavily rewarded during non-stationary periods when continuous network change takes place.

2) Piecewise Stationary Change: The parameter Θ_2[t] changes in a piecewise constant way in the synthetic network, and its estimation can shed some light on TVDBN's performance on piecewise stationary networks. Some elements of the estimated Θ̂_2[t] are plotted in Fig. 3(b). It is observed that TVDBN can correctly track Θ_21[t] and quickly converge to the true value even when abrupt changes occur at times 51 and 101. The quick response to abrupt parameter change is attributed to an adaptive selection of the smoothing coefficient α in the parameter transition model Eq. (10). We choose a large α^(1) for the parameter transition conditioned on an unchanging structure (G[t+1] = G[t]), and a relatively small α^(2) for the parameter transition conditioned on a changing structure (G[t+1] ≠ G[t]).
In this way, we can obtain stable parameter estimation within stationary periods and remain sensitive to abrupt change points simultaneously. The KL divergences for the distributions of node X_2[t] estimated with different methods are also compared in Fig. 4(b). The performance of TVDBN is close to that of piecewise stationary models such as aBN and HCHC, and is much better than the other models under comparison. For piecewise stationary models, a large estimation error is incurred if a wrong stationary mode is selected. TVDBN is free of this trouble because it can change both abruptly and continuously.
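The structure-dependent choice of smoothing coefficient described above can be written as a small helper. The concrete values 50.0 and 2.0 are hypothetical stand-ins for α^(1) and α^(2), which are not given numerically in this excerpt.

```python
def smoothing_coefficient(G_prev, G_next, alpha_same=50.0, alpha_change=2.0):
    """Return the smoothing coefficient for the parameter transition:
    a large value when the structure is unchanged (stable estimates within a
    stationary period), a small value at structure switches (fast adaptation).
    The two alpha values are illustrative, not the paper's settings."""
    return alpha_same if G_next == G_prev else alpha_change

# Unchanged structure -> strong smoothing; structure switch -> weak smoothing.
a_same = smoothing_coefficient(4, 4)
a_switch = smoothing_coefficient(4, 15)
```

The single comparison captures the tradeoff stated in the text: stability within a stationary period versus sensitivity at change points.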

TABLE II
AVERAGE KL DIVERGENCES FOR EACH NODE DISTRIBUTION AND THE OVERALL DISTRIBUTION. DIFFERENT MODELS ARE COMPARED: PF-TVDBN (OUR METHOD); DBN [1]; K-TVDBN [25]; ABN [22]; SLDS [11]; HCHC [23].
(Columns: Method | p(X_1[t]|·) | p(X_2[t]|·) | p(X[t]|·); rows: PF-TVDBN, DBN, K-TVDBN, aBN, SLDS, HCHC. The numerical entries are not recoverable from this extraction.)

Fig. 5. Estimated network structure index and ground truth.

Fig. 8. The time needed to converge to the true network status from different initial network structures.

Fig. 6. KL divergences between ground truth and the overall data distributions p(X[t]|·) estimated with the methods: PF-TVDBN (our method); DBN [1]; K-TVDBN [25]; aBN [22]; SLDS [11]; HCHC [23].

3) Overall Performance: The estimated network structure Ĝ[t] is plotted in Fig. 5 together with the ground truth. It can be seen that the estimated value is correct most of the time, except for a few errors around model switching points. The effective exploration of the network structure space can guide the estimation of network parameters, which is another reason for TVDBN's better performance over its competitors. Fig. 6 shows the comparison of KL divergences for the distribution over all nodes. The average KL divergences over time are summarized in Table II. Our method outperforms all the others and has a very small KL divergence on average.

4) Random Initialization: To further investigate the online adaptability of TVDBN, we employ it to track a time varying network with unknown initial status. Data sequences generated by the above synthetic network in the time period [101, 150] are used as observations. The particle filter is initialized with random network structures and parameters. For each of the 21 possible initial network structures, the algorithm is run for several trials with randomly sampled parameters. The estimated network status at t = 150 is compared with the ground truth in Fig. 7. As shown in (a), the true structure G[150] = 15 can be recovered with high probability from most of the random initial structures.
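The KL divergence used throughout this section (Figs. 4, 6, and 7(b)) to compare an estimated discrete distribution against the ground truth can be computed as follows; the epsilon guard is an added assumption that keeps the value finite when an estimate assigns zero probability.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) = sum_k p_k * log(p_k / q_k) for discrete distributions."""
    return sum(pk * (math.log(pk + eps) - math.log(qk + eps))
               for pk, qk in zip(p, q) if pk > 0.0)

same = kl_divergence([0.2, 0.5, 0.3], [0.2, 0.5, 0.3])   # identical -> 0
off = kl_divergence([0.2, 0.5, 0.3], [0.4, 0.3, 0.3])    # mismatch -> positive
```

The divergence is zero only when the two distributions coincide, which is why a small value against the true conditional distribution indicates that the estimated parameters have converged.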
In some trials initialized with structures 6, 7, 12, or 14, the estimated network converges to a wrong mode with G[150] = 11, which differs from the true structure by only one edge. For the trials converging to the correct structure, the data distribution represented by the estimated parameters is further compared with the ground truth, and the KL divergences between them are plotted in (b). The divergence is low for most trials, which shows that the estimated parameters also converge to the correct values. In Fig. 8, the time needed to reach convergence is plotted for each initial network structure. Convergence is defined to occur when the posterior probability of the true structure is higher than 0.95 and the KL divergence from the true distribution is less than 0.1. On average we can reach convergence in less than 20 time steps from any initial state, with the network still evolving in the meantime.

5) Tuning Parameters: In all the experiments above, we set λ_1 = 1.0 and λ_2 = 2.5 for the structure transition model in Eq. (7). To get a better understanding of their

roles in the TVDBN model, we try different combinations of λ_1 and λ_2, and repeat the previous random initialization experiments with a fully connected initial network structure. For each pair of λ_1 and λ_2, the percentage and time of convergence to the true structure are plotted in Fig. 9. It can be seen that as the penalty on structure complexity (λ_1) decreases and the penalty on structure change (λ_2) increases, our model has a better chance to converge, but the time taken to reach convergence becomes longer. Therefore, parameters λ_1 and λ_2 can be tuned to control the tradeoff between change sensitivity and estimation stability.

Fig. 7. Estimated network distribution at time 150 with random structure initialization at time 101. (a) average posterior probability of the true network structure G[150] = 15; (b) KL divergence for p(X[150]|·). Median, minimal, and maximal trials are shown.

Fig. 9. The ability of TVDBN to converge to the true structure G[t] = 15 from a fully connected structure, tested with different parameter sets {λ_1, λ_2}. (a) percentage of convergence; (b) average time taken to reach convergence.

B. Active Camera Tracking

Video object tracking is a popular application of dynamic Bayesian networks, where the target state is described by hidden nodes X[t]. In the simplest case, let X[t] = (X_1[t], X_2[t])^T be the 2D image coordinates of the target position at time t. The target dynamics can be specified by a second order autoregression model with Gaussian distribution, and the likelihood of a state can be measured from image features such as an HSV color histogram [41]. For real time tracking, it is often desirable that the camera can actively follow the moving target. However, the inconsistent self-motion of the camera is superimposed on the target's motion in the image plane, which makes it no longer suitable to model the target dynamics with a stationary autoregression process.
Therefore, we propose a time varying Gaussian dynamic distribution for object tracking with an active camera:

  p(X[t] | X[t-2 : t-1], v[t]) = N(X[t]; 2X[t-1] - X[t-2], v[t])    (29)

where v[t] = diag(v_1[t], v_2[t]) is the time varying covariance matrix. The temporal variation of v_1[t] and v_2[t] reflects the changing accuracy of the autoregression prediction caused by irregular camera motion. In this time varying Gaussian DBN, v_1[t] and v_2[t] constitute the network parameter Θ[t]. The transition of v_i[t] follows the multiplicative random walk

distribution in Eq. (21), with a = 5, b = 5, d = 1/2. The network structure G[t] stays constant in this experiment.

Fig. 10. Tracking results on the active camera video sequence. (a) frame 146, the target is labeled by a red rectangle; (b) frame 160, each particle is visualized by a blue rectangle.

Fig. 11. (a) inferred time varying dynamic variances; (b) inter-frame translation in the x/y direction found by the motion estimation algorithm.

The time varying Gaussian TVDBN model is tested on a video sequence recorded by a tilt-pan camera. A yellow toy car in the video is tracked using the particle filter, and its initial position is labeled manually. Although both the camera and the car are moving rapidly, our tracking results match the target positions very well, as can be seen from the snapshots in Fig. 10. The distribution of particles is visualized for frame 160 in (b), which shows that the variance of particles becomes large in the vertical direction when the camera turns upwards, and the uncertain motion in that direction is well handled. The strong correlation between the dynamic variance and the camera motion can be seen more clearly in Fig. 11, which plots the estimated values v̂_i[t] and the inter-frame image translations found by global motion estimation. The accuracy of tracking is quantitatively measured using the tracking rate [42], which is defined as the percentage of frames whose tracking result overlaps with the ground truth by at least 30%. Fig. 12 plots the tracking rates achieved by the TVDBN model as well as the stationary DBN [1], the kernel re-weighted TVDBN (K-TVDBN) [25], and the segmentation-based Gaussian graphic model (seg-GGM) [16].

Fig. 12. Tracking rates achieved by PF-TVDBN (our method), DBN [1], K-TVDBN [25], and seg-GGM [16]. Tests with various numbers of particles have been run in the experiment.

The comparisons are conducted under different settings of the particle number, and for each setting several trials are run.
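To make the dynamics of Eq. (29) concrete, the sketch below propagates a particle's target position with the constant-velocity mean 2X[t-1] - X[t-2] and time-varying noise, and perturbs the variances with a multiplicative random walk. Since Eq. (21) itself is not reproduced in this excerpt, the log-uniform factor bounded by d is only an assumed stand-in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_position(x_prev, x_prev2, v):
    """X[t] ~ N(2 X[t-1] - X[t-2], diag(v1, v2)), cf. Eq. (29)."""
    mean = 2.0 * np.asarray(x_prev) - np.asarray(x_prev2)
    return mean + rng.normal(size=2) * np.sqrt(np.asarray(v))

def propagate_variance(v, d=0.5):
    """Multiplicative random walk on v_i[t] (assumed stand-in for Eq. (21)):
    each variance is scaled by a factor drawn log-uniformly from [d, 1/d],
    so it stays positive and can both grow and shrink over time."""
    log_factor = rng.uniform(np.log(d), -np.log(d), size=2)
    return np.asarray(v) * np.exp(log_factor)

# One prediction step for a particle at (120, 80) that came from (118, 79).
x_t = propagate_position([120.0, 80.0], [118.0, 79.0], v=[4.0, 4.0])
v_t = propagate_variance([4.0, 4.0])
```

When the camera turns, the prediction error grows, and particles carrying larger v_i[t] receive higher weights; this is the mechanism by which the filter infers the variance trajectories shown in Fig. 11(a).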
As seen from the figure, our

method achieves much higher tracking rates than the other methods under all particle settings.

TABLE III
GRAPH STRUCTURE AND NODE TRANSITION DISTRIBUTION FOR DIFFERENT INTERACTION CLASSES. THE INDICES OF GRAPH STRUCTURE ARE DEFINED AS IN FIG. 2.
wandering: G_W = 4; X_i[t] ~ N(X_i[t]; X_i[t-1], 0.1), i = 1, 2
joining: G_J = 9; X_1[t] ~ N(X_1[t]; X_1[t-1] + b_J[t](X_2[t-1] - X_1[t-1]), 0.2); X_2[t] ~ N(X_2[t]; X_2[t-1] + b_J[t](X_1[t-1] - X_2[t-1]), 0.2)
moving as a group: G_M = 17; X_1[t] ~ N(X_1[t]; X_1[t-1], 0.4); X_2[t] ~ N(X_2[t]; X_2[t-1] + b_M[t](X_1[t] - X_1[t-1]), 0.3)

Fig. 13. Trajectories of two targets (marked by a colored cross and circle) tracked using the TVDBN model. The color bar illustrates the time of the trajectory points.

C. Multiple Targets Interaction Recognition

In this experiment, the TVDBN model is used to recognize the class of interaction among multiple targets based on the transition distribution of their trajectories. A similar problem has been investigated in [43] using a DBN model. Consider the case of two targets, whose coordinates in the image at time t are represented by (X_1[t], Y_1[t]) and (X_2[t], Y_2[t]), respectively. Suppose there are three classes of interaction between the targets: wandering, joining, and moving as a group. The conditional dependency among the target states is different for each interaction class, and can be represented by a distinct network structure. In a video, the interaction between targets may change from time to time, and their trajectories are modeled by TVDBN as non-stationary sequences. We can recognize the interaction class by inferring the current network structure of the TVDBN model. The network structure G[t] and the transition distribution of node X_i[t] are listed in Table III for all interactions. The transition distribution of Y_i[t] is defined similarly to that of X_i[t].
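One step of the "joining" dynamics from Table III can be sketched as below. This is an illustrative re-implementation of the table's transition, not the authors' code; the noise-free example simply shows that with b_J = 0.5 both targets land at the midpoint of their previous positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def joining_step(x1, x2, b_j, var=0.2):
    """Table III, 'joining': each target moves toward the other along the
    line between their previous positions, with additive Gaussian noise."""
    noise1, noise2 = rng.normal(scale=np.sqrt(var), size=2)
    new_x1 = x1 + b_j * (x2 - x1) + noise1
    new_x2 = x2 + b_j * (x1 - x2) + noise2
    return new_x1, new_x2

# Noise-free check: with b_J = 0.5 both targets meet at the midpoint.
m1, m2 = joining_step(0.0, 10.0, b_j=0.5, var=0.0)
```

A larger time varying coefficient b_J[t] pulls the targets together faster, which is exactly the role the text assigns to it; the same pattern applies to b_M[t] for the group-motion class.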
When the two targets are wandering, there is no interaction between them, and the transition distribution is modeled by a first order autoregression. When the two targets are joining, they approach each other along the line connecting their previous positions. The speed of joining is controlled by the time varying linear coefficient b_J[t]. In the case where the two targets move together as a group, we assume that X_1[t] is the state of the leader target, which moves by its own will, and X_2[t] is the state of the follower target, which mimics the movement of the leader target. The extent to which the follower is affected is controlled by the time varying linear coefficient b_M[t]. Therefore, we have two time varying parameters in this TVDBN model: Θ[t] = {b_J[t], b_M[t]}. The transition of these parameters follows the random walk model in Eq. (20), with σ_{b_J} = 0.5 and σ_{b_M} = 0.5.

Fig. 14. The inferred posterior distribution of the network structure G[t] over the three possible values.

Fig. 15. GT (ground truth) of interaction type versus time, and the recognition results found by PF-TVDBN (our method), DBN [1], K-TVDBN [25], seg-GGM [16], NLMS [3], HCHC [23], and SLDS [11].

The experiment of interaction recognition is carried out on a video from the CAVIAR database [44]. We use 400 particles to track the positions and the interaction class of two people simultaneously. The target positions are tracked successfully throughout the video, as shown by the trajectories in Fig. 13. To recognize the interaction between the targets, the posterior

distribution of the structure G[t] is evaluated as shown in Fig. 14. The interaction corresponding to the MAP mode of G[t] is recognized as the current interaction type. Our recognition results (PF-TVDBN) are plotted in Fig. 15, together with the ground truth. The recognition results generated by several other non-stationary models are also given for comparison, which include the stationary dynamic Bayesian network (DBN) [1], the kernel re-weighted TVDBN (K-TVDBN) [25], the segmentation-based Gaussian graphic model (seg-GGM) [16], the normalized least mean square algorithm (NLMS) [3], the hidden controller hill climb algorithm (HCHC) [23], and the switching linear dynamic system (SLDS) [11].

Fig. 16. Confusion matrices for interaction recognition using: (a) PF-TVDBN (our method); (b) DBN [1]; (c) K-TVDBN [25]; (d) seg-GGM [16]; (e) NLMS [3]; (f) HCHC [23]; (g) SLDS [11].

TABLE IV
PRECISION OF INTERACTION RECOGNITION ACHIEVED BY DIFFERENT METHODS.
(Methods: PF-TVDBN; DBN [1]; K-TVDBN [25]; seg-GGM [16]; NLMS [3]; HCHC [23]; SLDS [11]. The precision values are not recoverable from this extraction.)

The confusion matrices for each method are shown in Fig. 16, and the recognition precisions are compared in Table IV. Our method gives the best result among all. Furthermore, the two models with performance closest to ours, DBN and NLMS, cannot describe the interaction class explicitly; their recognition is done through a careful threshold on the time varying parameters. Our TVDBN model is free of such ad-hoc thresholds, and its inference results are self-explanatory.

VI. CONCLUSION AND DISCUSSION

We have proposed a new time varying dynamic Bayesian network model that is capable of describing the evolution of non-stationary temporal sequences.
The network structure and parameters are assumed to change smoothly over time, and their transition distributions are designed accordingly. Particle filtering is employed for online inference of the hidden nodes and the changing network, so that the data space and the network configuration can be explored simultaneously. Our contribution is threefold. First, the proposed TVDBN model works online. It can adaptively track the current state of the network with the latest data observation. Second, our algorithm provides a general framework that can be applied to non-stationary sequences of any distribution. The cases of multinomial and Gaussian distributions have been studied in detail. Third, we have validated the effectiveness of the proposed model with extensive experiments, including both simulations and tests on video sequences. The results demonstrate the superiority of our method over other non-stationary models, and show its prospective applications in active camera tracking and multiple target interaction recognition.

Discussion on Scalability

Our work is a preliminary exploration of modeling non-stationary dynamic networks. One of the few important problems that remain to be investigated is how to model large-scale time varying networks, such as genetic networks and social networks. As the number of nodes in a network grows, the number of possible network topologies and the dimension of the network parameter both grow exponentially. This aggravates the data scarcity problem and makes online network adaptation more challenging. One way to restrict the exploding solution space is to make use of prior knowledge of the network structure. For example, in a social network, the maximal degree of a single node is usually bounded and does not increase with the size of the network. Also, a huge social network may be decomposed into many small cliques which are loosely connected with each other. If we can cluster all the nodes into such cliques (which is itself an open question), then modeling the network reduces to modeling each of these cliques individually. On the other hand, sometimes we are more interested in

14 14 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. XX, NO. XX, MMMMM 2YY he global behavior of he nodes in a large-scale nework han he characerisic of an individual node. In such cases, all he nodes can be regarded as homogeneous and a common saisic model can be employed o describe heir disribuions. For example, if he relaionship beween any wo linked nodes is governed by a common poenial funcion, he nework will reduce o a pairwise Markov random field, and he number of is parameers will no longer increase wih nework size. Compuaional complexiy is anoher hing o consider if TVDBN model is used for large-scale neworks. As he size of nework increases, a prohibiively large number of paricles will be required for robus inference, which hampers he real ime applicaion of our model. In order o reduce he number of required paricles, we should design more efficien proposal disribuion by leveraging domain knowledge and hisorical daa. Techniques such as Rao-Blackwellizaion may be used o resric sampling space. The burgeoning field of cloud compuing may also provide a soluion o he compuaion burden associaed wih large neworks. ACKNOWLEDGMENT This work was suppored in par by Hi-Tech Research and Developmen Program of China 863 (26AA1Z124), NSFC (69273, 68281, 61255), 973 Program (21CB73141) and he 111 Projec (B722). Research was sponsored by he Army Research Laboraory and was accomplished under Cooperaive Agreemen Number W911NF The views and conclusions conained in his documen are hose of he auhors and should no be inerpreed as represening he official policies, eiher expressed or implied, of he Army Research Laboraory or he U.S. Governmen. The U.S. Governmen is auhorized o reproduce and disribue reprins for Governmen purposes nowihsanding any copyrigh noaion here on. E. E. Kuruoğlu graefully acknowledges he suppor of he Insiue of Image Communicaion and Informaion Processing, SJTU under 111 projec and parial suppor from CNR-shor erm mobiliy program. REFERENCES [1] K. 
Murphy, "Dynamic Bayesian networks: Representation, inference and learning," Ph.D. dissertation, UC Berkeley, Computer Science Division, July 2002.
[2] R. Prado, G. Huerta, and M. West, "Bayesian time-varying autoregressions: theory, methods and applications," Resenhas, vol. 4, 2001.
[3] E. Moulines, P. Priouret, and F. Roueff, "On recursive estimation for time varying autoregressive processes," Annals of Statistics, vol. 33, no. 6, 2005.
[4] L. D. Johnson and G. Sakoulis, "Maximizing equity market sector predictability in a Bayesian time-varying parameter model," Computational Statistics and Data Analysis, vol. 52, no. 6, 2008.
[5] A. Rao, D. J. States, and J. D. Engel, "Inferring time-varying network topologies from gene expression data," EURASIP Journal on Bioinformatics and Systems Biology, 2007.
[6] M. West, R. Prado, and A. D. Krystal, "Evaluation and comparison of EEG traces: Latent structure in nonstationary time series," Journal of the American Statistical Association, vol. 94, no. 446, 1999.
[7] D. Gencaga, E. E. Kuruoglu, and A. Ertuzun, "Modeling non-Gaussian time-varying vector autoregressive processes by particle filtering," Multidimensional Systems and Signal Processing, vol. 21, no. 1, 2010.
[8] A. Ihler, J. Hutchins, and P. Smyth, "Adaptive event detection with time-varying Poisson processes," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2006.
[9] Z. Ghahramani and G. E. Hinton, "Variational learning for switching state-space models," Neural Computation, vol. 12, 2000.
[10] A. Blake, B. North, and M. Isard, "Learning multi-class dynamics," in Neural Information Processing Systems (NIPS).
[11] V. Pavlovic, J. M. Rehg, and J. MacCormick, "Learning switching linear models of human motion," in Neural Information Processing Systems (NIPS), 2000.
[12] S. M. Oh, J. M. Rehg, T. Balch, and F. Dellaert, "Learning and inferring motion patterns using parametric segmental switching linear dynamic systems," International Journal of Computer Vision, vol. 77, 2008.
[13] E. Fox, E. Sudderth, M.
Jordan, and A. Willsky, "Nonparametric Bayesian learning of switching linear dynamical systems," in Advances in Neural Information Processing Systems 21, 2009.
[14] P. Turaga and R. Chellappa, "Locally time-invariant models of human activities using trajectories on the Grassmannian," in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[15] C. Andrieu, M. Davy, and A. Doucet, "Efficient particle filtering for jump Markov systems. Application to time-varying autoregressions," IEEE Trans. on Signal Processing, vol. 51, no. 7, 2003.
[16] X. Xuan and K. Murphy, "Modeling changing dependency structure in multivariate time series," in Proceedings of the 24th International Conference on Machine Learning, 2007.
[17] P. Fearnhead, "Exact and efficient Bayesian inference for multiple changepoint problems," Statistics and Computing, vol. 16, 2006.
[18] K. Wang, J. Zhang, F. Shen, and L. Shi, "Adaptive learning of dynamic Bayesian networks with changing structures by detecting geometric structures of time series," Knowledge and Information Systems, vol. 17, no. 1, 2008.
[19] J. W. Robinson and A. J. Hartemink, "Non-stationary dynamic Bayesian networks," in Neural Information Processing Systems (NIPS), 2008.
[20] M. Grzegorczyk and D. Husmeier, "Non-stationary continuous dynamic Bayesian networks," in Advances in Neural Information Processing Systems 22 (NIPS), 2009.
[21] E. Punskaya, C. Andrieu, A. Doucet, and W. Fitzgerald, "Bayesian curve fitting using MCMC with applications to signal segmentation," IEEE Trans. on Signal Processing, vol. 50, no. 3, 2002.
[22] S. H. Nielsen and T. D. Nielsen, "Adapting Bayes network structures to non-stationary domains," International Journal of Approximate Reasoning, vol. 49, no. 2, 2008.
[23] A. Tucker and X. Liu, "A Bayesian network approach to explaining time series with changing structure," Intelligent Data Analysis, vol. 8, no. 5, 2004.
[24] F. Guo, S. Hanneke, W. Fu, and E. P.
Xing, "Recovering temporally rewiring networks: a model-based approach," in Proceedings of the 24th International Conference on Machine Learning, 2007.
[25] L. Song, M. Kolar, and E. P. Xing, "Time-varying dynamic Bayesian networks," in Proceedings of the 23rd Neural Information Processing Systems (NIPS), 2009.
[26] M. Kolar, L. Song, and E. P. Xing, "Sparsistent learning of varying-coefficient models with structural changes," in Advances in Neural Information Processing Systems 22, 2009.
[27] M. Kolar, L. Song, A. Ahmed, and E. P. Xing, "Estimating time-varying networks," Annals of Applied Statistics, vol. 4, 2010.
[28] W. Fu, L. Song, and E. P. Xing, "Dynamic mixed membership block model for evolving networks," in Proceedings of the 26th International Conference on Machine Learning, 2009.
[29] D. Heckerman, "A tutorial on learning with Bayesian networks," in Learning in Graphical Models, M. Jordan, Ed. Cambridge, MA: MIT Press, 1998.
[30] N. Friedman, K. Murphy, and S. Russell, "Learning the structure of dynamic probabilistic networks," in Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence, 1998.
[31] A.-L. Barabási and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, 1999.
[32] R. Kumar, P. Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal, "Stochastic models for the web graph," in Proc. 41st Annual Symposium on Foundations of Computer Science, IEEE Computer Society, 2000.
[33] N. Deo and A. Cami, "Preferential deletion in dynamic models of web-like networks," Information Processing Letters, vol. 102, 2007.

[34] P. J. Green, "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination," Biometrika, vol. 82, 1995.
[35] G. Schwarz, "Estimating the dimension of a model," The Annals of Statistics, vol. 6, 1978.
[36] S. Veeramachaneni, D. Sona, and P. Avesani, "Hierarchical Dirichlet model for document classification," in International Conference on Machine Learning, Bonn, Germany, 2005.
[37] D. Geiger and D. Heckerman, "Learning Gaussian networks," in Proceedings of the 10th Annual Conference on Uncertainty in Artificial Intelligence, 1994.
[38] R. D. Shachter and C. R. Kenley, "Gaussian influence diagrams," Management Science, vol. 35, no. 5, 1989.
[39] A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, 2000.
[40] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for on-line nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. on Signal Processing, vol. 50, no. 2, 2002.
[41] P. Pérez, C. Hue, J. Vermaak, and M. Gangnet, "Color-based probabilistic tracking," in Proceedings of the 7th European Conference on Computer Vision, Part I, 2002.
[42] R. Collins, X. Zhou, and S. K. Teh, "An open source tracking testbed and evaluation web site," in IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS 2005), January 2005.
[43] A. Dore and C. Regazzoni, "Interaction analysis with a Bayesian trajectory model," IEEE Intelligent Systems, vol. 25, 2010.
[44] CAVIAR, Context Aware Vision using Image-based Active Recognition project, 2005, EC funded project (IST), found at http://homepages.inf.ed.ac.uk/rbf/caviar/.

Zhaowen Wang received the B.E. and M.S. degrees in electrical engineering from Shanghai Jiao Tong University, Shanghai, China, in 2006 and 2009, respectively. He is currently pursuing the Ph.D.
degree in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (UIUC), Urbana. Since spring 2010, he has been with the Department of Electrical and Computer Engineering, UIUC. His research interests include computer vision, statistical learning, and video event analysis.

Xiaokang Yang (M'00-SM'04) received the B.S. degree from Xiamen University, Xiamen, China, in 1994, the M.S. degree from the Chinese Academy of Sciences, Shanghai, China, in 1997, and the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, in 2000. He is currently a Professor and the Deputy Director of the Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University. From August 2007 to July 2008, he visited the Institute for Computer Science, University of Freiburg, Germany, as an Alexander von Humboldt Research Fellow. From September 2000 to March 2002, he worked as a Research Fellow at the Centre for Signal Processing, Nanyang Technological University, Singapore. From April 2002 to October 2004, he was a Research Scientist at the Institute for Infocomm Research (I2R), Singapore. He has published over 130 refereed papers and has filed 14 patents. His current research interests include visual processing and communication, media analysis and retrieval, and pattern recognition. Dr. Yang received the Microsoft Young Professorship Award in 2006, the Best Young Investigator Paper Award at the IS&T/SPIE International Conference on Video Communication and Image Processing (VCIP 2003), and awards from the A-STAR and Tan Kah Kee foundations. He is currently a member of the Design and Implementation of Signal Processing Systems (DISPS) Technical Committee of the IEEE Signal Processing Society and a member of the Visual Signal Processing and Communications (VSPC) Technical Committee of the IEEE Circuits and Systems Society.

Yi Xu received the B.S. and M.S. degrees from Nanjing University of Science and Technology, Nanjing, China, in 1996 and 1999, respectively, and the Ph.D.
degree in Information and Communication Engineering from Shanghai Jiao Tong University, in 2005. Dr. Xu is currently an Assistant Professor in the Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University, where she works in the area of video analysis and understanding. Her research interests and activities cover quaternion wavelet transforms, image sequence analysis, and video content understanding, especially the recognition of moving objects and abnormal events.

Ercan E. Kuruoğlu (M'98-SM'06) was born in 1969 in Ankara, Turkey. He received the B.Sc. and M.Sc. degrees in electrical and electronics engineering from Bilkent University, Turkey, in 1991 and 1993, and the M.Phil. and Ph.D. degrees in information engineering from the University of Cambridge, Cambridge, U.K., in 1995 and 1998, respectively. Upon completion of his studies, he joined the Xerox Research Centre Europe, Cambridge, as a permanent member of the Collaborative Multimedia Systems Group. He was an ERCIM Fellow in 2000 at INRIA Sophia Antipolis, France. In January 2002, he joined ISTI-CNR, Pisa, Italy. He was a visiting professor at Georgia Tech-Shanghai in Autumn 2007. He is currently a Senior Researcher and Associate Professor at ISTI-CNR. His research interests are in the areas of statistical signal and image processing and information and coding theory, with applications in astrophysics, bioinformatics, telecommunications, and intelligent user interfaces. Dr. Kuruoğlu was previously an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING. He is currently an Associate Editor for the IEEE TRANSACTIONS ON IMAGE PROCESSING and is on the editorial board of Digital Signal Processing: A Review Journal. He acted as the technical chair for EUSIPCO 2006. He is a member of the IEEE Technical Committee on Signal Processing Theory and Methods.

Thomas S. Huang received the B.S. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, R.O.C., and the M.S. and Sc.D.
degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge. He was on the Faculty of the Department of Electrical Engineering at MIT from 1963 to 1973, and on the Faculty of the School of Electrical Engineering and Director of its Laboratory for Information and Signal Processing at Purdue University from 1973 to 1980. In 1980, he joined the University of Illinois at Urbana-Champaign, where he is now William L. Everitt Distinguished Professor of Electrical and Computer Engineering, and Research Professor at the Coordinated Science Laboratory and at the Beckman Institute for Advanced Science and Technology, and Co-Chair of the Institute's major research theme Human Computer Intelligent Interaction. His professional interests lie in the broad area of information technology, especially the transmission and processing of multidimensional signals. He has published 21 books and over 600 papers on network theory, digital filtering, image processing, and computer vision. Dr. Huang is a Member of the National Academy of Engineering; a Member of the Academia Sinica, Republic of China; a Foreign Member of the Chinese Academies of Engineering and Sciences; and a Fellow of the International Association of Pattern Recognition, IEEE, and the Optical Society of America. Among his many honors and awards are the Honda Lifetime Achievement Award, the IEEE Jack Kilby Signal Processing Medal, and the K. S. Fu Prize of the International Association for Pattern Recognition.

Vehicle Arrival Models : Headway

Vehicle Arrival Models : Headway Chaper 12 Vehicle Arrival Models : Headway 12.1 Inroducion Modelling arrival of vehicle a secion of road is an imporan sep in raffic flow modelling. I has imporan applicaion in raffic flow simulaion where

More information

STATE-SPACE MODELLING. A mass balance across the tank gives:

STATE-SPACE MODELLING. A mass balance across the tank gives: B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing

More information

Tracking. Announcements

Tracking. Announcements Tracking Tuesday, Nov 24 Krisen Grauman UT Ausin Announcemens Pse 5 ou onigh, due 12/4 Shorer assignmen Auo exension il 12/8 I will no hold office hours omorrow 5 6 pm due o Thanksgiving 1 Las ime: Moion

More information

Robust estimation based on the first- and third-moment restrictions of the power transformation model

Robust estimation based on the first- and third-moment restrictions of the power transformation model h Inernaional Congress on Modelling and Simulaion, Adelaide, Ausralia, 6 December 3 www.mssanz.org.au/modsim3 Robus esimaion based on he firs- and hird-momen resricions of he power ransformaion Nawaa,

More information

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle Chaper 2 Newonian Mechanics Single Paricle In his Chaper we will review wha Newon s laws of mechanics ell us abou he moion of a single paricle. Newon s laws are only valid in suiable reference frames,

More information

Estimation of Poses with Particle Filters

Estimation of Poses with Particle Filters Esimaion of Poses wih Paricle Filers Dr.-Ing. Bernd Ludwig Chair for Arificial Inelligence Deparmen of Compuer Science Friedrich-Alexander-Universiä Erlangen-Nürnberg 12/05/2008 Dr.-Ing. Bernd Ludwig (FAU

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits DOI: 0.545/mjis.07.5009 Exponenial Weighed Moving Average (EWMA) Char Under The Assumpion of Moderaeness And Is 3 Conrol Limis KALPESH S TAILOR Assisan Professor, Deparmen of Saisics, M. K. Bhavnagar Universiy,

More information

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Kriging Models Predicing Arazine Concenraions in Surface Waer Draining Agriculural Waersheds Paul L. Mosquin, Jeremy Aldworh, Wenlin Chen Supplemenal Maerial Number

More information

Sequential Importance Resampling (SIR) Particle Filter

Sequential Importance Resampling (SIR) Particle Filter Paricle Filers++ Pieer Abbeel UC Berkeley EECS Many slides adaped from Thrun, Burgard and Fox, Probabilisic Roboics 1. Algorihm paricle_filer( S -1, u, z ): 2. Sequenial Imporance Resampling (SIR) Paricle

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time. Supplemenary Figure 1 Spike-coun auocorrelaions in ime. Normalized auocorrelaion marices are shown for each area in a daase. The marix shows he mean correlaion of he spike coun in each ime bin wih he spike

More information

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017 Two Popular Bayesian Esimaors: Paricle and Kalman Filers McGill COMP 765 Sep 14 h, 2017 1 1 1, dx x Bel x u x P x z P Recall: Bayes Filers,,,,,,, 1 1 1 1 u z u x P u z u x z P Bayes z = observaion u =

More information

Georey E. Hinton. University oftoronto. Technical Report CRG-TR February 22, Abstract

Georey E. Hinton. University oftoronto.   Technical Report CRG-TR February 22, Abstract Parameer Esimaion for Linear Dynamical Sysems Zoubin Ghahramani Georey E. Hinon Deparmen of Compuer Science Universiy oftorono 6 King's College Road Torono, Canada M5S A4 Email: zoubin@cs.orono.edu Technical

More information

An introduction to the theory of SDDP algorithm

An introduction to the theory of SDDP algorithm An inroducion o he heory of SDDP algorihm V. Leclère (ENPC) Augus 1, 2014 V. Leclère Inroducion o SDDP Augus 1, 2014 1 / 21 Inroducion Large scale sochasic problem are hard o solve. Two ways of aacking

More information

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 175 CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 10.1 INTRODUCTION Amongs he research work performed, he bes resuls of experimenal work are validaed wih Arificial Neural Nework. From he

More information

Tom Heskes and Onno Zoeter. Presented by Mark Buller

Tom Heskes and Onno Zoeter. Presented by Mark Buller Tom Heskes and Onno Zoeer Presened by Mark Buller Dynamic Bayesian Neworks Direced graphical models of sochasic processes Represen hidden and observed variables wih differen dependencies Generalize Hidden

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LTU, decision

More information

State-Space Models. Initialization, Estimation and Smoothing of the Kalman Filter

State-Space Models. Initialization, Estimation and Smoothing of the Kalman Filter Sae-Space Models Iniializaion, Esimaion and Smoohing of he Kalman Filer Iniializaion of he Kalman Filer The Kalman filer shows how o updae pas predicors and he corresponding predicion error variances when

More information

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H.

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H. ACE 56 Fall 005 Lecure 5: he Simple Linear Regression Model: Sampling Properies of he Leas Squares Esimaors by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Inference in he Simple

More information

Air Traffic Forecast Empirical Research Based on the MCMC Method

Air Traffic Forecast Empirical Research Based on the MCMC Method Compuer and Informaion Science; Vol. 5, No. 5; 0 ISSN 93-8989 E-ISSN 93-8997 Published by Canadian Cener of Science and Educaion Air Traffic Forecas Empirical Research Based on he MCMC Mehod Jian-bo Wang,

More information

Notes on Kalman Filtering

Notes on Kalman Filtering Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren

More information

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19 Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible

More information

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles Diebold, Chaper 7 Francis X. Diebold, Elemens of Forecasing, 4h Ediion (Mason, Ohio: Cengage Learning, 006). Chaper 7. Characerizing Cycles Afer compleing his reading you should be able o: Define covariance

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LDA, logisic

More information

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8)

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8) I. Definiions and Problems A. Perfec Mulicollineariy Econ7 Applied Economerics Topic 7: Mulicollineariy (Sudenmund, Chaper 8) Definiion: Perfec mulicollineariy exiss in a following K-variable regression

More information

STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN

STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN Inernaional Journal of Applied Economerics and Quaniaive Sudies. Vol.1-3(004) STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN 001-004 OBARA, Takashi * Absrac The

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Probabilisic reasoning over ime So far, we ve mosly deal wih episodic environmens Excepions: games wih muliple moves, planning In paricular, he Bayesian neworks we ve seen so far describe

More information

Temporal probability models

Temporal probability models Temporal probabiliy models CS194-10 Fall 2011 Lecure 25 CS194-10 Fall 2011 Lecure 25 1 Ouline Hidden variables Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic

More information

EKF SLAM vs. FastSLAM A Comparison

EKF SLAM vs. FastSLAM A Comparison vs. A Comparison Michael Calonder, Compuer Vision Lab Swiss Federal Insiue of Technology, Lausanne EPFL) michael.calonder@epfl.ch The wo algorihms are described wih a planar robo applicaion in mind. Generalizaion

More information

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED 0.1 MAXIMUM LIKELIHOOD ESTIMATIO EXPLAIED Maximum likelihood esimaion is a bes-fi saisical mehod for he esimaion of he values of he parameers of a sysem, based on a se of observaions of a random variable

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis Speaker Adapaion Techniques For Coninuous Speech Using Medium and Small Adapaion Daa Ses Consaninos Boulis Ouline of he Presenaion Inroducion o he speaker adapaion problem Maximum Likelihood Sochasic Transformaions

More information

SPH3U: Projectiles. Recorder: Manager: Speaker:

SPH3U: Projectiles. Recorder: Manager: Speaker: SPH3U: Projeciles Now i s ime o use our new skills o analyze he moion of a golf ball ha was ossed hrough he air. Le s find ou wha is special abou he moion of a projecile. Recorder: Manager: Speaker: 0

More information

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should Cambridge Universiy Press 978--36-60033-7 Cambridge Inernaional AS and A Level Mahemaics: Mechanics Coursebook Excerp More Informaion Chaper The moion of projeciles In his chaper he model of free moion

More information

3.1 More on model selection

3.1 More on model selection 3. More on Model selecion 3. Comparing models AIC, BIC, Adjused R squared. 3. Over Fiing problem. 3.3 Sample spliing. 3. More on model selecion crieria Ofen afer model fiing you are lef wih a handful of

More information

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems 8 Froniers in Signal Processing, Vol. 1, No. 1, July 217 hps://dx.doi.org/1.2266/fsp.217.112 Recursive Leas-Squares Fixed-Inerval Smooher Using Covariance Informaion based on Innovaion Approach in Linear

More information

Article from. Predictive Analytics and Futurism. July 2016 Issue 13

Article from. Predictive Analytics and Futurism. July 2016 Issue 13 Aricle from Predicive Analyics and Fuurism July 6 Issue An Inroducion o Incremenal Learning By Qiang Wu and Dave Snell Machine learning provides useful ools for predicive analyics The ypical machine learning

More information

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II Roland Siegwar Margaria Chli Paul Furgale Marco Huer Marin Rufli Davide Scaramuzza ETH Maser Course: 151-0854-00L Auonomous Mobile Robos Localizaion II ACT and SEE For all do, (predicion updae / ACT),

More information

Final Spring 2007

Final Spring 2007 .615 Final Spring 7 Overview The purpose of he final exam is o calculae he MHD β limi in a high-bea oroidal okamak agains he dangerous n = 1 exernal ballooning-kink mode. Effecively, his corresponds o

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION DOI: 0.038/NCLIMATE893 Temporal resoluion and DICE * Supplemenal Informaion Alex L. Maren and Sephen C. Newbold Naional Cener for Environmenal Economics, US Environmenal Proecion

More information

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon 3..3 INRODUCION O DYNAMIC OPIMIZAION: DISCREE IME PROBLEMS A. he Hamilonian and Firs-Order Condiions in a Finie ime Horizon Define a new funcion, he Hamilonian funcion, H. H he change in he oal value of

More information

A Dynamic Model of Economic Fluctuations

A Dynamic Model of Economic Fluctuations CHAPTER 15 A Dynamic Model of Economic Flucuaions Modified for ECON 2204 by Bob Murphy 2016 Worh Publishers, all righs reserved IN THIS CHAPTER, OU WILL LEARN: how o incorporae dynamics ino he AD-AS model

More information

5. Stochastic processes (1)

5. Stochastic processes (1) Lec05.pp S-38.45 - Inroducion o Teleraffic Theory Spring 2005 Conens Basic conceps Poisson process 2 Sochasic processes () Consider some quaniy in a eleraffic (or any) sysem I ypically evolves in ime randomly

More information

KINEMATICS IN ONE DIMENSION

KINEMATICS IN ONE DIMENSION KINEMATICS IN ONE DIMENSION PREVIEW Kinemaics is he sudy of how hings move how far (disance and displacemen), how fas (speed and velociy), and how fas ha how fas changes (acceleraion). We say ha an objec

More information

OBJECTIVES OF TIME SERIES ANALYSIS

OBJECTIVES OF TIME SERIES ANALYSIS OBJECTIVES OF TIME SERIES ANALYSIS Undersanding he dynamic or imedependen srucure of he observaions of a single series (univariae analysis) Forecasing of fuure observaions Asceraining he leading, lagging

More information

Lecture Notes 2. The Hilbert Space Approach to Time Series

Lecture Notes 2. The Hilbert Space Approach to Time Series Time Series Seven N. Durlauf Universiy of Wisconsin. Basic ideas Lecure Noes. The Hilber Space Approach o Time Series The Hilber space framework provides a very powerful language for discussing he relaionship

More information

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé Bias in Condiional and Uncondiional Fixed Effecs Logi Esimaion: a Correcion * Tom Coupé Economics Educaion and Research Consorium, Naional Universiy of Kyiv Mohyla Academy Address: Vul Voloska 10, 04070

More information

Presentation Overview

Presentation Overview Acion Refinemen in Reinforcemen Learning by Probabiliy Smoohing By Thomas G. Dieerich & Didac Busques Speaer: Kai Xu Presenaion Overview Bacground The Probabiliy Smoohing Mehod Experimenal Sudy of Acion

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

Inference of Sparse Gene Regulatory Network from RNA-Seq Time Series Data

Inference of Sparse Gene Regulatory Network from RNA-Seq Time Series Data Inference of Sparse Gene Regulaory Nework from RNA-Seq Time Series Daa Alireza Karbalayghareh and Tao Hu Texas A&M Universiy December 16, 2015 Alireza Karbalayghareh GRN Inference from RNA-Seq Time Series

More information

The equation to any straight line can be expressed in the form:

The equation to any straight line can be expressed in the form: Sring Graphs Par 1 Answers 1 TI-Nspire Invesigaion Suden min Aims Deermine a series of equaions of sraigh lines o form a paern similar o ha formed by he cables on he Jerusalem Chords Bridge. Deermine he

More information

20. Applications of the Genetic-Drift Model

20. Applications of the Genetic-Drift Model 0. Applicaions of he Geneic-Drif Model 1) Deermining he probabiliy of forming any paricular combinaion of genoypes in he nex generaion: Example: If he parenal allele frequencies are p 0 = 0.35 and q 0

More information

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD HAN XIAO 1. Penalized Leas Squares Lasso solves he following opimizaion problem, ˆβ lasso = arg max β R p+1 1 N y i β 0 N x ij β j β j (1.1) for some 0.

More information

Temporal probability models. Chapter 15, Sections 1 5 1

Temporal probability models. Chapter 15, Sections 1 5 1 Temporal probabiliy models Chaper 15, Secions 1 5 Chaper 15, Secions 1 5 1 Ouline Time and uncerainy Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic Bayesian

More information

Some Basic Information about M-S-D Systems

Some Basic Information about M-S-D Systems Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,

More information

Object tracking: Using HMMs to estimate the geographical location of fish

Object tracking: Using HMMs to estimate the geographical location of fish Objec racking: Using HMMs o esimae he geographical locaion of fish 02433 - Hidden Markov Models Marin Wæver Pedersen, Henrik Madsen Course week 13 MWP, compiled June 8, 2011 Objecive: Locae fish from agging

More information

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS SEIF, EnKF, EKF SLAM Pieer Abbeel UC Berkeley EECS Informaion Filer From an analyical poin of view == Kalman filer Difference: keep rack of he inverse covariance raher han he covariance marix [maer of

More information

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models Journal of Saisical and Economeric Mehods, vol.1, no.2, 2012, 65-70 ISSN: 2241-0384 (prin), 2241-0376 (online) Scienpress Ld, 2012 A Specificaion Tes for Linear Dynamic Sochasic General Equilibrium Models

More information

Single and Double Pendulum Models

Single and Double Pendulum Models Single and Double Pendulum Models Mah 596 Projec Summary Spring 2016 Jarod Har 1 Overview Differen ypes of pendulums are used o model many phenomena in various disciplines. In paricular, single and double

More information

R t. C t P t. + u t. C t = αp t + βr t + v t. + β + w t

R t. C t P t. + u t. C t = αp t + βr t + v t. + β + w t Exercise 7 C P = α + β R P + u C = αp + βr + v (a) (b) C R = α P R + β + w (c) Assumpions abou he disurbances u, v, w : Classical assumions on he disurbance of one of he equaions, eg. on (b): E(v v s P,

More information

Chapter 2. Models, Censoring, and Likelihood for Failure-Time Data

Chapter 2. Models, Censoring, and Likelihood for Failure-Time Data Chaper 2 Models, Censoring, and Likelihood for Failure-Time Daa William Q. Meeker and Luis A. Escobar Iowa Sae Universiy and Louisiana Sae Universiy Copyrigh 1998-2008 W. Q. Meeker and L. A. Escobar. Based

More information

Announcements. Recap: Filtering. Recap: Reasoning Over Time. Example: State Representations for Robot Localization. Particle Filtering

Announcements. Recap: Filtering. Recap: Reasoning Over Time. Example: State Representations for Robot Localization. Particle Filtering Inroducion o Arificial Inelligence V22.0472-001 Fall 2009 Lecure 18: aricle & Kalman Filering Announcemens Final exam will be a 7pm on Wednesday December 14 h Dae of las class 1.5 hrs long I won ask anyhing

More information

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate.

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate. Inroducion Gordon Model (1962): D P = r g r = consan discoun rae, g = consan dividend growh rae. If raional expecaions of fuure discoun raes and dividend growh vary over ime, so should he D/P raio. Since

More information

Online Appendix to Solution Methods for Models with Rare Disasters

Online Appendix to Solution Methods for Models with Rare Disasters Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,

More information

10. State Space Methods

10. State Space Methods . Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he

More information

Matlab and Python programming: how to get started

Matlab and Python programming: how to get started Malab and Pyhon programming: how o ge sared Equipping readers he skills o wrie programs o explore complex sysems and discover ineresing paerns from big daa is one of he main goals of his book. In his chaper,

More information

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17 EES 16A Designing Informaion Devices and Sysems I Spring 019 Lecure Noes Noe 17 17.1 apaciive ouchscreen In he las noe, we saw ha a capacior consiss of wo pieces on conducive maerial separaed by a nonconducive

More information

Failure of the work-hamiltonian connection for free energy calculations. Abstract

Failure of the work-hamiltonian connection for free energy calculations. Abstract Failure of he work-hamilonian connecion for free energy calculaions Jose M. G. Vilar 1 and J. Miguel Rubi 1 Compuaional Biology Program, Memorial Sloan-Keering Cancer Cener, 175 York Avenue, New York,

More information

Particle Swarm Optimization

Particle Swarm Optimization Paricle Swarm Opimizaion Speaker: Jeng-Shyang Pan Deparmen of Elecronic Engineering, Kaohsiung Universiy of Applied Science, Taiwan Email: jspan@cc.kuas.edu.w 7/26/2004 ppso 1 Wha is he Paricle Swarm Opimizaion

More information

Linear Response Theory: The connection between QFT and experiments

Linear Response Theory: The connection between QFT and experiments Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and

More information

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB Elecronic Companion EC.1. Proofs of Technical Lemmas and Theorems LEMMA 1. Le C(RB) be he oal cos incurred by he RB policy. Then we have, T L E[C(RB)] 3 E[Z RB ]. (EC.1) Proof of Lemma 1. Using he marginal

More information

Graphical Event Models and Causal Event Models. Chris Meek Microsoft Research

Graphical Event Models and Causal Event Models. Chris Meek Microsoft Research Graphical Even Models and Causal Even Models Chris Meek Microsof Research Graphical Models Defines a join disribuion P X over a se of variables X = X 1,, X n A graphical model M =< G, Θ > G =< X, E > is

More information

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology

More information

GMM - Generalized Method of Moments

GMM - Generalized Method of Moments GMM - Generalized Mehod of Momens Conens GMM esimaion, shor inroducion 2 GMM inuiion: Maching momens 2 3 General overview of GMM esimaion. 3 3. Weighing marix...........................................

More information

Ensamble methods: Bagging and Boosting

Ensamble methods: Bagging and Boosting Lecure 21 Ensamble mehods: Bagging and Boosing Milos Hauskrech milos@cs.pi.edu 5329 Senno Square Ensemble mehods Mixure of expers Muliple base models (classifiers, regressors), each covers a differen par

More information

Recent Developments In Evolutionary Data Assimilation And Model Uncertainty Estimation For Hydrologic Forecasting Hamid Moradkhani

Recent Developments In Evolutionary Data Assimilation And Model Uncertainty Estimation For Hydrologic Forecasting Hamid Moradkhani Feb 6-8, 208 Recen Developmens In Evoluionary Daa Assimilaion And Model Uncerainy Esimaion For Hydrologic Forecasing Hamid Moradkhani Cener for Complex Hydrosysems Research Deparmen of Civil, Consrucion

More information

Maximum Likelihood Parameter Estimation in State-Space Models

Maximum Likelihood Parameter Estimation in State-Space Models Maximum Likelihood Parameer Esimaion in Sae-Space Models Arnaud Douce Deparmen of Saisics, Oxford Universiy Universiy College London 4 h Ocober 212 A. Douce (UCL Maserclass Oc. 212 4 h Ocober 212 1 / 32

More information

DEPARTMENT OF STATISTICS

DEPARTMENT OF STATISTICS A Tes for Mulivariae ARCH Effecs R. Sco Hacker and Abdulnasser Haemi-J 004: DEPARTMENT OF STATISTICS S-0 07 LUND SWEDEN A Tes for Mulivariae ARCH Effecs R. Sco Hacker Jönköping Inernaional Business School

More information

23.2. Representing Periodic Functions by Fourier Series. Introduction. Prerequisites. Learning Outcomes

23.2. Representing Periodic Functions by Fourier Series. Introduction. Prerequisites. Learning Outcomes Represening Periodic Funcions by Fourier Series 3. Inroducion In his Secion we show how a periodic funcion can be expressed as a series of sines and cosines. We begin by obaining some sandard inegrals

More information

Optimal Path Planning for Flexible Redundant Robot Manipulators

Optimal Path Planning for Flexible Redundant Robot Manipulators 25 WSEAS In. Conf. on DYNAMICAL SYSEMS and CONROL, Venice, Ialy, November 2-4, 25 (pp363-368) Opimal Pah Planning for Flexible Redundan Robo Manipulaors H. HOMAEI, M. KESHMIRI Deparmen of Mechanical Engineering

More information

INTRODUCTION TO MACHINE LEARNING 3RD EDITION

INTRODUCTION TO MACHINE LEARNING 3RD EDITION ETHEM ALPAYDIN The MIT Press, 2014 Lecure Slides for INTRODUCTION TO MACHINE LEARNING 3RD EDITION alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/~ehem/i2ml3e CHAPTER 2: SUPERVISED LEARNING Learning a Class

More information

WATER LEVEL TRACKING WITH CONDENSATION ALGORITHM

WATER LEVEL TRACKING WITH CONDENSATION ALGORITHM WATER LEVEL TRACKING WITH CONDENSATION ALGORITHM Shinsuke KOBAYASHI, Shogo MURAMATSU, Hisakazu KIKUCHI, Masahiro IWAHASHI Dep. of Elecrical and Elecronic Eng., Niigaa Universiy, 8050 2-no-cho Igarashi,

More information

1. VELOCITY AND ACCELERATION

1. VELOCITY AND ACCELERATION 1. VELOCITY AND ACCELERATION 1.1 Kinemaics Equaions s = u + 1 a and s = v 1 a s = 1 (u + v) v = u + as 1. Displacemen-Time Graph Gradien = speed 1.3 Velociy-Time Graph Gradien = acceleraion Area under

More information

Testing for a Single Factor Model in the Multivariate State Space Framework

Testing for a Single Factor Model in the Multivariate State Space Framework esing for a Single Facor Model in he Mulivariae Sae Space Framework Chen C.-Y. M. Chiba and M. Kobayashi Inernaional Graduae School of Social Sciences Yokohama Naional Universiy Japan Faculy of Economics

ACE 564 Spring Lecture 7. Extensions of The Multiple Regression Model: Dummy Independent Variables. by Professor Scott H.

ACE 564 Spring 2006, Lecture 7. Extensions of The Multiple Regression Model: Dummy Independent Variables, by Professor Scott H. Irwin. Readings: Griffiths, Hill and Judge, "Dummy Variables and Varying Coefficient Models

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi Creep in Viscoelasic Subsances Numerical mehods o calculae he coefficiens of he Prony equaion using creep es daa and Herediary Inegrals Mehod Navnee Saini, Mayank Goyal, Vishal Bansal (23); Term Projec

Random Walk with Anti-Correlated Steps

Random Walk with Anti-Correlated Steps Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and

Class Meeting # 10: Introduction to the Wave Equation

Class Meeting # 10: Introduction to the Wave Equation MATH 8.5 COURSE NOTES - CLASS MEETING # 0 8.5 Inroducion o PDEs, Fall 0 Professor: Jared Speck Class Meeing # 0: Inroducion o he Wave Equaion. Wha is he wave equaion? The sandard wave equaion for a funcion

Lab #2: Kinematics in 1-Dimension

Lab #2: Kinematics in 1-Dimension Reading Assignmen: Chaper 2, Secions 2-1 hrough 2-8 Lab #2: Kinemaics in 1-Dimension Inroducion: The sudy of moion is broken ino wo main areas of sudy kinemaics and dynamics. Kinemaics is he descripion

WEEK-3 Recitation PHYS 131. of the projectile's velocity remains constant throughout the motion, since the acceleration a_x

WEEK-3 Recitation PHYS 131. of the projectile s velocity remains constant throughout the motion, since the acceleration a x WEEK-3 Reciaion PHYS 131 Ch. 3: FOC 1, 3, 4, 6, 14. Problems 9, 37, 41 & 71 and Ch. 4: FOC 1, 3, 5, 8. Problems 3, 5 & 16. Feb 8, 018 Ch. 3: FOC 1, 3, 4, 6, 14. 1. (a) The horizonal componen of he projecile

Electrical and current self-induction

Electrical and current self-induction Elecrical and curren self-inducion F. F. Mende hp://fmnauka.narod.ru/works.hml mende_fedor@mail.ru Absrac The aricle considers he self-inducance of reacive elemens. Elecrical self-inducion To he laws of

A DELAY-DEPENDENT STABILITY CRITERIA FOR T-S FUZZY SYSTEM WITH TIME-DELAYS

A DELAY-DEPENDENT STABILITY CRITERIA FOR T-S FUZZY SYSTEM WITH TIME-DELAYS. Xinping Guan, Fenglei Li, Cailian Chen. Institute of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China. Department

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

Let us start with a two dimensional case. We consider a vector ( x,

Let us start with a two dimensional case. We consider a vector ( x, Roaion marices We consider now roaion marices in wo and hree dimensions. We sar wih wo dimensions since wo dimensions are easier han hree o undersand, and one dimension is a lile oo simple. However, our

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

Reliability of Technical Systems

Reliability of Technical Systems eliabiliy of Technical Sysems Main Topics Inroducion, Key erms, framing he problem eliabiliy parameers: Failure ae, Failure Probabiliy, Availabiliy, ec. Some imporan reliabiliy disribuions Componen reliabiliy

5 The fitting methods used in the normalization of DSD

5 The fitting methods used in the normalization of DSD The fiing mehods used in he normalizaion of DSD.1 Inroducion Sempere-Torres e al. 1994 presened a general formulaion for he DSD ha was able o reproduce and inerpre all previous sudies of DSD. The mehodology

L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS. NA568 Mobile Robotics: Methods & Algorithms

L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS. NA568 Mobile Robotics: Methods & Algorithms. Today's Topic: Quick review on the (Linear) Kalman Filter; Kalman Filtering for Non-Linear Systems; Extended Kalman Filter (EKF)
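The "quick review" step in this outline, the linear Kalman filter, can be sketched in its simplest scalar form; this is a generic illustration of the linear case only (the EKF additionally linearizes a nonlinear model at each step), not code from the lecture:

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter tracking a (nearly) constant state.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the measurement residual
        p = (1 - k) * p         # posterior variance
        estimates.append(x)
    return estimates

est = kalman_1d([0.9, 1.1, 1.05, 0.95, 1.0])
print(est[-1])  # converges toward the true value, here around 1.0
```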

2016 Possible Examination Questions. Robotics CSCE 574

2016 Possible Examination Questions. Robotics CSCE 574 206 Possible Examinaion Quesions Roboics CSCE 574 ) Wha are he differences beween Hydraulic drive and Shape Memory Alloy drive? Name one applicaion in which each one of hem is appropriae. 2) Wha are he

Energy Storage Benchmark Problems

Energy Storage Benchmark Problems Energy Sorage Benchmark Problems Daniel F. Salas 1,3, Warren B. Powell 2,3 1 Deparmen of Chemical & Biological Engineering 2 Deparmen of Operaions Research & Financial Engineering 3 Princeon Laboraory
