Nonlinear Parametric Hidden Markov Models


M.I.T. Media Laboratory Perceptual Computing Section Technical Report No.

Nonlinear Parametric Hidden Markov Models

Andrew D. Wilson and Aaron F. Bobick
Vision and Modeling Group, MIT Media Laboratory
20 Ames St., Cambridge, MA 02139
{drew, bobick}@media.mit.edu

Abstract

In previous work [4], we extended the hidden Markov model (HMM) framework to incorporate a global parametric variation in the output probabilities of the states of the HMM. Development of the parametric HMM was motivated by the task of simultaneously recognizing and interpreting gestures that exhibit meaningful variation. With standard HMMs, such global variation confounds the recognition process. In this paper we extend the parametric HMM approach to handle nonlinear (non-analytic) dependencies of the output distributions on the parameter of interest. We show a generalized expectation-maximization (GEM) algorithm for training the parametric HMM and a GEM algorithm to simultaneously recognize the gesture and estimate the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural azimuth/elevation parameterization of pointing direction.

1 Introduction

In [4] we introduced parametric hidden Markov models (HMMs) as a technique to simultaneously recognize and interpret parametric gesture. By parametric gesture we mean gestures that exhibit a meaningful variation; an example is a point gesture where the important parameter is direction. A point gesture is then parameterized by two values: the Cartesian coordinates that indicate direction. Alternatively, direction can be specified by spherical coordinates. We refer the reader to [4] for a detailed motivation of the parametric HMM approach as it relates to gesture recognition and interpretation.
We briefly mention here that without resorting to manual tinkering with the feature space, a standard dynamic time warping (DTW) or HMM approach to the recognition of parametric gestures faces a difficulty: the gesture can only be recognized once the values of the parameters that determine the form of the gesture are recovered, and likewise the process of recovering the parameters necessarily involves recognizing the gesture. Parametric HMMs extend the standard HMM model to include a global parametric variation in the output of the HMM states. In [4] a linear model was used to model the parametric variation at each state of the HMM. Using the linear model, we formulated an expectation-maximization (EM) method for training the parametric HMM. During testing, the parametric HMM simultaneously recognizes the gesture and estimates the quantifying parameters, also by an EM procedure. In this paper the parametric HMM approach is extended to handle situations in which the dependence of the state output distributions on the parameters is not linear. Nonlinear parametric HMMs accordingly model the dependence using a single 3-layer logistic neural network at each state. Before presenting nonlinear parametric HMMs in full, we reiterate the mathematical development of linear parametric HMMs.

2 Linear parametric hidden Markov models

2.1 Model

Parametric HMMs model the dependence on the parameter of interest θ explicitly. We begin with the usual HMM formulation [3] and change the form of the output probability distribution (usually a normal distribution or a mixture model) to depend on the parameter θ, a vector quantity. In the standard continuous HMM model, a sequence is represented by movement through a set of hidden states. The Markovian property is encoded in a set of transition probabilities, with a_ij = P(q_t = j | q_(t-1) = i) being the probability of moving to state j at time t given the system was in state i at time t-1. Associated with each state j is an output distribution of the feature vector x_t given the system is really in state j at time t: P(x_t | q_t = j). In a simple Gaussian HMM, the parameters to be estimated are the a_ij, the means μ_j, and the covariances Σ_j.
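To make the quantities above concrete, here is a toy forward/backward computation of the state occupancies γ_tj for a two-state causal Gaussian HMM. All values are illustrative; this is a sketch, not the report's implementation.

```python
import numpy as np

# Toy 2-state causal Gaussian HMM: transition matrix a, per-state means mu,
# unit variances. gamma[t, j] = P(q_t = j | x) via the forward/backward algorithm.
a = np.array([[0.9, 0.1],
              [0.0, 1.0]])          # causal (left-to-right) topology
pi = np.array([1.0, 0.0])           # unique starting state
mu = np.array([0.0, 3.0])
x = np.array([0.1, -0.2, 2.9, 3.1]) # a short 1-D observation sequence

def gauss(v, m, var=1.0):
    return np.exp(-0.5 * (v - m) ** 2 / var) / np.sqrt(2 * np.pi * var)

b = gauss(x[:, None], mu[None, :])  # b[t, j] = P(x_t | q_t = j)

T, N = b.shape
alpha = np.zeros((T, N))
beta = np.zeros((T, N))
alpha[0] = pi * b[0]
for t in range(1, T):               # forward pass
    alpha[t] = (alpha[t - 1] @ a) * b[t]
beta[-1] = 1.0
for t in range(T - 2, -1, -1):      # backward pass
    beta[t] = a @ (beta[t + 1] * b[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)  # each gamma[t] sums to one
```

The sequence starts near the first state's mean and ends near the second's, so γ concentrates on state 0 early and state 1 late.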
To introduce the parameterization on θ we modify the output distributions. The simplest useful model is a linear dependence of the mean of the Gaussian on θ. For each state j of the HMM we have:

    μ̂_j(θ) = W_j θ + μ̄_j                              (1)

    P(x_t | q_t = j, θ) = N(x_t; μ̂_j(θ), Σ_j)          (2)

In the work presented here all values of θ are considered equally likely, and so the prior P(θ) is ignored. Note that θ is constant for the entire observation sequence, but is free to vary from sequence to sequence. When necessary, we write the value of θ associated with a particular sequence k as θ_k.

2.2 Training

Training consists of setting the HMM parameters to maximize the probability of the training sequences. Each training sequence is paired with a value of θ. The Baum-Welch form of the expectation-maximization (EM) algorithm is used to update the parameters of the output probability distributions. (Technically there are also the initial state parameters to be estimated; in this work we use causal topologies with a unique starting state.)
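A minimal sketch of the linear parametric output model of equations 1 and 2, with made-up dimensions and parameters (the report gives no code):

```python
import numpy as np

d, p = 3, 2                       # feature and parameter dimensions (illustrative)
rng = np.random.default_rng(0)
W_j = rng.normal(size=(d, p))     # per-state linear dependence on theta
mu_bar_j = rng.normal(size=d)     # per-state offset
Sigma_j = np.eye(d)               # per-state covariance

def mu_hat(theta):
    # equation 1: mu_hat_j(theta) = W_j theta + mu_bar_j
    return W_j @ theta + mu_bar_j

def log_gauss(x, m, Sigma):
    # equation 2: log N(x; mu_hat_j(theta), Sigma_j)
    diff = x - m
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (diff @ np.linalg.solve(Sigma, diff)
                   + logdet + len(x) * np.log(2 * np.pi))

theta = np.array([0.5, -1.0])
x_t = mu_hat(theta)               # an observation exactly at the state mean
ll_at_mean = log_gauss(x_t, mu_hat(theta), Sigma_j)
ll_off = log_gauss(x_t + 1.0, mu_hat(theta), Sigma_j)
```

An observation at the θ-dependent mean scores higher than one displaced from it, which is what makes the parse informative about θ.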

The expectation step of the Baum-Welch algorithm (also known as the forward/backward algorithm) computes the probability that the HMM was in state j at time t given the entire sequence x; we denote this probability γ_tj. It is convenient to consider the HMM's parse of the observation sequence as being represented by γ. In training, the parameters φ of the HMM are updated in the maximization step of the EM algorithm. In particular, the parameters are updated by choosing a φ' to maximize the auxiliary function Q(φ' | φ). φ' may contain all the parameters in φ, or only a subset if several maximization steps are required to estimate all the parameters. As explained in the appendix, Q is the expected value of the log probability given the parse γ. In the appendix we derive the derivative of Q for HMMs:

    ∂Q/∂φ' = ∑_t ∑_j γ_tj (1 / P(x_t | q_t = j)) ∂P(x_t | q_t = j)/∂φ'      (3)

The parameters of the parameterized Gaussian HMM include W_j, μ̄_j, Σ_j and the Markov model transition probabilities. Updating W_j and μ̄_j separately has the drawback that when estimating W_j only the old value of μ̄_j is available, and similarly if μ̄_j is estimated first. Instead, we define new variables:

    Z_j ≡ [W_j  μ̄_j]        Ω_k ≡ [θ_k^T  1]^T

such that μ̂_j = Z_j Ω_k. We then need only update Z_j in the maximization step for the means. To derive an update equation for Z_j we maximize Q by setting equation 3 to zero (selecting Z_j as the parameters in φ') and solving for Z_j. Note that because each observation sequence k in the training set is associated with a particular θ_k, we can consider all observation sequences in the training set before updating Z_j. Accordingly we denote the γ associated with sequence k as γ_k.
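Given the parse γ, the mean update derived next (equation 8) amounts to a γ-weighted least-squares solve. A numpy sketch with illustrative shapes (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 3, 2                                    # feature and parameter dimensions
K, T = 4, 5                                    # training sequences, time steps
thetas = rng.normal(size=(K, p))               # theta_k for each sequence
Omegas = np.hstack([thetas, np.ones((K, 1))])  # Omega_k = [theta_k; 1]
x = rng.normal(size=(K, T, d))                 # observations x_kt
gamma_j = rng.random(size=(K, T))              # gamma_ktj for one state j

# Accumulate the two gamma-weighted sums of equation 8 over all sequences.
num = np.zeros((d, p + 1))
den = np.zeros((p + 1, p + 1))
for k in range(K):
    for t in range(T):
        num += gamma_j[k, t] * np.outer(x[k, t], Omegas[k])
        den += gamma_j[k, t] * np.outer(Omegas[k], Omegas[k])

Z_j = num @ np.linalg.inv(den)                 # equation 8
W_j, mu_bar_j = Z_j[:, :p], Z_j[:, p]          # unpack Z_j = [W_j mu_bar_j]
```

Stacking θ_k with a constant 1 is what lets W_j and μ̄_j be solved for jointly rather than in alternating steps.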
Substituting the Gaussian distribution and the definition μ̂_j = Z_j Ω_k into equation 3:

    ∂Q/∂Z_j = ∂/∂Z_j ∑_k ∑_t γ_ktj [ -(1/2)(x_kt - μ̂_j(θ_k))^T Σ_j^(-1) (x_kt - μ̂_j(θ_k)) ]      (4)

            = ∑_k ∑_t γ_ktj (x_kt - μ̂_j(θ_k))^T Σ_j^(-1) ∂μ̂_j(θ_k)/∂Z_j                          (5)

            = ∑_k ∑_t γ_ktj Σ_j^(-1) (x_kt - μ̂_j(θ_k)) Ω_k^T                                      (6)

            = Σ_j^(-1) [ ∑_k ∑_t γ_ktj x_kt Ω_k^T - ∑_k ∑_t γ_ktj Z_j Ω_k Ω_k^T ]                 (7)

Setting this derivative to zero and solving for Z_j, we get the update equation for Z_j:

    Z_j = [ ∑_k ∑_t γ_ktj x_kt Ω_k^T ] [ ∑_k ∑_t γ_ktj Ω_k Ω_k^T ]^(-1)      (8)

Once the means are estimated, the covariance matrices Σ_j are updated in the usual way:

    Σ_j = (1 / ∑_k ∑_t γ_ktj) ∑_k ∑_t γ_ktj (x_kt - μ̂_j(θ_k))(x_kt - μ̂_j(θ_k))^T      (9)

as is the matrix of transition probabilities [3].

2.3 Testing

In testing we are given an HMM and an input sequence. We wish to compute the value of θ and the probability that the HMM produced the sequence. As compared to the usual HMM formulation, the parameterized HMM's testing procedure is complicated by the dependence of the parse on the unknown θ. Here we present only a technique to extract the value of θ, since for a given value of θ the probability of the sequence x is easily computed by the Viterbi algorithm or by the forward/backward algorithm. We desire the value of θ which maximizes the probability of the observation sequence. Again an EM algorithm is appropriate: the expectation step is the same forward/backward algorithm used in training. The forward/backward algorithm computes the optimal parse γ given a value of θ. In the corresponding maximization step we update θ to maximize Q, the log probability of the sequence given the parse γ. To derive an update equation for θ, we start with the derivative in equation 3 from the previous section and select θ as φ'. As with Z_j, only the means depend upon θ, yielding:

    ∂Q/∂θ = ∑_t ∑_j γ_tj (x_t - μ̂_j(θ))^T Σ_j^(-1) ∂μ̂_j(θ)/∂θ      (10)

Setting this derivative to zero and solving for θ, we have:

    θ = [ ∑_t ∑_j γ_tj W_j^T Σ_j^(-1) W_j ]^(-1) [ ∑_t ∑_j γ_tj W_j^T Σ_j^(-1) (x_t - μ̄_j) ]      (11)

The values of γ and θ are iteratively updated until the change in θ is small. With the examples we have tried, fewer than ten iterations are sufficient. Note that for efficiency, many of the inner terms of the above expression may be pre-computed.

3 Nonlinear parametric hidden Markov models

3.1 Model

Nonlinear parametric hidden Markov models omit the linear model of section 2.1 in favor of a logistic neural network with one hidden layer.
As with linear parametric HMMs, the output of each network is perturbed by Gaussian noise:

    P(x_t | q_t = j, θ) = N(x_t; μ̂_j(θ), Σ_j)      (12)

The output μ̂_j(θ) of the network associated with state j can be written as

    μ̂_j(θ) = W_j^(2) g(W_j^(1) θ + b_j^(1)) + b_j^(2)      (13)

where W_j^(1) denotes the matrix of weights from the input layer to the layer of hidden logistic units, b_j^(1) the biases at each hidden unit, and g(x) the vector-valued function that computes the logistic function of each component of its argument. Similarly, W_j^(2) and b_j^(2) denote the weights and biases for the output layer.

3.2 Training

As with standard HMMs and linear parametric HMMs, the parameters of the nonlinear parametric HMM are updated in the maximization step of the training EM algorithm by choosing a φ' to maximize the auxiliary function Q(φ' | φ). In the nonlinear parametric HMM, the parameters include the parameters of each neural network as well as Σ_j and the transition probabilities a_ij. Unlike the linear parametric HMM, it is not possible to maximize Q with respect to φ' analytically. Instead we rely on the generalized expectation-maximization (GEM) algorithm, in which Q is (approximately) maximized in the maximization step using optimization techniques. The expectation step is the same as in the linear parametric and standard HMM formulations (the forward/backward algorithm). Gradient ascent may be used to update the network parameters in each maximization step of the GEM algorithm. When applied to multi-layer neural networks, gradient ascent (or gradient descent, when the goal is to minimize error) is often referred to as the backpropagation algorithm [1]. Rather than reiterate the gradient descent equations for logistic neural networks here, we note that the backpropagation algorithm is appropriate for optimizing Q with the modification that the error to be propagated backwards into the network has the form

    γ_ktj Σ_j^(-1) (x_kt - μ̂_j(θ_k))      (14)

which is simply the usual error weighted by γ_ktj and Σ_j^(-1). Intuitively, this weighting steers each network to model the appropriate part of the input, much as the gating function of a mixture of experts model [2] selects its experts. Also, this weighting may be derived from the form of ∂Q/∂φ' (equation 3). In each maximization step of the GEM algorithm, it is not necessary to completely maximize Q. As long as Q is increased in every maximization step, the GEM algorithm is guaranteed to converge to a local maximum in the same manner as EM. In fact, since the functional Q changes with every expectation step, a complete maximization of Q in the maximization step is probably computationally wasteful. Accordingly, in our testing we run the gradient ascent algorithm (backpropagation algorithm) a fixed number of iterations for each GEM iteration.

3.3 Testing

In testing we desire the value of θ which maximizes the probability of the observation sequence. Again an EM algorithm to compute θ is appropriate. As in the training phase, we cannot maximize Q analytically, and so a GEM algorithm is necessary.
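The per-state network of equation 13 and the weighted backpropagation error of equation 14 can be sketched as follows (dimensions and parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
p, h, d = 2, 4, 3                 # parameter, hidden, and feature dimensions
W1 = rng.normal(size=(h, p))      # input-to-hidden weights W_j^(1)
b1 = rng.normal(size=h)           # hidden biases b_j^(1)
W2 = rng.normal(size=(d, h))      # hidden-to-output weights W_j^(2)
b2 = rng.normal(size=d)           # output biases b_j^(2)
Sigma_inv = np.eye(d)             # inverse state covariance

def g(z):
    return 1.0 / (1.0 + np.exp(-z))   # elementwise logistic function

def mu_hat(theta):
    # equation 13: mu_hat_j(theta) = W2 g(W1 theta + b1) + b2
    return W2 @ g(W1 @ theta + b1) + b2

theta = np.array([0.3, -0.7])
x_t = rng.normal(size=d)
gamma_tj = 0.8
# equation 14: the error backpropagated into state j's network in the M-step
error = gamma_tj * Sigma_inv @ (x_t - mu_hat(theta))
```

With γ near zero the error vanishes, so a state's network is trained only on the portions of the input the parse assigns to it.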
To optimize Q, we use a gradient ascent algorithm:

    ∂Q/∂θ = ∑_t ∑_j γ_tj (x_t - μ̂_j(θ))^T Σ_j^(-1) ∂μ̂_j(θ)/∂θ      (15)

    ∂μ̂_j(θ)/∂θ = W_j^(2) Λ(g'(W_j^(1) θ + b_j^(1))) W_j^(1)      (16)

where Λ(x) forms the diagonal matrix from the components of x, and g'(x) denotes the derivative of the vector-valued function that computes the logistic function of each component of its argument. In the results presented in this paper, we use a gradient ascent algorithm with adaptive step size. In addition, it was found necessary to constrain the gradient ascent step to prevent the algorithm from wandering outside the bounds of the training data, where the output of the neural networks is essentially undefined. This constraint is implemented by simply limiting any component of the step that takes the value of θ outside the bounds of the training data, established by the minimum and maximum training values. As with the EM training algorithm of the linear parametric case, with the examples we have tried fewer than ten GEM iterations are required.

4 Discussion

In [4] we present an example of a pointing gesture parameterized by the projection of hand position onto the plane parallel to and in front of the user at the moment that the arm is fully extended. The linear parametric HMM approach works well since the projection is a linear operation. The nonlinear variant of the parametric HMM introduced in the previous section is appropriate in situations in which the dependence of the state output distributions on the parameters is not linear, and cannot easily be made linear with a known coordinate transformation of the feature space. In practice, a useful consequence of nonlinear modeling for parametric HMMs is that the parameter space may be chosen more freely in relation to the observation feature space. For example, in a hand gesture recognition system, the natural feature space may be the spatial position of the hand, while a natural parameterization for a pointing gesture is the spherical coordinates of the pointing direction. Conversely, there is no guarantee that any observation feature space will permit the parametric HMM to learn the parameterization.
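One GEM testing update for θ, following equations 15 and 16 with the step clamped to the training bounds, might look like the sketch below. All shapes and values are illustrative, and a full implementation would also adapt the step size and sum over multiple passes.

```python
import numpy as np

rng = np.random.default_rng(3)
p, h, d = 2, 4, 3                 # parameter, hidden, and feature dimensions
W1 = rng.normal(size=(h, p)); b1 = rng.normal(size=h)
W2 = rng.normal(size=(d, h)); b2 = rng.normal(size=d)
Sigma_inv = np.eye(d)
# bounds of theta observed in the (hypothetical) training data
theta_lo, theta_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

def mu_hat(theta):
    return W2 @ g(W1 @ theta + b1) + b2

def jacobian(theta):
    # equation 16: d mu_hat / d theta = W2 diag(g'(W1 theta + b1)) W1
    s = g(W1 @ theta + b1)
    return W2 @ np.diag(s * (1.0 - s)) @ W1

def gem_step(theta, x_seq, gamma_j, step=0.1):
    # equation 15 for a single state j; a full version also sums over states
    grad = np.zeros_like(theta)
    J = jacobian(theta)
    for t, x_t in enumerate(x_seq):
        grad += gamma_j[t] * J.T @ Sigma_inv @ (x_t - mu_hat(theta))
    # clamp each component of the new theta to the training-data bounds
    return np.clip(theta + step * grad, theta_lo, theta_hi)

x_seq = rng.normal(size=(5, d))
gamma_j = np.full(5, 0.5)
theta = gem_step(np.zeros(p), x_seq, gamma_j)
```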
Continuing with the pointing example, the nonlinear parametric HMM approach will learn the smooth mapping from spherical coordinates of the point to hand position at each state unambiguously. Obviously, a feature space that does not include the x coordinate (across the body) will not be enough to capture the parameterization, while a feature space that neglects the depth away from the body may work well enough. One difficulty in assessing whether an observation feature space and a parameter space are appropriate for one another is whether the mapping from parameters to observation features is smooth enough to be learned by neural networks with a reasonable number of hidden units. While in theory a 3-layer logistic neural network with sufficiently many hidden units is capable of learning any mapping, we would like to use as few hidden units as possible, and so choose our parameterization and observation feature space to give simple, learnable mappings. Cross-validation is probably the only practical automatic procedure to evaluate parameter/observation feature space pairings, as well as the number of hidden units in each neural network. The computational complexity of such approaches is a drawback of the nonlinear parametric HMM approach. In summary, with nonlinear parametric HMMs we are free to choose intuitive parameterizations, but we must be careful that it is possible to learn the mapping from parameters to observation features given a particular observation feature space.

5 Results

To test the performance of the nonlinear parametric HMM, we conducted an experiment similar to the pointing experiment of [4] but with a spherical coordinate parameterization rather than the projection onto a plane in front of the user. We used a Polhemus motion capture system to record the position of the user's wrist at a frame rate of 30Hz. Fifty such examples were collected, each averaging 29 time samples (about a second) in length. Thirty of the sequences were randomly selected as the training set; the remaining comprised the test set.
Before training, the value of the parameter θ must be set for each training example, as well as for each testing example to evaluate the ability of the parametric HMM to recover the parameterization. We directly measured the value of θ by finding the point at which the depth of the wrist away from the user was greatest; this point was transformed to spherical coordinates (azimuth and elevation) via the arctangent function.

Figure 1: The distribution of θ for the pointing data sets over the tangent function (azimuth and elevation plotted against tan(x)).

Note that for pointing gestures that are confined to a small area in front of the user (as in the experiment presented in [4]) the linear parametric HMM approach will work well enough, since for small values the tangent function is approximately linear. The pointing gestures used in the present experiment were broader, ranging from -36 to +8 degrees elevation and -77 to +8 degrees azimuth. Figure 1 shows how the population of θ associated with the training set is distributed over more than the nearly linear portion of the tangent function. An eight-state causal nonlinear parametric HMM was trained on the forty training examples. To simplify training we constrained the number of hidden units of each state to be equal; note that this constraint is not present in the model but makes choosing the number of hidden units via cross-validation easier. We evaluated performance on the testing set for various numbers of hidden units and selected the number that gave the best testing performance. We did not evaluate the performance under varying amounts of training data or varying numbers of states in the HMM. The output of the resulting eight neural networks is shown in Figure 3. The output of the neural networks shows how the input's dependence on θ is most dramatic in the middle of the sequence, at the apex of the pointing gesture. The average error over the testing set was computed to be about 6. degrees elevation and 7.5 degrees azimuth. For comparison, an eight-state linear parametric HMM was trained on the same training data and yielded an average error over the same testing set of about .9 degrees elevation and 8.3 degrees azimuth. Lastly, we demonstrate recognition performance of the nonlinear parameterized HMM on our pointing data. A one-minute sequence was collected that contained a variety of movements, including six points distributed throughout.
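The azimuth/elevation values can be computed from a 3D wrist position with arctangents. The report does not specify its axis convention, so the mapping below is an assumption (z away from the body, x across the body, y up):

```python
import numpy as np

def to_spherical(point):
    # Convert a 3D pointing direction to (azimuth, elevation) in degrees.
    # Axis convention is assumed: z away from the body, x across, y up.
    x, y, z = point
    azimuth = np.degrees(np.arctan2(x, z))
    elevation = np.degrees(np.arctan2(y, np.hypot(x, z)))
    return azimuth, elevation

# A point equally far to the side as it is forward lies at 45 degrees azimuth.
az, el = to_spherical(np.array([1.0, 0.0, 1.0]))
```

Using arctan2 rather than a plain arctangent keeps the quadrant unambiguous for broad pointing ranges like those in this experiment.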
To simultaneously detect the gesture and recover θ, we used a 30-sample (one second) window on the sequence. Figure 2 shows the log probability as a function of time and the value of θ recovered for a number of recovered pointing gestures. All of the pointing gestures were recovered.

6 Conclusion

The parametric hidden Markov model framework presented in [4] has been generalized to handle nonlinear dependencies of the state output distributions on the parameterization. We have shown that where the linear parametric HMM employs the EM algorithm in training and testing, the nonlinear variant similarly uses the GEM algorithm. The drawbacks of the generalized approach are twofold: the number of hidden units for the networks must be chosen appropriately during training, and secondly, during testing the GEM algorithm is more computationally intensive than the EM algorithm of the linear approach. The nonlinear parametric HMM is able to model a much larger class of parametric gestures and movements than the linear parametric HMM. A practical benefit of the increased modeling ability is that, with some care, the parameter space may be chosen independently of the observation feature space. Theoretically, this should allow more intuitive parameterizations, including perhaps those derived from more subjective qualities of the signal (e.g. the intensity of a walk). Additionally, it should be easier to tailor parameter spaces to specific gestures, with all gestures employing the same observation feature space that is indigenous to the sensors. We believe that these are significant advantages in modeling parametric gesture and movement.

References

[1] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995.
[2] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79-87, 1991.
[3] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, pages 4-16, January 1986.
[4] A. D. Wilson and A. F. Bobick. Recognition and interpretation of parametric gesture. Proc. Int. Conf. Comp. Vis., 1998.
Accepted for publication (see MIT Perceptual Computing Group Technical Report, http://www-white.media.mit.edu/vismod).

Figure 2: Recognition results are shown by the log probability of the windowed sequence beginning at each frame number. The true positive sequences are labeled by the value of θ recovered by the EM testing algorithm and the value computed by direct measurement (in parentheses).

Figure 3: The output of each network is displayed (as a surface) for each of the observation feature coordinates (x, y, and z), for each of the eight states of the trained parametric HMM.


More information

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin ACE 56 Fall 005 Lecure 4: Simple Linear Regression Model: Specificaion and Esimaion by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Simple Regression: Economic and Saisical Model

More information

Online Appendix to Solution Methods for Models with Rare Disasters

Online Appendix to Solution Methods for Models with Rare Disasters Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,

More information

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017 Two Popular Bayesian Esimaors: Paricle and Kalman Filers McGill COMP 765 Sep 14 h, 2017 1 1 1, dx x Bel x u x P x z P Recall: Bayes Filers,,,,,,, 1 1 1 1 u z u x P u z u x z P Bayes z = observaion u =

More information

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

More information

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle Chaper 2 Newonian Mechanics Single Paricle In his Chaper we will review wha Newon s laws of mechanics ell us abou he moion of a single paricle. Newon s laws are only valid in suiable reference frames,

More information

Estimation of Poses with Particle Filters

Estimation of Poses with Particle Filters Esimaion of Poses wih Paricle Filers Dr.-Ing. Bernd Ludwig Chair for Arificial Inelligence Deparmen of Compuer Science Friedrich-Alexander-Universiä Erlangen-Nürnberg 12/05/2008 Dr.-Ing. Bernd Ludwig (FAU

More information

SOLUTIONS TO ECE 3084

SOLUTIONS TO ECE 3084 SOLUTIONS TO ECE 384 PROBLEM 2.. For each sysem below, specify wheher or no i is: (i) memoryless; (ii) causal; (iii) inverible; (iv) linear; (v) ime invarian; Explain your reasoning. If he propery is no

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LDA, logisic

More information

Linear Response Theory: The connection between QFT and experiments

Linear Response Theory: The connection between QFT and experiments Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and

More information

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

More information

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H.

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H. ACE 56 Fall 005 Lecure 5: he Simple Linear Regression Model: Sampling Properies of he Leas Squares Esimaors by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Inference in he Simple

More information

2. Nonlinear Conservation Law Equations

2. Nonlinear Conservation Law Equations . Nonlinear Conservaion Law Equaions One of he clear lessons learned over recen years in sudying nonlinear parial differenial equaions is ha i is generally no wise o ry o aack a general class of nonlinear

More information

Ensamble methods: Boosting

Ensamble methods: Boosting Lecure 21 Ensamble mehods: Boosing Milos Hauskrech milos@cs.pi.edu 5329 Senno Square Schedule Final exam: April 18: 1:00-2:15pm, in-class Term projecs April 23 & April 25: a 1:00-2:30pm in CS seminar room

More information

Inferring Dynamic Dependency with Applications to Link Analysis

Inferring Dynamic Dependency with Applications to Link Analysis Inferring Dynamic Dependency wih Applicaions o Link Analysis Michael R. Siracusa Massachuses Insiue of Technology 77 Massachuses Ave. Cambridge, MA 239 John W. Fisher III Massachuses Insiue of Technology

More information

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks -

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks - Deep Learning: Theory, Techniques & Applicaions - Recurren Neural Neworks - Prof. Maeo Maeucci maeo.maeucci@polimi.i Deparmen of Elecronics, Informaion and Bioengineering Arificial Inelligence and Roboics

More information

Lecture 33: November 29

Lecture 33: November 29 36-705: Inermediae Saisics Fall 2017 Lecurer: Siva Balakrishnan Lecure 33: November 29 Today we will coninue discussing he boosrap, and hen ry o undersand why i works in a simple case. In he las lecure

More information

Class Meeting # 10: Introduction to the Wave Equation

Class Meeting # 10: Introduction to the Wave Equation MATH 8.5 COURSE NOTES - CLASS MEETING # 0 8.5 Inroducion o PDEs, Fall 0 Professor: Jared Speck Class Meeing # 0: Inroducion o he Wave Equaion. Wha is he wave equaion? The sandard wave equaion for a funcion

More information

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t...

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t... Mah 228- Fri Mar 24 5.6 Marix exponenials and linear sysems: The analogy beween firs order sysems of linear differenial equaions (Chaper 5) and scalar linear differenial equaions (Chaper ) is much sronger

More information

MATH 5720: Gradient Methods Hung Phan, UMass Lowell October 4, 2018

MATH 5720: Gradient Methods Hung Phan, UMass Lowell October 4, 2018 MATH 5720: Gradien Mehods Hung Phan, UMass Lowell Ocober 4, 208 Descen Direcion Mehods Consider he problem min { f(x) x R n}. The general descen direcions mehod is x k+ = x k + k d k where x k is he curren

More information

10. State Space Methods

10. State Space Methods . Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he

More information

R t. C t P t. + u t. C t = αp t + βr t + v t. + β + w t

R t. C t P t. + u t. C t = αp t + βr t + v t. + β + w t Exercise 7 C P = α + β R P + u C = αp + βr + v (a) (b) C R = α P R + β + w (c) Assumpions abou he disurbances u, v, w : Classical assumions on he disurbance of one of he equaions, eg. on (b): E(v v s P,

More information

Robotics I. April 11, The kinematics of a 3R spatial robot is specified by the Denavit-Hartenberg parameters in Tab. 1.

Robotics I. April 11, The kinematics of a 3R spatial robot is specified by the Denavit-Hartenberg parameters in Tab. 1. Roboics I April 11, 017 Exercise 1 he kinemaics of a 3R spaial robo is specified by he Denavi-Harenberg parameers in ab 1 i α i d i a i θ i 1 π/ L 1 0 1 0 0 L 3 0 0 L 3 3 able 1: able of DH parameers of

More information

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19 Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

Random Walk with Anti-Correlated Steps

Random Walk with Anti-Correlated Steps Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and

More information

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS SEIF, EnKF, EKF SLAM Pieer Abbeel UC Berkeley EECS Informaion Filer From an analyical poin of view == Kalman filer Difference: keep rack of he inverse covariance raher han he covariance marix [maer of

More information

Multi-scale 2D acoustic full waveform inversion with high frequency impulsive source

Multi-scale 2D acoustic full waveform inversion with high frequency impulsive source Muli-scale D acousic full waveform inversion wih high frequency impulsive source Vladimir N Zubov*, Universiy of Calgary, Calgary AB vzubov@ucalgaryca and Michael P Lamoureux, Universiy of Calgary, Calgary

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Probabilisic reasoning over ime So far, we ve mosly deal wih episodic environmens Excepions: games wih muliple moves, planning In paricular, he Bayesian neworks we ve seen so far describe

More information

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006 2.160 Sysem Idenificaion, Esimaion, and Learning Lecure Noes No. 8 March 6, 2006 4.9 Eended Kalman Filer In many pracical problems, he process dynamics are nonlinear. w Process Dynamics v y u Model (Linearized)

More information

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon 3..3 INRODUCION O DYNAMIC OPIMIZAION: DISCREE IME PROBLEMS A. he Hamilonian and Firs-Order Condiions in a Finie ime Horizon Define a new funcion, he Hamilonian funcion, H. H he change in he oal value of

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

Final Spring 2007

Final Spring 2007 .615 Final Spring 7 Overview The purpose of he final exam is o calculae he MHD β limi in a high-bea oroidal okamak agains he dangerous n = 1 exernal ballooning-kink mode. Effecively, his corresponds o

More information

Online Convex Optimization Example And Follow-The-Leader

Online Convex Optimization Example And Follow-The-Leader CSE599s, Spring 2014, Online Learning Lecure 2-04/03/2014 Online Convex Opimizaion Example And Follow-The-Leader Lecurer: Brendan McMahan Scribe: Sephen Joe Jonany 1 Review of Online Convex Opimizaion

More information

Testing the Random Walk Model. i.i.d. ( ) r

Testing the Random Walk Model. i.i.d. ( ) r he random walk heory saes: esing he Random Walk Model µ ε () np = + np + Momen Condiions where where ε ~ i.i.d he idea here is o es direcly he resricions imposed by momen condiions. lnp lnp µ ( lnp lnp

More information

SPH3U: Projectiles. Recorder: Manager: Speaker:

SPH3U: Projectiles. Recorder: Manager: Speaker: SPH3U: Projeciles Now i s ime o use our new skills o analyze he moion of a golf ball ha was ossed hrough he air. Le s find ou wha is special abou he moion of a projecile. Recorder: Manager: Speaker: 0

More information

KINEMATICS IN ONE DIMENSION

KINEMATICS IN ONE DIMENSION KINEMATICS IN ONE DIMENSION PREVIEW Kinemaics is he sudy of how hings move how far (disance and displacemen), how fas (speed and velociy), and how fas ha how fas changes (acceleraion). We say ha an objec

More information

!!"#"$%&#'()!"#&'(*%)+,&',-)./0)1-*23)

!!#$%&#'()!#&'(*%)+,&',-)./0)1-*23) "#"$%&#'()"#&'(*%)+,&',-)./)1-*) #$%&'()*+,&',-.%,/)*+,-&1*#$)()5*6$+$%*,7&*-'-&1*(,-&*6&,7.$%$+*&%'(*8$&',-,%'-&1*(,-&*6&,79*(&,%: ;..,*&1$&$.$%&'()*1$$.,'&',-9*(&,%)?%*,('&5

More information

Sensors, Signals and Noise

Sensors, Signals and Noise Sensors, Signals and Noise COURSE OUTLINE Inroducion Signals and Noise: 1) Descripion Filering Sensors and associaed elecronics rv 2017/02/08 1 Noise Descripion Noise Waveforms and Samples Saisics of Noise

More information

Rapid Termination Evaluation for Recursive Subdivision of Bezier Curves

Rapid Termination Evaluation for Recursive Subdivision of Bezier Curves Rapid Terminaion Evaluaion for Recursive Subdivision of Bezier Curves Thomas F. Hain School of Compuer and Informaion Sciences, Universiy of Souh Alabama, Mobile, AL, U.S.A. Absrac Bézier curve flaening

More information

On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems

On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 2, May 2013, pp. 209 227 ISSN 0364-765X (prin) ISSN 1526-5471 (online) hp://dx.doi.org/10.1287/moor.1120.0562 2013 INFORMS On Boundedness of Q-Learning Ieraes

More information

Dimitri Solomatine. D.P. Solomatine. Data-driven modelling (part 2). 2

Dimitri Solomatine. D.P. Solomatine. Data-driven modelling (part 2). 2 Daa-driven modelling. Par. Daa-driven Arificial di Neural modelling. Newors Par Dimiri Solomaine Arificial neural newors D.P. Solomaine. Daa-driven modelling par. 1 Arificial neural newors ANN: main pes

More information

Augmented Reality II - Kalman Filters - Gudrun Klinker May 25, 2004

Augmented Reality II - Kalman Filters - Gudrun Klinker May 25, 2004 Augmened Realiy II Kalman Filers Gudrun Klinker May 25, 2004 Ouline Moivaion Discree Kalman Filer Modeled Process Compuing Model Parameers Algorihm Exended Kalman Filer Kalman Filer for Sensor Fusion Lieraure

More information

) were both constant and we brought them from under the integral.

) were both constant and we brought them from under the integral. YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha

More information

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n Module Fick s laws of diffusion Fick s laws of diffusion and hin film soluion Adolf Fick (1855) proposed: d J α d d d J (mole/m s) flu (m /s) diffusion coefficien and (mole/m 3 ) concenraion of ions, aoms

More information

Guest Lectures for Dr. MacFarlane s EE3350 Part Deux

Guest Lectures for Dr. MacFarlane s EE3350 Part Deux Gues Lecures for Dr. MacFarlane s EE3350 Par Deux Michael Plane Mon., 08-30-2010 Wrie name in corner. Poin ou his is a review, so I will go faser. Remind hem o go lisen o online lecure abou geing an A

More information

Morning Time: 1 hour 30 minutes Additional materials (enclosed):

Morning Time: 1 hour 30 minutes Additional materials (enclosed): ADVANCED GCE 78/0 MATHEMATICS (MEI) Differenial Equaions THURSDAY JANUARY 008 Morning Time: hour 30 minues Addiional maerials (enclosed): None Addiional maerials (required): Answer Bookle (8 pages) Graph

More information

Single and Double Pendulum Models

Single and Double Pendulum Models Single and Double Pendulum Models Mah 596 Projec Summary Spring 2016 Jarod Har 1 Overview Differen ypes of pendulums are used o model many phenomena in various disciplines. In paricular, single and double

More information

Innova Junior College H2 Mathematics JC2 Preliminary Examinations Paper 2 Solutions 0 (*)

Innova Junior College H2 Mathematics JC2 Preliminary Examinations Paper 2 Solutions 0 (*) Soluion 3 x 4x3 x 3 x 0 4x3 x 4x3 x 4x3 x 4x3 x x 3x 3 4x3 x Innova Junior College H Mahemaics JC Preliminary Examinaions Paper Soluions 3x 3 4x 3x 0 4x 3 4x 3 0 (*) 0 0 + + + - 3 3 4 3 3 3 3 Hence x or

More information

References are appeared in the last slide. Last update: (1393/08/19)

References are appeared in the last slide. Last update: (1393/08/19) SYSEM IDEIFICAIO Ali Karimpour Associae Professor Ferdowsi Universi of Mashhad References are appeared in he las slide. Las updae: 0..204 393/08/9 Lecure 5 lecure 5 Parameer Esimaion Mehods opics o be

More information

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j =

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j = 1: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME Moving Averages Recall ha a whie noise process is a series { } = having variance σ. The whie noise process has specral densiy f (λ) = of

More information

Differential Equations

Differential Equations Mah 21 (Fall 29) Differenial Equaions Soluion #3 1. Find he paricular soluion of he following differenial equaion by variaion of parameer (a) y + y = csc (b) 2 y + y y = ln, > Soluion: (a) The corresponding

More information

Lecture 9: September 25

Lecture 9: September 25 0-725: Opimizaion Fall 202 Lecure 9: Sepember 25 Lecurer: Geoff Gordon/Ryan Tibshirani Scribes: Xuezhi Wang, Subhodeep Moira, Abhimanu Kumar Noe: LaTeX emplae couresy of UC Berkeley EECS dep. Disclaimer:

More information

A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs

A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs PROC. IEEE CONFERENCE ON DECISION AND CONTROL, 06 A Primal-Dual Type Algorihm wih he O(/) Convergence Rae for Large Scale Consrained Convex Programs Hao Yu and Michael J. Neely Absrac This paper considers

More information

Maintenance Models. Prof. Robert C. Leachman IEOR 130, Methods of Manufacturing Improvement Spring, 2011

Maintenance Models. Prof. Robert C. Leachman IEOR 130, Methods of Manufacturing Improvement Spring, 2011 Mainenance Models Prof Rober C Leachman IEOR 3, Mehods of Manufacuring Improvemen Spring, Inroducion The mainenance of complex equipmen ofen accouns for a large porion of he coss associaed wih ha equipmen

More information

5.1 - Logarithms and Their Properties

5.1 - Logarithms and Their Properties Chaper 5 Logarihmic Funcions 5.1 - Logarihms and Their Properies Suppose ha a populaion grows according o he formula P 10, where P is he colony size a ime, in hours. When will he populaion be 2500? We

More information

not to be republished NCERT MATHEMATICAL MODELLING Appendix 2 A.2.1 Introduction A.2.2 Why Mathematical Modelling?

not to be republished NCERT MATHEMATICAL MODELLING Appendix 2 A.2.1 Introduction A.2.2 Why Mathematical Modelling? 256 MATHEMATICS A.2.1 Inroducion In class XI, we have learn abou mahemaical modelling as an aemp o sudy some par (or form) of some real-life problems in mahemaical erms, i.e., he conversion of a physical

More information

Západočeská Univerzita v Plzni, Czech Republic and Groupe ESIEE Paris, France

Západočeská Univerzita v Plzni, Czech Republic and Groupe ESIEE Paris, France ADAPTIVE SIGNAL PROCESSING USING MAXIMUM ENTROPY ON THE MEAN METHOD AND MONTE CARLO ANALYSIS Pavla Holejšovsá, Ing. *), Z. Peroua, Ing. **), J.-F. Bercher, Prof. Assis. ***) Západočesá Univerzia v Plzni,

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

Theory of! Partial Differential Equations!

Theory of! Partial Differential Equations! hp://www.nd.edu/~gryggva/cfd-course/! Ouline! Theory o! Parial Dierenial Equaions! Gréar Tryggvason! Spring 011! Basic Properies o PDE!! Quasi-linear Firs Order Equaions! - Characerisics! - Linear and

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION DOI: 0.038/NCLIMATE893 Temporal resoluion and DICE * Supplemenal Informaion Alex L. Maren and Sephen C. Newbold Naional Cener for Environmenal Economics, US Environmenal Proecion

More information