An Ensemble Approach for Incremental Learning in Nonstationary Environments


Michael D. Muhlbaier and Robi Polikar*
Signal Processing and Pattern Recognition Laboratory, Electrical and Computer Engineering, Rowan University, Glassboro, NJ 08028 USA
muhlbaier@ieee.org, polikar@rowan.edu

Abstract. We describe an ensemble-of-classifiers based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each of which is drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment in an incremental manner, that is, without having access to previously available data. Instead of a time window over incoming instances, or the age-based forgetting used by most ensemble based nonstationary learning algorithms, a strategic weighting mechanism is employed that tracks the classifiers' performances over drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available, and then combines them through a dynamically modified weighted majority voting, where the voting weights themselves are computed as weighted averages of the classifiers' individual performances over all environments. We describe the implementation details of this approach, as well as its initial results on simulated non-stationary environments.

Keywords: Nonstationary environment, concept drift, Learn++.NSE.

1 Introduction

The problem of learning in nonstationary environments (NSE) has traditionally received much less attention than its stationary counterpart. It is perhaps due to the inherent difficulty of the problem: after all, machine learning algorithms require a formal and precise definition of the problem, and in this case it is even difficult to define the problem itself: what exactly is a nonstationary environment?
Informally, it refers to the variation of the underlying data distribution that defines the concept to be learned: the environment subsequently provides new data, for which the input/output mapping (decision boundary) is different than those of the previous datasets. Early work in NSE learning, also known as concept drift, has concentrated on formalizing exactly what kind of drift, and/or how fast of a drift, can be learned [1-4]. Therein lies the difficulty with this problem: change can be slow or fast, abrupt or gradual, random or systematic, cyclical, expanding or contracting in the feature space.

* Corresponding author.

M. Haindl, J. Kittler, and F. Roli (Eds.): MCS 2007, LNCS 4472, pp. 490-500, 2007. © Springer-Verlag Berlin Heidelberg 2007

Several approaches have been proposed for various combinations of such changes, which typically include a mechanism to (i) detect the drift and/or its magnitude; (ii) learn the change in the environment; and/or (iii) forget what is no longer relevant. Many algorithms specifically designed for concept drift are generally based in part on the ideas used in FLORA [3]: use a timing window (fixed or variable) to choose a block of (new) instances as they come in, and then train a new classifier. The window size can be increased or decreased depending on how fast the environment is changing. Of course, such algorithms have a built-in forgetting mechanism with the implicit assumption that those instances that fall outside of the window are no longer relevant, and the information carried by them can be forgotten. Other approaches try to detect when a substantial change has occurred (novelty detection), and then update the classifier [5-8], or find the extreme examples that carry the most relevant and novel information to train a classifier (partial instance memory) [9]; still others detect and purge those instances that no longer carry relevant information, and train a new classifier with the remaining block of data [10]. A different group of approaches treats nonstationary learning as a prediction problem, and uses appropriate classification (e.g., PNN) [11] or tracking algorithms (such as Kalman filters) [12]. More recently, ensemble based approaches have also been proposed. As reviewed by Kuncheva in [13], such algorithms typically fall into one of the following categories: (i) dynamic combiners where, for a previously trained fixed ensemble, the combination rule is changed to track the concept drift, e.g., weighted majority voting or Winnow based algorithms [14]; (ii) algorithms that use the new training data to update an online learner or all members of an ensemble, e.g.
online boosting [15]; and (iii) algorithms that add new ensemble members [16] and/or replace the least contributing member of an ensemble with a classifier trained on new data [17,18]. In this paper we propose an alternative formulation, Learn++.NSE, built upon our previously introduced incremental learning algorithm Learn++ [19], by suitably modifying it for nonstationary environments. Learn++.NSE does not completely fit into any of the above categories, but rather combines ideas from each. It does use new data to create new ensemble members, and it also adjusts the combination rule by dynamically modifying voting weights. It does not use a timing window, or any of the previously seen data (hence an incremental learning approach), and it does not discard old classifiers, in case old classifiers become relevant again in the future. It simply reweights them based on their predicted expertise on the current environment. It is reasonable to question whether there is need for yet another ensemble based algorithm for nonstationary learning, in particular considering that we make no claim on the superiority of Learn++.NSE over any of the other algorithms, ensemble based or otherwise. In the spirit of the no-free-lunch theorem, we believe that no single algorithm can outperform all others on all applications, that the success depends much on the match between the characteristics of the algorithm and those of the problem, and that it is therefore best to have access to a toolbox of algorithms, each with its own particular strengths and weaknesses. Considering the differences of Learn++.NSE mentioned above, along with the promising initial results and outcomes discussed later in this paper, we believe that Learn++.NSE can be a beneficial alternative in a variety of nonstationary learning scenarios. The algorithm is described in detail in Section 2,

followed by a description of our simulation tests and results in Section 3. Specific nonstationary environment scenarios in which the algorithm is expected to be particularly useful are discussed in Section 4.

2 Learn++.NSE

Learn++.NSE uses a similar structural framework as Learn++: incrementally build an ensemble of classifiers from new data, where the classifiers are combined by weighted majority voting, and the existing ensemble decides on the contribution of each successive classifier. There are a number of differences, however, instituted for suitably modifying the algorithm for nonstationary environments. In summary, we assume that each new dataset represents a new snapshot of the environment. The amount of change in the environment since the previous dataset may be minor or substantial, and is tracked by the performance of the existing ensemble on the current dataset. Learn++.NSE creates a single classifier for each dataset that becomes available from the new environment, and a weighted performance measure (based on its error) is computed for each classifier on each environment it has experienced. Each performance measure represents the expertise of that classifier on a particular snapshot of the environment. These performance measures are then averaged using a nonlinear (sigmoidal) function, giving higher weights to the performances of the classifier on the more recent environments. At any time, the classifiers can be combined through weighted majority voting, using the most recent averaged weights, to obtain the final hypothesis. In Learn++.NSE, the change in the environment is tracked both by the addition of new classifiers and by the voting weights of existing classifiers, and no classifier is ever discarded. If cyclical drift makes earlier classifiers once again relevant, the algorithm recognizes this change, and assigns higher weights to earlier classifiers. The algorithm is described in detail below, with its pseudocode given in Fig. 1. We assume that an oracle provides us with a training dataset at certain intervals (not necessarily uniform).
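The weighted majority voting combination described above can be sketched as follows. This is an illustrative fragment, not the authors' implementation; the class labels, predictions, and weights in the example are made-up values.

```python
# Illustrative sketch: combining an ensemble by weighted majority voting,
# as Learn++.NSE does with its most recent averaged voting weights W_k.

def weighted_majority_vote(predictions, weights, classes):
    """Return the class with the largest total vote.

    predictions: list of class labels, one per classifier h_k
    weights:     list of voting weights W_k (same length)
    """
    votes = {c: 0.0 for c in classes}
    for label, w in zip(predictions, weights):
        votes[label] += w
    return max(votes, key=votes.get)

# Three classifiers vote on one instance; the two lightly weighted
# classifiers are outvoted by the single heavily weighted one.
print(weighted_majority_vote([0, 1, 1], [2.0, 0.5, 0.5], [0, 1]))  # -> 0
```

Note that because the weights are recomputed at every time step, the same ensemble can produce different decisions as the environment drifts, without retraining any member.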
At each interval, the distribution from which the oracle draws the training data changes in some manner and at some rate, also unknown to us. The training dataset D^t of cardinality m^t at time t provides us with a snapshot of the then-current environment. Learn++.NSE generates one classifier for each such new dataset that becomes available. To do so, the algorithm first evaluates the classification accuracy of the currently available composite hypothesis H^{t-1} on the newly available data D^t. H^{t-1}, obtained by the weighted majority voting of all classifiers generated during the previous t-1 iterations, represents the existing ensemble's knowledge of the environment. The error of the composite hypothesis, E^t, is computed as the simple ratio of the misclassified instances of the new dataset. We demand that this error be less than a threshold, say ½, to ensure that the ensemble has a meaningful classification capacity. For this selection of the threshold, we normalize the error so that the normalized error B^t remains between 0 and 1 (Step 1 inside the Do loop). We then update a set of weights for each instance, which is normally initialized to be uniform (1/m^t): the weights of the instances correctly classified by the ensemble are reduced by a factor of B^t. The weights are then normalized to obtain a distribution D^t (step 2).
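Steps 1 and 2 just described (composite error, normalized error, and the instance-weight update) can be sketched as follows. This is a minimal illustration under the definitions above; the ensemble's predictions are passed in directly, and all names are ours rather than the paper's.

```python
# Hedged sketch of Steps 1-2: composite error of the existing ensemble on
# the new dataset, and the instance-weight/distribution update.
# `ensemble_preds` stands in for H^{t-1}(x_i); names are illustrative.

def update_distribution(ensemble_preds, labels):
    m = len(labels)
    # Error = fraction of instances the ensemble misclassifies
    E = sum(p != y for p, y in zip(ensemble_preds, labels)) / m
    assert E < 0.5, "composite error must stay below 1/2"
    B = E / (1.0 - E)          # normalized error, 0 <= B < 1
    # Reduce the weight of correctly classified instances by the factor B
    w = [B / m if p == y else 1.0 / m
         for p, y in zip(ensemble_preds, labels)]
    total = sum(w)
    return [wi / total for wi in w]   # normalize to a distribution

D = update_distribution([0, 1, 1, 0], [0, 1, 0, 0])  # one mistake -> E = 1/4
print(D)
```

The misclassified instance ends up with the largest share of the distribution, so the next base classifier concentrates on the regions where the current ensemble fails.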

Input: For each dataset D^t, t = 1, 2, ..., representing a new environment:
- A sequence of i = 1, ..., m^t instances x_i with labels y_i ∈ Y = {1, ..., c}
- A supervised learning algorithm BaseClassifier.

Do for t = 1, 2, ...

If t = 1: Initialize D^1(i) = w_i^1 = 1/m^1 for all i, and skip to step 3. (1)

1. Compute the error of the existing ensemble on the new data:
   E^t = (1/m^t) Σ_{i=1..m^t} [[H^{t-1}(x_i) ≠ y_i]] (2)
   and normalize it: B^t = E^t / (1 − E^t).

2. Update the instance weights:
   w_i^t = (1/m^t) · (B^t if H^{t-1}(x_i) = y_i; 1 otherwise) (3)
   Set D^t = w^t / Σ_{i=1..m^t} w_i^t, so that D^t is a distribution. (4)

3. Call BaseClassifier with D^t, and obtain the new classifier h_t : X → Y. (5)

4. Evaluate all existing classifiers on the new dataset D^t:
   ε_k^t = Σ_{i=1..m^t} D^t(i) [[h_k(x_i) ≠ y_i]], for k = 1, ..., t (6)
   If ε_k^t > 1/2 for k = t, discard h_t and generate a new one; if ε_k^t > 1/2 for k < t, set ε_k^t = 1/2. Then
   β_k^t = ε_k^t / (1 − ε_k^t), for k = 1, ..., t (7)

5. Compute a weighted average of all normalized errors for the k-th classifier h_k:
   ω_k^t = 1 / (1 + e^{−a(t−k−b)}), normalized as ω_k^t ← ω_k^t / Σ_{j=0..t−k} ω_k^{t−j} (8)
   β̄_k^t = Σ_{j=0..t−k} ω_k^{t−j} β_k^{t−j}, for k = 1, ..., t (9)

6. Calculate the classifier voting weights:
   W_k^t = log(1 / β̄_k^t), for k = 1, ..., t (10)

7. Obtain the composite hypothesis as the current final hypothesis:
   H^t(x_i) = arg max_c Σ_k W_k^t [[h_k(x_i) = c]] (11)

Fig. 1. The pseudocode of the algorithm Learn++.NSE ([[·]] is 1 if its argument is true, and 0 otherwise)

The algorithm then calls BaseClassifier and asks it to create the t-th classifier h_t, using data drawn from the current training dataset D^t provided by the oracle (step 3). All classifiers generated thus far, h_k, k = 1, ..., t, are then evaluated on the current dataset by computing their weighted error (Equation 6, step 4). Note that at the current time step t, we

now have t error measures, one for each classifier generated thus far. Hence the error term ε_k^t represents the error of the k-th classifier h_k at the t-th time step. The errors are again assumed to be less than ½. Here, we make a distinction: if the error of the most recent classifier (on its native training set) is greater than ½, then we discard that classifier and generate a new one. For any of the other (older) classifiers, if its error is greater than ½, it is set to ½. This effectively sets the normalized error of that classifier at that time step to 1, which in turn removes all of its voting power later during the weighted majority voting. Note that unlike AdaBoost and similar algorithms that use such an error threshold, Learn++.NSE does not abort, nor does it throw away (the previously generated) classifiers when their error exceeds ½. This is because it is not unreasonable for a classifier to perform poorly if the environment has changed drastically since it was created, and furthermore, it does not mean that this classifier will never be useful again in the future. Should the environment go through a cyclical change and become similar to the time when the classifier in question was generated, its error will then be less than ½, and it will contribute to the current decision of the ensemble. In order to combine the classifiers, however, we need one weight for each, even though the algorithm maintains a set of t such error measures ε_k^t for each classifier k = 1, ..., t. The error measures are therefore combined through a weighted averaging (step 5). The weighting is done through a sigmoid function (Equation 8), which gives a large weight to errors on the most recent environments, and less to errors on older environments. We emphasize, however, that this process does not give less weight to old classifiers. It is their error on old environments that is weighted less. Therefore, a classifier generated a long time ago can in fact receive a large voting weight, if its error on the recent environments is low.
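The sigmoid weighting just described, together with the voting weight that is then derived from the averaged error, can be sketched as follows. The slope `a` and cutoff `b` of the sigmoid are free parameters of the algorithm; the values used here are illustrative choices of ours, not the paper's.

```python
import math

# Hedged sketch: average one classifier's normalized errors over the
# environments it has seen, weighting recent environments most heavily,
# then convert the average into a voting weight.

def voting_weight(betas, k, t, a=0.5, b=10):
    """Voting weight W_k at time t from betas[j] = beta_k^{t-j} (most recent first)."""
    # Sigmoid weights: errors on recent environments (small j) get the
    # largest weight; `a` and `b` control slope and cutoff (illustrative values).
    omegas = [1.0 / (1.0 + math.exp(-a * (t - k - j - b)))
              for j in range(t - k + 1)]
    total = sum(omegas)
    omegas = [w / total for w in omegas]                     # normalize to sum to 1
    beta_bar = sum(w * bk for w, bk in zip(omegas, betas))   # weighted average error
    return math.log(1.0 / beta_bar)                          # log of the inverse average

# A classifier whose recent errors are low outranks one whose recent errors
# are high, even though their error histories contain the same values.
print(voting_weight([0.2, 0.5, 0.9], k=1, t=3))
print(voting_weight([0.9, 0.5, 0.2], k=1, t=3))
```

This also shows why an old classifier can regain influence under cyclical drift: only its most recent errors dominate the average, regardless of how long ago it was trained.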
The errors so weighted as described above are then combined to obtain a weighted average error (Eq. 9). The logarithm of the inverse of this weighted error average then constitutes the final voting weight W_k^t for classifier k at time instant t (Step 6, Eq. 10). Finally, all classifiers are combined through weighted majority voting (Step 7, Eq. 11).

3 Experiments and Results

Two experiments were designed to test the ability of the algorithm to track a nonstationary environment. In both experiments, data were drawn from Gaussian distributions so that the optimal Bayes error could be computed and compared at each time step against the Learn++.NSE ensemble as well as the single classifier performances. Naive Bayes was used as the base classifier in all experiments. In the first experiment, conceptually illustrated in Fig. 2, the distributions for two of the three classes were moved in a concept drift scenario, where both the means and the variances of the distributions were changed. In fact, the distribution of one class has completely replaced the other. The entire change was completed gradually over the course of the simulation; however, in each case, we allowed the algorithm to see very little of the environment: a mere 20 samples per class. The distribution for the third class was left unchanged. We then tracked the performance of the Learn++.NSE ensemble, a single classifier, and the Bayes classifier at each time step. Figure 3 shows four independent trials of the percent classification performances on the entire feature space, calculated with

Fig. 2. Concept drift simulation: the 3-class experiment. Classes 1 and 2 move and change variance (the [μx μy σx σy] parameters drift along the indicated paths; class 3 is unchanged).

Fig. 3. Comparative performances (percent classification performance on the entire feature space vs. time step) on four independent single trials.

respect to the individual instances' probability of occurrence (to prevent instances away from decision boundaries from artificially inflating the performance). Several interesting observations can be made. First, all four performance trends indicate that Learn++.NSE tracks the Bayes classifier very closely, and, as expected, it has smaller performance variance than the single classifier.
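The experimental design above, with known Gaussian class distributions so that the Bayes-optimal decision is computable at every step, can be illustrated with a hypothetical, much-simplified one-dimensional sketch. The paper's experiment is two-dimensional and also drifts the variances; all constants below are ours.

```python
import math
import random

# Simplified, hypothetical drift experiment: two 1-D Gaussian classes, one of
# which drifts toward the other over the time steps, so the Bayes-optimal
# decision (compare the known class densities) is available at every step.

def pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def snapshot(t, T, n=20, seed=0):
    """Draw n samples per class at time step t; class 1 drifts toward class 0."""
    rng = random.Random(seed + t)
    mu0, mu1 = 0.0, 8.0 - 6.0 * t / T       # class 1's mean drifts from 8 to 2
    data = [(rng.gauss(mu0, 1.0), 0) for _ in range(n)]
    data += [(rng.gauss(mu1, 1.0), 1) for _ in range(n)]
    def bayes(x):                           # Bayes-optimal rule for the known densities
        return 0 if pdf(x, mu0, 1.0) >= pdf(x, mu1, 1.0) else 1
    return data, bayes

data, bayes = snapshot(t=0, T=10)
acc = sum(bayes(x) == y for x, y in data) / len(data)
print(f"Bayes accuracy at t=0: {acc:.2f}")
```

As the class means approach each other, the Bayes accuracy itself declines, which is exactly the declining baseline against which the ensemble is judged in the figures.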

Second, the ensemble performance is, in general, better than that of the single classifier, and more closely follows the Bayes classifier. This is important since a single naïve Bayes classifier tested on the most recent data on which it was trained would normally be expected to do very well. The ensemble is using its combined knowledge from all training sessions to correctly identify those areas that have changed due to concept drift as well as those that have not changed. Third, the performance gap between the ensemble and the single classifier appears to be widening in time, in favour of the ensemble. That is, the performances of the single classifier and Learn++.NSE start out similar, but in time the ensemble increasingly outperforms the single classifier, despite occasional good single classifier performances. Since the large variance in the single classifier performance makes it difficult to determine the validity of such a claim, we looked at the average performance over independent trials of the same experiment. The performance results in Figure 4 indicate that the Learn++.NSE ensemble increasingly and significantly outperforms the single classifier. Finally, note that what appears to be an overall performance decline (from about 90% to the 70%-80% range) in Fig. 3 and Fig. 4 is irrelevant for our purposes. Such a decline merely indicates that the underlying classification problem is getting increasingly difficult in time (see Fig. 2), as evidenced by the declining optimal Bayes classifier performance. We are primarily concerned with how well Learn++.NSE tracks the Bayes classifier, as we cannot expect any algorithm to do better.

Fig. 4. Comparative performances (percent classification performance on the entire feature space vs. time step), averaged over independent trials.

We have also designed a second, four-class experiment, the nonstationary environment of which was substantially more challenging, including classes switching places, changing variances, and drifting in and out of their original distributions in a cyclical manner.
Figure 5 provides snapshots of the distributions of the four classes c1 ~ c4 at t = 0, t = T/3, t = 2T/3 and t = T, where T indicates the time of the last environment update. From t = 0 to t = T/3, the variances of the class distributions were modified; then the means of the class distributions drifted until t = 2T/3, and finally both the variances and the means were changed during the last third of the simulation. Over the course of the time steps from t = 0 to t = T, the distributions briefly returned to the vicinity of their original neighbourhoods twice, before drifting again in other directions. Hence, the design simulated a semi-cyclical non-stationary behaviour.
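A three-phase schedule of this kind can be parameterized by t, as in the following sketch. The drift paths and magnitudes here are invented for illustration and do not reproduce the paper's trajectories (in particular, the brief cyclical returns are omitted).

```python
# Hypothetical sketch of a three-phase drift schedule: variance drift on
# [0, T/3], mean drift on [T/3, 2T/3], and both on [2T/3, T]. All drift
# endpoints are illustrative values, not the paper's.

def class_params(t, T, mu0=(0.0, 0.0), sigma0=1.0):
    """Return (mean, sigma) of one class at time t in [0, T]."""
    third = T / 3.0
    if t <= third:                      # phase 1: variance drift only
        frac = t / third
        return mu0, sigma0 * (1.0 + frac)
    if t <= 2 * third:                  # phase 2: mean drift only
        frac = (t - third) / third
        return (mu0[0] + 4.0 * frac, mu0[1]), 2.0 * sigma0
    frac = (t - 2 * third) / third      # phase 3: both mean and variance drift
    return (mu0[0] + 4.0 + 2.0 * frac, mu0[1] + frac), sigma0 * (2.0 - frac)
```

Each phase is linear in t and the parameters are continuous at the phase boundaries, so a dataset drawn at any step is a coherent snapshot of the drifting environment.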

Fig. 5. The snapshots of the drifting distributions of classes c1 ~ c4 at four time steps: t = 0, t = T/3, t = 2T/3, and t = T.

Fig. 6. Classification performance results on the second experiment (percent classification performance on the entire feature space vs. time step; annotated phases: variance drift, drift of the class distributions from one location to another, then both location and variance drift).

The classification performance results of Learn++.NSE, the single classifier, and the Bayes classifier are shown in Figure 6, where all performances are averages over independent trials. Once again, the ensemble was able to track the Bayes classifier much more closely than the single classifier, despite the convoluted changes in the environment. The ensemble performance was also significantly better than the single naïve Bayes classifier at the 95% level beyond the first several time steps.

4 Conclusions and Discussions

We described an ensemble based algorithm for nonstationary environments. The algorithm creates a single classifier for each dataset that becomes available, and keeps a record of the performance of each classifier on all environments throughout the training. The classifiers are then combined through a modified, dynamically weighted majority voting, where the voting weights are determined as a measure of each classifier's performance on the current environment, weighted along with the performances on previous environments. All classifiers are retained, which allows the previously generated classifiers to make significant contributions to the ensemble decision, if such classifiers provide relevant information for the current environment. The algorithm has many characteristics that are deemed desirable for online learners [13]: (i) the algorithm learns from only a single pass of each dataset, without revisiting them; this property of the algorithm is inherited from its predecessor Learn++, as it was designed to learn incrementally without requiring access to previously seen data; (ii) the algorithm has a relatively small computational complexity that is linear in the number of datasets, or even possibly in the data size, depending on the base classifier (since it can be used with any supervised classifier); (iii) the algorithm possesses any-time-learning capability, i.e., if the algorithm is stopped at time t, it provides the current best representation of the environment at that time.
As mentioned earlier in our justification for another ensemble based nonstationary learning algorithm, the success of any algorithm depends much on how well its characteristics match those of the problem. It is therefore appropriate, and in fact necessary, to establish what those characteristics are for Learn++.NSE. Specifically, when would we expect this algorithm to do well? The structure of Learn++.NSE makes it particularly useful if the nonstationary environment provides a sequence of relatively small datasets, each of which by itself is not sufficient to adequately represent the current state of the environment; then, a single classifier generated with such data would not be able to appropriately characterize the decision boundary. Only a subset of the classes being represented in each dataset is another scenario of nonstationary learning, and the performance of Learn++.NSE on such scenarios is part of our planned future work. The algorithm described here is certainly not fully developed yet, and much work needs to be done. The algorithm, while intuitive, is based on heuristic ideas, and there are no theoretical performance guarantees. This is in part because we have not placed any restrictions on the environment. However, a careful analysis of the algorithm is necessary to determine how it reacts to different scenarios of nonstationary environments, and specifically to different rates of change. Of particular interest is how well the algorithm would be able to track the nonstationary environment if the environment changed faster: say, for example, if it made the same total change in T/2 or T/4 time steps rather than in T steps.

While the algorithm is at its early stages of development, its initial performance has been promising, motivating us toward further optimization and development. Future efforts will include the above-mentioned analysis for tracking/estimating the environment's rate of change, and the algorithm's response to such change.

Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant No. ECS.

References

[1] Schlimmer, J. C. and Granger, R. H.; Incremental Learning from Noisy Data. Machine Learning 1 (1986).
[2] Helmbold, D. P. and Long, P. M.; Tracking drifting concepts by minimizing disagreements. Machine Learning 14 (1994).
[3] Widmer, G. and Kubat, M.; Learning in the presence of concept drift and hidden contexts. Machine Learning 23 (1996) 69-101.
[4] Case, J., Jain, S., Kaufmann, S., Sharma, A., and Stephan, F.; Predictive learning models for concept drift. Theoretical Computer Science 268 (2001).
[5] Liao, Y., Vemuri, V. R., and Pasos, A.; Adaptive anomaly detection with evolving connectionist systems. Journal of Network and Computer Applications 30 (2007) 60-80.
[6] Castillo, G., Gama, J., and Medas, P.; Adaptation to Drifting Concepts. Progress in Artificial Intelligence, Lecture Notes in Computer Science 2902 (2003).
[7] Gama, J., Medas, P., Castillo, G., and Rodrigues, P.; Learning with Drift Detection. Advances in Artificial Intelligence - SBIA 2004, Lecture Notes in Computer Science 3171 (2004).
[8] Cohen, L., Avrahami-Bakish, G., Last, M., Kandel, A., and Kipersztok, O.; Real-time data mining of non-stationary data streams from sensor networks. Information Fusion, in press (2007).
[9] Maloof, M. A. and Michalski, R. S.; Incremental learning with partial instance memory. Artificial Intelligence 154 (2004).
[10] Black, M. and Hickey, R.; Learning classification rules for telecom customer call data under concept drift. Soft Computing - A Fusion of Foundations, Methodologies and Applications 8 (2003) 102-108.
[11] Rutkowski, L.; Adaptive probabilistic neural networks for pattern classification in time-varying environment. IEEE Transactions on Neural Networks 15 (2004).
[12] Cheng-Kui, G., Zheng-Ou, W., and Ya-Ming, S.; Encoding a priori information in neural networks to improve its modeling performance under non-stationary environment. International Conference on Machine Learning and Cybernetics (ICMLC 2004) (2004).
[13] Kuncheva, L. I.; Classifier Ensembles for Changing Environments. Multiple Classifier Systems (MCS 2004), Lecture Notes in Computer Science 3077 (2004) 1-15.
[14] Blum, A.; Empirical Support for Winnow and Weighted-Majority Algorithms: Results on a Calendar Scheduling Domain. Machine Learning 26 (1997) 5-23.
[15] Oza, N.; Online Ensemble Learning, Ph.D. Dissertation (2001), University of California, Berkeley.
[16] Nishida, K., Yamauchi, K., and Omori, T.; ACE: Adaptive Classifiers-Ensemble System for Concept-Drifting Environments. Multiple Classifier Systems (MCS 2005), Lecture Notes in Computer Science 3541 (2005) 176-185.

[17] Street, W. N. and Kim, Y.; A streaming ensemble algorithm (SEA) for large-scale classification. Seventh ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD-01) (2001).
[18] Chu, F. and Zaniolo, C.; Fast and Light Boosting for Adaptive Mining of Data Streams. Advances in Knowledge Discovery and Data Mining, Lecture Notes in Computer Science 3056 (2004).
[19] Polikar, R., Upda, L., Upda, S. S., and Honavar, V.; Learn++: an incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man and Cybernetics, Part C 31 (2001).


More information

A First Course on Kinetics and Reaction Engineering. Class 19 on Unit 18

A First Course on Kinetics and Reaction Engineering. Class 19 on Unit 18 A Firs ourse on Kineics and Reacion Engineering lass 19 on Uni 18 Par I - hemical Reacions Par II - hemical Reacion Kineics Where We re Going Par III - hemical Reacion Engineering A. Ideal Reacors B. Perfecly

More information

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles Diebold, Chaper 7 Francis X. Diebold, Elemens of Forecasing, 4h Ediion (Mason, Ohio: Cengage Learning, 006). Chaper 7. Characerizing Cycles Afer compleing his reading you should be able o: Define covariance

More information

Lecture 4 Notes (Little s Theorem)

Lecture 4 Notes (Little s Theorem) Lecure 4 Noes (Lile s Theorem) This lecure concerns one of he mos imporan (and simples) heorems in Queuing Theory, Lile s Theorem. More informaion can be found in he course book, Bersekas & Gallagher,

More information

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time. Supplemenary Figure 1 Spike-coun auocorrelaions in ime. Normalized auocorrelaion marices are shown for each area in a daase. The marix shows he mean correlaion of he spike coun in each ime bin wih he spike

More information

ACE 564 Spring Lecture 7. Extensions of The Multiple Regression Model: Dummy Independent Variables. by Professor Scott H.

ACE 564 Spring Lecture 7. Extensions of The Multiple Regression Model: Dummy Independent Variables. by Professor Scott H. ACE 564 Spring 2006 Lecure 7 Exensions of The Muliple Regression Model: Dumm Independen Variables b Professor Sco H. Irwin Readings: Griffihs, Hill and Judge. "Dumm Variables and Varing Coefficien Models

More information

Lecture Notes 2. The Hilbert Space Approach to Time Series

Lecture Notes 2. The Hilbert Space Approach to Time Series Time Series Seven N. Durlauf Universiy of Wisconsin. Basic ideas Lecure Noes. The Hilber Space Approach o Time Series The Hilber space framework provides a very powerful language for discussing he relaionship

More information

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle Chaper 2 Newonian Mechanics Single Paricle In his Chaper we will review wha Newon s laws of mechanics ell us abou he moion of a single paricle. Newon s laws are only valid in suiable reference frames,

More information

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H.

ACE 562 Fall Lecture 5: The Simple Linear Regression Model: Sampling Properties of the Least Squares Estimators. by Professor Scott H. ACE 56 Fall 005 Lecure 5: he Simple Linear Regression Model: Sampling Properies of he Leas Squares Esimaors by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Inference in he Simple

More information

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon 3..3 INRODUCION O DYNAMIC OPIMIZAION: DISCREE IME PROBLEMS A. he Hamilonian and Firs-Order Condiions in a Finie ime Horizon Define a new funcion, he Hamilonian funcion, H. H he change in he oal value of

More information

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED 0.1 MAXIMUM LIKELIHOOD ESTIMATIO EXPLAIED Maximum likelihood esimaion is a bes-fi saisical mehod for he esimaion of he values of he parameers of a sysem, based on a se of observaions of a random variable

More information

Solutions to Odd Number Exercises in Chapter 6

Solutions to Odd Number Exercises in Chapter 6 1 Soluions o Odd Number Exercises in 6.1 R y eˆ 1.7151 y 6.3 From eˆ ( T K) ˆ R 1 1 SST SST SST (1 R ) 55.36(1.7911) we have, ˆ 6.414 T K ( ) 6.5 y ye ye y e 1 1 Consider he erms e and xe b b x e y e b

More information

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II Roland Siegwar Margaria Chli Paul Furgale Marco Huer Marin Rufli Davide Scaramuzza ETH Maser Course: 151-0854-00L Auonomous Mobile Robos Localizaion II ACT and SEE For all do, (predicion updae / ACT),

More information

RL Lecture 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1

RL Lecture 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1 RL Lecure 7: Eligibiliy Traces R. S. Suon and A. G. Baro: Reinforcemen Learning: An Inroducion 1 N-sep TD Predicion Idea: Look farher ino he fuure when you do TD backup (1, 2, 3,, n seps) R. S. Suon and

More information

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé

Bias in Conditional and Unconditional Fixed Effects Logit Estimation: a Correction * Tom Coupé Bias in Condiional and Uncondiional Fixed Effecs Logi Esimaion: a Correcion * Tom Coupé Economics Educaion and Research Consorium, Naional Universiy of Kyiv Mohyla Academy Address: Vul Voloska 10, 04070

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

Methodology. -ratios are biased and that the appropriate critical values have to be increased by an amount. that depends on the sample size.

Methodology. -ratios are biased and that the appropriate critical values have to be increased by an amount. that depends on the sample size. Mehodology. Uni Roo Tess A ime series is inegraed when i has a mean revering propery and a finie variance. I is only emporarily ou of equilibrium and is called saionary in I(0). However a ime series ha

More information

3.1 More on model selection

3.1 More on model selection 3. More on Model selecion 3. Comparing models AIC, BIC, Adjused R squared. 3. Over Fiing problem. 3.3 Sample spliing. 3. More on model selecion crieria Ofen afer model fiing you are lef wih a handful of

More information

23.2. Representing Periodic Functions by Fourier Series. Introduction. Prerequisites. Learning Outcomes

23.2. Representing Periodic Functions by Fourier Series. Introduction. Prerequisites. Learning Outcomes Represening Periodic Funcions by Fourier Series 3. Inroducion In his Secion we show how a periodic funcion can be expressed as a series of sines and cosines. We begin by obaining some sandard inegrals

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LDA, logisic

More information

Self assessment due: Monday 4/29/2019 at 11:59pm (submit via Gradescope)

Self assessment due: Monday 4/29/2019 at 11:59pm (submit via Gradescope) CS 188 Spring 2019 Inroducion o Arificial Inelligence Wrien HW 10 Due: Monday 4/22/2019 a 11:59pm (submi via Gradescope). Leave self assessmen boxes blank for his due dae. Self assessmen due: Monday 4/29/2019

More information

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates Biol. 356 Lab 8. Moraliy, Recruimen, and Migraion Raes (modified from Cox, 00, General Ecology Lab Manual, McGraw Hill) Las week we esimaed populaion size hrough several mehods. One assumpion of all hese

More information

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds

Kriging Models Predicting Atrazine Concentrations in Surface Water Draining Agricultural Watersheds 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Kriging Models Predicing Arazine Concenraions in Surface Waer Draining Agriculural Waersheds Paul L. Mosquin, Jeremy Aldworh, Wenlin Chen Supplemenal Maerial Number

More information

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8)

Econ107 Applied Econometrics Topic 7: Multicollinearity (Studenmund, Chapter 8) I. Definiions and Problems A. Perfec Mulicollineariy Econ7 Applied Economerics Topic 7: Mulicollineariy (Sudenmund, Chaper 8) Definiion: Perfec mulicollineariy exiss in a following K-variable regression

More information

Particle Swarm Optimization

Particle Swarm Optimization Paricle Swarm Opimizaion Speaker: Jeng-Shyang Pan Deparmen of Elecronic Engineering, Kaohsiung Universiy of Applied Science, Taiwan Email: jspan@cc.kuas.edu.w 7/26/2004 ppso 1 Wha is he Paricle Swarm Opimizaion

More information

Object tracking: Using HMMs to estimate the geographical location of fish

Object tracking: Using HMMs to estimate the geographical location of fish Objec racking: Using HMMs o esimae he geographical locaion of fish 02433 - Hidden Markov Models Marin Wæver Pedersen, Henrik Madsen Course week 13 MWP, compiled June 8, 2011 Objecive: Locae fish from agging

More information

Hidden Markov Models. Adapted from. Dr Catherine Sweeney-Reed s slides

Hidden Markov Models. Adapted from. Dr Catherine Sweeney-Reed s slides Hidden Markov Models Adaped from Dr Caherine Sweeney-Reed s slides Summary Inroducion Descripion Cenral in HMM modelling Exensions Demonsraion Specificaion of an HMM Descripion N - number of saes Q = {q

More information

SPH3U: Projectiles. Recorder: Manager: Speaker:

SPH3U: Projectiles. Recorder: Manager: Speaker: SPH3U: Projeciles Now i s ime o use our new skills o analyze he moion of a golf ball ha was ossed hrough he air. Le s find ou wha is special abou he moion of a projecile. Recorder: Manager: Speaker: 0

More information

More Digital Logic. t p output. Low-to-high and high-to-low transitions could have different t p. V in (t)

More Digital Logic. t p output. Low-to-high and high-to-low transitions could have different t p. V in (t) EECS 4 Spring 23 Lecure 2 EECS 4 Spring 23 Lecure 2 More igial Logic Gae delay and signal propagaion Clocked circui elemens (flip-flop) Wriing a word o memory Simplifying digial circuis: Karnaugh maps

More information

Lecture 3: Exponential Smoothing

Lecture 3: Exponential Smoothing NATCOR: Forecasing & Predicive Analyics Lecure 3: Exponenial Smoohing John Boylan Lancaser Cenre for Forecasing Deparmen of Managemen Science Mehods and Models Forecasing Mehod A (numerical) procedure

More information

Supplement for Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

Supplement for Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence Supplemen for Sochasic Convex Opimizaion: Faser Local Growh Implies Faser Global Convergence Yi Xu Qihang Lin ianbao Yang Proof of heorem heorem Suppose Assumpion holds and F (w) obeys he LGC (6) Given

More information

RC, RL and RLC circuits

RC, RL and RLC circuits Name Dae Time o Complee h m Parner Course/ Secion / Grade RC, RL and RLC circuis Inroducion In his experimen we will invesigae he behavior of circuis conaining combinaions of resisors, capaciors, and inducors.

More information

Mathematical Theory and Modeling ISSN (Paper) ISSN (Online) Vol 3, No.3, 2013

Mathematical Theory and Modeling ISSN (Paper) ISSN (Online) Vol 3, No.3, 2013 Mahemaical Theory and Modeling ISSN -580 (Paper) ISSN 5-05 (Online) Vol, No., 0 www.iise.org The ffec of Inverse Transformaion on he Uni Mean and Consan Variance Assumpions of a Muliplicaive rror Model

More information

Applying Genetic Algorithms for Inventory Lot-Sizing Problem with Supplier Selection under Storage Capacity Constraints

Applying Genetic Algorithms for Inventory Lot-Sizing Problem with Supplier Selection under Storage Capacity Constraints IJCSI Inernaional Journal of Compuer Science Issues, Vol 9, Issue 1, No 1, January 2012 wwwijcsiorg 18 Applying Geneic Algorihms for Invenory Lo-Sizing Problem wih Supplier Selecion under Sorage Capaciy

More information

Comparing Means: t-tests for One Sample & Two Related Samples

Comparing Means: t-tests for One Sample & Two Related Samples Comparing Means: -Tess for One Sample & Two Relaed Samples Using he z-tes: Assumpions -Tess for One Sample & Two Relaed Samples The z-es (of a sample mean agains a populaion mean) is based on he assumpion

More information

A Reinforcement Learning Approach for Collaborative Filtering

A Reinforcement Learning Approach for Collaborative Filtering A Reinforcemen Learning Approach for Collaboraive Filering Jungkyu Lee, Byonghwa Oh 2, Jihoon Yang 2, and Sungyong Park 2 Cyram Inc, Seoul, Korea jklee@cyram.com 2 Sogang Universiy, Seoul, Korea {mrfive,yangjh,parksy}@sogang.ac.kr

More information

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important

Non-parametric techniques. Instance Based Learning. NN Decision Boundaries. Nearest Neighbor Algorithm. Distance metric important on-parameric echniques Insance Based Learning AKA: neares neighbor mehods, non-parameric, lazy, memorybased, or case-based learning Copyrigh 2005 by David Helmbold 1 Do no fi a model (as do LTU, decision

More information

Inventory Analysis and Management. Multi-Period Stochastic Models: Optimality of (s, S) Policy for K-Convex Objective Functions

Inventory Analysis and Management. Multi-Period Stochastic Models: Optimality of (s, S) Policy for K-Convex Objective Functions Muli-Period Sochasic Models: Opimali of (s, S) Polic for -Convex Objecive Funcions Consider a seing similar o he N-sage newsvendor problem excep ha now here is a fixed re-ordering cos (> 0) for each (re-)order.

More information

Block Diagram of a DCS in 411

Block Diagram of a DCS in 411 Informaion source Forma A/D From oher sources Pulse modu. Muliplex Bandpass modu. X M h: channel impulse response m i g i s i Digial inpu Digial oupu iming and synchronizaion Digial baseband/ bandpass

More information

Linear Response Theory: The connection between QFT and experiments

Linear Response Theory: The connection between QFT and experiments Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and

More information

Robust estimation based on the first- and third-moment restrictions of the power transformation model

Robust estimation based on the first- and third-moment restrictions of the power transformation model h Inernaional Congress on Modelling and Simulaion, Adelaide, Ausralia, 6 December 3 www.mssanz.org.au/modsim3 Robus esimaion based on he firs- and hird-momen resricions of he power ransformaion Nawaa,

More information

Numerical Dispersion

Numerical Dispersion eview of Linear Numerical Sabiliy Numerical Dispersion n he previous lecure, we considered he linear numerical sabiliy of boh advecion and diffusion erms when approimaed wih several spaial and emporal

More information

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate.

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate. Inroducion Gordon Model (1962): D P = r g r = consan discoun rae, g = consan dividend growh rae. If raional expecaions of fuure discoun raes and dividend growh vary over ime, so should he D/P raio. Since

More information

Georey E. Hinton. University oftoronto. Technical Report CRG-TR February 22, Abstract

Georey E. Hinton. University oftoronto.   Technical Report CRG-TR February 22, Abstract Parameer Esimaion for Linear Dynamical Sysems Zoubin Ghahramani Georey E. Hinon Deparmen of Compuer Science Universiy oftorono 6 King's College Road Torono, Canada M5S A4 Email: zoubin@cs.orono.edu Technical

More information

EXPLICIT TIME INTEGRATORS FOR NONLINEAR DYNAMICS DERIVED FROM THE MIDPOINT RULE

EXPLICIT TIME INTEGRATORS FOR NONLINEAR DYNAMICS DERIVED FROM THE MIDPOINT RULE Version April 30, 2004.Submied o CTU Repors. EXPLICIT TIME INTEGRATORS FOR NONLINEAR DYNAMICS DERIVED FROM THE MIDPOINT RULE Per Krysl Universiy of California, San Diego La Jolla, California 92093-0085,

More information

Retrieval Models. Boolean and Vector Space Retrieval Models. Common Preprocessing Steps. Boolean Model. Boolean Retrieval Model

Retrieval Models. Boolean and Vector Space Retrieval Models. Common Preprocessing Steps. Boolean Model. Boolean Retrieval Model 1 Boolean and Vecor Space Rerieval Models Many slides in his secion are adaped from Prof. Joydeep Ghosh (UT ECE) who in urn adaped hem from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong) Rerieval

More information

Mechanical Fatigue and Load-Induced Aging of Loudspeaker Suspension. Wolfgang Klippel,

Mechanical Fatigue and Load-Induced Aging of Loudspeaker Suspension. Wolfgang Klippel, Mechanical Faigue and Load-Induced Aging of Loudspeaker Suspension Wolfgang Klippel, Insiue of Acousics and Speech Communicaion Dresden Universiy of Technology presened a he ALMA Symposium 2012, Las Vegas

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Probabilisic reasoning over ime So far, we ve mosly deal wih episodic environmens Excepions: games wih muliple moves, planning In paricular, he Bayesian neworks we ve seen so far describe

More information

) were both constant and we brought them from under the integral.

) were both constant and we brought them from under the integral. YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha

More information

Shiva Akhtarian MSc Student, Department of Computer Engineering and Information Technology, Payame Noor University, Iran

Shiva Akhtarian MSc Student, Department of Computer Engineering and Information Technology, Payame Noor University, Iran Curren Trends in Technology and Science ISSN : 79-055 8hSASTech 04 Symposium on Advances in Science & Technology-Commission-IV Mashhad, Iran A New for Sofware Reliabiliy Evaluaion Based on NHPP wih Imperfec

More information

23.5. Half-Range Series. Introduction. Prerequisites. Learning Outcomes

23.5. Half-Range Series. Introduction. Prerequisites. Learning Outcomes Half-Range Series 2.5 Inroducion In his Secion we address he following problem: Can we find a Fourier series expansion of a funcion defined over a finie inerval? Of course we recognise ha such a funcion

More information

Some Basic Information about M-S-D Systems

Some Basic Information about M-S-D Systems Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,

More information

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17 EES 16A Designing Informaion Devices and Sysems I Spring 019 Lecure Noes Noe 17 17.1 apaciive ouchscreen In he las noe, we saw ha a capacior consiss of wo pieces on conducive maerial separaed by a nonconducive

More information

The field of mathematics has made tremendous impact on the study of

The field of mathematics has made tremendous impact on the study of A Populaion Firing Rae Model of Reverberaory Aciviy in Neuronal Neworks Zofia Koscielniak Carnegie Mellon Universiy Menor: Dr. G. Bard Ermenrou Universiy of Pisburgh Inroducion: The field of mahemaics

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

Particle Swarm Optimization Combining Diversification and Intensification for Nonlinear Integer Programming Problems

Particle Swarm Optimization Combining Diversification and Intensification for Nonlinear Integer Programming Problems Paricle Swarm Opimizaion Combining Diversificaion and Inensificaion for Nonlinear Ineger Programming Problems Takeshi Masui, Masaoshi Sakawa, Kosuke Kao and Koichi Masumoo Hiroshima Universiy 1-4-1, Kagamiyama,

More information

Adaptive Compressive Tracking Based on Perceptual Hash Algorithm Lei ZHANG, Zheng-guang XIE * and Hong-jun LI

Adaptive Compressive Tracking Based on Perceptual Hash Algorithm Lei ZHANG, Zheng-guang XIE * and Hong-jun LI 2017 2nd Inernaional Conference on Informaion Technology and Managemen Engineering (ITME 2017) ISBN: 978-1-60595-415-8 Adapive Compressive Tracking Based on Percepual Hash Algorihm Lei ZHANG, Zheng-guang

More information

Notes on Kalman Filtering

Notes on Kalman Filtering Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren

More information

Probabilistic Robotics

Probabilistic Robotics Probabilisic Roboics Bayes Filer Implemenaions Gaussian filers Bayes Filer Reminder Predicion bel p u bel d Correcion bel η p z bel Gaussians : ~ π e p N p - Univariae / / : ~ μ μ μ e p Ν p d π Mulivariae

More information

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19 Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible

More information

2017 3rd International Conference on E-commerce and Contemporary Economic Development (ECED 2017) ISBN:

2017 3rd International Conference on E-commerce and Contemporary Economic Development (ECED 2017) ISBN: 7 3rd Inernaional Conference on E-commerce and Conemporary Economic Developmen (ECED 7) ISBN: 978--6595-446- Fuures Arbirage of Differen Varieies and based on he Coinegraion Which is under he Framework

More information

Learning Naive Bayes Classifier from Noisy Data

Learning Naive Bayes Classifier from Noisy Data UCLA Compuer Science Deparmen Technical Repor CSD-TR No 030056 1 Learning Naive Bayes Classifier from Noisy Daa Yirong Yang, Yi Xia, Yun Chi, and Richard R Munz Universiy of California, Los Angeles, CA

More information

DEPARTMENT OF STATISTICS

DEPARTMENT OF STATISTICS A Tes for Mulivariae ARCH Effecs R. Sco Hacker and Abdulnasser Haemi-J 004: DEPARTMENT OF STATISTICS S-0 07 LUND SWEDEN A Tes for Mulivariae ARCH Effecs R. Sco Hacker Jönköping Inernaional Business School

More information

Time series Decomposition method

Time series Decomposition method Time series Decomposiion mehod A ime series is described using a mulifacor model such as = f (rend, cyclical, seasonal, error) = f (T, C, S, e) Long- Iner-mediaed Seasonal Irregular erm erm effec, effec,

More information

Lecture 2 April 04, 2018

Lecture 2 April 04, 2018 Sas 300C: Theory of Saisics Spring 208 Lecure 2 April 04, 208 Prof. Emmanuel Candes Scribe: Paulo Orensein; edied by Sephen Baes, XY Han Ouline Agenda: Global esing. Needle in a Haysack Problem 2. Threshold

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

An introduction to the theory of SDDP algorithm

An introduction to the theory of SDDP algorithm An inroducion o he heory of SDDP algorihm V. Leclère (ENPC) Augus 1, 2014 V. Leclère Inroducion o SDDP Augus 1, 2014 1 / 21 Inroducion Large scale sochasic problem are hard o solve. Two ways of aacking

More information

Demodulation of Digitally Modulated Signals

Demodulation of Digitally Modulated Signals Addiional maerial for TSKS1 Digial Communicaion and TSKS2 Telecommunicaion Demodulaion of Digially Modulaed Signals Mikael Olofsson Insiuionen för sysemeknik Linköpings universie, 581 83 Linköping November

More information

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi Creep in Viscoelasic Subsances Numerical mehods o calculae he coefficiens of he Prony equaion using creep es daa and Herediary Inegrals Mehod Navnee Saini, Mayank Goyal, Vishal Bansal (23); Term Projec

More information

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality Marix Versions of Some Refinemens of he Arihmeic-Geomeric Mean Inequaliy Bao Qi Feng and Andrew Tonge Absrac. We esablish marix versions of refinemens due o Alzer ], Carwrigh and Field 4], and Mercer 5]

More information

Bias-Variance Error Bounds for Temporal Difference Updates

Bias-Variance Error Bounds for Temporal Difference Updates Bias-Variance Bounds for Temporal Difference Updaes Michael Kearns AT&T Labs mkearns@research.a.com Sainder Singh AT&T Labs baveja@research.a.com Absrac We give he firs rigorous upper bounds on he error

More information

DATA DRIVEN DONOR MANAGEMENT: LEVERAGING RECENCY & FREQUENCY IN DONATION BEHAVIOR

DATA DRIVEN DONOR MANAGEMENT: LEVERAGING RECENCY & FREQUENCY IN DONATION BEHAVIOR DATA DRIVEN DONOR MANAGEMENT: LEVERAGING RECENCY & FREQUENCY IN DONATION BEHAVIOR Peer Fader Frances and Pei Yuan Chia Professor of Markeing Co Direcor, Wharon Cusomer Analyics Iniiaive The Wharon School

More information

Stability and Bifurcation in a Neural Network Model with Two Delays

Stability and Bifurcation in a Neural Network Model with Two Delays Inernaional Mahemaical Forum, Vol. 6, 11, no. 35, 175-1731 Sabiliy and Bifurcaion in a Neural Nework Model wih Two Delays GuangPing Hu and XiaoLing Li School of Mahemaics and Physics, Nanjing Universiy

More information

Matlab and Python programming: how to get started

Matlab and Python programming: how to get started Malab and Pyhon programming: how o ge sared Equipping readers he skills o wrie programs o explore complex sysems and discover ineresing paerns from big daa is one of he main goals of his book. In his chaper,

More information

KINEMATICS IN ONE DIMENSION

KINEMATICS IN ONE DIMENSION KINEMATICS IN ONE DIMENSION PREVIEW Kinemaics is he sudy of how hings move how far (disance and displacemen), how fas (speed and velociy), and how fas ha how fas changes (acceleraion). We say ha an objec

More information

E β t log (C t ) + M t M t 1. = Y t + B t 1 P t. B t 0 (3) v t = P tc t M t Question 1. Find the FOC s for an optimum in the agent s problem.

E β t log (C t ) + M t M t 1. = Y t + B t 1 P t. B t 0 (3) v t = P tc t M t Question 1. Find the FOC s for an optimum in the agent s problem. Noes, M. Krause.. Problem Se 9: Exercise on FTPL Same model as in paper and lecure, only ha one-period govenmen bonds are replaced by consols, which are bonds ha pay one dollar forever. I has curren marke

More information